Stratified sampling design based on data mining.
Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung
2013-09-01
To explore classification rules, based on data mining methodologies, to be used in defining strata in stratified sampling of healthcare providers with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics and then constructed decision trees on the cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and from population data from Statistics Korea. From this database, we used the data for single-specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and the population density of the provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and the number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of the variance, respectively. This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
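The two-step procedure described in the abstract (cluster providers, then convert the clusters into explicit stratification rules) can be sketched on synthetic data. Everything below is illustrative: the two features, the hand-rolled k-means, and the one-variable decision stump are stand-ins for the paper's full k-means/decision-tree pipeline on real claims data.

```python
# Minimal sketch: cluster providers, then extract a readable stratification rule.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    # Farthest-point initialisation, then plain Lloyd iterations.
    centers = X[[0]]
    for _ in range(k - 1):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1).min(axis=1)
        centers = np.vstack([centers, X[np.argmax(d)]])
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        centers = np.vstack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

def best_stump(x, labels):
    # One-variable decision stump: the threshold on x that best separates the
    # clusters, i.e. a human-readable rule of the form "x <= t".
    best_t, best_purity = float(x.min()), 0.0
    for t in np.unique(x)[:-1]:
        left, right = labels[x <= t], labels[x > t]
        purity = (np.bincount(left).max() + np.bincount(right).max()) / len(x)
        if purity > best_purity:
            best_t, best_purity = float(t), purity
    return best_t, best_purity

# Two synthetic provider features: inpatients per specialist, population density.
X = np.vstack([rng.normal((5, 100), 1.0, (50, 2)),
               rng.normal((20, 500), 1.0, (50, 2))])
labels = kmeans(X, k=2)
threshold, purity = best_stump(X[:, 0], labels)
print(f"rule: inpatients_per_specialist <= {threshold:.1f} (purity {purity:.2f})")
```

The stump threshold is the kind of rule an insurer could apply directly when assigning providers to strata, which is the practical appeal of the decision-tree step.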
Vollert, Jan; Maier, Christoph; Attal, Nadine; Bennett, David L.H.; Bouhassira, Didier; Enax-Krumova, Elena K.; Finnerup, Nanna B.; Freynhagen, Rainer; Gierthmühlen, Janne; Haanpää, Maija; Hansson, Per; Hüllemann, Philipp; Jensen, Troels S.; Magerl, Walter; Ramirez, Juan D.; Rice, Andrew S.C.; Schuh-Hofer, Sigrid; Segerdahl, Märta; Serra, Jordi; Shillo, Pallai R.; Sindrup, Soeren; Tesfaye, Solomon; Themistocleous, Andreas C.; Tölle, Thomas R.; Treede, Rolf-Detlef; Baron, Ralf
2017-01-01
In a recent cluster analysis, it was shown that patients with peripheral neuropathic pain can be grouped into 3 sensory phenotypes based on quantitative sensory testing profiles, which are mainly characterized by either sensory loss, intact sensory function with mild thermal hyperalgesia and/or allodynia, or loss of thermal detection with mild mechanical hyperalgesia and/or allodynia. Here, we present an algorithm for allocating individual patients to these subgroups. The algorithm is non-deterministic (i.e., a patient can be sorted to more than one phenotype) and can separate patients with neuropathic pain from healthy subjects (sensitivity: 78%, specificity: 94%). We evaluated the frequency of each phenotype in a population of patients with painful diabetic polyneuropathy (n = 151), painful peripheral nerve injury (n = 335), and postherpetic neuralgia (n = 97) and propose sample sizes of study populations that need to be screened to reach a subpopulation large enough to conduct a phenotype-stratified study. The most common phenotype in diabetic polyneuropathy was sensory loss (83%), followed by mechanical hyperalgesia (75%) and thermal hyperalgesia (34%; note that percentages overlap and are not additive). In peripheral nerve injury, the frequencies were 37%, 59%, and 50%, and in postherpetic neuralgia, 31%, 63%, and 46%. For a parallel study design, either the estimated effect size of the treatment needs to be high (>0.7) or the study is realistic only for phenotypes that are frequent in the clinical entity under study. For a crossover design, screened populations under 200 patients are sufficient for all phenotypes and clinical entities with a minimum estimated treatment effect size of 0.5. PMID:28595241
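The screening-size argument in the abstract is back-of-envelope arithmetic: size the trial for a given effect size, then inflate by the phenotype frequency. The sketch below uses the standard two-sample approximation (alpha = 0.05 two-sided, power = 0.80); the 0.5 effect size and 31% frequency are taken from the abstract purely as example inputs.

```python
# Patients to screen for a phenotype-stratified parallel-group trial (sketch).
from math import ceil

def n_per_arm(effect_size, z_alpha=1.96, z_beta=0.84):
    # Two-sample comparison of means, alpha = 0.05 (two-sided), power = 0.80.
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

def n_to_screen(effect_size, phenotype_frequency, arms=2):
    # Patients screened so the phenotype subgroup can fill every arm.
    return ceil(arms * n_per_arm(effect_size) / phenotype_frequency)

print(n_per_arm(0.5))          # patients per arm at effect size 0.5
print(n_to_screen(0.5, 0.31))  # screened patients needed at 31% frequency
```

A crossover design needs fewer subjects per comparison (each patient is their own control), which is why the abstract's crossover numbers stay under 200 screened.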
Bayesian Stratified Sampling to Assess Corpus Utility
Hochberg, Judith; Scovel, Clint; Thomas, Timothy; Hall, Sam
1998-01-01
This paper describes a method for asking statistical questions about a large text corpus. We exemplify the method by addressing the question, "What percentage of Federal Register documents are real documents, of possible interest to a text researcher or analyst?" We estimate an answer to this question by evaluating 200 documents selected from a corpus of 45,820 Federal Register documents. Stratified sampling is used to reduce the sampling uncertainty of the estimate from over 3100 documents to fewer than 1000. The stratification is based on observed characteristics of real documents, while the sampling procedure incorporates a Bayesian version of Neyman allocation. A possible application of the method is to establish baseline statistics used to estimate recall rates for information retrieval systems.
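The non-Bayesian core of the scheme the paper builds on is classical Neyman allocation: sample more heavily from large, high-variance strata. A minimal sketch, with hypothetical stratum sizes and standard deviations:

```python
# Neyman allocation: n_h proportional to N_h * s_h, rounded to preserve the total.
import numpy as np

def neyman_allocation(N_h, s_h, n_total):
    w = np.asarray(N_h, float) * np.asarray(s_h, float)
    raw = n_total * w / w.sum()
    n_h = np.floor(raw).astype(int)
    # Hand the leftover units to the largest fractional parts.
    for i in np.argsort(raw - n_h)[::-1][: n_total - n_h.sum()]:
        n_h[i] += 1
    return n_h

# Hypothetical strata: sizes and estimated within-stratum standard deviations.
alloc = neyman_allocation(N_h=[30000, 10000, 5820], s_h=[0.1, 0.4, 0.5], n_total=200)
print(alloc)
```

Note how the smallest stratum still receives a large share of the 200 documents because its estimated variance is high; that is exactly the mechanism that shrinks the sampling uncertainty in the paper.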
A. Martín Andrés
2015-01-01
The Mantel-Haenszel test is the most frequently used asymptotic test for analyzing stratified 2 × 2 tables. Its exact alternative is the test of Birch, which has recently been reconsidered by Jung. Both tests have a conditional origin: Pearson's chi-squared test and Fisher's exact test, respectively. Both tests also share the same drawback: the result of the global test (the stratified test) may not be compatible with the results of the individual tests (the test for each stratum). In this paper, we propose to carry out the global test using a multiple comparisons method (MC method), which does not have this disadvantage. By refining the method (the MCB method), an alternative to the Mantel-Haenszel and Birch tests may be obtained. The new MC and MCB methods have the advantage that they may be applied from an unconditional view, a methodology which until now has not been applied to this problem. We also propose some sample size calculation methods.
On the Impact of Bootstrap in Stratified Random Sampling
LIU Cheng; ZHAO Lian-wen
2009-01-01
In general, the accuracy of the mean estimator can be improved by stratified random sampling. In this paper, departing from purely empirical methods, we show that under some conditions the accuracy can be improved further through bootstrap resampling. The determination of sample size by the bootstrap method is also discussed, and a simulation is run to verify the accuracy of the proposed method. The simulation results show that the sample size based on bootstrapping is smaller than that based on the central limit theorem.
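The basic ingredient, resampling within strata, is simple to sketch. The code below (synthetic stratum data, assumed population weights) bootstraps the standard error of the stratified mean; it is a simplified reading of the idea, not the paper's sample-size procedure.

```python
# Stratified bootstrap: resample with replacement inside each stratum, never across.
import numpy as np

rng = np.random.default_rng(1)
strata = [rng.normal(10, 1, 40), rng.normal(20, 5, 60)]  # two observed samples
W = np.array([0.4, 0.6])                                 # population stratum weights

def stratified_mean(samples):
    return float(sum(w * s.mean() for w, s in zip(W, samples)))

def bootstrap_se(samples, B=2000):
    reps = [stratified_mean([rng.choice(s, size=len(s), replace=True)
                             for s in samples]) for _ in range(B)]
    return float(np.std(reps))

est = stratified_mean(strata)
se = bootstrap_se(strata)
print(f"estimate {est:.2f} +/- {se:.2f}")
```

Resampling inside strata keeps the stratification structure intact, so the bootstrap distribution reflects only the within-stratum sampling variability.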
Greil, Arthur L; McQuillan, Julia; Shreffler, Karina M; Johnson, Katherine M; Slauson-Blevins, Kathleen S
2011-12-01
Evidence of group differences in reproductive control and access to reproductive health care suggests the continued existence of "stratified reproduction" in the United States. Women of color are overrepresented among people with infertility but are underrepresented among those who receive medical services. The authors employ path analysis to uncover mechanisms accounting for these differences among black, Hispanic, Asian, and non-Hispanic white women using a probability-based sample of 2,162 U.S. women. Black and Hispanic women are less likely to receive services than other women. The enabling conditions of income, education, and private insurance partially mediate the relationship between race-ethnicity and receipt of services but do not fully account for the association at all levels of service. For black and Hispanic women, social cues, enabling conditions, and predisposing conditions contribute to disparities in receipt of services. Most of the association between race-ethnicity and service receipt is indirect rather than direct.
Global and Partial Errors in Stratified and Clustering Sampling
Giovanna Nicolini; Anna Lo Presti
2005-01-01
In this paper we split the sampling error incurred in stratified and cluster sampling (called the global error and measured by the variance of the estimator) into partial errors, each referring to a single stratum or cluster. In particular, for cluster sampling, we study the empirical distribution of the homogeneity coefficient, which is very important for establishing the partial errors.
Stratified source-sampling techniques for Monte Carlo eigenvalue analysis.
Mohamed, A.
1998-07-10
In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "Eigenvalue of the World" problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result relative to conventional source-sampling methods. However, this gain in reliability is substantially less than that observed in the model-problem results.
Sequential stratified sampling belief propagation for multiple targets tracking
[No author listed]
2006-01-01
Beyond the difficulties already present in single-target tracking, namely a highly non-linear and non-Gaussian observation process and state distribution, the presence of a large, varying number of targets and their interactions places further challenges on visual tracking. To overcome these difficulties, we formulate the multiple-target tracking problem in a dynamic Markov network consisting of three coupled Markov random fields that model the following: a field for the joint state of the multiple targets, one binary process for the existence of each individual target, and another binary process for occlusion between pairs of adjacent targets. By introducing two robust functions, we eliminate the two binary processes, and then apply a novel version of belief propagation, the sequential stratified sampling belief propagation algorithm, to obtain the maximum a posteriori (MAP) estimate in the dynamic Markov network. By using a stratified sampler, we incorporate bottom-up information provided by a learned detector (e.g. an SVM classifier) and belief information in the message updating. Other low-level visual cues (e.g. color and shape) can easily be incorporated in our multi-target tracking model to obtain better tracking results. Experimental results suggest that our method is comparable to the state-of-the-art multiple-target tracking methods in several test cases.
Hillson, Roger; Alejandre, Joel D; Jacobsen, Kathryn H; Ansumana, Rashid; Bockarie, Alfred S; Bangura, Umaru; Lamin, Joseph M; Stenger, David A
2015-01-01
There is a need for better estimators of population size in places that have undergone rapid growth and where collection of census data is difficult. We explored simulated estimates of urban population based on survey data from Bo, Sierra Leone, using two approaches: (1) stratified sampling from across 20 neighborhoods and (2) stratified single-stage cluster sampling of only four randomly-sampled neighborhoods. The stratification variables evaluated were (a) occupants per individual residence, (b) occupants per neighborhood, and (c) residential structures per neighborhood. For method (1), stratification variable (a) yielded the most accurate re-estimate of the current total population. Stratification variable (c), which can be estimated from aerial photography and zoning type verification, and variable (b), which could be ascertained by surveying a limited number of households, increased the accuracy of method (2). Small household-level surveys with appropriate sampling methods can yield reasonably accurate estimations of urban populations.
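The first approach, estimating a total from stratum-level residence surveys, reduces to multiplying known stratum sizes by surveyed means. The sketch below uses entirely synthetic counts; only the structure (strata of residential structures, small per-stratum occupant surveys) follows the paper.

```python
# Toy stratified estimate of an urban population from a residence survey.
import numpy as np

rng = np.random.default_rng(2)

# Known number of residential structures per stratum (e.g. from aerial imagery).
N_h = np.array([5000, 3000, 1000])
# True (unknown to the surveyor) mean occupants per residence in each stratum.
true_mu = np.array([4.0, 7.0, 12.0])

# Survey: visit 50 residences per stratum and record occupants.
surveys = [rng.poisson(mu, 50) for mu in true_mu]
pop_estimate = float(sum(N * s.mean() for N, s in zip(N_h, surveys)))

true_pop = float((N_h * true_mu).sum())
print(pop_estimate, true_pop)
```

The estimator is unbiased as long as the structure counts N_h are accurate, which is why the paper emphasizes stratification variables obtainable from aerial photography.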
Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y
2015-06-01
A new procedure of stratified sampling is proposed in order to establish an accurate estimation of Varroa destructor populations on the sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends regular-grid stratification in the case of a spatially structured process. Since the distribution of varroa mites on the sticky board is observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Improvements in relative error are presented on the basis of a large sample of simulated sticky boards (n = 20,000), which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement in the estimation of varroa mite numbers is then measured by the percentage of counts with an error greater than a given level.
Bases of Schur algebras associated to cellularly stratified diagram algebras
Bowman, C
2011-01-01
We examine homomorphisms between induced modules for a certain class of cellularly stratified diagram algebras, including the BMW algebra, Temperley-Lieb algebra, Brauer algebra, and (quantum) walled Brauer algebra. We define the "permutation" modules for these algebras; these are one-sided ideals which allow us to study the diagrammatic Schur algebras of Hartmann, Henke, Koenig and Paget. We construct bases of these Schur algebras in terms of modified tableaux. On the way we prove that the (quantum) walled Brauer algebra and the Temperley-Lieb algebra are both cellularly stratified and therefore have well-defined Specht filtrations.
Atta Ullah
2014-01-01
In the practical use of a stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of a finite population mean under a cost constraint. The allocation of sample sizes becomes complicated when more than one characteristic is observed for each selected unit in a sample. In many real-life situations, a linear cost function of the sample size n_h is not a good approximation to the actual cost of a sample survey when the traveling cost between selected units in a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical program. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
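For context, the classical cost-aware optimum that this paper generalises assumes a linear cost function sum(c_h * n_h) = C, under which the optimal allocation is n_h proportional to N_h * S_h / sqrt(c_h). The sketch below implements only that textbook baseline (with made-up numbers); the paper's nonlinear travel-cost formulation and goal-programming solution are not shown.

```python
# Classical optimal allocation under a linear cost constraint (baseline only).
import numpy as np

def optimal_allocation(N_h, S_h, c_h, budget):
    N_h, S_h, c_h = (np.asarray(a, float) for a in (N_h, S_h, c_h))
    # n_h = C * (N_h*S_h/sqrt(c_h)) / sum_k(N_k*S_k*sqrt(c_k)); total cost hits C.
    n_h = budget * (N_h * S_h / np.sqrt(c_h)) / (N_h * S_h * np.sqrt(c_h)).sum()
    return np.floor(n_h).astype(int)  # round down so the budget is never exceeded

c_h = [1.0, 4.0, 9.0]  # per-unit sampling cost by stratum
alloc = optimal_allocation(N_h=[400, 300, 300], S_h=[2.0, 4.0, 8.0],
                           c_h=c_h, budget=600)
print(alloc, (alloc * np.array(c_h)).sum())
```

Expensive strata are down-weighted by sqrt(c_h), which is the behaviour a realistic travel-cost model modifies further.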
A FAMILY OF ESTIMATORS FOR ESTIMATING POPULATION MEAN IN STRATIFIED SAMPLING UNDER NON-RESPONSE
Chaudhary, Manoj K.; Singh, Rajesh; Shukla, Rakesh K.; Kumar, Mukesh; Smarandache, Florentin
2015-01-01
Khoshnevisan et al. (2007) proposed a general family of estimators for the population mean using known values of some population parameters in simple random sampling. The objective of this paper is to propose a family of combined-type estimators in stratified random sampling, adapting the family of estimators proposed by Khoshnevisan et al. (2007) under non-response. The properties of the proposed family have been discussed. We have also obtained the expressions for the optimum sample sizes of the strata with respect to the cost of the survey. Results are also supported by numerical analysis.
A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin
Blaschek, Michael; Duttmann, Rainer
2015-04-01
The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand, depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design was applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from the density functions of two land-surface parameters, topographic wetness index and potential incoming solar radiation, derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times, and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. The selection of sample point locations has been done using
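The stratification step described above, quantile classes of continuous covariates crossed into combined strata, can be sketched as follows. The wetness-index and radiation values are synthetic, and for brevity the sketch crosses 3 x 5 = 15 classes rather than the paper's 30 (which also folded in four geological units).

```python
# Quantile stratification of two continuous covariates into combined classes.
import numpy as np

rng = np.random.default_rng(3)

def quantile_strata(x, n_classes):
    # Class edges at equal-probability quantiles of the covariate's distribution.
    edges = np.quantile(x, np.linspace(0, 1, n_classes + 1)[1:-1])
    return np.digitize(x, edges)  # labels 0 .. n_classes-1

twi = rng.gamma(4.0, 2.0, 10000)    # topographic wetness index (synthetic)
rad = rng.normal(1200, 150, 10000)  # potential solar radiation (synthetic)

# Cross a 3-class and a 5-class stratification into one combined label.
combined = quantile_strata(twi, 3) * 5 + quantile_strata(rad, 5)
counts = np.bincount(combined, minlength=15)
print(counts)
```

Equal-probability edges keep every class well populated, which is what makes the later "up to six polygons per class" selection feasible across the whole catchment.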
Petraki, Ioanna; Arkoudis, Chrisoula; Terzidis, Agis; Smyrnakis, Emmanouil; Benos, Alexis; Panagiotopoulos, Takis
2017-01-01
Background: Research on Roma health is fragmentary as major methodological obstacles often exist. Reliable estimates on vaccination coverage of Roma children at a national level and identification of risk factors for low coverage could play an instrumental role in developing evidence-based policies to promote vaccination in this marginalized population group. Methods: We carried out a national vaccination coverage survey of Roma children. Thirty Roma settlements, stratified by geographical region and settlement type, were included; 7–10 children aged 24–77 months were selected from each settlement using systematic sampling. Information on children’s vaccination coverage was collected from multiple sources. In the analysis we applied weights for each stratum, identified through a consensus process. Results: A total of 251 Roma children participated in the study. A vaccination document was presented for the large majority (86%). We found very low vaccination coverage for all vaccines. In 35–39% of children ‘minimum vaccination’ (DTP3 and IPV2 and MMR1) was administered, while 34–38% had received HepB3 and 31–35% Hib3; no child was vaccinated against tuberculosis in the first year of life. Better living conditions and primary care services close to Roma settlements were associated with higher vaccination indices. Conclusions: Our study showed inadequate vaccination coverage of Roma children in Greece, much lower than that of the non-minority child population. This serious public health challenge should be systematically addressed, or, amid continuing economic recession, the gap may widen. Valid national estimates on important characteristics of the Roma population can contribute to planning inclusion policies. PMID:27694159
Huang, S.R. [Feng Chia Univ., Taichung (Taiwan, Province of China). Electrical Engineering Dept.
1997-03-01
A combined Monte Carlo and optimum stratified sampling method is presented to better estimate the copper loss of a transmission system during a prespecified future period. The design seeks to enhance the precision of the copper-loss estimate while reducing computation time. The techniques included are optimum stratified sampling and separate ratio estimation. The optimum stratification rule aims to remove any judgemental input and to render the stratification process entirely mechanistic. The estimator, provided by the ratio statistics of the sample, avoids identification of a regression model and thus saves computation time. The effectiveness of the precision improvement is demonstrated.
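The separate ratio estimator named in the abstract has a compact form: within each stratum, scale a known auxiliary total X_h by the sample ratio ybar_h / xbar_h, then sum over strata. The sketch below uses synthetic numbers; treating y as "copper loss" and x as "line loading" is purely illustrative.

```python
# Separate ratio estimation across strata (synthetic data).
import numpy as np

rng = np.random.default_rng(4)

def separate_ratio_estimate(samples, X_h):
    # samples: list of (y, x) arrays per stratum; X_h: known auxiliary totals.
    return float(sum(X * y.mean() / x.mean() for (y, x), X in zip(samples, X_h)))

# Within each stratum, y is roughly proportional to x with a stratum-specific ratio.
samples, X_h, true_Y = [], [], 0.0
for ratio, X in [(0.02, 1e4), (0.05, 2e4)]:
    x = rng.uniform(50, 150, 30)
    y = ratio * x * rng.normal(1.0, 0.02, 30)
    samples.append((y, x))
    X_h.append(X)
    true_Y += ratio * X

est = separate_ratio_estimate(samples, X_h)
print(est, true_Y)
```

Because the ratio ybar/xbar is estimated directly from the sample, no regression model needs to be identified, which is the computational saving the abstract points to.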
Øren Anita
2008-12-01
Background: Prior studies on the impact of problem gambling in the family mainly include help-seeking populations with small numbers of participants. The objective of the present stratified probability sample study was to explore the epidemiology of problem gambling in the family in the general population. Methods: Men and women 16–74 years old, randomly selected from the Norwegian national population database, received an invitation to participate in this postal questionnaire study. The response rate was 36.1% (3,483/9,638). Given the lack of validated criteria, two survey questions ("Have you ever noticed that a close relative spent more and more money on gambling?" and "Have you ever experienced that a close relative lied to you about how much he/she gambles?") were extrapolated from the Lie/Bet Screen for pathological gambling. Respondents answering "yes" to both questions were defined as Concerned Significant Others (CSOs). Results: Overall, 2.0% of the study population were defined as CSOs. Young age, female gender, and divorced marital status were factors positively associated with being a CSO. CSOs often reported having experienced conflicts in the family related to gambling, worsening of the family's financial situation, and impaired mental and physical health. Conclusion: Problematic gambling behaviour not only affects the gambling individual but also has a strong impact on the quality of life of family members.
Jian-Gao Fan; Xiao-Bu Cai; Lui Li; Xing-Jian Li; Fei Dai; Jun Zhu
2008-01-01
AIM: To examine the relation of alcohol consumption to the prevalence of metabolic syndrome in Shanghai adults. METHODS: We performed a cross-sectional analysis of data from a randomized multistage stratified cluster sample of Shanghai adults, who were evaluated for alcohol consumption and each component of metabolic syndrome, using the adapted U.S. National Cholesterol Education Program criteria. Current alcohol consumption was defined as drinking alcohol more than once per month. RESULTS: The study population consisted of 3953 participants (1524 men) with a mean age of 54.3 ± 12.1 years. Among them, 448 subjects (11.3%) were current alcohol drinkers, including 405 males and 43 females. After adjustment for age and sex, the prevalence of current alcohol drinking and metabolic syndrome in the general population of Shanghai was 13.0% and 15.3%, respectively. Compared with nondrinkers, the prevalence of hypertriglyceridemia and hypertension was higher, while the prevalence of abdominal obesity, low serum high-density-lipoprotein cholesterol (HDL-C) and diabetes mellitus was lower, in subjects who consumed alcohol twice or more per month, with a trend toward a reduced prevalence of metabolic syndrome. Among the current alcohol drinkers, systolic blood pressure, HDL-C, fasting plasma glucose, and the prevalence of hypertriglyceridemia tended to increase with increased alcohol consumption, whereas low-density-lipoprotein cholesterol concentration and the prevalence of abdominal obesity, low serum HDL-C and metabolic syndrome tended to decrease. Moreover, these statistically significant differences were independent of gender and age. CONCLUSION: Current alcohol consumption is associated with a lower prevalence of metabolic syndrome irrespective of alcohol intake (g/d), and has a favorable influence on HDL-C, waist circumference, and possibly diabetes mellitus. However, alcohol intake increases the likelihood of hypertension, hypertriglyceridemia and hyperglycemia.
Bhatia, Triptish; Gettig, Elizabeth A; Gottesman, Irving I; Berliner, Jonathan; Mishra, N N; Nimgaonkar, Vishwajit L; Deshpande, Smita N
2016-12-01
Schizophrenia (SZ) has an estimated heritability of 64-88%, with the higher values based on twin studies. Conventionally, a family history of psychosis is the best individual-level predictor of risk, but reliable risk estimates are unavailable for Indian populations. Genetic, environmental, and epigenetic factors are equally important and should be considered when predicting risk in 'at risk' individuals. Our aim was to estimate risk based on an Indian schizophrenia participant's family history combined with selected demographic factors. To incorporate variables in addition to family history, and to stratify risk, we constructed a regression equation that included demographic variables in addition to family history. The equation was tested in two independent Indian samples: (i) an initial sample of SZ participants (N=128) with one sibling or offspring; (ii) a second, independent sample consisting of multiply affected families (N=138 families, with two or more sibs/offspring affected with SZ). The overall estimated risk was 4.31±0.27 (mean±standard deviation). In sample (i), there were 19 (14.8%) individuals in the high-risk group, 75 (58.6%) in the moderate-risk group and 34 (26.6%) in the above-average-risk group. In the validation sample, risks were distributed as: high (45%), moderate (38%) and above average (17%). Consistent risk estimates were obtained from both samples using the regression equation. Familial risk can be combined with demographic factors to estimate risk for SZ in India. If replicated, the proposed stratification of risk may be easier and more realistic for family members.
Predominantly Low Metallicities Measured in a Stratified Sample of Lyman Limit Systems at z=3.7
Glidden, Ana; Cooksey, Kathy L; Simcoe, Robert A; O'Meara, John M
2016-01-01
We analyzed metallicities for 33 z = 3.4-4.2 absorption-line systems with large neutral hydrogen column densities, drawn from an H I-selected sample of Lyman limit systems (LLSs) identified in Sloan Digital Sky Survey (SDSS) quasar spectra and stratified based on metal-line features. We obtained higher-resolution spectra with the Keck Echellette Spectrograph and Imager (ESI), selecting targets according to our stratification scheme in an effort to fully sample the LLS population metallicity distribution. We established a plausible range of H I column densities and measured the metal column densities (or limits) for ions of carbon, silicon, and aluminum. With simulations, we found ionization-corrected metallicities or upper limits, when appropriate. Interestingly, our ionization models were better constrained with enhanced α-to-aluminum abundances, with a median abundance ratio of [α/Al] = 0.3. Measured metallicities were generally low, ranging from [M/H] = -3 to -1.68, with even lower metallicities...
Tim A. Moore
2016-01-01
DOI: 10.17014/ijog.3.1.29-51. Stratified sampling of coal seams for petrographic analysis using block samples is a viable alternative to the standard methods of channel sampling and particulate pellet mounts. Although petrographic analysis of particulate pellets is employed widely, it is time consuming and does not allow variation within sampling units to be assessed, an important measure in any study, whether for paleoenvironmental reconstruction or for obtaining estimates of industrial attributes. Also, samples taken as intact blocks provide additional information, such as texture and botanical affinity, that cannot be gained using particulate pellets. Stratified sampling can be employed on both 'fine'- and 'coarse'-grained coal units. Fine-grained coals are defined as those coal intervals that do not contain vitrain bands greater than approximately 1 mm in thickness (as measured perpendicular to bedding). In fine-grained coal seams, a reasonably sized block sample (with a polished surface area of ~3 cm²) can be taken that encapsulates the macroscopic variability. However, for coarse-grained coals (vitrain bands >1 mm), a different system has to be employed in order to accurately account for the larger particles. Macroscopic point counting of vitrain bands can accurately account for those particles >1 mm within a coal interval. This point-counting method can be conducted using something as simple as string on a coal face with marked intervals greater than the largest particle expected to be encountered (although new technologies are being developed to capture this type of information digitally). Comparative analyses of particulate pellets and blocks on the same interval show less than 6% variation between the two sample types when blocks are recalculated to include macroscopic counts of vitrain. Therefore, even in coarse-grained coals, stratified sampling can be used effectively and representatively.
Leonardo, Lydia; Rivera, Pilarita; Saniel, Ofelia; Villacorte, Elena; Lebanan, May Antonnette; Crisostomo, Bobby; Hernandez, Leda; Baquilod, Mario; Erce, Edgardo; Martinez, Ruth; Velayudhan, Raman
2012-01-01
For the first time in the country, a national baseline prevalence survey using a well-defined sampling design, a stratified two-step systematic cluster sampling, was conducted from 2005 to 2008. The purpose of the survey was to stratify the provinces according to the prevalence of schistosomiasis (high, moderate, or low prevalence), which in turn would be used as the basis for the intervention program to be implemented. The national survey was divided into four phases. Results of the first two phases, conducted in Mindanao and the Visayas, were published in 2008. Data from the last two phases showed three provinces with prevalence rates higher than the endemic provinces surveyed in the first two phases, thus changing the overall ranking of endemic provinces at the national level. The age and sex distribution of schistosomiasis remained the same in Luzon and Maguindanao. Soil-transmitted and food-borne helminths were also recorded in these surveys. This paper deals with the results of the last two phases, done in Luzon and Maguindanao, and integrates all four phases in the discussion.
Sutor, Malinda M.; Dagg, Michael J.
2008-06-01
The effects of vertical sampling resolution on estimates of plankton biomass and grazing calculations were examined using data collected in two different areas with vertically stratified water columns. Data were collected from one site in the upwelling region off Oregon and from four sites in the Northern Gulf of Mexico, three within the Mississippi River plume and one in adjacent oceanic waters. Plankton were found to be concentrated in discrete layers with sharp vertical gradients at all the stations. Phytoplankton distributions were correlated with gradients in temperature and salinity, but microzooplankton and mesozooplankton distributions were not. Layers of zooplankton were sometimes collocated with layers of phytoplankton, but this was not always the case. Simulated calculations demonstrate that when averages are taken over the water column, or coarser scale vertical sampling resolution is used, biomass and mesozooplankton grazing and filtration rates can be greatly underestimated. This has important implications for understanding the ecological significance of discrete layers of plankton and for assessing rates of grazing and production in stratified water columns.
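The abstract's central point, that column averages underestimate rates driven by collocated thin layers, can be shown numerically. The sketch below assumes grazing is proportional to the local product of grazer and prey concentrations; the Gaussian profiles are synthetic, not the Oregon or Gulf of Mexico data.

```python
# Fine-resolution vs column-average estimates of a product-driven grazing rate.
import numpy as np

z = np.linspace(0, 50, 501)  # depth grid, m
dz = z[1] - z[0]
layer = np.exp(-((z - 20.0) ** 2) / (2 * 1.5 ** 2))  # thin layer at 20 m
phyto = 1 + 9 * layer  # prey concentration peaks in the layer
zoo = 1 + 9 * layer    # grazers collocated with the prey

true_rate = float((phyto * zoo).sum() * dz)            # fine-resolution integral
coarse_rate = float(phyto.mean() * zoo.mean() * 50.0)  # column-average product
print(true_rate / coarse_rate)  # > 1: coarse sampling underestimates grazing
```

The gap grows with the sharpness of the layers and vanishes when the two profiles are uncorrelated, matching the paper's observation that collocation of layers is what matters.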
Leonardo, Lydia; Rivera, Pilarita; Saniel, Ofelia; Villacorte, Elena; Lebanan, May Antonnette; Crisostomo, Bobby; Hernandez, Leda; Baquilod, Mario; Erce, Edgardo; Martinez, Ruth; Velayudhan, Raman
2012-01-01
For the first time in the country, a national baseline prevalence survey using a well-defined sampling design such as a stratified two-step systematic cluster sampling was conducted in 2005 to 2008. The purpose of the survey was to stratify the provinces according to prevalence of schistosomiasis such as high, moderate, and low prevalence which in turn would be used as basis for the intervention program to be implemented. The national survey was divided into four phases. Results of the first two phases conducted in Mindanao and the Visayas were published in 2008. Data from the last two phases showed three provinces with prevalence rates higher than endemic provinces surveyed in the first two phases thus changing the overall ranking of endemic provinces at the national level. Age and sex distribution of schistosomiasis remained the same in Luzon and Maguindanao. Soil-transmitted and food-borne helminthes were also recorded in these surveys. This paper deals with the results of the last 2 phases done in Luzon and Maguindanao and integrates all four phases in the discussion.
Khewal Bhupendra Kesur
2013-01-01
This paper examines the application of Latin Hypercube Sampling (LHS) and Antithetic Variables (AVs) to reduce the variance of estimated performance measures from microscopic traffic simulators. LHS and AVs allow for a more representative coverage of input probability distributions through stratification, reducing the standard error of simulation outputs. Two methods of implementation are examined: one where stratification is applied to the headways and routing decisions of individual vehicles, and another where vehicle counts and entry times are more evenly sampled. The proposed methods have wider applicability in general queuing systems. LHS is found to outperform AVs, and reductions of up to 71% in the standard error of estimates of traffic network performance relative to independent sampling are obtained. LHS allows for a reduction in the execution time of computationally expensive microscopic traffic simulators, as fewer simulations are required to achieve a fixed level of precision, with reductions of up to 84% in computing time noted on the test cases considered. The benefits of LHS are amplified for more congested networks and as the required level of precision increases.
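The variance-reduction effect of LHS can be demonstrated on a minimal example (a toy stand-in for the simulator: estimating the mean of exponentially distributed vehicle headways, with an assumed scale of 2.0):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_exp_headway(u, scale=2.0):
    # Transform uniforms to exponential headways via the inverse CDF, then average.
    return np.mean(-scale * np.log(1.0 - u))

n, reps = 100, 2000
mc_est, lhs_est = [], []
for _ in range(reps):
    # Independent sampling: n iid uniforms.
    u_mc = rng.random(n)
    # Latin Hypercube Sampling: one draw per stratum [i/n, (i+1)/n),
    # randomly permuted so the sample order carries no pattern.
    u_lhs = rng.permutation((np.arange(n) + rng.random(n)) / n)
    mc_est.append(mean_exp_headway(u_mc))
    lhs_est.append(mean_exp_headway(u_lhs))

print(np.std(lhs_est) < np.std(mc_est))  # LHS gives a smaller standard error
```

Because every stratum of the input distribution is covered exactly once, the spread of the LHS estimates across replications is far smaller than under independent sampling, which is the mechanism behind the standard-error reductions reported in the abstract.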
Node Redeployment Algorithm Based on Stratified Connected Tree for Underwater Sensor Networks
Jun Liu
2016-12-01
During underwater sensor network (UWSN) operation, node drift with the water environment causes network topology changes, so periodic examination and adjustment of node locations are needed to maintain good network monitoring quality for as long as possible. In this paper, a node redeployment algorithm based on a stratified connected tree for UWSNs is proposed. At every network adjustment moment, self-examination and adjustment of node locations are performed first: if a node is outside the monitored space, it returns along a straight line to the last location recorded in its memory. The network topology is then stratified into a connected tree rooted at the sink node by broadcasting ready information level by level, which improves the network connectivity rate. Finally, jointly considering the network coverage rate, the connectivity rate, and node movement distance, the sink node performs centralized optimization on the locations of the leaf nodes in the stratified connected tree. Simulation results show that the proposed redeployment algorithm not only keeps as many nodes as possible in the monitored space and maintains good network coverage and connectivity rates during network operation, but also reduces node movement distance during redeployment and prolongs the network lifetime.
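The level-by-level stratification step described above amounts to a breadth-first traversal from the sink. A minimal sketch (with a hypothetical six-node adjacency list, not the paper's acoustic model):

```python
from collections import deque

def stratify_connected_tree(adj, sink):
    """Stratify nodes into levels and build a spanning tree rooted at the sink.

    adj: dict node -> list of neighbors within communication range (assumed input).
    Returns (level, parent) dicts; nodes missing from `level` are unreachable,
    which is how a connectivity check would flag them for redeployment.
    """
    level, parent = {sink: 0}, {sink: None}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in level:            # first discovery fixes the stratum
                level[v] = level[u] + 1
                parent[v] = u
                queue.append(v)
    return level, parent

# Hypothetical topology; node 5 has drifted out of range of everyone.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2], 4: [2], 5: []}
level, parent = stratify_connected_tree(adj, sink=0)
print(level)   # strata: sink at level 0, neighbors at 1, and so on
```

Leaf nodes of the resulting tree (here 3 and 4) are the ones whose positions the sink would then optimize.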
Komada, Kenichi; Sugiyama, Masaya; Vongphrachanh, Phengta; Xeuatvongsa, Anonh; Khamphaphongphane, Bouaphan; Kitamura, Tomomi; Kiyohara, Tomoko; Wakita, Takaji; Oshitani, Hitoshi; Hachiya, Masahiko
2015-07-01
There is limited information regarding the prevalence of hepatitis B in Lao PDR, where the hepatitis disease burden is substantial. Thus, reliable seroprevalence data is needed for the disease, based on probability sampling. A stratified, multistage, cluster sampling survey of hepatitis B surface antigen (HBsAg) positivity among children aged 5-9 years and their mothers aged 15-45 years was conducted. Participants were selected randomly from the central region of Lao PDR via probability-proportional-to-size sampling. Blood samples were collected onto filter paper and subsequently analyzed using a chemiluminescent microparticle immunoassay. A total of 911 mother-and-child pairs were collected; the seroprevalence of HBsAg was estimated to be 2.1% (95% confidence interval 0.8-3.4%) among children and 4.1% (95% confidence interval 2.6-5.5%) in their mothers after taking into account the sampling design and the weight of each sample. The children's HBsAg positivity was positively associated with maternal infection and being born in a non-health facility, while the maternal infection status was not associated with any background characteristic. Lao PDR has a relatively lower HBsAg prevalence in the general population compared to surrounding countries. To ensure comparability to other countries and to future data, rapid field tests are recommended for a nationwide prevalence survey. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Study on Reform of College English Stratified Teaching Based on School-Based Characteristics
Yang, Liu
2012-01-01
Considering the status quo of college English teaching, we implement stratified teaching, which applies the idea of stratification to teaching objects, teaching management, the teaching process, and assessment and evaluation, and enables each student to develop to the greatest extent in the interactive practice of teaching and learning…
Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S
2015-11-10
In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed on a stratified random sample of the first sample. To estimate infection prevalence, we assumed conditional independence between the diagnostic tests and developed method-of-moments estimators based on the expected proportions of people with positive and negative results on both tests, which are functions of the tests' sensitivities, specificities, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as well as the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence. Copyright © 2015 John Wiley & Sons, Ltd.
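The closed-form solution has a simple shape in the special case where both tests are applied to the same sample (a simplification of the paper's two-stage design, shown here only to make the moment equations concrete). With specificity 100% and conditional independence, P(T1+) = π·se1, P(T2+) = π·se2, and P(T1+ and T2+) = π·se1·se2, which can be solved directly:

```python
def mom_prevalence(n, n1_pos, n2_pos, n_both_pos):
    """Method-of-moments estimates of prevalence and the two sensitivities.

    Assumes 100% specificity for both tests and conditional independence, so
    P(T1+) = pi*se1, P(T2+) = pi*se2, P(T1+ and T2+) = pi*se1*se2.
    Solving these three moment equations gives the closed form below.
    """
    p1, p2, p12 = n1_pos / n, n2_pos / n, n_both_pos / n
    pi = p1 * p2 / p12        # (pi*se1)(pi*se2) / (pi*se1*se2) = pi
    se1 = p12 / p2            # (pi*se1*se2) / (pi*se2) = se1
    se2 = p12 / p1
    return pi, se1, se2

# Synthetic counts consistent with pi = 0.10, se1 = 0.90, se2 = 0.80:
pi_hat, se1_hat, se2_hat = mom_prevalence(10000, 900, 800, 720)
print(pi_hat, se1_hat, se2_hat)
```

The estimator recovers the generating parameters exactly from noise-free counts; with real data, the paper's parametric bootstrap supplies the standard errors.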
Testing of RANS Turbulence Models for Stratified Flows Based on DNS Data
Venayagamoorthy, S. K.; Koseff, J. R.; Ferziger, J. H.; Shih, L. H.
2003-01-01
In most geophysical flows, turbulence occurs at the smallest scales, and one of the two most important additional physical phenomena to account for is stratification (the other being rotation). In this paper, the main objective is to investigate proposed changes to RANS turbulence models which include the effects of stratification more explicitly. These proposed changes were developed using a DNS database on stratified and sheared homogeneous turbulence developed by Shih et al. (2000) and are described more fully in Ferziger et al. (2003). The data generated by Shih et al. (2000) (hereinafter referred to as SKFR) are used to study the parameters in the k-ε model as a function of the turbulent Froude number, Frk. A modified version of the standard k-ε model based on the local turbulent Froude number is proposed. The proposed model is applied to a stratified open channel flow, a test case that differs significantly from the flows from which the modified parameters were derived. The turbulence modeling and results are discussed in the next two sections, followed by suggestions for future work.
Thiele, Uwe; Frastia, Lubor
2007-01-01
A dynamical model is proposed to describe the coupled decomposition and profile evolution of a free surface film of a binary mixture. An example is a thin film of a polymer blend on a solid substrate undergoing simultaneous phase separation and dewetting. The model is based on model-H describing the coupled transport of the mass of one component (convective Cahn-Hilliard equation) and momentum (Navier-Stokes-Korteweg equations) supplemented by appropriate boundary conditions at the solid substrate and the free surface. General transport equations are derived using phenomenological non-equilibrium thermodynamics for a general non-isothermal setting taking into account Soret and Dufour effects and interfacial viscosity for the internal diffuse interface between the two components. Focusing on an isothermal setting the resulting model is compared to literature results and its base states corresponding to homogeneous or vertically stratified flat layers are analysed.
Marcos Adami
2010-06-01
The objective of this work was to evaluate the performance of a probabilistic point-based stratified sampling model and to define an adequate sample size for estimating the area cultivated with soybean in the state of Rio Grande do Sul, Brazil. The area was stratified according to the percentage of soybean cultivated in each municipality of the state: less than 20%, from 20 to 40%, and more than 40%. Estimates obtained with six sample sizes were evaluated, resulting from the combination of three significance levels (10, 5 and 1%) and two sampling errors (5 and 2.5%), with 400 random draws performed for each sample size. The estimates were evaluated against the soybean area obtained from a reference thematic map derived from a careful automatic and visual classification of multitemporal TM/Landsat-5 and ETM+/Landsat-7 satellite images available for the 2000/2001 crop year. The soybean area in Rio Grande do Sul can be estimated by a probabilistic point-based stratified sampling model; the best estimate was obtained with the largest sample size (1,990 points), differing by only -0.14% from the reference-map estimate, with a coefficient of variation of 6.98%.
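The two design ingredients of such a survey, a per-stratum sample-size formula and a stratum-weighted estimator, can be sketched as follows (the stratum weights and soy fractions below are hypothetical, not the paper's values):

```python
import math

def sample_size(z, p, e):
    """Number of sample points needed to estimate a proportion p within
    sampling error e, at the confidence level implied by normal quantile z."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

# Hypothetical strata: (weight = share of total area, true soy fraction),
# mirroring the <20%, 20-40%, >40% municipality classes.
strata = [(0.5, 0.10), (0.3, 0.30), (0.2, 0.50)]

# Stratified estimate of the overall soy fraction, assuming the per-stratum
# sample proportions match the true fractions exactly (no sampling noise).
p_strat = sum(w * p for w, p in strata)
print(p_strat)   # weighted mean of the stratum fractions

# Worst-case (p = 0.5) size for a 2.5% sampling error at the 1% significance
# level (z = 2.576), in the spirit of the paper's largest design.
print(sample_size(2.576, 0.5, 0.025))
```

In practice each stratum's p is unknown, so the worst-case p = 0.5 (or a pilot estimate) is used, and points are allocated across strata before the stratified estimator is applied.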
Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino
2012-01-01
Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...
Bakri, Barbara; Weimer, Marco; Hauck, Gerrit; Reich, Gabriele
2015-11-01
Scope of the study was (1) to develop a lean quantitative calibration for real-time near-infrared (NIR) blend monitoring, which meets the requirements in early development of pharmaceutical products, and (2) to compare the prediction performance of this approach with the results obtained from stratified sampling using a sample thief in combination with off-line high pressure liquid chromatography (HPLC) and at-line near-infrared chemical imaging (NIRCI). Tablets were manufactured from powder blends and analyzed with NIRCI and HPLC to verify the real-time results. The model formulation contained 25% w/w naproxen as a cohesive active pharmaceutical ingredient (API), microcrystalline cellulose and croscarmellose sodium as cohesive excipients, and free-flowing mannitol. Five in-line NIR calibration approaches, all using the spectra from the end of the blending process as reference for PLS modeling, were compared in terms of selectivity, precision, prediction accuracy and robustness. High selectivity could be achieved with a "reduced" approach, i.e., an API- and time-saving approach (35% reduction in the amount of API), based on six concentration levels of the API, with three levels realized by three independent powder blends and the additional levels obtained by simply increasing the API concentration in these blends. Accuracy and robustness were further improved by combining this calibration set with a second independent data set comprising different excipient concentrations and reflecting different environmental conditions. The combined calibration model was used to monitor the blending process of independent batches. For this model formulation the target concentration of the API could be achieved within 3 min, indicating a short blending time. The in-line NIR approach was verified by stratified sampling HPLC and NIRCI results. All three methods revealed comparable results regarding blend end point determination. Differences in both mean API concentration and RSD values could be
Valverde Arias, Omar; Valencia, José; Saa Requejo, António; Garrido, Alberto
2017-04-01
Index-based insurance has become an efficient alternative for farmers to transfer risk. Currently, Ecuador has a conventional agricultural insurance for the rice crop, but an index-based insurance is needed that could cover more farmers against extreme events with catastrophic consequences. Such an index-based insurance could estimate crop losses from drought through the NDVI (Normalized Difference Vegetation Index). A first step was to establish homogeneous areas where rice is cultivated, based on Principal Component Analysis of soil properties (Valverde et al., 2016). Two main areas were found (f7 and f15), differentiated mainly by slope, texture and effective depth; these are the sites considered for sampling and studying the NDVI. MODIS images of 250x250 m resolution were selected for the study area, Babahoyo canton (Los Rios province, Ecuador), and the NDVI was calculated at the rice growth stage at both sites over several years. The number of samples in each site was proportional to the area of cultivated rice, and the distribution of NDVI values was calculated in each homogeneous zone (f7 and f15) across years. Several statistical analyses were performed to investigate the differences between the two sites. Results are discussed in the context of index-based insurance.
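The index underlying such an insurance product is a simple band ratio. A minimal sketch, with made-up reflectance values standing in for MODIS red and near-infrared bands:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Hypothetical surface reflectances for three rice pixels: healthy vegetation
# reflects strongly in the NIR and absorbs red light, so its NDVI is high.
nir = [0.45, 0.40, 0.20]
red = [0.05, 0.10, 0.15]
print(ndvi(nir, red))   # high for vigorous rice, low under drought stress
```

An index insurance trigger would then compare the per-zone NDVI distribution at the rice growth stage against a historical threshold rather than assessing each field individually.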
Li, Jiao Jiao; Kim, Kyungsook; Roohani-Esfahani, Seyed-Iman; Guo, Jin; Kaplan, David L; Zreiqat, Hala
2015-07-14
Significant clinical challenges encountered in the effective long-term treatment of osteochondral defects have inspired advancements in scaffold-based tissue engineering techniques to aid repair and regeneration. This study reports the development of a biphasic scaffold produced via a rational combination of silk fibroin and bioactive ceramic with stratified properties to satisfy the complex and diverse regenerative requirements of osteochondral tissue. Structural examination showed that the biphasic scaffold contained two phases with different pore morphologies to match the cartilage and bone segments of osteochondral tissue, which were joined at a continuous interface. Mechanical assessment showed that the two phases of the biphasic scaffold imitated the load-bearing behaviour of native osteochondral tissue and matched its compressive properties. In vitro testing showed that different compositions in the two phases of the biphasic scaffold could direct the preferential differentiation of human mesenchymal stem cells towards the chondrogenic or osteogenic lineage. By featuring simple and reproducible fabrication and a well-integrated interface, the biphasic scaffold strategy established in this study circumvented the common problems experienced with integrated scaffold designs and could provide an effective approach for the regeneration of osteochondral tissue.
Effect of Sampling Depth on Air-Sea CO2 Flux Estimates in River-Stratified Arctic Coastal Waters
Miller, L. A.; Papakyriakou, T. N.
2015-12-01
In summer-time Arctic coastal waters that are strongly influenced by river run-off, extreme stratification severely limits wind mixing, making it difficult to effectively sample the surface 'mixed layer', which can be as shallow as 1 m, from a ship. During two expeditions in southwestern Hudson Bay, off the Nelson, Hayes, and Churchill River estuaries, we confirmed that sampling depth has a strong impact on estimates of 'surface' pCO2 and calculated air-sea CO2 fluxes. We determined pCO2 in samples collected from 5 m, using a typical underway system on the ship's seawater supply; from the 'surface' rosette bottle, which was generally between 1 and 3 m; and using a Niskin bottle deployed at 1 m and just below the surface from a small boat away from the ship. Our samples confirmed that the error in pCO2 derived from typical ship-board versus small-boat sampling at a single station could be nearly 90 μatm, leading to errors in the calculated air-sea CO2 flux of more than 0.1 mmol/(m2s). Attempting to extrapolate such fluxes over the 6,000,000 km2 area of the Arctic shelves would generate an error approaching a gigamol CO2/s. Averaging the station data over a cruise still resulted in an error of nearly 50% in the total flux estimate. Our results have implications not only for the design and execution of expedition-based sampling, but also for placement of in-situ sensors. Particularly in polar waters, sensors are usually deployed on moorings, well below the surface, to avoid damage and destruction from drifting ice. However, to obtain accurate information on air-sea fluxes in these areas, it is necessary to deploy sensors on ice-capable buoys that can position the sensors in true 'surface' waters.
Chowdhury, Susmita; Henneman, Lidewij; Dent, Tom; Hall, Alison; Burton, Alice; Pharoah, Paul; Pashayan, Nora; Burton, Hilary
2015-06-09
There is growing evidence that inclusion of genetic information about known common susceptibility variants may enable population risk-stratification and personalized prevention for common diseases including cancer. This would require the inclusion of genetic testing as an integral part of individual risk assessment of an asymptomatic individual. Front line health professionals would be expected to interact with and assist asymptomatic individuals through the risk stratification process. In that case, additional knowledge and skills may be needed. Current guidelines and frameworks for genetic competencies of non-specialist health professionals place an emphasis on rare inherited genetic diseases. For common diseases, health professionals do use risk assessment tools but such tools currently do not assess genetic susceptibility of individuals. In this article, we compare the skills and knowledge needed by non-genetic health professionals, if risk-stratified prevention is implemented, with existing competence recommendations from the UK, USA and Europe, in order to assess the gaps in current competences. We found that health professionals would benefit from understanding the contribution of common genetic variations in disease risk, the rationale for a risk-stratified prevention pathway, and the implications of using genomic information in risk-assessment and risk management of asymptomatic individuals for common disease prevention.
A Sweeping based Kinematic Simulation for the Stably Stratified Surface Layer
Ghate, Aditya; Lele, Sanjiva
2014-11-01
A Kinematic Simulation (KS) for a statistically stationary and stably stratified surface layer is proposed. The Fourier coefficients are obtained by numerically solving the linearized NS equations with the Boussinesq approximation in spectral space, under the assumption of "rapid" deformation (RDT) due to combined shear and stratification. The linearization of RDT, which is unrealistic for the surface layer, is rectified using Mann's (JFM, 1994) idea of wavenumber-dependent eddy lifetime. The input parameters required by the KS are estimated using either Monin-Obukhov theory or an appropriate second-moment closure. In order to overcome the frozen-turbulence hypothesis made in the Mann model, we incorporate inter-scale "sweeping" of eddies following the ideas of Fung et al. (JFM, 1992), along with temporal decorrelation associated with the natural eddy time scale. The solenoidal velocity field generated by the KS allows inclusion of a wide range of scales with correct space-time correlations, making it ideal for investigating particle dispersion in a stably stratified environment; it can also serve as inflow for the study of wind farm-PBL interactions. The effect of varying Obukhov length will be discussed by analyzing the frozen Eulerian spectra and Lagrangian particle dispersion.
Rosanowski, S M; Cogger, N; Rogers, C W; Benschop, J; Stevenson, M A
2012-12-01
We conducted a cross-sectional survey to determine the demographic characteristics of non-commercial horses in New Zealand. A sampling frame of properties with non-commercial horses was derived from the national farms database, AgriBase™. Horse properties were stratified by property size and a generalised random-tessellated stratified (GRTS) sampling strategy was used to select properties (n=2912) to take part in the survey. The GRTS sampling design allowed for the selection of properties that were spatially balanced relative to the distribution of horse properties throughout the country. The registered decision maker of the property, as identified in AgriBase™, was sent a questionnaire asking them to describe the demographic characteristics of horses on the property, including the number and reason for keeping horses, as well as information about other animals kept on the property and the proximity of boundary neighbours with horses. The response rate to the survey was 38% (1044/2912) and the response rate was not associated with property size or region. A total of 5322 horses were kept for recreation, competition, racing, breeding, stock work, or as pets. The reasons for keeping horses and the number and class of horses varied significantly between regions and by property size. Of the properties sampled, less than half kept horses that could have been registered with Equestrian Sports New Zealand or either of the racing codes. Of the respondents that reported knowing whether their neighbours had horses, 58.6% (455/776) of properties had at least one boundary neighbour that kept horses. The results of this study have important implications for New Zealand, which has an equine population that is naïve to many equine diseases considered endemic worldwide. The ability to identify, and apply accurate knowledge of the population at risk to infectious disease control strategies would lead to more effective strategies to control and prevent disease spread during an
Fritjof Luethje
2017-01-01
Very high spatial resolution (VHSR) stereo-imagery-derived digital surface models (DSMs) can be used to generate digital elevation models (DEMs). Filtering algorithms and triangular irregular network (TIN) densification are the most common approaches, and most filter-based techniques focus on image smoothing. We propose a new approach which makes use of integrated object-based image analysis (OBIA) techniques. An initial land cover classification is followed by stratified detection of land cover ground point samples, using object-specific features to enhance the sampling quality. The detected ground point samples serve as the basis for the interpolation of the DEM. A regional uncertainty index (RUI) is calculated to express the quality of the generated DEM with regard to the DSM, based on the number of samples per land cover object. The results of our approach are compared to a high-resolution Light Detection and Ranging (LiDAR) DEM, and a high level of agreement is observed, especially for non-vegetated and scarcely vegetated areas. Results show that the accuracy of the DEM is highly dependent on the quality of the initial DSM and, in accordance with the RUI, differs between the different land cover classes.
Hussey, Michael A; Koch, Gary G; Preisser, John S; Saville, Benjamin R
2016-01-01
Time-to-event or dichotomous outcomes in randomized clinical trials often have analyses using the Cox proportional hazards model or conditional logistic regression, respectively, to obtain covariate-adjusted log hazard (or odds) ratios. Nonparametric Randomization-Based Analysis of Covariance (NPANCOVA) can be applied to unadjusted log hazard (or odds) ratios estimated from a model containing treatment as the only explanatory variable. These adjusted estimates are stratified population-averaged treatment effects and only require a valid randomization to the two treatment groups and avoid key modeling assumptions (e.g., proportional hazards in the case of a Cox model) for the adjustment variables. The methodology has application in the regulatory environment where such assumptions cannot be verified a priori. Application of the methodology is illustrated through three examples on real data from two randomized trials.
Lapeyrouse, L M; Morera, O; Heyman, J M C; Amaya, M A; Pingitore, N E; Balcazar, H
2012-04-01
Examination of border-specific characteristics such as trans-border mobility and transborder health service illuminates the heterogeneity of border Hispanics and may provide greater insight toward understanding differential health behaviors and status among these populations. In this study, we create a descriptive profile of the concept of trans-border mobility by exploring the relationship between mobility status and a series of demographic, economic and socio-cultural characteristics among mobile and non-mobile Hispanics living in the El Paso-Juarez border region. Using a two-stage stratified random sampling design, bilingual interviewers collected survey data from border residents (n = 1,002). Findings show that significant economic, cultural, and behavioral differences exist between mobile and non-mobile respondents. While non-mobile respondents were found to have higher social economic status than their mobile counterparts, mobility across the border was found to offer less acculturated and poorer Hispanics access to alternative sources of health care and other services.
Sampling Based Average Classifier Fusion
Jian Hou
2014-01-01
Although many classifier fusion algorithms have been proposed in the literature, average fusion is almost always selected as the baseline for comparison, and little has been done to explore the potential of average fusion or to propose a better baseline. In this paper we empirically investigate the behavior of soft labels and classifiers in average fusion. We find that, by proper sampling of soft labels and classifiers, the average fusion performance can be evidently improved. This result presents sampling-based average fusion as a better baseline; that is, a newly proposed classifier fusion algorithm should at least perform better than this baseline in order to demonstrate its effectiveness.
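Average fusion itself is a one-liner over the ensemble's soft labels; the sampled variant simply averages over a chosen subset of classifiers. A minimal sketch (the ensemble outputs below are made up, and random subset selection stands in for whatever sampling scheme the paper proposes):

```python
import numpy as np

rng = np.random.default_rng(1)

def average_fusion(soft_labels):
    """Average the classifiers' soft labels and predict the argmax class.

    soft_labels: array of shape (n_classifiers, n_samples, n_classes).
    """
    return np.mean(soft_labels, axis=0).argmax(axis=1)

# Hypothetical ensemble: 3 classifiers, 2 samples, 3 classes.
soft = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]],
    [[0.5, 0.4, 0.1], [0.2, 0.5, 0.3]],
    [[0.3, 0.3, 0.4], [0.1, 0.2, 0.7]],
])

# Plain average fusion over all classifiers...
print(average_fusion(soft))

# ...and a sampled variant that fuses only a random subset of classifiers,
# the kind of selection the abstract argues can improve on the baseline.
subset = rng.choice(len(soft), size=2, replace=False)
print(average_fusion(soft[subset]))
```

The baseline question is then whether a proposed fusion rule beats the best such sampled average, not merely the all-classifier average.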
Efficacy of the smartphone-based glucose management application stratified by user satisfaction.
Kim, Hun-Sung; Choi, Wona; Baek, Eun Kyoung; Kim, Yun A; Yang, So Jung; Choi, In Young; Yoon, Kun-Ho; Cho, Jae-Hyoung
2014-06-01
We aimed to assess the efficacy of the smartphone-based health application for glucose control and patient satisfaction with the mobile network system used for glucose self-monitoring. Thirty-five patients were provided with a smartphone device, and self-measured blood glucose data were automatically transferred to the medical staff through the smartphone application over the course of 12 weeks. The smartphone user group was divided into two subgroups (more satisfied group vs. less satisfied group) based on the results of questionnaire surveys regarding satisfaction, comfort, convenience, and functionality, as well as their willingness to use the smartphone application in the future. The control group was set up via a review of electronic medical records by group matching in terms of age, sex, doctor in charge, and glycated hemoglobin (HbA1c). Both the smartphone group and the control group showed a tendency towards a decrease in the HbA1c level after 3 months (7.7%±0.7% to 7.5%±0.7%, P=0.077). In the more satisfied group (n=27), the HbA1c level decreased from 7.7%±0.8% to 7.3%±0.6% (P=0.001), whereas in the less satisfied group (n=8), the HbA1c result increased from 7.7%±0.4% to 8.1%±0.5% (P=0.062), showing values much worse than that of the no-smartphone control group (from 7.7%±0.5% to 7.7%±0.7%, P=0.093). In addition to medical feedback, device and network-related patient satisfaction play a crucial role in blood glucose management. Therefore, for the smartphone app-based blood glucose monitoring to be effective, it is essential to provide the patient with a well-functioning high quality tool capable of increasing patient satisfaction and willingness to use.
Eide, Helene K; Šaltytė Benth, Jūratė; Sortland, Kjersti; Halvorsen, Kristin; Almendingen, Kari
2015-01-01
There is a lack of accurate prevalence data on undernutrition and the risk of undernutrition among the hospitalised elderly in Europe and Norway. We aimed at estimating the prevalence of nutritional risk by using stratified sampling along with adequate power calculations. A cross-sectional study was carried out in the period 2011 to 2013 at a university hospital in Norway. Second-year nursing students in acute care clinical studies in twenty hospital wards screened non-demented elderly patients for nutritional risk, employing the Nutritional Risk Screening 2002 (NRS2002) form. In total, 508 patients (48·8 % women and 51·2 % men) with a mean age of 79·6 (sd 6·4) years were screened by the students. Mean BMI was 24·9 (sd 4·9) kg/m², and the patients had been hospitalised for on average 5·3 (sd 6·3) d. WHO's BMI cut-off values identified 6·5 % as underweight, 48·0 % as normal weight and 45·5 % as overweight. Patients nutritionally at risk had been in hospital longer and had lower average weight and BMI compared with those not at risk (all P<0·05). The prevalence of nutritional risk was estimated to be 45·4 % (95 % CI 41·7 %, 49·0 %), ranging between 20·0 and 65·0 % on different hospital wards. The present results show that the prevalence of nutritional risk among elderly patients without dementia is high, suggesting that a large proportion of the hospitalised elderly are in need of nutritional treatment.
A method for selecting training samples based on camera response
Zhang, Leihong; Li, Bei; Pan, Zilan; Liang, Dong; Kang, Yi; Zhang, Dawei; Ma, Xiuhua
2016-09-01
In the process of spectral reflectance reconstruction, sample selection plays an important role in the accuracy of the constructed model and in reconstruction effects. In this paper, a method for training sample selection based on camera response is proposed. It has been proved that the camera response value has a close correlation with the spectral reflectance. Consequently, in this paper we adopt the technique of drawing a sphere in camera response value space to select the training samples which have a higher correlation with the test samples. In addition, the Wiener estimation method is used to reconstruct the spectral reflectance. Finally, we find that the method of sample selection based on camera response value has the smallest color difference and root mean square error after reconstruction compared to the method using the full set of Munsell color charts, the Mohammadi training sample selection method, and the stratified sampling method. Moreover, the goodness of fit coefficient of this method is also the highest among the four sample selection methods. Taking all the factors mentioned above into consideration, the method of training sample selection based on camera response value enhances the reconstruction accuracy from both the colorimetric and spectral perspectives.
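The Wiener estimation mentioned above is a linear estimator built from training covariances. A minimal NumPy sketch under assumed synthetic data (the 31-band reflectances, the 3-channel "camera" matrix M, and all sizes are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 200 samples, 31-band reflectances, 3-channel responses.
n_train, n_bands, n_channels = 200, 31, 3
R_train = rng.random((n_train, n_bands))   # spectral reflectances (rows = samples)
M = rng.random((n_channels, n_bands))      # simulated camera mapping, for illustration only
C_train = R_train @ M.T                    # camera response values

# Wiener estimator: W = Cov(R, C) @ pinv(Cov(C, C)), built from the training set.
Rc = R_train - R_train.mean(axis=0)
Cc = C_train - C_train.mean(axis=0)
K_rc = Rc.T @ Cc / n_train
K_cc = Cc.T @ Cc / n_train
W = K_rc @ np.linalg.pinv(K_cc)

# Reconstruct the reflectance of a test sample from its camera response alone.
r_true = rng.random(n_bands)
c_test = M @ r_true
r_hat = R_train.mean(axis=0) + W @ (c_test - C_train.mean(axis=0))
rmse = np.sqrt(np.mean((r_hat - r_true) ** 2))
```

Selecting training samples near the test response (the "sphere in camera response space" of the paper) would simply restrict `R_train`/`C_train` to the neighbours of `c_test` before building `W`.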
伍长春; 张润楚
2006-01-01
In stratified survey sampling, sometimes we have complete auxiliary information. One of the fundamental questions is how to effectively use the complete auxiliary information at the estimation stage. In this paper, we extend the model-calibration method to obtain estimators of the finite population mean by using complete auxiliary information from stratified sampling survey data. We show that the resulting estimators effectively use auxiliary information at the estimation stage and possess a number of attractive features such as asymptotically design-unbiased irrespective of the working model and approximately model-unbiased under the model. When a linear working model is used, the resulting estimators reduce to the usual calibration estimator (or GREG).
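A minimal sketch of the GREG idea in stratified sampling: correct the plain stratified mean with the known population mean of the auxiliary variable. The population, strata sizes, and the through-the-origin working model below are all hypothetical, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical finite population in 3 strata; x is known for every unit.
strata_sizes = [1000, 2000, 1500]
x = [rng.gamma(2.0, 1.0 + h, size=N) for h, N in enumerate(strata_sizes)]
y = [2.0 * xh + rng.normal(0, 0.5, xh.size) for xh in x]  # working linear model

N_total = sum(strata_sizes)
X_bar = sum(xh.sum() for xh in x) / N_total               # complete auxiliary information

# Stratified simple random sampling, n_h units per stratum.
n_h = [50, 80, 60]
W_h = [N / N_total for N in strata_sizes]
idx = [rng.choice(N, size=n, replace=False) for N, n in zip(strata_sizes, n_h)]
ys = np.concatenate([yh[i] for yh, i in zip(y, idx)])
xs = np.concatenate([xh[i] for xh, i in zip(x, idx)])
w = np.concatenate([np.full(n, Wh / n) for Wh, n in zip(W_h, n_h)])

y_st = np.sum(w * ys)                             # plain stratified (HT-type) mean
x_st = np.sum(w * xs)
B_hat = np.sum(w * xs * ys) / np.sum(w * xs * xs) # weighted slope of the working model
y_greg = y_st + B_hat * (X_bar - x_st)            # GREG: auxiliary-mean correction
```

Because the correction term uses the exact population mean `X_bar`, `y_greg` tracks the true mean much more tightly than `y_st` whenever the working model fits.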
Theory of sampling and its application in tissue based diagnosis
Kayser Gian
2009-02-01
Background: A general theory of sampling and its application in tissue-based diagnosis is presented. Sampling is defined as extraction of information from certain limited spaces and its transformation into a statement or measure that is valid for the entire (reference) space. The procedure should be reproducible in time and space, i.e. give the same results when applied under similar circumstances. Sampling includes two different aspects, the procedure of sample selection and the efficiency of its performance. The practical performance of sample selection focuses on the search for localization of specific compartments within the basic space, and the search for presence of specific compartments. Methods: When a sampling procedure is applied in diagnostic processes, two different procedures can be distinguished: (I) the evaluation of the diagnostic significance of a certain object, which is the probability that the object can be grouped into a certain diagnosis, and (II) the probability to detect these basic units. Sampling can be performed without or with external knowledge, such as size of searched objects, neighbourhood conditions, spatial distribution of objects, etc. If the sample size is much larger than the object size, the application of a translation-invariant transformation results in Krige's formula, which is widely used in the search for ores. Usually, sampling is performed in a series of area (space) selections of identical size. The size can be defined in relation to the reference space or according to interspatial relationship. The first method is called random sampling, the second stratified sampling. Results: Random sampling does not require knowledge about the reference space, and is used to estimate the number and size of objects. Estimated features include area (volume) fraction and numerical, boundary and surface densities. Stratified sampling requires the knowledge of objects (and their features) and evaluates spatial features in relation to
Hanasoge, S M; Gizon, L
2010-01-01
Perfectly matched layers are a very efficient and accurate way to absorb waves in media. We present a stable, unsplit, convolutional perfectly matched layer formulation designed for the linearized stratified Euler equations. However, the technique as applied to the magnetohydrodynamic (MHD) equations requires the use of a sponge, which, despite placing the perfectly matched status in question, is still highly efficient at absorbing outgoing waves. We study solutions of the equations in the backdrop of models of linearized wave propagation in the Sun. We test the numerical stability of the schemes by integrating the equations over a large number of wave periods.
Tchoubi, Sébastien; Sobngwi-Tambekou, Joëlle; Noubiap, Jean Jacques N.; Asangbeh, Serra Lem; Nkoum, Benjamin Alexandre; Sobngwi, Eugene
2015-01-01
Background Childhood obesity is one of the most serious public health challenges of the 21st century. This study assessed the prevalence and risk factors of overweight and obesity among children aged 6 months to 5 years in Cameroon in 2011. Methods Four thousand five hundred and eighteen children (2205 boys and 2313 girls) aged 6 to 59 months were sampled in the 2011 Demographic Health Survey (DHS) database. Body Mass Index (BMI) z-scores based on the WHO 2006 reference population were chosen to estimate overweight (BMI z-score > 2) and obesity (BMI z-score > 3). Regression analyses were performed to investigate risk factors of overweight/obesity. Results The prevalence of overweight and obesity was 8% (1.7% for obesity alone). Boys were more affected by overweight than girls, with prevalences of 9.7% and 6.4% respectively. The highest prevalence of overweight was observed in the Grassfield area (including people living in West and North-West regions) (15.3%). Factors that were independently associated with overweight and obesity included: having an overweight mother (adjusted odds ratio (aOR) = 1.51; 95% CI 1.15 to 1.97) or obese mother (aOR = 2.19; 95% CI 1.55 to 3.07), compared to having a normal-weight mother; high birth weight (aOR = 1.69; 95% CI 1.24 to 2.28) compared to normal birth weight; male gender (aOR = 1.56; 95% CI 1.24 to 1.95); low birth rank (aOR = 1.35; 95% CI 1.06 to 1.72); being aged between 13–24 months (aOR = 1.81; 95% CI 1.21 to 2.66) and 25–36 months (aOR = 2.79; 95% CI 1.93 to 4.13) compared to being aged 45 to 49 months; and living in the Grassfield area (aOR = 2.65; 95% CI 1.87 to 3.79) compared to living in the Forest area. Muslim religion appeared as a protective factor (aOR = 0.67; 95% CI 0.46 to 0.95) compared to Christian religion. Conclusion This study underlines a high prevalence of early childhood overweight with significant disparities between ecological areas of Cameroon. Risk factors of overweight included high maternal BMI, high birth weight, male
Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling
Saeed Mian Qaisar
2009-01-01
Recent sophistication in the areas of mobile systems and sensor networks demands more and more processing resources. In order to maintain system autonomy, energy saving is becoming one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal are focused on improving the embedded systems design and the battery technology, but very few studies aim to exploit the input signal's time-varying nature. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the input signal's local characteristics. It is done by completely rethinking the processing chain, adopting a non-conventional sampling scheme and adaptive rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by online analysis of the input signal variations. Indeed, the principle is to intelligently exploit the signal's local characteristics, which are usually never considered, to filter only the relevant signal parts by employing filters of the relevant order. This idea leads towards a drastic gain in computational efficiency and hence in processing power when compared to classical techniques.
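The core of level-crossing sampling is that a sample is taken only when the signal crosses one of a set of predefined amplitude levels, so idle signal segments produce no samples at all. A minimal sketch (hypothetical test signal and levels, not the paper's filtering chain):

```python
import math

def level_crossing_sample(signal, dt, levels):
    """Return (time, level) pairs taken only where the signal crosses a level."""
    samples = []
    for i in range(1, len(signal)):
        a, b = signal[i - 1], signal[i]
        for L in levels:
            if (a - L) * (b - L) < 0:              # sign change => level L was crossed
                t = (i - 1 + (L - a) / (b - a)) * dt  # linear interpolation of instant
                samples.append((t, L))
    return samples

# A bursty test signal: silence, a 25 Hz tone burst, silence again.
dt = 1e-3
sig = [0.0] * 200 \
    + [math.sin(2 * math.pi * 25 * k * dt) for k in range(200)] \
    + [0.0] * 200
levels = [-0.75, -0.25, 0.25, 0.75]
samples = level_crossing_sample(sig, dt, levels)
# All samples fall inside the active burst (roughly t = 0.2 s to 0.4 s);
# the silent segments contribute nothing, which is the energy-saving idea.
```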
Silicon based ultrafast optical waveform sampling
Ji, Hua; Galili, Michael; Pu, Minhao
2010-01-01
A 300 nm × 450 nm × 5 mm silicon nanowire is designed and fabricated for a four-wave-mixing-based non-linear optical gate. Based on this silicon nanowire, an ultra-fast optical sampling system is successfully demonstrated using a free-running fiber laser with a carbon-nanotube-based mode-locker...
Reachability Analysis of Sampling Based Planners
Geraerts, R.J.; Overmars, M.H.
2005-01-01
In the last decade, sampling-based planners like the Probabilistic Roadmap Method have proved to be successful in solving complex motion planning problems. We give a reachability-based analysis for these planners which leads to a better understanding of the success of the approach and enhancements of t
Koch, David E; Mohler, Rhett L; Goodin, Douglas G
2007-11-01
Landscape epidemiology has made significant strides recently, driven in part by increasing availability of land cover data derived from remotely-sensed imagery. Using an example from a study of land cover effects on hantavirus dynamics at an Atlantic Forest site in eastern Paraguay, we demonstrate how automated classification methods can be used to stratify remotely-sensed land cover for studies of infectious disease dynamics. For this application, it was necessary to develop a scheme that could yield both land cover and land use data from the same classification. Hypothesizing that automated discrimination between classes would be more accurate using an object-based method compared to a per-pixel method, we used a single Landsat Enhanced Thematic Mapper+ (ETM+) image to classify land cover into eight classes using both per-pixel and object-based classification algorithms. Our results show that the object-based method achieves 84% overall accuracy, compared to only 43% using the per-pixel method. Producer's and user's accuracies for the object-based map were higher for every class compared to the per-pixel classification. The Kappa statistic was also significantly higher for the object-based classification. These results show the importance of using image information from domains beyond the spectral domain, and also illustrate the importance of object-based techniques for remote sensing applications in epidemiological studies.
Fluttering in Stratified Flows
Lam, Try; Vincent, Lionel; Kanso, Eva
2016-11-01
The descent motion of heavy objects under the influence of gravitational and aerodynamic forces is relevant to many branches of engineering and science. Examples range from estimating the behavior of re-entry space vehicles to studying the settlement of marine larvae and its influence on underwater ecology. The behavior of regularly shaped objects freely falling in homogeneous fluids is relatively well understood. For example, the complex interaction of a rigid coin with the surrounding fluid will cause it to either fall steadily, flutter, tumble, or be chaotic. Less is known about the effect of density stratification on the descent behavior. Here, we experimentally investigate the descent of discs in both pure water and in linearly salt-stratified fluids where the density is varied from 1.0 to 1.14 times that of water and the Brunt-Väisälä frequency is 1.7 rad/sec. The findings are relevant to robots for space exploration and underwater missions.
Hou, Wen-Hsuan; Ni, Cheng-Hua; Li, Chung-Yi; Tsai, Pei-Shan; Lin, Li-Fong; Shen, Hsiu-Nien
2015-06-01
To determine the survival of patients with stroke for up to 10 years after a first-time stroke and to investigate whether stroke rehabilitation within the first 3 months reduced long-term mortality in these patients. We used the medical claims data for a random sample of 1 million insured Taiwanese registered in the year 2000. A total of 7767 patients were admitted for a first-time stroke between 2000 and 2005; 1285 (16.7%) received rehabilitation within the first 3 months after stroke admission. The other 83.3% of patients served as a comparison cohort. A Cox proportional hazards model was used to estimate the relative risk of mortality in relation to the rehabilitation intervention. In all, 181 patients with rehabilitation and 1123 controls died, representing respective mortality rates of 25.0 and 32.7 per 1000 person-years. Rehabilitation was significantly associated with a lower risk of mortality (hazard ratio .68, 95% confidence interval .58-.79). Such a beneficial effect tended to be more obvious as the frequency of rehabilitation increased. Stroke rehabilitation initiated in the first 3 months after a stroke admission may significantly reduce the risk of mortality for 10 years after the stroke.
Stratified Medicine and Reimbursement Issues
Hans-Joerg Fugel
2012-10-01
Stratified Medicine (SM) has the potential to target patient populations who will most benefit from a therapy while reducing unnecessary health interventions associated with side effects. The link between clinical biomarkers/diagnostics and therapies provides new opportunities for value creation to strengthen the value proposition to pricing and reimbursement (P&R) authorities. However, the introduction of SM challenges current reimbursement schemes in many EU countries and the US, as different P&R policies have been adopted for drugs and diagnostics. Also, there is a lack of a consistent process for value assessment of more complex diagnostics in these markets. New, innovative approaches and more flexible P&R systems are needed to reflect the added value of diagnostic tests and to stimulate investments in new technologies. Yet, the framework for access of diagnostic-based therapies still requires further development, while setting the right incentives and appropriately aligning stakeholders' interests when realizing long-term patient benefits. This article addresses the reimbursement challenges of SM approaches in several EU countries and the US, outlining some options to overcome existing reimbursement barriers for stratified medicine.
Explaining Adaptive Radial-Based Direction Sampling
L. Bauwens (Luc); C.S. Bos (Charles); H.K. van Dijk (Herman); R.D. van Oest (Rutger)
2003-01-01
In this short paper we summarize the computational steps of Adaptive Radial-Based Direction Sampling (ARDS), which can be used for Bayesian analysis of ill-behaved target densities. We consider one simulation experiment in order to illustrate the good performance of ARDS relative to the
Stably stratified magnetized stars in general relativity
Yoshida, Shijun; Shibata, Masaru
2012-01-01
We construct magnetized stars composed of a fluid stably stratified by entropy gradients in the framework of general relativity, assuming ideal magnetohydrodynamics and employing a barotropic equation of state. We first revisit the basic equations for describing stably stratified stationary axisymmetric stars containing both poloidal and toroidal magnetic fields. As sample models, the magnetized stars considered by Ioka and Sasaki (2004), inside which the magnetic fields are confined, are modified into stably stratified ones. The magnetized stars newly constructed in this study are believed to be more stable than the existing relativistic models because they have both poloidal and toroidal magnetic fields with comparable strength, and magnetic buoyancy instabilities near the surface of the star, which can be stabilized by the stratification, are suppressed.
Yang, Tae Un; Cheong, Hee Jin; Song, Joon Young; Lee, Jin Soo; Wie, Seong-Heon; Kim, Young Keun; Choi, Won Suk; Lee, Jacob; Jeong, Hye Won; Kim, Woo Joo
2014-01-01
Objectives This study aims to identify clinical case definitions of influenza with higher accuracy in patients stratified by age group and influenza activity using hospital-based surveillance system. Methods In seven tertiary hospitals across South Korea during 2011–2012 influenza season, respiratory specimens were obtained from patients presenting an influenza-like illness (ILI), defined as having fever plus at least one of following symptoms: cough, sore throat or rhinorrhea. Influenza was confirmed by reverse transcriptase-polymerase chain reaction. We performed multivariate logistic regression analyses to identify clinical variables with better relation with laboratory-confirmed influenza, and compared the accuracy of combinations. Results Over the study period, we enrolled 1417 patients, of which 647 had laboratory-confirmed influenza. Patients with cough, rhinorrhea, sore throat or headache were more likely to have influenza (p<0.05). The most accurate criterion across the study population was the combination of cough, rhinorrhea, sore throat and headache (sensitivity 71.3%, specificity 60.1% and AUROC 0.66). The combination of rhinorrhea, sore throat and sputum during the peak influenza activity period in the young age group showed higher accuracy than that using the whole population (sensitivity 89.3%, specificity 72.1%, and AUROC 0.81). Conclusions The accuracy of clinical case definitions of influenza differed across age groups and influenza activity periods. Categorizing the entire population into subgroups would improve the detection of influenza patients in the hospital-based surveillance system. PMID:24475034
Sample-Based Vegetation Distribution Information Synthesis.
Chanchan Xu
In constructing and visualizing a virtual three-dimensional forest scene, we must first obtain the vegetation distribution, namely, the location of each plant in the forest. Because the forest contains a large number of plants, the distribution of each plant is difficult to obtain by actual measurement. Random approaches are commonly used to simulate a forest distribution but fail to reflect the specific biological arrangements among types of plants. Observations show that plants in the forest tend to generate particular distribution patterns due to growth competition and specific habitats. This pattern, which represents a local feature in the distribution and occurs repeatedly in the forest, is in line with the "locality" and "static" characteristics of "texture data", making it possible to use a sample-based texture synthesis strategy to build the distribution. We propose a vegetation distribution data generation method that uses sample-based vector pattern synthesis. A sample forest stand is obtained first and recorded as a two-dimensional vector-element distribution pattern. Next, the large-scale vegetation distribution pattern is synthesized automatically using the proposed vector pattern synthesis algorithm. The synthesized distribution pattern resembles the sample pattern in its distribution features. The vector pattern synthesis algorithm proposed in this paper adopts a neighborhood comparison technique based on histogram matching, which makes it efficient and easy to implement. Experiments show that the distribution pattern synthesized with this method can sufficiently preserve the features of the sample distribution pattern, making our method meaningful for constructing realistic forest scenes.
Falagán, Carmen; Sánchez-España, Javier; Johnson, David Barrie
2014-01-01
The indigenous microbial communities of two extremely acidic, metal-rich stratified pit lakes, located in the Iberian Pyrite Belt (Spain), were identified, and their roles in mediating transformations of carbon, iron, and sulfur were confirmed. A combined cultivation-based and culture-independent approach was used to elucidate microbial communities at different depths and to examine the physiologies of isolates, which included representatives of at least one novel genus and several species of acidophilic Bacteria. Phosphate availability correlated with redox transformations of iron, and this (rather than solar radiation) dictated where primary production was concentrated. Carbon fixed and released as organic compounds by acidophilic phototrophs acted as electron donors for acidophilic heterotrophic prokaryotes, many of which catalyzed the dissimilatory reduction in ferric iron; the ferrous iron generated was re-oxidized by chemolithotrophic acidophiles. Bacteria that catalyze redox transformations of sulfur were also identified, although these Bacteria appeared to be less abundant than the iron oxidizers/reducers. Primary production and microbial numbers were greatest, and biogeochemical transformation of carbon, iron, and sulfur, most intense, within a zone of c. 8-10 m depth, close to the chemocline, in both pit lakes. Archaea detected in sediments included two Thaumarchaeota clones, indicating that members of this recently described phylum can inhabit extremely acidic environments.
Mixing by microorganisms in stratified fluids
Wagner, Gregory L; Lauga, Eric
2014-01-01
We examine the vertical mixing induced by the swimming of microorganisms at low Reynolds and Péclet numbers in a stably stratified ocean, and show that the global contribution of oceanic microswimmers to vertical mixing is negligible. We propose two approaches to estimating the mixing efficiency, η, or the ratio of the rate of potential energy creation to the total rate-of-working on the ocean by microswimmers. The first is based on scaling arguments and estimates η in terms of the ratio between the typical organism size, a, and an intrinsic length scale, ℓ, for the stratified flow
Nitrogen transformations in stratified aquatic microbial ecosystems
Revsbech, Niels Peter; Risgaard-Petersen, N.; Schramm, Andreas
2006-01-01
New analytical methods such as advanced molecular techniques and microsensors have resulted in new insights about how nitrogen transformations in stratified microbial systems such as sediments and biofilms are regulated at a µm-mm scale. A large and ever-expanding knowledge base about n...
2015-01-01
The premarital sex of senior students in some universities of Anhui province is investigated. To protect the privacy of respondents, the randomized response technique (RRT) is applied in combination with a stratified three-stage sampling method, and the proportion of senior students' premarital sex is studied using the attribute-characteristic Warner model. According to the total probability formula and the basic properties of variance in probability and mathematical statistics, together with the classical sampling theory of Cochran, the proportion and variance of senior college students' premarital sex are derived at all levels and stages. The survey reveals that the proportion of senior students' premarital sex is high. Therefore, undergraduates should be actively guided to treat the issue of premarital sex properly and rationally.
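In the Warner randomized-response model, each respondent answers the sensitive statement with probability P and its negation with probability 1-P, so the interviewer never learns which question was answered; the sensitive proportion is recovered as p̂ = (λ̂ + P - 1)/(2P - 1), where λ̂ is the observed "yes" rate. A minimal simulation sketch (the true proportion 0.30 and P = 0.7 are hypothetical values for illustration):

```python
import random

def warner_estimate(answers, P):
    """Warner (1965) randomized-response estimator of a sensitive proportion.

    answers: list of 0/1 observed 'yes'/'no' responses.
    P: probability that the sensitive statement (not its negation) was answered.
    Requires P != 0.5.
    """
    lam = sum(answers) / len(answers)                 # observed 'yes' proportion
    p_hat = (lam + P - 1.0) / (2.0 * P - 1.0)
    var_hat = lam * (1 - lam) / (len(answers) * (2 * P - 1) ** 2)
    return p_hat, var_hat

# Simulation with a hypothetical true sensitive proportion of 0.30.
random.seed(42)
true_p, P, n = 0.30, 0.7, 20000
answers = []
for _ in range(n):
    trait = random.random() < true_p                  # respondent has the trait?
    asked_sensitive = random.random() < P             # randomizing device outcome
    answers.append(int(trait if asked_sensitive else not trait))

p_hat, var_hat = warner_estimate(answers, P)
```

The variance term shows the privacy cost: as P approaches 0.5 the denominator (2P-1)² shrinks and the estimate becomes much noisier than a direct survey of the same size.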
Sampling strategies for estimating forest cover from remote sensing-based two-stage inventories
Piermaria Corona; Lorenzo Fattorini; Maria Chiara Pagliarella
2015-01-01
Background: Remote sensing-based inventories are essential in estimating forest cover in tropical and subtropical countries, where ground inventories cannot be performed periodically at a large scale owing to high costs and forest inaccessibility (e.g. REDD projects), and are mandatory for constructing historical records that can be used as forest cover baselines. Given the conditions of such inventories, the survey area is partitioned into a grid of imagery segments of pre-fixed size where the proportion of forest cover can be measured within segments using a combination of unsupervised (automated or semi-automated) classification of satellite imagery and manual (i.e. visual on-screen) enhancements. Because visual on-screen operations are time-expensive procedures, manual classification can be performed only for a sample of imagery segments selected at a first stage, while forest cover within each selected segment is estimated at a second stage from a sample of pixels selected within the segment. Because forest cover data arising from unsupervised satellite imagery classification may be freely available (e.g. Landsat imagery) over the entire survey area (wall-to-wall data) and are likely to be good proxies of manually classified cover data (sample data), they can be adopted as suitable auxiliary information. Methods: The question is how to choose the sample areas where manual classification is carried out. We have investigated the efficiency of one-per-stratum stratified sampling for selecting the segments and pixels where manual classification is carried out, and the efficiency of the difference estimator for exploiting auxiliary information at the estimation level. The performance of this strategy is compared with simple random sampling without replacement. Results: Our results were obtained theoretically from three artificial populations constructed from the Landsat classification (forest/non-forest) available at pixel level for a study area located in central Italy
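The difference estimator described above adds the known mean of the wall-to-wall (automated) cover to the sampled mean of the manual-minus-automated corrections. A minimal sketch with hypothetical data and plain SRSWOR in place of the paper's one-per-stratum design:

```python
import random

random.seed(7)

# Hypothetical survey area of 5000 imagery segments. The unsupervised
# (wall-to-wall) forest-cover proportion x is free for every segment;
# the manually classified cover y is observed only on a sample.
N = 5000
x = [random.random() for _ in range(N)]
y = [min(1.0, max(0.0, xi + random.gauss(0, 0.05))) for xi in x]  # x is a good proxy

X_bar = sum(x) / N                        # known from the wall-to-wall data

n = 100
sample = random.sample(range(N), n)       # SRSWOR; a one-per-stratum selection
                                          # would replace this single line
d_bar = sum(y[i] - x[i] for i in sample) / n

# Difference estimator of mean forest cover: auxiliary mean + sampled correction.
Y_diff = X_bar + d_bar
Y_srs = sum(y[i] for i in sample) / n     # plain sample mean, for comparison
```

Because the correction d has a much smaller spread than y itself, `Y_diff` typically has far lower variance than the plain sample mean `Y_srs` whenever the automated cover is a good proxy.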
Refining Approximating Betweenness Centrality Based on Samplings
Ji, Shiyu
2016-01-01
Betweenness Centrality (BC) is an important measure used widely in complex network analysis, such as social networks, web page search, etc. Computing the exact BC values is highly time-consuming. Currently the fastest exact BC determining algorithm is given by Brandes, taking $O(nm)$ time for unweighted graphs and $O(nm+n^2\log n)$ time for weighted graphs, where $n$ is the number of vertices and $m$ is the number of edges in the graph. Due to the extreme difficulty of reducing the time complexity of the exact BC determining problem, many researchers have considered the possibility of satisfactory BC approximation algorithms, especially those based on samplings. Bader et al. give the currently best BC approximation algorithm, with a high probability to successfully estimate the BC of one vertex within a factor of $1/\varepsilon$ using $\varepsilon t$ samples, where $t$ is the ratio between $n^2$ and the BC value of the vertex. However, some of the algorithmic parameters in Bader's work are not yet tightly boun...
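The generic source-sampling idea behind such approximations can be sketched as follows: run Brandes' single-source dependency accumulation from k randomly chosen sources and scale by n/k. This is an illustration of plain source sampling, not Bader et al.'s specific adaptive scheme; the toy graph is hypothetical:

```python
import random
from collections import deque

def approx_betweenness(adj, k, seed=0):
    """Estimate betweenness of every vertex of an unweighted graph by sampling
    k source vertices, running Brandes' single-source dependency accumulation
    from each, and extrapolating by n/k."""
    rng = random.Random(seed)
    n = len(adj)
    bc = [0.0] * n
    for s in (rng.randrange(n) for _ in range(k)):
        # BFS from s, counting shortest paths (sigma) and recording predecessors.
        sigma = [0] * n; sigma[s] = 1
        dist = [-1] * n; dist[s] = 0
        preds = [[] for _ in range(n)]
        order = []
        q = deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        # Back-propagate dependencies in reverse BFS order.
        delta = [0.0] * n
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w] * n / k   # scale the sampled contribution
    return bc

# Path graph 0-1-2-3-4: interior vertices carry all the betweenness.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
bc = approx_betweenness(adj, k=5)
```

With more samples (larger k) the estimate concentrates around the exact values at a cost of O(k·m) instead of O(n·m).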
Smart, S.M.; Clarke, R.T.; Poll, van de H.M.; Robertson, E.J.; Shield, E.R.; Bunce, R.G.H.; Maskell, L.C.
2003-01-01
Patterns of vegetation across Great Britain (GB) between 1990 and 1998 were quantified based on an analysis of plant species data from a total of 9596 fixed plots. Plots were established on a stratified random basis within 501 1 km sample squares located as part of the Countryside Survey of GB. Resu
Electromagnetic waves in stratified media
Wait, James R; Fock, V A; Wait, J R
2013-01-01
International Series of Monographs in Electromagnetic Waves, Volume 3: Electromagnetic Waves in Stratified Media provides information pertinent to electromagnetic waves in media whose properties differ in one particular direction. This book discusses the important feature of the waves that enables communications at global distances. Organized into 13 chapters, this volume begins with an overview of the general analysis of the electromagnetic response of a plane stratified medium comprising any number of parallel homogeneous layers. This text then explains the reflection of electromagne
Rougier, Simon; Puissant, Anne; Stumpf, André; Lachiche, Nicolas
2016-09-01
Vegetation monitoring is becoming a major issue in the urban environment due to the services vegetation provides, and it necessitates an accurate and up-to-date mapping. Very High Resolution satellite images enable a detailed mapping of urban tree and herbaceous vegetation. Several supervised classifications with statistical learning techniques have provided good results for the detection of urban vegetation but require a large amount of training data. In this context, this study investigates the performance of different sampling strategies in order to reduce the number of examples needed. Two window-based active learning algorithms from the state of the art are compared to a classical stratified random sampling, and a third strategy combining active learning and stratified sampling is proposed. The efficiency of these strategies is evaluated on two medium-size French cities, Strasbourg and Rennes, associated with different datasets. Results demonstrate that classical stratified random sampling can in some cases be just as effective as active learning methods and that it should be used more frequently to evaluate new active learning methods. Moreover, the active learning strategies proposed in this work reduce the computational runtime by selecting multiple windows at each iteration without increasing the number of windows needed.
Bebu, Ionut; Lachin, John M
2016-01-01
Composite outcomes are common in clinical trials, especially for multiple time-to-event outcomes (endpoints). The standard approach that uses the time to the first outcome event has important limitations. Several alternative approaches have been proposed to compare treatment versus control, including the proportion in favor of treatment and the win ratio. Herein, we construct tests of significance and confidence intervals in the context of composite outcomes based on prioritized components using the large sample distribution of certain multivariate multi-sample U-statistics. This non-parametric approach provides a general inference for both the proportion in favor of treatment and the win ratio, and can be extended to stratified analyses and the comparison of more than two groups. The proposed methods are illustrated with time-to-event outcomes data from a clinical trial.
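For intuition only (the paper's inference rests on multivariate U-statistics, not reproduced here): both estimands reduce to pairwise win/loss counting over prioritized components. A toy sketch with hypothetical, uncensored data, where larger values are better:

```python
def pairwise_result(t, c):
    """Compare one treated vs one control subject on prioritized components
    (first component = highest priority; larger value = better outcome).
    The first decisive component settles the pair; all-tied pairs are neutral."""
    for a, b in zip(t, c):
        if a > b:
            return 1
        if a < b:
            return -1
    return 0

def win_ratio_and_proportion(treated, control):
    """Win ratio = wins/losses over all treated-control pairs; the
    'proportion in favor of treatment' is (wins - losses) / #pairs."""
    wins = losses = 0
    for t in treated:
        for c in control:
            r = pairwise_result(t, c)
            if r == 1:
                wins += 1
            elif r == -1:
                losses += 1
    n_pairs = len(treated) * len(control)
    return wins / losses, (wins - losses) / n_pairs
```

With treated = [(5, 2), (3, 4)] and control = [(3, 1), (4, 4)] this gives a win ratio of 3.0 and a proportion in favor of treatment of 0.5. A real implementation must also handle censoring and the degenerate no-losses case.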
Stratified medicine and reimbursement issues
Fugel, Hans-Joerg; Nuijten, Mark; Postma, Maarten
2012-01-01
Stratified Medicine (SM) has the potential to target patient populations who will most benefit from a therapy while reducing unnecessary health interventions associated with side effects. The link between clinical biomarkers/diagnostics and therapies provides new opportunities for value creation to
Micro contactor based on isotachophoretic sample transport.
Goet, Gabriele; Baier, Tobias; Hardt, Steffen
2009-12-21
It is demonstrated how isotachophoresis (ITP) in a microfluidic device may be utilized to bring two small sample volumes into contact in a well-controlled manner. The ITP contactor serves a similar purpose as micromixers that are designed to mix two species rapidly in a microfluidic channel. In contrast to many micromixers, the ITP contactor does not require complex channel architectures and allows sample processing in the spirit of "digital microfluidics", i.e. the samples always remain in a compact volume. It is shown that ITP zone transport through microchannels proceeds in a reproducible and predictable manner, and that the sample trajectories follow simple relationships obtained from Ohm's law. Firstly, the micro contactor can be used to synchronize two ITP zones that have reached a channel at different points in time. Secondly, fulfilling its actual purpose, it is capable of bringing two samples into molecular contact via an interpenetration of ITP zones. It is demonstrated that the contacting time is proportional to the ITP zone extension. This opens up the possibility of using this type of device as a special type of micromixer with "mixing times" significantly below one second and an option to regulate the duration of contact through specific parameters such as the sample volume. Finally, it is shown how the micro contactor can be utilized to conduct a hybridization reaction between two ITP zones containing complementary DNA strands.
Harpoon-based sample Acquisition System
Bernal, Javier; Nuth, Joseph; Wegel, Donald
2012-02-01
Acquiring information about the composition of comets, asteroids, and other near Earth objects is very important because they may contain the primordial ooze of the solar system and the origins of life on Earth. Sending a spacecraft is the obvious answer, but once it gets there it needs to collect and analyze samples. Conceptually, a drill or a shovel would work, but both require something extra to anchor it to the comet, adding to the cost and complexity of the spacecraft. Since comets and asteroids are very low gravity objects, drilling becomes a problem. If you do not provide a grappling mechanism, the drill would push the spacecraft off the surface. Harpoons have been proposed as grappling mechanisms in the past and are currently flying on missions such as ROSETTA. We propose to use a hollow, core sampling harpoon, to act as the anchoring mechanism as well as the sample collecting device. By combining these two functions, mass is reduced, more samples can be collected and the spacecraft can carry more propellant. Although challenging, returning the collected samples to Earth allows them to be analyzed in laboratories with much greater detail than possible on a spacecraft. Also, bringing the samples back to Earth allows future generations to study them.
Thermal mixing in a stratified environment
Kraemer, Damian; Cotel, Aline
1999-11-01
Laboratory experiments of a thermal impinging on a stratified interface have been performed. The thermal was released from a cylindrical reservoir located at the bottom of a Lucite tank. The stratified interface was created by filling the tank with two different saline solutions. The density of the lower layer is greater than that of the upper layer and the thermal fluid, thereby creating a stable stratification. A pH indicator, phenolphthalein, is used to visualize and quantify the amount of mixing produced by the impingement of the thermal at the interface. The upper layer contains a mixture of water, salt and sodium hydroxide. The thermal fluid is composed of water, sulfuric acid and phenolphthalein. When the thermal entrains and mixes fluid from the upper layer, a chemical reaction takes place, and the resulting mixed fluid is now visible. The ratio of base to acid, called the equivalence ratio, was varied throughout the experiments, as well as the Richardson number. The Richardson number is the ratio of potential to kinetic energy, and is based on the thermal quantities at the interface. Results indicate that the amount of mixing produced is proportional to the Richardson number raised to the -3/2 power. Previous experiments (Zhang and Cotel 1999) revealed that the entrainment rate of a thermal in a stratified environment follows the same power law.
A model-based 'varimax' sampling strategy for a heterogeneous population.
Akram, Nuzhat A; Farooqi, Shakeel R
2014-01-01
Sampling strategies are planned to enhance the homogeneity of a sample, hence to minimize confounding errors. A sampling strategy was developed to minimize the variation within population groups. Karachi, the largest urban agglomeration in Pakistan, was used as a model population. Blood groups ABO and Rh factor were determined for 3000 unrelated individuals selected through simple random sampling. Among them five population groups, namely Balochi, Muhajir, Pathan, Punjabi and Sindhi, based on paternal ethnicity were identified. An index was designed to measure the proportion of admixture at parental and grandparental levels. Population models based on index score were proposed. For validation, 175 individuals selected through stratified random sampling were genotyped for the three STR loci CSF1PO, TPOX and TH01. ANOVA showed significant differences across the population groups for blood groups and STR loci distribution. Gene diversity was higher across the sub-population model than in the agglomerated population. At parental level gene diversities are significantly higher across No admixture models than Admixture models. At grandparental level the difference was not significant. A sub-population model with no admixture at parental level was justified for sampling the heterogeneous population of Karachi.
Stably Stratified Flow in a Shallow Valley
Mahrt, L.
2017-01-01
Stratified nocturnal flow above and within a small valley of approximately 12-m depth and a few hundred metres width is examined as a case study, based on a network of 20 sonic anemometers and a central 20-m tower with eight levels of sonic anemometers. Several regimes of stratified flow over gentle topography are conceptually defined for organizing the data analysis and comparing with the existing literature. In our case study, a marginal cold pool forms within the shallow valley in the early evening but yields to larger ambient wind speeds after a few hours, corresponding to stratified terrain-following flow where the flow outside the valley descends to the valley floor. The terrain-following flow lasts about 10 h and then undergoes transition to an intermittent marginal cold pool towards the end of the night when the larger-scale flow collapses. During this 10-h period, the stratified terrain-following flow is characterized by a three-layer structure, consisting of a thin surface boundary layer of a few metres depth on the valley floor, a deeper boundary layer corresponding to the larger-scale flow, and an intermediate transition layer with significant wind-directional shear and possible advection of lee turbulence that is generated even for the gentle topography of our study. The flow in the valley is often modulated by oscillations with a typical period of 10 min. Cold events with smaller turbulent intensity and duration of tens of minutes move through the observational domain throughout the terrain-following period. One of these events is examined in detail.
Representativeness-based sampling network design for the State of Alaska
Forrest M. Hoffman; Jitendra Kumar; Richard T. Mills; William W. Hargrove
2013-01-01
Resource and logistical constraints limit the frequency and extent of environmental observations, particularly in the Arctic, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent environmental variability at desired scales. A quantitative methodology for stratifying sampling domains, informing site selection,...
Web-Based Statistical Sampling and Analysis
Quinn, Anne; Larson, Karen
2016-01-01
Consistent with the Common Core State Standards for Mathematics (CCSSI 2010), the authors write that they have asked students to do statistics projects with real data. To obtain real data, their students use the free Web-based app, Census at School, created by the American Statistical Association (ASA) to help promote civic awareness among school…
Race-Ethnicity, Poverty, Urban Stressors, and Telomere Length in a Detroit Community-based Sample.
Geronimus, Arline T; Pearson, Jay A; Linnenbringer, Erin; Schulz, Amy J; Reyes, Angela G; Epel, Elissa S; Lin, Jue; Blackburn, Elizabeth H
2015-06-01
Residents of distressed urban areas suffer early aging-related disease and excess mortality. Using a community-based participatory research approach in a collaboration between social researchers and cellular biologists, we collected a unique data set of 239 black, white, or Mexican adults from a stratified, multistage probability sample of three Detroit neighborhoods. We drew venous blood and measured telomere length (TL), an indicator of stress-mediated biological aging, linking respondents' TL to their community survey responses. We regressed TL on socioeconomic, psychosocial, neighborhood, and behavioral stressors, hypothesizing and finding an interaction between poverty and racial-ethnic group. Poor whites had shorter TL than nonpoor whites; poor and nonpoor blacks had equivalent TL; and poor Mexicans had longer TL than nonpoor Mexicans. Findings suggest unobserved heterogeneity bias is an important threat to the validity of estimates of TL differences by race-ethnicity. They point to health impacts of social identity as contingent, the products of structurally rooted biopsychosocial processes.
Kwack, Won G; Ho, Won J; Kim, Jae H; Lee, Jin H; Kim, Eo J; Kang, Hyoun W; Lee, Jun K
2016-07-01
Although there are general guidelines on endoscopic biopsy for diagnosing gastric neoplasms, they are predominantly based on outdated literature obtained with fiberscopes, without analyses specific to tumor characteristics. This study aims to comprehensively characterize the contemporary endoscopic biopsy by determining the diagnostic yield across different lesion morphologies and histological stages, especially exploring how the number and site of biopsy may influence the overall yield. Biopsy samples from suspected gastric neoplasms were collected prospectively from May 2011 to August 2014 in a tertiary care medical center. A standardized methodology was used to obtain a total of 6 specimens from 2 defined sites per lesion. The rate of positive diagnosis based on the biopsy number and site was assessed for specific gastric lesion morphologies and histological stages. A total of 1080 biopsies from 180 pathologically diagnosed neoplastic lesions in 176 patients were obtained during the study. For depressed/ulcerative and polypoid lesions, the yield was already >99% by the fourth biopsy, without further gain from additional biopsies. A lower overall yield was observed for infiltrative lesions (57.1% from 4 biopsies). The site of biopsy did not influence the diagnostic yield except with infiltrative lesions, in which biopsies from thickened mucosal folds were of higher yield than erosive regions. Obtaining 4 specimens may be sufficient for accurate diagnosis of a depressed/ulcerative or polypoid gastric lesion regardless of its histological stage. For infiltrative lesions, at least 5 to 6 biopsies per lesion with more representative sampling from thickened mucosal folds may be preferable.
The fully nonlinear stratified geostrophic adjustment problem
Coutino, Aaron; Stastna, Marek
2017-01-01
The study of the adjustment to equilibrium by a stratified fluid in a rotating reference frame is a classical problem in geophysical fluid dynamics. We consider the fully nonlinear, stratified adjustment problem from a numerical point of view. We present results of smoothed dam break simulations based on experiments in the published literature, with a focus on both the wave trains that propagate away from the nascent geostrophic state and the geostrophic state itself. We demonstrate that for Rossby numbers in excess of roughly 2 the wave train cannot be interpreted in terms of linear theory. This wave train consists of a leading solitary-like packet and a trailing tail of dispersive waves. However, it is found that the leading wave packet never completely separates from the trailing tail. Somewhat surprisingly, the inertial oscillations associated with the geostrophic state exhibit evidence of nonlinearity even when the Rossby number falls below 1. We vary the width of the initial disturbance and the rotation rate so as to keep the Rossby number fixed, and find that while the qualitative response remains consistent, the Froude number varies, and these variations are manifested in the form of the emanating wave train. For wider initial disturbances we find clear evidence of a wave train that initially propagates toward the near wall, reflects, and propagates away from the geostrophic state behind the leading wave train. We compare kinetic energy inside and outside of the geostrophic state, finding that for long times a Rossby number of around one-quarter yields an equal split between the two, with lower (higher) Rossby numbers yielding more energy in the geostrophic state (wave train). Finally we compare the energetics of the geostrophic state as the Rossby number varies, finding long-lived inertial oscillations in the majority of the cases and a general agreement with the past literature that employed either hydrostatic, shallow-water equation-based theory or
Information content of household-stratified epidemics
T.M. Kinyanjui
2016-09-01
Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data have been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs.
Silicon waveguide based 320 Gbit/s optical sampling
Ji, Hua; Galili, Michael; Pu, Minhao
2010-01-01
A silicon waveguide-based ultra-fast optical sampling system is successfully demonstrated using a free-running fiber laser with a carbon nanotube-based mode-locker as the sampling source. A clear eye-diagram of a 320 Gbit/s data signal is obtained.
Hawkins, K A; Tulsky, D S
2001-11-01
Since memory performance expectations may be IQ-based, unidirectional base rate data for IQ-Memory Score discrepancies are provided in the WAIS-III/WMS-III Technical Manual. The utility of these data partially rests on the assumption that discrepancy base rates do not vary across ability levels. FSIQ stratified base rate data generated from the standardization sample, however, demonstrate substantial variability across the IQ spectrum. A superiority of memory score over FSIQ is typical at lower IQ levels, whereas the converse is true at higher IQ levels. These data indicate that the use of IQ-memory score unstratified "simple difference" tables could lead to erroneous conclusions for clients with low or high IQ. IQ stratified standardization base rate data are provided as a complement to the "predicted difference" method detailed in the Technical Manual.
Function approximation for learning control : a key sample based approach
2004-01-01
Two function approximators are introduced in this thesis for use in learning control. These function approximators identify a relation between input and output based on samples. Two different, but closely related function approximators are introduced: the key sample machine and the recursive key sample machine.
Control charts for location based on different sampling schemes
Mehmood, R.; Riaz, M.; Does, R.J.M.M.
2013-01-01
Control charts are the most important statistical process control tool for monitoring variations in a process. A number of articles are available in the literature on the X̄ control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set
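As a minimal illustration of the simple-random-sampling case (a simplified sketch, not the charts studied in the article; the usual c4 bias correction for the stdev estimate is omitted, and the data are made up):

```python
import statistics

def xbar_chart_limits(subgroups, L=3.0):
    """Shewhart X-bar chart from rational subgroups drawn by simple random
    sampling: center line and +/- L-sigma limits for the subgroup mean.
    Simplified: the average within-subgroup sample stdev serves as the
    sigma estimate, without the c4 bias correction."""
    means = [statistics.mean(g) for g in subgroups]
    center = statistics.mean(means)
    n = len(subgroups[0])                       # subgroup size
    sigma = statistics.mean(statistics.stdev(g) for g in subgroups)
    half_width = L * sigma / n ** 0.5
    return center - half_width, center, center + half_width

# Toy subgroups of size 2: center 5, limits roughly 5 +/- 2
lcl, cl, ucl = xbar_chart_limits([[4, 6], [5, 5], [6, 4]])
```

The ranked-set variants differ mainly in how the subgroup observations are selected; the limit computation itself stays in this form.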
Suppression of stratified explosive interactions
Meeks, M.K.; Shamoun, B.I.; Bonazza, R.; Corradini, M.L. [Wisconsin Univ., Madison, WI (United States). Dept. of Nuclear Engineering and Engineering Physics
1998-01-01
Stratified Fuel-Coolant Interaction (FCI) experiments with Refrigerant-134a and water were performed in a large-scale system. Air was uniformly injected into the coolant pool to establish a pre-existing void which could suppress the explosion. Two competing effects due to the variation of the air flow rate seem to influence the intensity of the explosion in this geometrical configuration. At low flow rates, although the injected air increases the void fraction, the concurrent agitation and mixing increases the intensity of the interaction. At higher flow rates, the increase in void fraction tends to attenuate the propagated pressure wave generated by the explosion. Experimental results show a complete suppression of the vapor explosion at high rates of air injection, corresponding to an average void fraction of larger than 30%. (author)
Methane metabolism in a stratified boreal lake
Nykänen, Hannu; Peura, Sari; Kankaala, Paula; Jones, Roger
2013-04-01
Stratified lakes, typical of the boreal zone, are naturally anoxic at their bottoms. In these lakes methanogenesis can account for up to half of organic matter degradation. However, a major part of the methane (CH4) is oxidized in the water column before reaching the atmosphere. Since methanotrophs use CH4 as their sole carbon and energy source, much CH4-derived carbon is incorporated into their biomass. Microbially produced CH4 has strongly negative δ13C compared to other carbon forms in ecosystems, making it possible to follow its route in food webs. However, only a few studies have estimated the amount of this microbial biomass or its carbon stable isotopic composition, due to difficulties in separating it from other biomass or from other carbon forms in the water column. We estimated methanotrophic biomass from measured CH4 oxidation, and δ13C of the biomass from measured δ13C values of CH4, DIC, POM and DOC. An estimate of the fraction of methanotrophs in total microbial biomass is derived from bacterial community composition measurements. The study was made in Alinen Mustajärvi, a small (area 0.75 ha, maximum depth 6.5 m, mean depth 4.2 m), oligotrophic, mesohumic headwater lake located in boreal coniferous forest in southern Finland. CH4 and DIC concentrations and their δ13C were measured over the deepest point of the lake at 1 m intervals. δ13C of DOM and POM were analyzed from composite samples from the epi-, meta-, and hypolimnion. Evasion of CH4 and carbon dioxide from the lake surface to the atmosphere was estimated with boundary layer diffusion equations. CH4 oxidation was estimated by comparing differences between observed concentrations and CH4 potentially transported by turbulent diffusion between different vertical layers in the lake, and also by actual methanotrophy measurements and from vertical differences in δ13C-CH4. The estimate of CH4 production was based on the sum of oxidized and released CH4. Molecular microbiology methods were used to
Braeken DCW
2017-08-01
Dionne CW Braeken,1–3 Gernot GU Rohde,2 Frits ME Franssen,1,2 Johanna HM Driessen,3–5 Tjeerd P van Staa,3,6 Patrick C Souverein,3 Emiel FM Wouters,1,2 Frank de Vries3,4,7 1Department of Research and Education, CIRO, Horn, 2Department of Respiratory Medicine, Maastricht University Medical Centre (MUMC+), Maastricht, 3Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute of Pharmaceutical Sciences, Utrecht, 4Department of Clinical Pharmacy and Toxicology, Maastricht University Medical Centre (MUMC+), Maastricht, 5Department of Epidemiology, Care and Public Health Research Institute (CAPHRI), Maastricht, the Netherlands; 6Department of Health eResearch, University of Manchester, Manchester, 7MRC Lifecourse Epidemiology Unit, Southampton General Hospital, Southampton, UK Background: Smoking increases the risk of community-acquired pneumonia (CAP) and is associated with the development of COPD. Until now, it is unclear whether CAP in COPD is due to smoking-related effects, or due to COPD pathophysiology itself. Objective: To evaluate the association between COPD and CAP by smoking status. Methods: In total, 62,621 COPD and 191,654 control subjects, matched by year of birth, gender and primary care practice, were extracted from the Clinical Practice Research Datalink (2005–2014). Incidence rates (IRs) were estimated by dividing the total number of CAP cases by the cumulative person-time at risk. Time-varying Cox proportional hazard models were used to estimate the hazard ratios (HRs) for CAP in COPD patients versus controls. HRs of CAP by smoking status were calculated by stratified analyses in COPD patients versus controls and within both subgroups with never smoking as reference. Results: IRs of CAP in COPD patients (32.00/1,000 person-years) and controls (6.75/1,000 person-years) increased with age and female gender. The risk of CAP in COPD patients was higher than in controls (HR 4.51, 95% CI: 4.27–4.77). Current smoking
Similarity-based denoising of point-sampled surface
Ren-fang WANG; Wen-zhi CHEN; San-yuan ZHANG; Yin ZHANG; Xiu-zi YE
2008-01-01
A non-local denoising (NLD) algorithm for point-sampled surfaces (PSSs) is presented based on similarities, including geometry intensity and features of sample points. By using the trilateral filtering operator, the differential signal of each sample point is determined and called "geometry intensity". Based on covariance analysis, a regular grid of geometry intensity of a sample point is constructed, and the geometry-intensity similarity of two points is measured according to their grids. Based on mean shift clustering, the PSSs are clustered in terms of the local geometry-features similarity. The smoothed geometry intensity, i.e., offset distance, of the sample point is estimated according to the two similarities. Using the resulting intensity, the noise component from PSSs is finally removed by adjusting the position of each sample point along its own normal direction. Experimental results demonstrate that the algorithm is robust and can produce a more accurate denoising result while having better feature preservation.
Textile-based sampling for potentiometric determination of ions.
Lisak, Grzegorz; Arnebrant, Thomas; Ruzgas, Tautgirdas; Bobacka, Johan
2015-06-02
Potentiometric sensing utilizing textile-based micro-volume sampling was applied and evaluated for the determination of clinically (Na(+), K(+), Cl(-)) and environmentally (Cd(2+), Pb(2+) and pH) relevant analytes. In this technological design, calibration solutions and samples were absorbed into textiles while the potentiometric cells (ion-selective electrodes and reference electrode) were pressed against the textile. Once the liquid, by wicking action, reached the place where the potentiometric cell was pressed onto the textile, hence closing the electric circuit, the potentiometric response was obtained. Cotton, polyamide, polyester and their blends with elastane were applied for micro-volume sampling. The textiles were found to influence the determination of pH in environmental samples with pH close to neutral, and of Pb(2+) at low analyte concentrations. On the other hand, textile-based micro-volume sampling was successfully applied in measurements of Na(+) using solid-contact sodium-selective electrodes utilizing all the investigated textiles for sampling. It was found that in order to extend the application of textile-based sampling toward environmental analysis of ions it will be necessary to tailor the physicochemical properties of the textile materials. In general, textile-based sampling opens new possibilities for direct chemical analysis of small-volume samples and provides a simple and low-cost method to screen various textiles for their effects on samples, to identify which textiles are the most suitable for on-body sensing.
2013-01-01
Background Antibiograms created by aggregating hospital-wide susceptibility data from diverse patients can be misleading. To demonstrate the utility of age- and location-stratified antibiograms, we compared stratified antibiograms for three common bacterial pathogens, E. coli, S. aureus, and S. pneumoniae. We created stratified antibiograms based on patient age (/=65 years), and inpatient or outpatient location using all 2009 E. coli and S. aureus, and all 2008–2009 S. pneumoniae isolates sub...
Stratified wake of an accelerating hydrofoil
Ben-Gida, Hadar; Gurka, Roi
2015-01-01
Wakes of towed and self-propelled bodies in stratified fluids are significantly different from non-stratified wakes. Long time effects of stratification on the development of the wakes of bluff bodies moving at constant speed are well known. In this experimental study we demonstrate how buoyancy affects the initial growth of vortices developing in the wake of a hydrofoil accelerating from rest. Particle image velocimetry measurements were applied to characterize the wake evolution behind a NACA 0015 hydrofoil accelerating in water and for low Reynolds number and relatively strong and stably stratified fluid (Re=5,000, Fr~O(1)). The analysis of velocity and vorticity fields, following vortex identification and an estimate of the circulation, reveal that the vortices in the stratified fluid case are stretched along the streamwise direction in the near wake. The momentum thickness profiles show lower momentum thickness values for the stratified late wake compared to the non-stratified wake, implying that the dra...
How stratified is mantle convection?
Puster, Peter; Jordan, Thomas H.
1997-04-01
We quantify the flow stratification in the Earth's mid-mantle (600-1500 km) in terms of a stratification index for the vertical mass flux, Sƒ(z) = 1 - ƒ(z)/ƒref(z), in which the reference value ƒref(z) approximates the local flux at depth z expected for unstratified convection (Sƒ = 0). Although this flux stratification index cannot be directly constrained by observations, we show from a series of two-dimensional convection simulations that its value can be related to a thermal stratification index ST(z) defined in terms of the radial correlation length of the temperature-perturbation field δT(z, Ω). ST is a good proxy for Sƒ at low stratifications (SƒUniformitarian Principle. The bound obtained here from global tomography is consistent with local seismological evidence for slab flux into the lower mantle; however, the total material flux has to be significantly greater (by a factor of 2-3) than that due to slabs alone. A stratification index, Sƒ ≲ 0.2, is sufficient to exclude many stratified convection models still under active consideration, including most forms of chemical layering between the upper and lower mantle, as well as the more extreme versions of avalanching convection governed by a strong endothermic phase change.
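The flux stratification index defined above translates directly into code; the profile values below are made up for illustration.

```python
def stratification_index(flux, flux_ref):
    """S_f(z) = 1 - f(z)/f_ref(z): zero when the vertical mass flux at
    depth z matches unstratified convection, approaching one as the
    flux across that depth is blocked."""
    return [1.0 - f / f_ref for f, f_ref in zip(flux, flux_ref)]

# Hypothetical profile: flux halved at mid-depth, fully blocked below
s_f = stratification_index([2.0, 1.0, 0.0], [2.0, 2.0, 2.0])
```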
Fault Diagnosis and Prognosis Based on Lebesgue Sampling
2014-10-02
has been physically reached and is compared with the RUL estimation from prognosis. Traditional ways to design FDP algorithms adopt periodic sampling...also called “Riemann sampling (RS)”) where samples are taken in a periodic manner and the diagnostic and prognostic algorithms are executed at the...executed on an “as-needed” basis and is promising in reducing the computational cost compared with the traditional Riemann sampling-based FDP (RS-FDP
Function approximation for learning control : a key sample based approach
Kruif, de Bastiaan Johannes
2004-01-01
Two function approximators are introduced in this thesis for use in learning control. These function approximators identify a relation between input and output based on samples. Two different, but closely related function approximators are introduced: the key sample machine and the recursive key sam
Nonprobability and probability-based sampling strategies in sexual science.
Catania, Joseph A; Dolcini, M Margaret; Orellana, Roberto; Narayanan, Vasudah
2015-01-01
With few exceptions, much of sexual science builds upon data from opportunistic nonprobability samples of limited generalizability. Although probability-based studies are considered the gold standard in terms of generalizability, they are costly to apply to many of the hard-to-reach populations of interest to sexologists. The present article discusses recent conclusions by sampling experts that have relevance to sexual science that advocates for nonprobability methods. In this regard, we provide an overview of Internet sampling as a useful, cost-efficient, nonprobability sampling method of value to sex researchers conducting modeling work or clinical trials. We also argue that probability-based sampling methods may be more readily applied in sex research with hard-to-reach populations than is typically thought. In this context, we provide three case studies that utilize qualitative and quantitative techniques directed at reducing limitations in applying probability-based sampling to hard-to-reach populations: indigenous Peruvians, African American youth, and urban men who have sex with men (MSM). Recommendations are made with regard to presampling studies, adaptive and disproportionate sampling methods, and strategies that may be utilized in evaluating nonprobability and probability-based sampling methods.
JinKui Wu; ShiWei Liu; LePing Ma; Jia Qin; JiaXin Zhou; Hong Wei
2016-01-01
The accuracy of spatial interpolation of precipitation data is determined by the actual spatial variability of the precipitation, the interpolation method, and the distribution of observatories, whose selection is particularly important. In this paper, three spatial sampling programs, including spatial random sampling, spatial stratified sampling, and spatial sandwich sampling, are used to analyze the data from meteorological stations of northwestern China. We compared the accuracy of ordinary Kriging interpolation methods on the basis of the sampling results. The error values of the regional annual precipitation interpolation based on spatial sandwich sampling, including ME (0.1513), RMSE (95.91), ASE (101.84), MSE (−0.0036), and RMSSE (1.0397), were optimal under the premise of abundant prior knowledge. The result of spatial stratified sampling was poor, and spatial random sampling was even worse. Spatial sandwich sampling was the best sampling method, minimizing the error of regional precipitation estimation. It had a higher degree of accuracy compared with the other two methods and a wider scope of application.
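Two of the error measures quoted above (ME and RMSE) are standard interpolation diagnostics and can be sketched directly; the station values below are hypothetical, not from the study.

```python
import math

def mean_error(pred, obs):
    """ME: average signed difference (bias) between interpolated and observed values."""
    return sum(p - o for p, o in zip(pred, obs)) / len(obs)

def rmse(pred, obs):
    """RMSE: root-mean-square error of the interpolated values."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

pred = [102.0, 98.0, 110.0]  # hypothetical interpolated annual precipitation (mm)
obs = [100.0, 100.0, 105.0]  # hypothetical station observations (mm)
me, err = mean_error(pred, obs), rmse(pred, obs)
```

ME near zero with a larger RMSE, as here, indicates scatter without systematic bias.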
Stability of stratified two-phase flows in inclined channels
Barmak, Ilya; Ullmann, Amos; Brauner, Neima
2016-01-01
Linear stability of stratified gas-liquid and liquid-liquid plane-parallel flows in inclined channels is studied with respect to all wavenumber perturbations. The main objective is to predict parameter regions in which stable stratified configuration in inclined channels exists. Up to three distinct base states with different holdups exist in inclined flows, so that the stability analysis has to be carried out for each branch separately. Special attention is paid to the multiple solution regions to reveal the feasibility of non-unique stable stratified configurations in inclined channels. The stability boundaries of each branch of steady state solutions are presented on the flow pattern map and are accompanied by critical wavenumbers and spatial profiles of the most unstable perturbations. Instabilities of different nature are visualized by streamlines of the neutrally stable perturbed flows, consisting of the critical perturbation superimposed on the base flow. The present analysis confirms the existence of ...
Different Random Distributions Research on Logistic-Based Sample Assumption
Jing Pan
2014-01-01
Logistic-based sample assumption is proposed in this paper, with a study of different random distributions through this system. It provides an assumption system for logistic-based samples, including its sample space structure. Moreover, the influence of different random distributions of the inputs has been studied through this logistic-based sample assumption system. In this paper, three different random distributions (normal distribution, uniform distribution, and beta distribution) are used for testing. The experimental simulations illustrate the relationship between inputs and outputs under different random distributions. Thereafter, numerical analysis infers that the distribution of the outputs depends to some extent on that of the inputs, and that this assumption system is not an independent-increment process but is quasistationary.
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
Campolina, Daniel; Lima, Paulo Rubens I., E-mail: campolina@cdtn.br, E-mail: pauloinacio@cpejr.com.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil). Servico de Tecnologia de Reatores; Pereira, Claubia; Veloso, Maria Auxiliadora F., E-mail: claubia@nuclear.ufmg.br, E-mail: dora@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear
2015-07-01
Sample size and computational uncertainty were varied in order to investigate sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range, and skewness were verified in order to obtain a better representation of uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 replicates of n samples was adopted as the convergence criterion for the method. An uncertainty of 75 pcm on the reactor k_eff was estimated by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed in the Monte Carlo process of the MCNPX code.
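The sampling-based propagation the abstract describes can be sketched generically: draw inputs from their uncertainty distribution, run the model on each draw, and summarize the output spread. The stand-in model and parameters below are illustrative, not the MCNPX/LWR setup.

```python
import random
import statistics

def propagate_uncertainty(model, mean, sigma, n, seed=0):
    """Sampling-based uncertainty propagation: draw n inputs from a
    normal distribution, evaluate the model on each, and report the
    mean and standard deviation of the outputs. A generic sketch of
    the method; `model` is an arbitrary stand-in function.
    """
    rng = random.Random(seed)
    outputs = [model(rng.gauss(mean, sigma)) for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Toy model: output linear in the uncertain input, so the propagated
# sigma should come out near 2x the input sigma of 0.05.
m, s = propagate_uncertainty(lambda x: 2.0 * x, mean=1.0, sigma=0.05, n=500)
```

Increasing `n` shrinks the sampling noise in these summary statistics, which is the convergence trade-off the study investigates.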
Core science: Stratified by a sunken impactor
Nakajima, Miki
2016-10-01
There is potential evidence for a stratified layer at the top of the Earth's core, but its origin is not well understood. Laboratory experiments suggest that the stratified layer could be a sunken remnant of the giant impact that formed the Moon.
El Serafy, G.Y.H.; Mynett, A.E.
2008-01-01
Numerical models of a water system are always based on assumptions and simplifications that may result in errors in the model's predictions. Such errors can be reduced through the use of data assimilation and thus can significantly improve the success rate of the predictions and operational forecast
A Fixpoint Semantics for Stratified Databases
沈一栋
1993-01-01
Przymusinski extended the notion of stratified logic programs, developed by Apt, Blair and Walker, and by van Gelder, to stratified databases that allow both negative premises and disjunctive consequents. However, he did not provide a fixpoint theory for this class of databases. On the other hand, although a fixpoint semantics has been developed by Minker and Rajasekar for non-Horn logic programs, it is tantamount to traditional minimal model semantics, which is not sufficient to capture the intended meaning of negation in the premises of clauses in stratified databases. In this paper, a fixpoint approach to stratified databases is developed, which corresponds with the perfect model semantics. Moreover, algorithms are proposed for computing the set of perfect models of a stratified database.
Towards Cost-efficient Sampling Methods
Peng, Luo; Yongli, Li; Chong, Wu
2014-01-01
The sampling method has been paid much attention in the field of complex network in general and statistical physics in particular. This paper presents two new sampling methods based on the perspective that a small part of vertices with high node degree can possess the most structure information of a network. The two proposed sampling methods are efficient in sampling the nodes with high degree. The first new sampling method is improved on the basis of the stratified random sampling method and...
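The idea of concentrating the sample on high-degree vertices can be sketched as a simple two-stratum design; the split point, sampling fractions, and toy graph below are illustrative choices, not the paper's exact algorithm.

```python
import random

def degree_stratified_sample(degrees, n, high_fraction=0.5, seed=0):
    """Sample n nodes, drawing a fixed fraction from the high-degree
    stratum (top half by degree) and the rest from the low-degree
    stratum. A simplified sketch of stratifying a network sample
    toward high-degree nodes; the 50/50 split is an assumption.
    """
    rng = random.Random(seed)
    ranked = sorted(degrees, key=degrees.get, reverse=True)
    cut = len(ranked) // 2
    high, low = ranked[:cut], ranked[cut:]
    n_high = min(len(high), round(n * high_fraction))
    return rng.sample(high, n_high) + rng.sample(low, n - n_high)

# Hypothetical node-degree map:
degrees = {f"v{i}": d for i, d in enumerate([9, 7, 6, 5, 3, 2, 2, 1])}
sample = degree_stratified_sample(degrees, n=4)
```

Compared with uniform node sampling, this guarantees the hubs carrying most structural information are represented.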
Stratified scaffold design for engineering composite tissues.
Mosher, Christopher Z; Spalazzi, Jeffrey P; Lu, Helen H
2015-08-01
A significant challenge to orthopaedic soft tissue repair is the biological fixation of autologous or allogeneic grafts with bone, whereby the lack of functional integration between such grafts and host bone has limited the clinical success of anterior cruciate ligament (ACL) and other common soft tissue-based reconstructive grafts. The inability of current surgical reconstruction to restore the native fibrocartilaginous insertion between the ACL and the femur or tibia, which minimizes stress concentration and facilitates load transfer between the soft and hard tissues, compromises the long-term clinical functionality of these grafts. To enable integration, a stratified scaffold design that mimics the multiple tissue regions of the ACL interface (ligament-fibrocartilage-bone) represents a promising strategy for composite tissue formation. Moreover, distinct cellular organization and phase-specific matrix heterogeneity achieved through co- or tri-culture within the scaffold system can promote biomimetic multi-tissue regeneration. Here, we describe the methods for fabricating a tri-phasic scaffold intended for ligament-bone integration, as well as the tri-culture of fibroblasts, chondrocytes, and osteoblasts on the stratified scaffold for the formation of structurally contiguous and compositionally distinct regions of ligament, fibrocartilage and bone. The primary advantage of the tri-phasic scaffold is the recapitulation of the multi-tissue organization across the native interface through the layered design. Moreover, in addition to ease of fabrication, each scaffold phase is similar in polymer composition and therefore can be joined together by sintering, enabling the seamless integration of each region and avoiding delamination between scaffold layers.
Roberge, Cornelia; Wulff, Sören; Reese, Heather; Ståhl, Göran
2016-04-01
Many countries have a national forest inventory (NFI) designed to produce statistically sound estimates of forest parameters. However, this type of inventory may not provide reliable results for forest damage which usually affects only small parts of the forest in a country. For this reason, specially designed forest damage inventories are performed in many countries, sometimes in coordination with the NFIs. In this study, we evaluated a new approach for damage inventory where existing NFI data form the basis for two-phase sampling for stratification and remotely sensed auxiliary data are applied for further improvement of precision through post-stratification. We applied Monte Carlo sampling simulation to evaluate different sampling strategies linked to different damage scenarios. The use of existing NFI data in a two-phase sampling for stratification design resulted in a relative efficiency of 50 % or lower, i.e., the variance was at least halved compared to a simple random sample of the same size. With post-stratification based on simulated remotely sensed auxiliary data, there was additional improvement, which depended on the accuracy of the auxiliary data and the properties of the forest damage. In many cases, the relative efficiency was further reduced by as much as one-half. In conclusion, the results show that substantial gains in precision can be obtained by utilizing auxiliary information in forest damage surveys, through two-phase sampling, through post-stratification, and through the combination of these two approaches, i.e., post-stratified two-phase sampling for stratification.
Fuel Burning Rate Model for Stratified Charge Engine
SONG Jin'ou; JIANG Zejun; YAO Chunde; WANG Hongfu
2006-01-01
A zero-dimensional single-zone double-curve model is presented to predict fuel burning rate in stratified charge engines, and it is integrated with GT-Power to predict the overall performance of the stratified charge engines. The model consists of two exponential functions for calculating the fuel burning rate in different charge zones. The model factors are determined by a non-linear curve fitting technique, based on the experimental data obtained from 30 cases in middle and low loads. The results show good agreement between the measured and calculated cylinder pressures, and the deviation between calculated and measured cylinder pressures is less than 5%. The zero-dimensional single-zone double-curve model is successful in the combustion modeling for stratified charge engines.
Numerical Simulation on Stratified Flow over an Isolated Mountain Ridge
LI Ling; Shigeo Kimura
2007-01-01
The characteristics of stratified flow over an isolated mountain ridge have been investigated numerically. The two-dimensional model equations, based on the time-dependent Reynolds-averaged Navier-Stokes equations, are solved numerically using an implicit time integration in a fitted body grid arrangement to simulate stratified flow over an isolated ideally bell-shaped mountain. The simulation results are in good agreement with the existing corresponding analytical and approximate solutions. It is shown that for atmospheric conditions where non-hydrostatic effects become dominant, the model is able to reproduce typical flow features. The dispersion characteristics of gaseous pollutants in the stratified flow have also been studied. The dispersion patterns for two typical atmospheric conditions are compared. The results show that the presence of a gravity wave causes vertical stratification of the pollutant concentration and affects the diffusive characteristics of the pollutants.
Orientation sampling for dictionary-based diffraction pattern indexing methods
Singh, S.; De Graef, M.
2016-12-01
A general framework for dictionary-based indexing of diffraction patterns is presented. A uniform sampling method of orientation space using the cubochoric representation is introduced and used to derive an empirical relation between the average disorientation between neighboring sampling points and the number of grid points sampled along the semi-edge of the cubochoric cube. A method to uniformly sample misorientation iso-surfaces is also presented. This method is used to show that the dot product serves as a proxy for misorientation. Furthermore, it is shown that misorientation iso-surfaces in Rodrigues space are quadratic surfaces. Finally, using the concept of Riesz energies, it is shown that the sampling method results in a near optimal covering of orientation space.
Andreoni, Laura; Russo, Antonio Giampiero
2017-01-01
To describe an innovative algorithm to classify the general population into homogeneous groups of severity and complexity of disease and real needs, by using three dimensions: health, frailty, and disability. Retrospective cohort study. The study includes the population covered by the Agency for Health Protection of the Metropolitan Area of Milan (3.4 million inhabitants). We identified two cohorts of residents, the first at 01.01.2015 and the second at 01.01.2016, classified in four different and mutually exclusive groups based on health and social data of the previous year. We estimated prevalence by age of the four main groups and we studied the transition observed among groups from 2015 to 2016. The algorithm validation was performed using non-conditional logistic regression models to estimate the association with total mortality at increasing levels of severity through the odds ratio (OR) and corresponding 95% confidence intervals (95%CI). The model performance, i.e., its predictive power and calibration, was evaluated by means of the C-index and the Hosmer-Lemeshow test, respectively. A total of 19% of subjects is healthy (group A); 41.6% has non-specific access to the regional health system (group B); 17% is vulnerable (group C); and 22% has a chronic condition (group D). Combining chronic conditions with the frailty level, we classified the population into subgroups. The risk of death within a year increases linearly with the complexity of the health category and frailty level, with estimates increasing from 0.83 to 135.6, using the healthy subjects as reference. The evaluation of the overall predictive power of the model, calculated by the C-index, shows a value of 0.94. The calibration of the model evaluated using the Hosmer-Lemeshow test returns a value of 327.2 (χ2, 8 df, p-value ... models allocating health resources based on the real needs of patients.
Variance optimal sampling based estimation of subset sums
Cohen, Edith; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel
2008-01-01
From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present a reservoir sampling scheme providing variance optimal estimation of subset sums. More precisely, if we have seen $n$ items of the stream, then for any subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. In fact, the optimality is against any off-line sampling scheme tailored for the concrete set of items seen: no off-line scheme based on $k$ samples can perform better than our on-line scheme when it comes to average variance over any subset size. Our scheme has no positive covariances between any pair of item estimates. Also, our scheme can handle each new item of the stream in $O(\log k)$ time, which is optimal even on the word RAM.
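For context, the baseline that this variance-optimal scheme improves upon is classic uniform reservoir sampling (Algorithm R), sketched below; the paper's scheme additionally handles item weights and subset-sum estimation, which this sketch does not attempt.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Classic uniform reservoir sampling (Algorithm R): maintain a
    uniform random sample of size k from a stream of unknown length,
    using O(1) work per item.
    """
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randrange(i + 1)  # item i survives with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(1000), k=10)
```

After the stream ends, every item has had an equal chance of occupying one of the k reservoir slots.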
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat
Design-based inference in time-location sampling.
Leon, Lucie; Jauffret-Roustide, Marie; Le Strat, Yann
2015-07-01
Time-location sampling (TLS), also called time-space sampling or venue-based sampling is a sampling technique widely used in populations at high risk of infectious diseases. The principle is to reach individuals in places and at times where they gather. For example, men who have sex with men meet in gay venues at certain times of the day, and homeless people or drug users come together to take advantage of services provided to them (accommodation, care, meals). The statistical analysis of data coming from TLS surveys has been comprehensively discussed in the literature. Two issues of particular importance are the inclusion or not of sampling weights and how to deal with the frequency of venue attendance (FVA) of individuals during the course of the survey. The objective of this article is to present TLS in the context of sampling theory, to calculate sampling weights and to propose design-based inference taking into account the FVA. The properties of an estimator ignoring the FVA and of the design-based estimator are assessed and contrasted both through a simulation study and using real data from a recent cross-sectional survey conducted in France among drug users. We show that the estimators of prevalence or a total can be strongly biased if the FVA is ignored, while the design-based estimator taking FVA into account is unbiased even when declarative errors occur in the FVA.
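The core weighting idea (down-weighting frequent venue attenders, who are more likely to be intercepted) can be sketched as a simple ratio estimator; this is a simplified illustration of FVA adjustment, not the paper's exact estimator, and the respondent data are hypothetical.

```python
def tls_prevalence(outcomes, weights, fva):
    """Design-based prevalence estimate for time-location sampling.

    Each respondent i has a base sampling weight w_i and a reported
    frequency of venue attendance f_i; dividing w_i by f_i corrects
    for the higher inclusion probability of frequent attenders.
    """
    adj = [w / f for w, f in zip(weights, fva)]
    return sum(a * y for a, y in zip(adj, outcomes)) / sum(adj)

# Hypothetical respondents: outcome (0/1), base weight, venue visits per week
p = tls_prevalence([1, 0, 1, 0], [2.0, 2.0, 1.0, 1.0], [4, 1, 2, 1])
```

Ignoring `fva` here (setting all frequencies to 1) would tilt the estimate toward the outcomes of frequent attenders, which is the bias the paper quantifies.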
Comparison of metatranscriptomic samples based on k-tuple frequencies.
Ying Wang
BACKGROUND: The comparison of samples, or beta diversity, is one of the essential problems in ecological studies. Next generation sequencing (NGS) technologies make it possible to obtain large amounts of metagenomic and metatranscriptomic short read sequences across many microbial communities. De novo assembly of the short reads can be especially challenging because the number of genomes and their sequences are generally unknown and the coverage of each genome can be very low, where the traditional alignment-based sequence comparison methods cannot be used. Alignment-free approaches based on k-tuple frequencies, on the other hand, have yielded promising results for the comparison of metagenomic samples. However, it is not known if these approaches can be used for the comparison of metatranscriptome datasets and which dissimilarity measures perform the best. RESULTS: We applied several beta diversity measures based on k-tuple frequencies to real metatranscriptomic datasets from pyrosequencing 454 and Illumina sequencing platforms to evaluate their effectiveness for the clustering of metatranscriptomic samples, including three d2-type dissimilarity measures, one dissimilarity measure in CVTree, one relative entropy based measure S2 and three classical lp-norm distances. Results showed that the measure d2(S) can achieve superior performance on clustering metatranscriptomic samples into different groups under different sequencing depths for both 454 and Illumina datasets, recovering environmental gradients affecting microbial samples, classifying coexisting metagenomic and metatranscriptomic datasets, and being robust to sequencing errors. We also investigated the effects of tuple size and order of the background Markov model. A software pipeline to implement all the steps of analysis is built and is available at http://code.google.com/p/d2-tools/. CONCLUSIONS: The k-tuple based sequence signature measures can effectively reveal major groups and
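A minimal sketch of the k-tuple approach: count k-mer frequencies, then compare samples with a plain d2-type dissimilarity (one minus cosine similarity, rescaled to [0, 1]). This is the simplest member of the family; the paper's best-performing d2(S) additionally centers counts by a background Markov model, which is omitted here. The sequences are toy examples.

```python
import math
from collections import Counter
from itertools import product

def ktuple_freqs(seq, k):
    """Normalized k-tuple (k-mer) frequency vector over the ACGT alphabet."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return [counts["".join(t)] / total for t in product("ACGT", repeat=k)]

def d2_dissimilarity(x, y):
    """Plain d2-type dissimilarity: 0 for identical frequency vectors,
    0.5 for orthogonal ones (simplified; not the Markov-centered d2S).
    """
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return 0.5 * (1.0 - dot / (nx * ny))

d = d2_dissimilarity(ktuple_freqs("ACGTACGTAC", 2), ktuple_freqs("ACGTACGTAC", 2))
```

Because the measure depends only on k-tuple counts, it needs no assembly or alignment, which is what makes it usable on low-coverage metatranscriptomic reads.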
SAMURAI: Polar AUV-Based Autonomous Dexterous Sampling
Akin, D. L.; Roberts, B. J.; Smith, W.; Roderick, S.; Reves-Sohn, R.; Singh, H.
2006-12-01
While autonomous undersea vehicles are increasingly being used for surveying and mapping missions, as of yet there has been little concerted effort to create a system capable of performing physical sampling or other manipulation of the local environment. This type of activity has typically been performed under teleoperated control from ROVs, which provides high-bandwidth real-time human direction of the manipulation activities. Manipulation from an AUV will require a completely autonomous sampling system, which implies not only advanced technologies such as machine vision and autonomous target designation, but also dexterous robot manipulators to perform the actual sampling without human intervention. As part of the NASA Astrobiology Science and Technology for Exploring the Planets (ASTEP) program, the University of Maryland Space Systems Laboratory has been adapting and extending robotics technologies developed for spacecraft assembly and maintenance to the problem of autonomous sampling of biologicals and soil samples around hydrothermal vents. The Sub-polar ice Advanced Manipulator for Universal Sampling and Autonomous Intervention (SAMURAI) system is comprised of a 6000-meter capable six-degree-of-freedom dexterous manipulator, along with an autonomous vision system, multi-level control system, and sampling end effectors and storage mechanisms to allow collection of samples from vent fields. SAMURAI will be integrated onto the Woods Hole Oceanographic Institute (WHOI) Jaguar AUV, and used in the Arctic during the fall of 2007 for autonomous vent field sampling on the Gakkel Ridge. Under the current operations concept, the JAGUAR and PUMA AUVs will survey the water column and localize on hydrothermal vents. Early mapping missions will create photomosaics of the vents and local surroundings, allowing scientists on the mission to designate desirable sampling targets. Based on physical characteristics such as size, shape, and coloration, the targets will be loaded into the
Sampling-based Algorithms for Optimal Motion Planning
Karaman, Sertac
2011-01-01
During the last decade, sampling-based path planning algorithms, such as Probabilistic RoadMaps (PRM) and Rapidly-exploring Random Trees (RRT), have been shown to work well in practice and possess theoretical guarantees such as probabilistic completeness. However, little effort has been devoted to the formal analysis of the quality of the solution returned by such algorithms, e.g., as a function of the number of samples. The purpose of this paper is to fill this gap, by rigorously analyzing the asymptotic behavior of the cost of the solution returned by stochastic sampling-based algorithms as the number of samples increases. A number of negative results are provided, characterizing existing algorithms, e.g., showing that, under mild technical conditions, the cost of the solution returned by broadly used sampling-based algorithms converges almost surely to a non-optimal value. The main contribution of the paper is the introduction of new algorithms, namely, PRM* and RRT*, which are provably asymptotically opti...
Sample Size Requirements for Traditional and Regression-Based Norms.
Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas
2016-04-01
Test norms enable determining the position of an individual test taker in the group. The most frequently used approach to obtain test norms is traditional norming. Regression-based norming may be more efficient than traditional norming and is rapidly growing in popularity, but little is known about its technical properties. A simulation study was conducted to compare the sample size requirements for traditional and regression-based norming by examining the 95% interpercentile ranges for percentile estimates as a function of sample size, norming method, size of covariate effects on the test score, test length, and number of answer categories in an item. Provided the assumptions of the linear regression model hold in the data, for a subdivision of the total group into eight equal-size subgroups, we found that regression-based norming requires samples 2.5 to 5.5 times smaller than traditional norming. Sample size requirements are presented for each norming method, test length, and number of answer categories. We emphasize that additional research is needed to establish sample size requirements when the assumptions of the linear regression model are violated.
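The regression-based norming idea can be sketched with one covariate: regress the test score on age, then express a new score as a percentile of the normal distribution of residuals around the age-predicted mean. This assumes the linear model with homoscedastic normal residuals holds, as in the study's simulation conditions; the data below are hypothetical.

```python
from statistics import NormalDist, mean

def regression_norm_percentile(scores, ages, new_age, new_score):
    """Regression-based norming sketch: fit score ~ age by ordinary
    least squares, then locate new_score within a normal distribution
    centered on the age-predicted score with the residual SD.
    """
    n = len(scores)
    mx, my = mean(ages), mean(scores)
    sxx = sum((a - mx) ** 2 for a in ages)
    slope = sum((a - mx) * (s - my) for a, s in zip(ages, scores)) / sxx
    intercept = my - slope * mx
    residuals = [s - (intercept + slope * a) for a, s in zip(ages, scores)]
    sd = (sum(r * r for r in residuals) / (n - 2)) ** 0.5  # residual SD
    predicted = intercept + slope * new_age
    return 100.0 * NormalDist(predicted, sd).cdf(new_score)

# Hypothetical norming data: score rises with age
ages = [6, 7, 8, 9, 10]
scores = [12.5, 13.5, 16.5, 17.5, 20.0]
pct = regression_norm_percentile(scores, ages, new_age=8, new_score=16.0)
```

Because the fit borrows strength across all ages, far fewer cases per age group are needed than with traditional per-subgroup norming, which is the efficiency gain the simulation study quantifies.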
Reliability analysis method for slope stability based on sample weight
Zhi-gang YANG
2009-09-01
The single safety factor criterion for slope stability evaluation, derived from the rigid limit equilibrium method or finite element method (FEM), may not include some important information, especially for steep slopes with complex geological conditions. This paper presents a new reliability method that uses sample weight analysis. Based on the distribution characteristics of random variables, the minimal sample size of every random variable is extracted according to a small sample t-distribution under a certain expected value, and the weight coefficient of each extracted sample is considered to be its contribution to the random variables. Then, the weight coefficients of the random sample combinations are determined using the Bayes formula, and different sample combinations are taken as the input for slope stability analysis. According to one-to-one mapping between the input sample combination and the output safety coefficient, the reliability index of slope stability can be obtained with the multiplication principle. Slope stability analysis of the left bank of the Baihetan Project is used as an example, and the analysis results show that the present method is reasonable and practicable for the reliability analysis of steep slopes with complex geological conditions.
Advanced Technology-Based Low Cost Mars Sample Return Missions
Wallace, R. A.; Gamber, R. T.; Clark, B. C.
1995-01-01
Mars Sample Return (MSR) has for many years been considered one of the most ambitious as well as most scientifically interesting of the suite of desired future planetary missions. This paper defines low-cost MSR mission concepts based on several exciting new technologies planned for space missions launching over the next 10 years. Key to reducing cost is the use of advanced spacecraft and electronics technology.
A sampling-based approach to probabilistic pursuit evasion
Mahadevan, Aditya
2012-05-01
Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented with useful information to model interesting scenarios related to multi-agent interaction and coordination. © 2012 IEEE.
Race/Ethnicity, Poverty, Urban Stressors and Telomere Length in a Detroit Community-Based Sample
Geronimus, Arline T.; Pearson, Jay A.; Linnenbringer, Erin; Schulz, Amy J.; Reyes, Angela G.; Epel, Elissa S.; Lin, Jue; Blackburn, Elizabeth H.
2015-01-01
Residents of distressed urban areas suffer early aging-related disease and excess mortality. Using a community-based participatory research approach in a collaboration between social researchers and cellular biologists, we collected a unique data set of 239 black, white, or Mexican adults from a stratified, multi-stage probability sample of three Detroit neighborhoods. We drew venous blood and measured Telomere Length (TL), an indicator of stress-mediated biological aging, linking respondents’ TL to their community survey responses. We regressed TL on socioeconomic, psychosocial, neighborhood, and behavioral stressors, hypothesizing and finding an interaction between poverty and racial/ethnic group. Poor whites had shorter TL than nonpoor whites; poor and nonpoor blacks had equivalent TL; poor Mexicans had longer TL than nonpoor Mexicans. Findings suggest unobserved heterogeneity bias is an important threat to the validity of estimates of TL differences by race/ethnicity. They point to health impacts of social identity as contingent, the products of structurally-rooted biopsychosocial processes. PMID:25930147
Suksmono, Andriyan Bayu
2011-01-01
This paper proposes a new fBm (fractional Brownian motion) interpolation/reconstruction method from partially known samples based on CS (Compressive Sampling). Since 1/f property implies power law decay of the fBm spectrum, the fBm signals should be sparse in frequency domain. This property motivates the adoption of CS in the development of the reconstruction method. Hurst parameter H that occurs in the power law determines the sparsity level, therefore the CS reconstruction quality of an fBm signal for a given number of known subsamples will depend on H. However, the proposed method does not require the information of H to reconstruct the fBm signal from its partial samples. The method employs DFT (Discrete Fourier Transform) as the sparsity basis and a random matrix derived from known samples positions as the projection basis. Simulated fBm signals with various values of H are used to show the relationship between the Hurst parameter and the reconstruction quality. Additionally, US-DJIA (Dow Jones Industria...
Batch Mode Active Sampling based on Marginal Probability Distribution Matching.
Chattopadhyay, Rita; Wang, Zheng; Fan, Wei; Davidson, Ian; Panchanathan, Sethuraman; Ye, Jieping
2012-01-01
Active Learning is a machine learning and data mining technique that selects the most informative samples for labeling and uses them as training data; it is especially useful when there is a large amount of unlabeled data and labeling it is expensive. Recently, batch-mode active learning, where a set of samples is selected concurrently for labeling based on their collective merit, has attracted a lot of attention. The objective of batch-mode active learning is to select a set of informative samples so that a classifier learned on these samples has good generalization performance on the unlabeled data. Most of the existing batch-mode active learning methodologies try to achieve this by selecting samples based on varied criteria. In this paper we propose a novel criterion which achieves good generalization performance of a classifier by specifically selecting a set of query samples that minimizes the difference in distribution between the labeled and the unlabeled data, after annotation. We explicitly measure this difference based on all candidate subsets of the unlabeled data and select the best subset. The proposed objective is an NP-hard integer programming optimization problem. We provide two optimization techniques to solve this problem. In the first one, the problem is transformed into a convex quadratic programming problem, and in the second the problem is transformed into a linear programming problem. Our empirical studies using publicly available UCI datasets and a biomedical image dataset demonstrate the effectiveness of the proposed approach in comparison with the state-of-the-art batch-mode active learning methods. We also present two extensions of the proposed approach, which incorporate uncertainty of the predicted labels of the unlabeled data and transfer learning in the proposed formulation. Our empirical studies on UCI datasets show that incorporation of uncertainty information improves performance at later iterations while our studies on 20
Interrogating Bronchoalveolar Lavage Samples via Exclusion-Based Analyte Extraction.
Tokar, Jacob J; Warrick, Jay W; Guckenberger, David J; Sperger, Jamie M; Lang, Joshua M; Ferguson, J Scott; Beebe, David J
2017-06-01
Although average survival rates for lung cancer have improved, earlier and better diagnosis remains a priority. One promising approach to assisting earlier and safer diagnosis of lung lesions is bronchoalveolar lavage (BAL), which provides a sample of lung tissue as well as proteins and immune cells from the vicinity of the lesion, yet diagnostic sensitivity remains a challenge. Reproducible isolation of lung epithelia and multianalyte extraction have the potential to improve diagnostic sensitivity and provide new information for developing personalized therapeutic approaches. We present the use of a recently developed exclusion-based, solid-phase-extraction technique called SLIDE (Sliding Lid for Immobilized Droplet Extraction) to facilitate analysis of BAL samples. We developed a SLIDE protocol for lung epithelial cell extraction and biomarker staining of patient BALs, testing both EpCAM and Trop2 as capture antigens. We characterized captured cells using TTF1 and p40 as immunostaining biomarkers of adenocarcinoma and squamous cell carcinoma, respectively. We achieved up to 90% (EpCAM) and 84% (Trop2) extraction efficiency of representative tumor cell lines. We then used the platform to process two patient BAL samples in parallel within the same sample plate to demonstrate feasibility and observed that Trop2-based extraction potentially extracts more target cells than EpCAM-based extraction.
A research on fast FCM algorithm based on weighted sample
KUANG Ping; ZHU Qing-xin; WANG Ming-wen; CHEN Xu-dong; QING Li
2006-01-01
To improve the computational performance of the fuzzy C-means (FCM) algorithm when clustering large datasets, the concepts of equivalent samples and weighted samples, based on the eigenvalue distribution of the samples in the feature space, were introduced, and a novel fast clustering algorithm named the weighted fuzzy C-means (WFCM) algorithm, derived from the traditional FCM algorithm, was put forward. It was proved that the two clustering algorithms, WFCM and FCM, give equivalent cluster results on the same dataset. Furthermore, the WFCM algorithm has better computational performance than the ordinary FCM algorithm. An experiment on gray-image segmentation showed that the WFCM algorithm is a fast and effective clustering algorithm.
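A minimal sketch of the weighting idea, assuming the standard FCM update rule with per-sample weights (the paper's eigenvalue-based construction of the weights is not reproduced here; the toy data and weights are illustrative):

```python
import numpy as np

def weighted_fcm(X, w, c, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means where sample i carries weight w_i, so w_i identical
    points (e.g. pixels sharing a grey level) collapse into one sample."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)].astype(float)
    for _ in range(n_iter):
        # Distances of every sample to every centre, floored for stability.
        d = np.maximum(np.linalg.norm(X[None] - centers[:, None], axis=2), 1e-12)
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=0)                 # fuzzy memberships; columns sum to 1
        Um = (U ** m) * w                  # sample weights scale the memberships
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
    return centers, U

# Two separated groups; each point stands for five identical samples.
X = np.array([[0.0], [1.0], [9.0], [10.0]])
w = np.array([5.0, 5.0, 5.0, 5.0])
centers, U = weighted_fcm(X, w, c=2)
print(np.sort(centers.ravel()))            # close to [0.5, 9.5]
```

Because equal weights multiply both the numerator and denominator of the center update, clustering the four collapsed points here gives the same centers as running plain FCM on all twenty duplicated points, which is the source of the speed-up.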
SVM Intrusion Detection Model Based on Compressed Sampling
Shanxiong Chen
2016-01-01
Intrusion detection needs to deal with a large amount of data; in particular, network intrusion detection has to inspect all network data. Massive data processing is the bottleneck of network software and hardware equipment in intrusion detection. If we can reduce the data dimension at the sampling stage and directly obtain the feature information of network data, detection efficiency can be improved greatly. In this paper, we present an SVM intrusion detection model based on compressive sampling. We use the compressed sampling method from compressed sensing theory to compress the features of the network data flow and obtain a refined sparse representation. After that, an SVM is used to classify the compression results. This method can quickly detect anomalous network behavior without reducing classification accuracy.
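The pipeline in the abstract, compress first and classify second, can be sketched as follows. The data are synthetic "traffic records", and a nearest-centroid rule stands in for the SVM so the example needs only NumPy; the dimensions and class separation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic high-dimensional "network records" for two classes.
n, dim, k = 200, 500, 20
X0 = rng.normal(0.0, 1.0, (n, dim))            # normal traffic
X1 = rng.normal(1.0, 1.0, (n, dim))            # anomalous traffic
# Compressed-sampling stage: one random Gaussian measurement matrix maps
# each record from 500 features down to 20 measurements.
Phi = rng.normal(0.0, 1.0 / np.sqrt(k), (dim, k))
Y0, Y1 = X0 @ Phi, X1 @ Phi
# Classification stage (a nearest-centroid rule stands in for the SVM to
# keep the sketch dependency-free).
c0, c1 = Y0[:100].mean(axis=0), Y1[:100].mean(axis=0)
test_set = np.vstack([Y0[100:], Y1[100:]])
labels = np.r_[np.zeros(100), np.ones(100)]
pred = (np.linalg.norm(test_set - c1, axis=1)
        < np.linalg.norm(test_set - c0, axis=1)).astype(float)
accuracy = (pred == labels).mean()
print(accuracy)                                # high despite 25x compression
```

Random projections approximately preserve pairwise distances (the Johnson-Lindenstrauss effect), which is why a classifier trained on the 20 measurements still separates the classes.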
Blue noise sampling method based on mixture distance
Qin, Hongxing; Hong, XiaoYang; Xiao, Bin; Zhang, Shaoting; Wang, Guoyin
2014-11-01
Blue noise sampling is a core component of a large number of computer graphics applications such as imaging, modeling, animation, and rendering. However, most existing methods concentrate on preserving spatial-domain properties like density and anisotropy while ignoring feature preservation. To solve this problem, we present a new distance metric for blue noise sampling called the mixture distance, a combination of geodesic and feature distances. Based on the mixture distance, both the blue noise property and features can be preserved by controlling the ratio of the geodesic distance to the feature distance. To meet the differing requirements of various applications, an adaptive parameter adjustment is also proposed to balance the preservation of features and spatial properties. Finally, an implementation on a graphics processing unit is introduced to improve computational efficiency. The efficacy of the method is demonstrated by results in image stippling, surface sampling, and remeshing.
MOMENT-METHOD ESTIMATION BASED ON CENSORED SAMPLE
NI Zhongxin; FEI Heliang
2005-01-01
In reliability theory and survival analysis, the problem of point estimation based on censored samples has been discussed in many papers. However, most of them focus on the MLE, BLUE, etc.; little work has been done on moment-method estimation in the censoring case. To make the method of moments systematic and unified, this paper puts forward moment-method estimators (MEs) and modified moment-method estimators (MMEs) of the parameters based on type I and type II censored samples, involving the mean residual lifetime. Strong consistency and other properties are proved. Notably, in the exponential distribution the proposed moment-method estimators are exactly the MLEs. A simulation study shows that, in terms of bias and mean squared error, the MEs and MMEs are better than the MLEs and the "pseudo complete sample" technique introduced in Whitten et al. (1988), and the superiority of the MEs is especially conspicuous when the sample is heavily censored.
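For the exponential case the abstract singles out, the censored-sample estimator can be written down explicitly. The sketch below assumes type II censoring of Exp(rate) lifetimes and uses the total-time-on-test statistic; the rate, sample size, and censoring level are illustrative.

```python
import numpy as np

def exp_rate_type2(observed, n):
    """Rate estimate for Exp(rate) lifetimes under type II censoring:
    only the r smallest of n lifetimes are observed.  r divided by the
    total time on test is the MLE, and for the exponential it coincides
    with the moment-method estimator, as the abstract notes."""
    x = np.sort(np.asarray(observed, dtype=float))
    r = len(x)
    # Every unobserved unit survived at least to the r-th failure time.
    total_time_on_test = x.sum() + (n - r) * x[-1]
    return r / total_time_on_test

rng = np.random.default_rng(0)
true_rate, n, r = 2.0, 5000, 1000              # heavily censored: 80% unseen
observed = np.sort(rng.exponential(1.0 / true_rate, n))[:r]
print(exp_rate_type2(observed, n))             # close to 2.0
```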
LTRMP Fisheries Data - Stratified Random and Fixed Site Sampling
U.S. Geological Survey, Department of the Interior — The Long Term Resource Monitoring Program's (LTRMP) annual fish monitoring began on the Upper Mississippi and Illinois Rivers in 1989. During the first two years...
Prototypic Features of Loneliness in a Stratified Sample of Adolescents
Lasgaard, Mathias; Elklit, Ask
2009-01-01
Dominant theoretical approaches in loneliness research emphasize the value of personality characteristics in explaining loneliness. The present study examines whether dysfunctional social strategies and attributions in lonely adolescents can be explained by personality characteristics. [...] loneliness independent of personality characteristics, demographics, and social desirability. The study indicates that dysfunctional strategies and attributions in affiliative situations are directly related to loneliness in adolescence. These strategies and attributions may preclude lonely adolescents from [...] guidance and intervention. Thus, professionals need to be knowledgeable about prototypic features of loneliness in addition to employing a pro-active approach when assisting adolescents who display prototypic features.
Sample preparation issues in NMR-based plant metabolomics: optimisation for Vitis wood samples.
Halabalaki, Maria; Bertrand, Samuel; Stefanou, Anna; Gindro, Katia; Kostidis, Sarantos; Mikros, Emmanuel; Skaltsounis, Leandros A; Wolfender, Jean-Luc
2014-01-01
Nuclear magnetic resonance (NMR) is one of the most commonly used analytical techniques in plant metabolomics. Although this technique is very reproducible and simple to implement, sample preparation procedures have a great impact on the quality of the metabolomics data. Investigation of different sample preparation methods and establishment of an optimised protocol for untargeted NMR-based metabolomics of Vitis vinifera L. wood samples. Wood samples from two different cultivars of V. vinifera with well-defined phenotypes (Gamaret and 2091) were selected as reference materials. Different extraction solvents (successively, dichloromethane, methanol and water, as well as ethyl acetate and 7:3 methanol-water (v/v)) and deuterated solvents (methanol-d4, 7:3 chloroform-d-methanol-d4 (v/v), dimethylsulphoxide-d6 and 9:1 dimethylsulphoxide-d6-water-d2 (v/v)) were evaluated for NMR acquisition, and the spectral quality was compared. The optimal extract concentration, chemical shift stability and peak area repeatability were also investigated. Ethyl acetate was found to be the most satisfactory solvent for the extraction of all representative chemical classes of secondary metabolites in V. vinifera wood. The optimal concentration of dried extract was 10 mg/mL and 7:3 chloroform-d-methanol-d4 (v/v) was the most suitable solvent system for NMR analysis. Multivariate data analysis was used to estimate the biological variation and clustering between different cultivars. Close attention should be paid to all required procedures before NMR analysis, especially to the selection of an extraction solvent and a deuterated solvent system to perform an extensive metabolomic survey of the specific matrix. Copyright © 2014 John Wiley & Sons, Ltd.
DNA barcoding: error rates based on comprehensive sampling.
Christopher P Meyer
2005-12-01
DNA barcoding has attracted attention with promises to aid in species identification and discovery; however, few well-sampled datasets are available to test its performance. We provide the first examination of barcoding performance in a comprehensively sampled, diverse group (cypraeid marine gastropods, or cowries). We utilize previous methods for testing performance and employ a novel phylogenetic approach to calculate intraspecific variation and interspecific divergence. Error rates are estimated for (1) identifying samples against a well-characterized phylogeny, and (2) assisting in species discovery for partially known groups. We find that the lowest overall error for species identification is 4%. In contrast, barcoding performs poorly in incompletely sampled groups. Here, species delineation relies on the use of thresholds, set to differentiate between intraspecific variation and interspecific divergence. Whereas proponents envision a "barcoding gap" between the two, we find substantial overlap, leading to minimal error rates of approximately 17% in cowries. Moreover, error rates double if only traditionally recognized species are analyzed. Thus, DNA barcoding holds promise for identification in taxonomically well-understood and thoroughly sampled clades. However, the use of thresholds does not bode well for delineating closely related species in taxonomically understudied groups. The promise of barcoding will be realized only if based on solid taxonomic foundations.
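The threshold problem described above can be made concrete with a small simulation: given overlapping intraspecific and interspecific distance distributions (synthetic gamma draws here, not the cowrie data), no cutoff separates the two kinds of pairs without error.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic pairwise distances with deliberate overlap between
# intraspecific variation and interspecific divergence, i.e. no clean
# "barcoding gap", as the study reports for cowries.
intra = rng.gamma(shape=2.0, scale=0.005, size=4000)   # within species
inter = rng.gamma(shape=4.0, scale=0.010, size=4000)   # between species

def threshold_error(t):
    false_lump = (inter < t).mean()    # distinct species merged
    false_split = (intra > t).mean()   # one species split in two
    return 0.5 * (false_lump + false_split)

thresholds = np.linspace(0.0, 0.1, 201)
errors = np.array([threshold_error(t) for t in thresholds])
best = thresholds[int(np.argmin(errors))]
print(best, errors.min())   # the best threshold still misclassifies pairs
```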
Optimal stratification of item pools in α-stratified computerized adaptive testing
Chang, Hua-Hua; Linden, van der Wim J.
2003-01-01
A method based on 0-1 linear programming (LP) is presented to stratify an item pool optimally for use in α-stratified adaptive testing. Because the 0-1 LP model belongs to the subclass of models with a network flow structure, efficient solutions are possible. The method is applied to a previous item
Missile trajectory shaping using sampling-based path planning
Pharpatara, Pawit; Pepy, Romain; Hérissé, Bruno; Bestaoui, Yasmina
2013-01-01
This paper presents missile guidance as a complex robotic problem: a hybrid non-linear system moving in a heterogeneous environment. The proposed solution combines a sampling-based path planner, Dubins curves, and a locally optimal guidance law. This algorithm aims to find feasible trajectories that anticipate future flight conditions, especially the loss of manoeuvrability at high altitude. Simulated results demonstrate the substantial performance imp...
Appraisal of jump distributions in ensemble-based sampling algorithms
Dejanic, Sanda; Scheidegger, Andreas; Rieckermann, Jörg; Albert, Carlo
2017-04-01
Sampling Bayesian posteriors of model parameters is often required for making model-based probabilistic predictions. For complex environmental models, standard Markov chain Monte Carlo (MCMC) methods are often infeasible because they require too many sequential model runs. Therefore, we focused on ensemble methods that use many Markov chains in parallel, since they can be run on modern cluster architectures. Little is known about how to choose the best-performing sampler for a given application. A poor choice can lead to an inappropriate representation of posterior knowledge. We assessed two different jump moves, the stretch and the differential evolution move, underlying, respectively, the software packages EMCEE and DREAM, which are popular in different scientific communities. For the assessment, we used analytical posteriors with features as they often occur in real posteriors, namely high dimensionality, strong non-linear correlations or multimodality. For posteriors with non-linear features, standard convergence diagnostics based on sample means can be insufficient. Therefore, we resorted to an entropy-based convergence measure. We assessed the samplers by means of their convergence speed, robustness and effective sample sizes. For posteriors with strongly non-linear features, we found that the stretch move outperforms the differential evolution move, w.r.t. all three aspects.
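The stretch move itself is compact enough to sketch. The following is a minimal NumPy implementation of the affine-invariant stretch move on a toy correlated 2-D Gaussian posterior; it is not the EMCEE code, and the walker count, step count, and target are illustrative choices.

```python
import numpy as np

COV = np.array([[1.0, 0.8], [0.8, 1.0]])       # target "posterior" covariance
COV_INV = np.linalg.inv(COV)

def log_post(x):
    # Correlated 2-D Gaussian standing in for a real posterior.
    return -0.5 * x @ COV_INV @ x

def stretch_sampler(log_p, dim=2, n_walkers=50, n_steps=1500, a=2.0, seed=0):
    """Affine-invariant ensemble sampler built on the stretch move
    (the jump underlying the EMCEE package)."""
    rng = np.random.default_rng(seed)
    walkers = rng.normal(size=(n_walkers, dim))
    kept = []
    for step in range(n_steps):
        for i in range(n_walkers):
            j = (i + rng.integers(1, n_walkers)) % n_walkers  # another walker
            z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a     # g(z) ~ 1/sqrt(z)
            proposal = walkers[j] + z * (walkers[i] - walkers[j])
            log_accept = ((dim - 1) * np.log(z)
                          + log_p(proposal) - log_p(walkers[i]))
            if np.log(rng.random()) < log_accept:
                walkers[i] = proposal
        if step >= n_steps // 2:                              # discard burn-in
            kept.append(walkers.copy())
    return np.concatenate(kept)

samples = stretch_sampler(log_post)
print(np.cov(samples.T))   # approaches COV
```

Because the proposal stretches walker i along the line through another walker j, the move is invariant under affine transformations, which is what makes it effective on strongly correlated posteriors like this one.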
Improved pulse laser ranging algorithm based on high speed sampling
Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang
2016-10-01
Narrow-pulse laser ranging achieves long-range target detection using laser pulses with low beam divergence. Pulse laser ranging is widely used in the military, industrial, civil, engineering, and transportation fields. In this paper, an improved narrow-pulse laser ranging algorithm based on high speed sampling is studied. First, theoretical simulation models of the laser emission and the pulse laser ranging algorithm were built and analyzed, and an improved ranging algorithm was developed that combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system, comprising a laser diode, a laser detector, and a high-sample-rate data logging circuit, was set up to implement the improved algorithm. Subsequently, using the Verilog HDL language, the improved algorithm fusing the matched filter and CFD algorithms was implemented in an FPGA chip. Finally, a laser ranging experiment was carried out with the hardware system to compare the ranging performance of the improved algorithm against the matched filter and CFD algorithms alone. The results demonstrate that the hardware system realized high speed processing and high speed sampling data transmission, and that the improved algorithm achieves 0.3 m ranging precision, consistent with the theoretical simulation.
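The CFD step can be illustrated in isolation. The sketch below (NumPy, with an assumed Gaussian return-pulse shape and illustrative fraction and delay) shows why constant fraction discrimination is preferred over a fixed threshold: the zero crossing of the constant-fraction signal does not move with pulse amplitude.

```python
import numpy as np

def cfd_time(pulse, frac=0.4, delay=10):
    """Zero-crossing time (in samples) of the constant-fraction signal
    frac*pulse(t) - pulse(t - delay), with linear interpolation.  The
    crossing is independent of pulse amplitude, unlike a fixed threshold."""
    delayed = np.r_[np.zeros(delay), pulse[:-delay]]
    s = frac * pulse - delayed
    peak = int(np.argmax(s))
    for i in range(peak, len(s) - 1):
        if s[i] > 0.0 >= s[i + 1]:              # + to - crossing
            return i + s[i] / (s[i] - s[i + 1])
    return None

t = np.arange(400, dtype=float)
shape = np.exp(-0.5 * ((t - 200.0) / 15.0) ** 2)  # normalized return pulse
weak, strong = cfd_time(1.0 * shape), cfd_time(7.5 * shape)
print(weak, strong)   # identical: CFD removes amplitude walk
```

Since s(t) = A * (frac*g(t) - g(t - delay)) for a pulse A*g(t), scaling the amplitude A scales the whole bipolar signal but leaves its zero crossing fixed, which is exactly the "walk-free" timing property exploited here.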
A haplotype inference algorithm for trios based on deterministic sampling
Iliadis Alexandros
2010-08-01
Background: In genome-wide association studies, thousands of individuals are genotyped in hundreds of thousands of single nucleotide polymorphisms (SNPs). Statistical power can be increased when haplotypes, rather than three-valued genotypes, are used in analysis, so the problem of haplotype phase inference (phasing) is particularly relevant. Several phasing algorithms have been developed for data from unrelated individuals, based on different models, some of which have been extended to father-mother-child "trio" data. Results: We introduce a technique for phasing trio datasets using a tree-based deterministic sampling scheme. We have compared our method with the publicly available algorithms PHASE v2.1, BEAGLE v3.0.2 and 2SNP v1.7 on datasets of varying numbers of markers and trios. We found that the computational complexity of PHASE makes it prohibitive for routine use; on the other hand 2SNP, though the fastest method for small datasets, was significantly inaccurate. We have shown that our method outperforms BEAGLE in terms of speed and accuracy for small to intermediate dataset sizes, in terms of number of trios, for all marker sizes examined. Our method is implemented in the "Tree-Based Deterministic Sampling" (TDS) package, available for download at http://www.ee.columbia.edu/~anastas/tds. Conclusions: Using a tree-based deterministic sampling technique, we present an intuitive and conceptually simple phasing algorithm for trio data. The trade-off between speed and accuracy achieved by our algorithm makes it a strong candidate for routine use on trio datasets.
LÉVY-BASED ERROR PREDICTION IN CIRCULAR SYSTEMATIC SAMPLING
Kristjana Ýr Jónsdóttir
2013-06-01
In the present paper, Lévy-based error prediction in circular systematic sampling is developed. A model-based statistical setting as in Hobolth and Jensen (2002) is used, but the assumption that the measurement function is Gaussian is relaxed. The measurement function is represented as a periodic stationary stochastic process X obtained by kernel smoothing of a Lévy basis. The process X may have an arbitrary covariance function. The distribution of the error predictor, based on measurements in n systematic directions, is derived. Statistical inference is developed for the model parameters in the case where the covariance function follows the celebrated p-order covariance model.
Dense mesh sampling for video-based facial animation
Peszor, Damian; Wojciechowska, Marzena
2016-06-01
The paper describes an approach for selecting feature points on a three-dimensional triangle mesh obtained from several video footages using various techniques. The approach has a dual purpose. First, it minimizes the data stored for facial animation: instead of storing the position of each vertex in each frame, one can store only a small subset of vertices per frame and calculate the positions of the others from the subset. Second, it selects feature points that can be used for anthropometry-based retargeting of recorded mimicry to another model, with a sampling density beyond what can be achieved with marker-based performance capture techniques. The developed approach was successfully tested on artificial models, models constructed using a structured light scanner, and models constructed from video footage using stereophotogrammetry.
Cooperative Sequential Spectrum Sensing Based on Level-triggered Sampling
Yilmaz, Yasin; Wang, Xiaodong
2011-01-01
We propose a new framework for cooperative spectrum sensing in cognitive radio networks, that is based on a novel class of non-uniform samplers, called the event-triggered samplers, and sequential detection. In the proposed scheme, each secondary user computes its local sensing decision statistic based on its own channel output; and whenever such decision statistic crosses certain predefined threshold values, the secondary user will send one (or several) bit of information to the fusion center. The fusion center asynchronously receives the bits from different secondary users and updates the global sensing decision statistic to perform a sequential probability ratio test (SPRT), to reach a sensing decision. We provide an asymptotic analysis for the above scheme, and under different conditions, we compare it against the cooperative sensing scheme that is based on traditional uniform sampling and sequential detection. Simulation results show that the proposed scheme, using even 1 bit, can outperform its uniform ...
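The SPRT at the fusion center can be sketched for the simplest case of Gaussian observations with known variance, with thresholds set by Wald's approximations. The signal model and parameter values here are illustrative, not the paper's level-triggered scheme.

```python
import numpy as np

def sprt(stream, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test for the mean of Gaussian
    observations, H0: mu=mu0 vs H1: mu=mu1; returns (decision, n_used)."""
    upper = np.log((1.0 - beta) / alpha)        # cross -> accept H1
    lower = np.log(beta / (1.0 - alpha))        # cross -> accept H0
    llr = 0.0
    for n, x in enumerate(stream, start=1):
        # Per-sample Gaussian log-likelihood ratio.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= upper:
            return 1, n
        if llr <= lower:
            return 0, n
    return None, n                               # undecided within the budget

rng = np.random.default_rng(0)
# Channel energies with a primary user present (mean 1) vs noise (mean 0).
decision, n_used = sprt(rng.normal(1.0, 1.0, 1000), 0.0, 1.0, 1.0)
print(decision, n_used)   # typically decides after a few tens of samples
```

The sequential test stops as soon as the evidence is strong enough, which is the property the level-triggered sampling scheme in the abstract exploits to keep communication to the fusion center sparse.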
Deirdre P Cronin-Fenton; Margaret M Mooney; Limin X Clegg; Linda C Harlan
2008-01-01
AIM: To examine the extent of use of specific therapies in clinical practice, and their relationship to therapies validated in clinical trials. METHODS: The US National Cancer Institute's Patterns of Care study was used to examine therapies and survival of patients diagnosed in 2001 with histologically confirmed gastroesophageal adenocarcinoma (n = 1356). The study re-abstracted data and verified therapy with treating physicians for a population-based stratified random sample. RESULTS: Approximately 62% of patients had stomach adenocarcinoma (SAC), while 22% had gastric-cardia adenocarcinoma (GCA), and 16% lower esophageal adenocarcinoma (EAC). Stage IV/unstaged esophageal cancer patients were most likely, and stage I-III stomach cancer patients least likely, to receive chemotherapy as all or part of their therapy; gastric-cardia patients received chemotherapy at a rate between these two. In multivariable analysis by anatomic site, patients 70 years and older were significantly less likely than younger patients to receive chemotherapy alone or chemoradiation, for all three anatomic sites. Among esophageal and stomach cancer patients, receipt of chemotherapy was associated with lower mortality, but no association was found among gastric-cardia patients. CONCLUSION: This study highlights the relatively low use of clinical-trial-validated anti-cancer therapies in community practice. Use of chemotherapy-based treatment was associated with lower mortality, dependent on anatomic site. Findings suggest that physicians treat lower esophageal and SAC as two distinct entities, while gastric-cardia patients receive a mix of the treatment strategies employed for the two other sites.
A random spatial sampling method in a rural developing nation
Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas
2014-01-01
Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...
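The variance advantage of stratified random sampling over simple random sampling is easy to demonstrate with synthetic data; the stratum means, sizes, and proportional allocation below are assumptions chosen for illustration, not the study's design.

```python
import numpy as np

rng = np.random.default_rng(0)
# A population whose strata differ strongly in the outcome, mimicking
# e.g. urban, peri-urban and rural communities (all values synthetic).
strata = [rng.normal(mu, 1.0, size)
          for mu, size in [(2.0, 6000), (5.0, 3000), (9.0, 1000)]]
population = np.concatenate(strata)
true_mean = population.mean()
total = len(population)

def srs_estimate(n):
    # Simple random sample of the whole population.
    return rng.choice(population, n, replace=False).mean()

def stratified_estimate(n):
    # Proportional allocation: each stratum sampled in proportion to size.
    est = 0.0
    for s in strata:
        n_h = max(1, round(n * len(s) / total))
        est += (len(s) / total) * rng.choice(s, n_h, replace=False).mean()
    return est

reps = 400
srs_err = np.std([srs_estimate(100) - true_mean for _ in range(reps)])
strat_err = np.std([stratified_estimate(100) - true_mean for _ in range(reps)])
print(srs_err, strat_err)   # stratification shrinks the sampling error
```

Stratification removes the between-stratum component of the variance, leaving only within-stratum variation, which is why the second error is markedly smaller at the same total sample size.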
Thermals in stratified regions of the ISM
Rodriguez-Gonzalez, Ary
2013-01-01
We present a model of a "thermal" (i.e., a hot bubble) rising within an exponentially stratified region of the ISM. This model includes terms representing the ram pressure braking and the entrainment of environmental gas into the thermal. We then calibrate the free parameters associated with these two terms through a comparison with 3D numerical simulations of a rising bubble. Finally, we apply our "thermal" model to the case of a hot bubble produced by a SN within the stratified ISM of the Galactic disk.
On Stratified Vortex Motions under Gravity.
2014-09-26
AD-A156 930. On Stratified Vortex Motions under Gravity. Y. T. Fung, Fluid Dynamics Branch, Marine Technology Division, Naval Research Laboratory, Washington, DC. NRL Memorandum Report NRL-MR-5564, 20 June 1985. Unclassified.
THERMALS IN STRATIFIED REGIONS OF THE ISM
A. Rodríguez-González
2013-01-01
We present a model of a "thermal" (i.e., a hot bubble) rising within an exponentially stratified region of the ISM. This model includes terms representing the ram pressure braking and the entrainment of environmental gas into the thermal. We then calibrate the free parameters associated with these two terms through a comparison with 3D numerical simulations of a rising bubble. Finally, we apply our "thermal" model to the case of a hot bubble produced by a SN within the stratified ISM of the Galactic disk.
The effect of sampling technique on PCR-based bacteriological results of bovine milk samples.
Hiitiö, Heidi; Simojoki, Heli; Kalmus, Piret; Holopainen, Jani; Pyörälä, Satu; Taponen, Suvi
2016-08-01
The aim of the study was to evaluate the effect of sampling technique on the microbiological results of bovine milk samples using multiplex real-time PCR. Comparison was made between a technique where the milk sample was taken directly from the udder cistern of the udder quarter using a needle and vacuum tube and conventional sampling. The effect of different cycle threshold (Ct) cutoff limits on the results was also tested to estimate the amount of amplified DNA in the samples. A total of 113 quarters from 53 cows were tested pairwise using both techniques, and each sample was studied with real-time PCR. Sampling from the udder cistern reduced the number of species per sample compared with conventional sampling. In conventional samples, the number of positive Staphylococcus spp. results was over twice that of samples taken with the needle technique, indicating that most of the Staphylococcus spp. originated from the teat or environmental sources. The Ct values also showed that Staphylococcus spp. were present in most samples only in low numbers. Routine use of multiplex real-time PCR in mastitis diagnostics could benefit from critical evaluation of positive Staphylococcus spp. results with Ct values between 34.0 and 37.0. Our results emphasize the importance of a careful aseptic milk sampling technique and a microbiologically positive result for a milk sample should not be automatically interpreted as an intramammary infection or mastitis. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Recognition of human face based on improved multi-sample
LIU Xia; LI Lei-lei; LI Ting-jun; LIU Lu; ZHANG Ying
2009-01-01
To solve the problem caused by illumination variation in human face recognition, we put forward a face recognition algorithm based on improved multi-sample processing. In this algorithm, the face image is processed with Retinex theory, and a Gabor filter is adopted to perform feature extraction. The experimental results show that applying Retinex theory improves the recognition accuracy and makes the algorithm more robust to illumination variation, while the Gabor filter is effective and accurate for extracting usable local facial features. The proposed algorithm is shown to have good recognition accuracy and to be stable under illumination variation.
Assessing iron dynamics in the release from a stratified reservoir
Ashby, S.L.; Faulkner, S.P.; Gambrell, R.P.; Smith, B.A.
2004-01-01
Field and laboratory studies were conducted to describe the fate of total, dissolved, and ferrous (Fe2+) iron in the release from a stratified reservoir with an anoxic hypolimnion. Concentrations of total iron in the tailwater indicated a first-order removal process during a low-flow release (0.6 m3 sec-1), yet negligible loss was observed during a period of increased discharge (2.8 m3 sec-1). Dissolved and ferrous iron concentrations in the tailwater were highly variable during both release regimes and did not follow responses based on theoretical predictions. Ferrous iron concentrations in unfiltered samples were consistently greater than concentrations observed in samples filtered separately through 0.4, 0.2, and 0.1 µm filters. Total iron removal in laboratory studies followed first-order kinetics, but at twice the rate (0.077 mg L-1 hr-1) observed during low-flow discharge in the tailwater (0.036 mg L-1 hr-1). Dissolved and ferrous iron losses in laboratory studies were rapid (≈75% in the first 15 minutes and 95% within 1 hour), followed theoretical predictions, and were much faster than observations in the tailwater (≈30% within the first hour). The presence of particulate forms of ferrous iron in the field and differences in removal rates observed in field and laboratory studies indicate a need for improved field assessment techniques and consideration of complexation reactions when assessing the dynamics of iron in reservoir releases and downstream impacts as a result of operation regimes. Copyright © North American Lake Management Society 2004.
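The first-order removal fit reported for the laboratory data can be illustrated with a log-linear regression on synthetic concentration measurements; the rate constant (set near the reported laboratory value of roughly 0.077 per hour), initial concentration, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_k, c0 = 0.077, 1.2          # rate per hour and mg/L; illustrative values
t = np.arange(0.0, 24.0, 2.0)    # samples every two hours over a day
# First-order decay C(t) = C0 * exp(-k t), with multiplicative noise.
conc = c0 * np.exp(-true_k * t) * np.exp(rng.normal(0.0, 0.02, t.size))

# log C = log C0 - k t, so a straight-line fit to log-concentration
# recovers the first-order removal rate.
slope, intercept = np.polyfit(t, np.log(conc), 1)
print(-slope)                    # close to 0.077
```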
Variational level set segmentation for forest based on MCMC sampling
Yang, Tie-Jun; Huang, Lin; Jiang, Chuan-xian; Nong, Jian
2014-11-01
Environmental protection is one of the themes of today's world. The forest is a recycler of carbon dioxide and a natural oxygen bar. Protecting forests and monitoring forest growth are long-term tasks of environmental protection. Automatically estimating forest coverage from optical remote sensing images lets us track the status of a region's forests in a timely manner, freed from tedious manual statistics. To address the computational complexity of global optimization via convexification, this paper proposes a level set segmentation method based on Markov chain Monte Carlo (MCMC) sampling and applies it to forest segmentation in remote sensing images. The presented method requires no convexity transformation of the target energy functional; instead it uses an MCMC sampling method with global optimization capability, which also avoids the local minima that gradient descent may encounter. The paper makes three major contributions. First, with MCMC sampling, convexity of the energy functional is no longer necessary and global optimization can still be achieved. Second, using data (texture) and knowledge (a priori color) to guide the construction of the Markov chain significantly improves the chain's convergence rate. Finally, a level set segmentation method for forests integrating a priori color and texture is proposed. Experiments show that our method can efficiently and accurately segment forest in remote sensing images.
CdTe detector based PIXE mapping of geological samples
Chaves, P.C., E-mail: cchaves@ctn.ist.utl.pt [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal); Taborda, A. [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal); Oliveira, D.P.S. de [Laboratório Nacional de Energia e Geologia (LNEG), Apartado 7586, 2611-901 Alfragide (Portugal); Reis, M.A. [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal)
2014-01-01
A sample collected from a borehole drilled approximately 10 km ESE of Bragança, Trás-os-Montes, was analysed by standard and high energy PIXE at both CTN (previously ITN) PIXE setups. The sample is a fine-grained metapyroxenite, grading to coarse-grained at the base, with disseminated sulphides and fine veinlets of pyrrhotite and pyrite. Matrix composition was obtained at the standard PIXE setup using a 1.25 MeV H⁺ beam at three different spots. Medium and high Z elemental concentrations were then determined using the DT2fit and DT2simul codes (Reis et al., 2008, 2013 [1,2]) on the spectra obtained in the High Resolution and High Energy (HRHE)-PIXE setup (Chaves et al., 2013 [3]) by irradiation of the sample with a 3.8 MeV proton beam provided by the CTN 3 MV Tandetron accelerator. In this paper we present results, discuss detection limits of the method, and discuss the added value of the CdTe detector in this context.
Statistic inference system for complex sample survey based on B/S framework
罗智超; 管河山; 曹礼华
2011-01-01
A statistical inference system for complex sample surveys based on the B/S (browser/server) framework uses a cross-platform design and connects seamlessly to business systems and databases. The system provides common sampling methods such as simple random sampling, stratified sampling, Neyman stratified sampling, unequal probability (PPS) sampling, and multistage sampling. Users can define combined sampling schemes according to their research goals; the system then automatically computes statistical inference results and confidence intervals from the sample, the population attributes, and the user's sampling scheme, making the survey sampling and inference process automatic and systematic.
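Of the sampling methods listed, Neyman stratified sampling admits a compact worked example. Assuming the textbook Neyman allocation formula n_h = n·N_h·S_h / Σ_k N_k·S_k (the abstract does not spell out the system's internals), a sketch:

```python
def neyman_allocation(n, strata):
    """Neyman allocation: stratum h receives n * N_h*S_h / sum(N_k*S_k) samples,
    where N_h is the stratum size and S_h its within-stratum standard deviation."""
    weights = [N * S for N, S in strata]
    total = sum(weights)
    return [round(n * w / total) for w in weights]

# Hypothetical strata as (size, std dev): more variable strata get more samples.
alloc = neyman_allocation(100, [(1000, 2.0), (1000, 6.0), (500, 4.0)])
# -> [20, 60, 20]: the high-variance stratum absorbs most of the sample budget
```

This is the allocation such a system would compute before drawing the within-stratum random samples; rounding can make the total differ from n by one or two in general.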
Turbulent Mixing in Stably Stratified Flows
2008-03-01
[Abstract not available; the entry contains only fragmentary reference text from the proceedings of the 19th International Liège Colloquium on Ocean Hydrodynamics (Elsevier, 1987-1988), including entries by R. M. Kerr and by P. Meunier and G. Spedding.]
Corticosteroids and pediatric septic shock outcomes: a risk stratified analysis.
Sarah J Atkinson
The potential benefits of corticosteroids for septic shock may depend on initial mortality risk. We determined associations between corticosteroids and outcomes in children with septic shock who were stratified by initial mortality risk. We conducted a retrospective analysis of an ongoing, multi-center pediatric septic shock clinical and biological database. Using a validated biomarker-based stratification tool (PERSEVERE), 496 subjects were stratified into three initial mortality risk strata (low, intermediate, and high). Subjects receiving corticosteroids during the initial 7 days of admission (n = 252) were compared to subjects who did not receive corticosteroids (n = 244). Logistic regression was used to model the effects of corticosteroids on 28-day mortality and complicated course, defined as death within 28 days or persistence of two or more organ failures at 7 days. Subjects who received corticosteroids had greater organ failure burden, higher illness severity, higher mortality, and a greater requirement for vasoactive medications, compared to subjects who did not receive corticosteroids. PERSEVERE-based mortality risk did not differ between the two groups. For the entire cohort, corticosteroids were associated with increased risk of mortality (OR 2.3, 95% CI 1.3-4.0, p = 0.004) and a complicated course (OR 1.7, 95% CI 1.1-2.5, p = 0.012). Within each PERSEVERE-based stratum, corticosteroid administration was not associated with improved outcomes. Similarly, corticosteroid administration was not associated with improved outcomes among patients with no comorbidities, nor in groups of patients stratified by PRISM. Risk-stratified analysis failed to demonstrate any benefit from corticosteroids in this pediatric septic shock cohort.
李宏; 吴东亮; 吴飞影; 张雪飞
2013-01-01
Flora, the sum of all plant species in a region, is an important indicator of the region's species diversity. Taking the seed plants of the Wulingshan Nature Reserve in Hebei Province as a case study and applying the principle of stratified sampling, this paper discusses procedures and methods for selecting interpretation objects in nature reserves according to criteria such as gymnosperm or angiosperm status, species numbers within families and genera, a species' importance value in a given community, and protection grade, and proposes 100 seed plant species as key interpretation objects for the Wulingshan Nature Reserve.
Study of MRI in Stratified Viscous Plasma Configuration
Carlevaro, Nakia; Renzi, Fabrizio
2016-01-01
We analyze the morphology of the Magneto-rotational Instability (MRI) for a stratified viscous plasma disk configuration in differential rotation, taking into account the so-called corotation theorem for the background profile. In order to isolate the intrinsic Alfvénic nature of the MRI, we deal with an incompressible plasma and adopt a formulation of the perturbation analysis based on the use of the magnetic flux function as a dynamical variable. Our study outlines, as a consequence of the corotation condition, a marked asymmetry of the MRI with respect to the equatorial plane, particularly evident in a complete damping of the instability above a positive critical height over the equatorial plane. We also emphasize how such a feature is already present (although less pronounced) even in the ideal case, restoring a dependence of the MRI on the stratified morphology of the gravitational field.
A fragment-based approach to the SAMPL3 Challenge
Kulp, John L.; Blumenthal, Seth N.; Wang, Qiang; Bryan, Richard L.; Guarnieri, Frank
2012-05-01
The success of molecular fragment-based design depends critically on the ability to make predictions of binding poses and of affinity ranking for compounds assembled by linking fragments. The SAMPL3 Challenge provides a unique opportunity to evaluate the performance of a state-of-the-art fragment-based design methodology with respect to these requirements. In this article, we present results derived from linking fragments to predict affinity and pose in the SAMPL3 Challenge. The goal is to demonstrate how incorporating different aspects of modeling protein-ligand interactions impact the accuracy of the predictions, including protein dielectric models, charged versus neutral ligands, ΔΔGs solvation energies, and induced conformational stress. The core method is based on annealing of chemical potential in a Grand Canonical Monte Carlo (GC/MC) simulation. By imposing an initially very high chemical potential and then automatically running a sequence of simulations at successively decreasing chemical potentials, the GC/MC simulation efficiently discovers statistical distributions of bound fragment locations and orientations not found reliably without the annealing. This method accounts for configurational entropy, the role of bound water molecules, and results in a prediction of all the locations on the protein that have any affinity for the fragment. Disregarding any of these factors in affinity-rank prediction leads to significantly worse correlation with experimentally-determined free energies of binding. We relate three important conclusions from this challenge as applied to GC/MC: (1) modeling neutral ligands—regardless of the charged state in the active site—produced better affinity ranking than using charged ligands, although, in both cases, the poses were almost exactly overlaid; (2) simulating explicit water molecules in the GC/MC gave better affinity and pose predictions; and (3) applying a ΔΔGs solvation correction further improved the ranking of the
Stability of stratified two-phase flows in inclined channels
Barmak, I.; Gelfgat, A. Yu.; Ullmann, A.; Brauner, N.
2016-08-01
Linear stability of the stratified gas-liquid and liquid-liquid plane-parallel flows in the inclined channels is studied with respect to all wavenumber perturbations. The main objective is to predict the parameter regions in which the stable stratified configuration in inclined channels exists. Up to three distinct base states with different holdups exist in the inclined flows, so that the stability analysis has to be carried out for each branch separately. Special attention is paid to the multiple solution regions to reveal the feasibility of the non-unique stable stratified configurations in inclined channels. The stability boundaries of each branch of the steady state solutions are presented on the flow pattern map and are accompanied by the critical wavenumbers and the spatial profiles of the most unstable perturbations. Instabilities of different nature are visualized by the streamlines of the neutrally stable perturbed flows, consisting of the critical perturbation superimposed on the base flow. The present analysis confirms the existence of two stable stratified flow configurations in a region of low flow rates in the countercurrent liquid-liquid flows. These configurations become unstable with respect to the shear mode of instability. It was revealed that in slightly upward inclined flows the lower and middle solutions for the holdup are stable in the part of the triple solution region, while the upper solution is always unstable. In the case of downward flows, in the triple solution region, none of the solutions are stable with respect to the short-wave perturbations. These flows are stable only in the single solution region at low flow rates of the heavy phase, and the long-wave perturbations are the most unstable ones.
Solution-based targeted genomic enrichment for precious DNA samples
Shearer Aiden
2012-05-01
Background: Solution-based targeted genomic enrichment (TGE) protocols permit selective sequencing of genomic regions of interest on a massively parallel scale. These protocols could be improved by: (1) modifying or eliminating time-consuming steps; (2) increasing yield to reduce input DNA and excessive PCR cycling; and (3) enhancing reproducibility. Results: We developed a solution-based TGE method for downstream Illumina sequencing in a non-automated workflow, adding standard Illumina barcode indexes during the post-hybridization amplification to allow for sample pooling prior to sequencing. The method utilizes Agilent SureSelect baits, primers and hybridization reagents for the capture, off-the-shelf reagents for the library preparation steps, and adaptor oligonucleotides for Illumina paired-end sequencing purchased directly from an oligonucleotide manufacturing company. Conclusions: This solution-based TGE method for Illumina sequencing is optimized for small- or medium-sized laboratories and addresses the weaknesses of standard protocols by reducing the amount of input DNA required, increasing capture yield, optimizing efficiency, and improving reproducibility.
Preview-based sampling for controlling gaseous simulations
Huang, Ruoguan
2011-01-01
In this work, we describe an automated method for directing the control of a high resolution gaseous fluid simulation based on the results of a lower resolution preview simulation. Small variations in accuracy between low and high resolution grids can lead to divergent simulations, which is problematic for those wanting to achieve a desired behavior. Our goal is to provide a simple method for ensuring that the high resolution simulation matches key properties from the lower resolution simulation. We first let a user specify a fast, coarse simulation that will be used for guidance. Our automated method samples the data to be matched at various positions and scales in the simulation, or allows the user to identify key portions of the simulation to maintain. During the high resolution simulation, a matching process ensures that the properties sampled from the low resolution simulation are maintained. This matching process keeps the different resolution simulations aligned even for complex systems, and can ensure consistency of not only the velocity field, but also advected scalar values. Because the final simulation is naturally similar to the preview simulation, only minor controlling adjustments are needed, allowing a simpler control method than that used in prior keyframing approaches. Copyright © 2011 by the Association for Computing Machinery, Inc.
All-polymer microfluidic systems for droplet based sample analysis
Poulsen, Carl Esben
In this PhD project, I pursued the development of an all-polymer, injection moulded microfluidic platform with integrated droplet-based single cell interrogation. To allow for a proper "one device - one experiment" methodology and to ensure high relevance to non-academic settings, the systems presented … bonded by ultrasonic welding. In the sub-projects of this PhD, improvements have been made to multiple aspects of fabricating and conducting droplet (or multiphase) microfluidics: • Design phase: Numerical prediction of the capillary burst pressure of a multiphase system. • Fabrication: Two new types … here were fabricated exclusively using commercially relevant fabrication methods such as injection moulding and ultrasonic welding. Further, to reduce the complexity of the final system, I have worked towards an all-in-one device which includes sample loading, priming (removal of air), droplet formation …
Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.
Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel
2017-06-01
Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.
The relation between exercise and glaucoma in a South Korean population-based sample
Lin, Shuai-Chun; Wang, Sophia Y.; Pasquale, Louis R.; Singh, Kuldev; Lin, Shan C.
2017-01-01
Purpose To investigate the association between exercise and glaucoma in a South Korean population-based sample. Design Population-based, cross-sectional study. Participants A total of 11,246 subjects, 40 years and older who underwent health care assessment as part of the 2008–2011 Korean National Health and Nutrition Examination Survey. Methods Variables regarding the duration (total minutes per week), frequency (days per week), and intensity of exercise (vigorous, moderate exercise and walking) as well as glaucoma prevalence were ascertained for 11,246 survey participants. Demographic, comorbidity, and health-related behavior information was obtained via interview. Multivariable logistic regression analyses were performed to determine the association between the exercise-related parameters and odds of a glaucoma diagnosis. Main outcome measure(s) Glaucoma defined by International Society for Geographical and Epidemiological Ophthalmology criteria. Results Overall, 336 (2.7%) subjects met diagnostic criteria for glaucomatous disease. After adjustment for potential confounding variables, subjects engaged in vigorous exercise 7 days per week had higher odds of having glaucoma compared with those exercising 3 days per week (Odds Ratio [OR] 3.33, 95% confidence interval [CI] 1.16–9.54). High intensity of exercise, as categorized by the guidelines of the American College of Sports Medicine (ACSM), was also associated with greater glaucoma prevalence compared with moderate intensity of exercise (OR 1.55, 95% CI 1.03–2.33). There was no association between other exercise parameters including frequency of moderate exercise, walking, muscle strength exercise, flexibility training, or total minutes of exercise per week, and the prevalence of glaucoma. In sub-analyses stratifying by gender, the association between frequency of vigorous exercise 7 days per week and glaucoma diagnosis remained significant in men (OR 6.05, 95% CI 1.67–21.94) but not in women (OR 0.96 95
Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.
Ojeda, Mario Miguel; Sahai, Hardeo
2002-01-01
Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…
SAR imaging method based on coprime sampling and nested sparse sampling
Hongyin Shi; Baojing Jia
2015-01-01
As signal bandwidth and the number of channels increase, a synthetic aperture radar (SAR) imaging system produces a huge amount of data according to the Shannon-Nyquist theorem, placing a heavy burden on data transmission. This paper concerns coprime sampling and nested sparse sampling, which were proposed recently but have never been applied to real-world target detection, and proposes a novel approach that utilizes these new sub-Nyquist sampling structures for SAR sampling in azimuth and reconstructs the SAR data by compressive sensing (CS). Both simulated and real data are processed to test the algorithm, and the results indicate that combining these new undersampling structures with CS achieves effective SAR imaging with much less data than conventional methods require. Finally, the influence of small sampling jitter on SAR imaging is analyzed both theoretically and experimentally, and it is concluded that small sampling jitter has no effect on SAR image quality.
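The coprime sampling idea referenced above can be sketched: two uniform sub-Nyquist samplers with coprime periods m and n jointly visit few positions, yet their pairwise differences cover a dense set of lags, which is what makes reconstruction feasible. The code below is a generic illustration of the sampling geometry, not the paper's SAR azimuth scheme:

```python
from math import gcd

def coprime_sample_positions(m, n, periods=1):
    """Positions taken by two sub-Nyquist samplers with coprime periods m and n
    over `periods` full cycles of length m*n."""
    assert gcd(m, n) == 1
    span = m * n * periods
    s1 = set(range(0, span, m))   # sampler 1: every m-th slot
    s2 = set(range(0, span, n))   # sampler 2: every n-th slot
    return sorted(s1 | s2)

def covered_lags(positions):
    """All pairwise differences (lags) observable from the sampled positions."""
    return {abs(a - b) for a in positions for b in positions}

pos = coprime_sample_positions(3, 5, periods=2)
lags = covered_lags(pos)
# 14 samples out of 30 Nyquist slots, yet every lag from 0 to 14 is covered,
# which is what correlation-based reconstruction exploits.
```

The sub-Nyquist saving grows with m and n; CS reconstruction then recovers the scene from this reduced set of azimuth samples.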
Survival analysis of cervical cancer using stratified Cox regression
Purnami, S. W.; Inayati, K. D.; Sari, N. W. Wulan; Chosuvivatwong, V.; Sriplung, H.
2016-04-01
Cervical cancer is one of the leading causes of cancer death among women worldwide, including in Indonesia. Most cervical cancer patients come to the hospital already at an advanced stage. As a result, treatment of cervical cancer becomes more difficult and the risk of death can even increase. One parameter that can be used to assess the success of treatment is the probability of survival. This study examines the survival of cervical cancer patients at Dr. Soetomo Hospital using stratified Cox regression based on six factors: age, stage, treatment initiation, comorbid disease, complication, and anemia. The stratified Cox model is used because one independent variable, stage, does not satisfy the proportional hazards assumption. The results of the stratified Cox model show that complication is a significant factor influencing the survival probability of cervical cancer patients. The estimated hazard ratio is 7.35, meaning that a cervical cancer patient with complications has a 7.35 times greater risk of death than a patient without complications. The adjusted survival curves show that stage IV has the lowest probability of survival.
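The reported hazard ratio can be connected to the underlying Cox coefficient and to survival curves. A minimal sketch, using the paper's HR of 7.35 together with a hypothetical baseline survival value (not from the study):

```python
import math

def hr_from_coef(beta):
    """Cox model hazard ratio for a one-unit covariate change: HR = exp(beta)."""
    return math.exp(beta)

def survival_with_hr(s0, hr):
    """Under proportional hazards, S1(t) = S0(t) ** HR."""
    return s0 ** hr

beta = math.log(7.35)          # coefficient implied by the reported HR
hr = hr_from_coef(beta)        # recovers 7.35
# If baseline (no-complication) survival at some time t were 0.90,
# the implied survival with complication at that time would be:
s_complication = survival_with_hr(0.90, hr)   # roughly 0.46
```

Note that in a stratified Cox model this power relation holds within each stratum (here, each stage), each with its own baseline survival curve.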
Space Group Debris Imaging Based on Sparse Sample
Zhu Jiang
2016-02-01
Space group debris imaging is difficult with sparse data in low Pulse Repetition Frequency (PRF) spaceborne radar. To solve this problem in a narrow band system, we propose a method for space group debris imaging based on sparse samples. Due to the diversity of mass, density, and other factors, space group debris typically rotates at high speed in different ways. We can obtain the angular velocity through the autocorrelation function based on the diversity in angular velocity. The scattering field usually presents strong sparsity, so we can utilize a corresponding measurement matrix to extract the data of different debris and then combine them using a sparse method to reconstruct the image. Furthermore, we can solve the Doppler ambiguity with the measurement matrix in low PRF systems and suppress some of the energy of other debris. Theoretical analysis confirms the validity of this methodology. Our simulation results demonstrate that the proposed method can achieve high-resolution Inverse Synthetic Aperture Radar (ISAR) images of space group debris in low PRF systems.
A Table-Based Random Sampling Simulation for Bioluminescence Tomography
Xiaomeng Zhang
2006-01-01
As a popular simulation of photon propagation in turbid media, the main drawback of the Monte Carlo (MC) method is its cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify the multi-step scattering process into a single step through random table queries, thus greatly reducing the computational complexity of the conventional MC algorithm and expediting the computation. TBRS is a fast version of the conventional MC simulation of photon propagation: it retains the flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations in this work were conducted in a homogeneous medium. We also present a reconstruction approach that estimates the position of a fluorescent source based on trial-and-error, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
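The table-querying idea behind TBRS can be sketched with a precomputed inverse-CDF table for photon free path lengths, a standard ingredient of photon-transport MC; the exponential step distribution and table size here are illustrative assumptions, not the paper's actual tables:

```python
import bisect
import math
import random

def build_cdf_table(mu_t, n=1024, s_max=10.0):
    """Tabulate the CDF of exponential free path lengths, F(s) = 1 - exp(-mu_t*s),
    once up front, so later sampling is a cheap table query instead of a log()."""
    steps = [i * s_max / (n - 1) for i in range(n)]
    cdf = [1.0 - math.exp(-mu_t * s) for s in steps]
    return steps, cdf

def sample_step(steps, cdf, rng):
    """Draw a step length by querying the precomputed table (binary search on CDF)."""
    u = rng.random() * cdf[-1]
    return steps[bisect.bisect_left(cdf, u)]

rng = random.Random(0)
steps, cdf = build_cdf_table(mu_t=1.0)
mean_step = sum(sample_step(steps, cdf, rng) for _ in range(20000)) / 20000
# mean approaches 1/mu_t = 1.0, up to table truncation and discretization
```

The speedup argument is the same as in the paper: expensive per-step analytics are replaced by a single lookup, at the cost of a one-time table build and a small discretization error.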
Computation of mixing in large stably stratified enclosures
Zhao, Haihua
This dissertation presents a set of new numerical models for mixing and heat transfer problems in large stably stratified enclosures. Based on these models, a new computer code, BMIX++ (Berkeley mechanistic MIXing code in C++), was developed by Christensen (2001) and the author. Traditional lumped control volume methods and zone models cannot capture detailed information about the distributions of temperature, density, and pressure in enclosures and therefore can have significant errors. 2-D and 3-D CFD methods require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, yet such fine grid resolution is difficult or impossible to provide due to computational expense. Peterson's scaling (1994) showed that stratified mixing processes in large stably stratified enclosures can be described using one-dimensional differential equations, with the vertical transport by free and wall jets modeled using standard integral techniques. This allows very large reductions in computational effort compared to three-dimensional numerical modeling of turbulent mixing in large enclosures. The BMIX++ code was developed to implement the above ideas. The code uses a Lagrangian approach to solve 1-D transient governing equations for the ambient fluid and uses analytical models or 1-D integral models to compute substructures. A 1-D transient conduction model for the solid boundaries, pressure computation, and opening models are also included to make the code more versatile. The BMIX++ code was implemented in C++ with intensive use of object-oriented programming (OOP) techniques. The BMIX++ code was successfully applied to different types of mixing problems such as stratification in a water tank due to an internal heater, simulation of a water tank exchange flow experiment, early stage building fire analysis, stratification produced by multiple plumes, and simulations of the UCB large enclosure experiments. Most of these simulations gave satisfying
Drainage in a model stratified porous medium
Datta, Sujit S; 10.1209/0295-5075/101/14002
2013-01-01
We show that when a non-wetting fluid drains a stratified porous medium at sufficiently small capillary numbers Ca, it flows only through the coarsest stratum of the medium; by contrast, above a threshold Ca, the non-wetting fluid is also forced laterally, into part of the adjacent, finer strata. The spatial extent of this partial invasion increases with Ca. We quantitatively understand this behavior by balancing the stratum-scale viscous pressure driving the flow with the capillary pressure required to invade individual pores. Because geological formations are frequently stratified, we anticipate that our results will be relevant to a number of important applications, including understanding oil migration, preventing groundwater contamination, and sub-surface CO₂ storage.
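For reference, the capillary number balances viscous against capillary forces, Ca = μv/γ. A minimal sketch with hypothetical fluid properties and an assumed threshold value (the actual threshold in the study depends on the medium's pore geometry and stratification):

```python
def capillary_number(viscosity_pa_s, velocity_m_s, surface_tension_n_m):
    """Capillary number Ca = mu * v / gamma: ratio of viscous to capillary forces."""
    return viscosity_pa_s * velocity_m_s / surface_tension_n_m

# Hypothetical values for illustration (water-like fluid, slow flow):
ca = capillary_number(viscosity_pa_s=1e-3,
                      velocity_m_s=1e-4,
                      surface_tension_n_m=0.03)

# Assumed threshold, for illustration only; below it the non-wetting fluid
# stays in the coarsest stratum, above it finer strata are partially invaded.
CA_THRESHOLD = 1e-6
invades_fine_strata = ca > CA_THRESHOLD
```

The paper's balance of stratum-scale viscous pressure against pore-scale capillary entry pressure is what fixes the real threshold for a given medium.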
Gas slug ascent through rheologically stratified conduits
Capponi, Antonio; James, Mike R.; Lane, Steve J.
2016-04-01
Textural and petrological evidence has indicated the presence of viscous, degassed magma layers at the top of the conduit at Stromboli. This layer acts as a plug through which gas slugs burst, and it is thought to have a role in controlling the eruptive dynamics. Here, we present the results of laboratory experiments which detail the range of slug flow configurations that can develop in a rheologically stratified conduit. A gas slug can burst (1) after being fully accommodated within the plug volume, (2) whilst its base is still in the underlying low-viscosity liquid, or (3) within a low-viscosity layer dynamically emplaced above the plug during the slug ascent. We illustrate the relevance of the same flow configurations at volcanic scale through a new experimentally-validated 1D model and 3D computational fluid dynamic simulations. Applied to Stromboli, our results show that gas volume, plug thickness, plug viscosity and conduit radius control the transition between each configuration; in contrast, the configuration distribution seems insensitive to the viscosity of magma beneath the plug, which acts mainly to deliver the slug into the plug. Each identified flow configuration encompasses a variety of processes including dynamic narrowing and widening of the conduit, generation of instabilities along the falling liquid film, transient blockages of the slug path and slug break-up. All these complexities, in turn, lead to variations in the slug overpressure, mirrored by changes in infrasonic signatures which are also associated with different eruptive styles. Acoustic amplitudes are strongly dependent on the flow configuration in which the slugs burst, with both acoustic peak amplitudes and waveform shapes reflecting different burst dynamics. When compared to infrasonic signals from Stromboli, the similarity between real signals and laboratory waveforms suggests that the burst of a slug through a plug may represent a viable first-order mechanism for the generation of
An Optimization-Based Sampling Scheme for Phylogenetic Trees
Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell
Much modern work in phylogenetics depends on statistical sampling approaches to phylogeny construction to estimate probability distributions of possible trees for any given input data set. Our theoretical understanding of sampling approaches to phylogenetics remains far less developed than that for optimization approaches, however, particularly with regard to the number of sampling steps needed to produce accurate samples of tree partition functions. Despite the many advantages in principle of being able to sample trees from sophisticated probabilistic models, we have little theoretical basis for concluding that the prevailing sampling approaches do in fact yield accurate samples from those models within realistic numbers of steps. We propose a novel approach to phylogenetic sampling intended to be both efficient in practice and more amenable to theoretical analysis than the prevailing methods. The method depends on replacing the standard tree rearrangement moves with an alternative Markov model in which one solves a theoretically hard but practically tractable optimization problem on each step of sampling. The resulting method can be applied to a broad range of standard probability models, yielding practical algorithms for efficient sampling and rigorous proofs of accurate sampling for some important special cases. We demonstrate the efficiency and versatility of the method in an analysis of uncertainty in tree inference over varying input sizes. In addition to providing a new practical method for phylogenetic sampling, the technique is likely to prove applicable to many similar problems involving sampling over combinatorial objects weighted by a likelihood model.
Multi Dimensional CTL and Stratified Datalog
Theodore Andronikos
2010-02-01
In this work we define Multi Dimensional CTL (MD-CTL in short) by extending CTL, which is the dominant temporal specification language in practice. The need for Multi Dimensional CTL is mainly due to the advent of semi-structured data. The common path nature of CTL and XPath, which provides a suitable model for semi-structured data, has caused the emergence of work on specifying a relation among them, aiming at exploiting the nice properties of CTL. Although the advantages of such an approach have already been noticed [36, 26, 5], no formal definition of MD-CTL has been given. The goal of this work is twofold: (a) we define MD-CTL and prove that the "nice" properties of CTL (linear model checking and bounded model property) transfer also to MD-CTL; (b) we establish new results on stratified Datalog. In particular, we define a fragment of stratified Datalog called Multi Branching Temporal (MBT in short) programs that has the same expressive power as MD-CTL. We prove this by devising a linear translation between MBT and MD-CTL; we actually give the exact translation rules for both directions. We further build on this relation to prove that query evaluation is linear and that checking satisfiability, containment and equivalence are EXPTIME-complete for MBT programs. The class MBT is the largest fragment of stratified Datalog for which such results exist in the literature.
STRATIFIED MODEL FOR ESTIMATING FATIGUE CRACK GROWTH RATE OF METALLIC MATERIALS
YANG Yong-yu; LIU Xin-wei; YANG Fan
2005-01-01
The curve relating fatigue crack growth rate to the stress intensity factor amplitude represents an important fatigue property in the design of damage tolerance limits and the prediction of the life of metallic components. In order to make more reasonable use of testing data, samples from the population were stratified as suggested by the stratified random sample model (SRAM), with the data in each stratum corresponding to the same experimental conditions. A suitable weight was assigned to each stratified sample according to the actual working states of the pressure vessel, so that the estimated fatigue crack growth rate equation is more accurate in practice. An empirical study shows that the SRAM estimation using fatigue crack growth rate data from different stoves is clearly better than the estimation from a simple random sample model.
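The weighted stratified estimation described above can be sketched as a weighted combination of per-stratum sample means; the weights and measurements below are hypothetical, standing in for the operating-state weights the paper assigns:

```python
def stratified_estimate(strata):
    """Weighted combination of per-stratum sample means, as in a stratified
    random sample model: estimate = sum(w_h * mean_h), with weights summing to 1."""
    assert abs(sum(w for w, _ in strata) - 1.0) < 1e-9
    return sum(w * sum(xs) / len(xs) for w, xs in strata)

# Hypothetical per-stratum crack growth rate measurements (arbitrary units);
# each weight reflects that stratum's share of actual operating conditions.
est = stratified_estimate([
    (0.5, [1.0, 1.2, 1.1]),
    (0.3, [2.0, 2.2]),
    (0.2, [0.8, 0.9, 1.0]),
])
# -> 0.5*1.1 + 0.3*2.1 + 0.2*0.9 = 1.36
```

A simple random sample would implicitly weight strata by their share of the test data rather than of real operating conditions, which is the bias the SRAM weighting corrects.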
A spatial sampling based 13.3 Gs/s sample-and-hold circuit.
Sun, Jiwei; Wang, Haibo; Wang, Pingshan
2014-09-01
This paper presents a high-speed sample-and-hold circuit (SHC) for very fast signal analysis. Spatial sampling techniques are exploited with CMOS transmission lines in a 0.13 μm standard CMOS process. The SHC includes on chip coplanar waveguides for signal and clock pulse transmission, a clock pulse generator, and three elementary samplers periodically (L = 7.2 mm) placed along the signal propagation line. The SHC samples at 13.3 Gs/s. The circuit occupies an area of 1660 μm × 820 μm and consumes ~6 mW at a supply voltage of 1.2 V. The obtained input bandwidth is ~11.5 GHz.
薛玉珠; 王少光; 苏金月; 朱绍英
2014-01-01
Objective: To assess nurses' dietary knowledge, attitudes, and behavior (KAP) across different lengths of nursing service, and to provide evidence for selecting key target populations for nutrition education. Methods: 1153 nurses were chosen from hospitals of all levels in the Zhengzhou area by the cluster stratified random sampling method and participated in a dietary nutrition KAP survey. Knowledge, attitude, and behavior scores and the total score of the dietary nutrition questionnaire were computed by the assignment method, and respondents were divided into 8 groups according to length of nursing service. Results: Differences in knowledge, attitude, behavior, and total scores among the groups were statistically significant (P<0.01). Pairwise comparison of total scores found a statistically significant difference between the groups with 2-3.9 and 4-5.9 years of service (P<0.01); similarly, between the 6-7.9 and 10-14.9 year groups (P<0.05), and between the 8-9.9 and 15-19.9 year groups (P<0.05). Pairwise comparisons of knowledge and behavior scores followed the same pattern as the total score, whereas differences in attitude scores were small. Conclusion: Questionnaire scores increased with length of service among nurses with more than four years of service. Nutrition education should focus on nurses with 2-3.9 years of service, the lowest-scoring group.
The Toggle Local Planner for sampling-based motion planning
Denny, Jory
2012-05-01
Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5. An important primitive of these methods is the local planner, which is used for validation of simple paths between two configurations. The most common is the straight-line local planner which interpolates along the straight line between the two configurations. In this paper, we introduce a new local planner, Toggle Local Planner (Toggle LP), which extends local planning to a two-dimensional subspace of the overall planning space. If no path exists between the two configurations in the subspace, then Toggle LP is guaranteed to correctly return false. Intuitively, more connections could be found by Toggle LP than by the straight-line planner, resulting in better connected roadmaps. As shown in our results, this is the case, and additionally, the extra cost, in terms of time or storage, for Toggle LP is minimal. Additionally, our experimental analysis of the planner shows the benefit for a wide array of robots, with DOF as high as 70. © 2012 IEEE.
Rajat Malik
A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), is typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure exerted by infected individuals on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered, followed by various spatially-stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computational savings can be obtained, albeit with some information loss, suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
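The computational bottleneck addressed here is the double sum over susceptible and infectious individuals in the likelihood. A minimal sketch of the simple-random-sampling variant (the spatially-stratified schemes refine this by sampling near and far individuals at different rates) might look as follows; the kernel, point layout and sample size are invented for illustration.

```python
import math
import random

def pressure_exact(sus, inf, kernel):
    # full O(|sus| * |inf|) infectious pressure on each susceptible
    return {i: sum(kernel(i, j) for j in inf) for i in sus}

def pressure_sampled(sus, inf, kernel, n, rng):
    # approximate pressure from a simple random sample of the infectious
    # set, scaled so the estimate is unbiased
    sample = rng.sample(inf, n)
    scale = len(inf) / n
    return {i: scale * sum(kernel(i, j) for j in sample) for i in sus}

rng = random.Random(1)
pts = {k: (rng.random(), rng.random()) for k in range(200)}
inf = list(range(150))          # infectious individuals
sus = list(range(150, 200))     # susceptible individuals
kernel = lambda i, j: math.exp(-5.0 * math.dist(pts[i], pts[j]))

exact = pressure_exact(sus, inf, kernel)
approx = pressure_sampled(sus, inf, kernel, 75, rng)  # half the evaluations
```

The trade-off the paper quantifies is visible even here: the sampled version does half the kernel evaluations at the cost of added estimator variance.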
Maria Hernandez-Valladares
2016-08-01
Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.
Bonavolontà, Francesco; D'Apuzzo, Massimo; Liccardo, Annalisa; Vadursi, Michele
2014-10-13
The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with ADC included in actual 8- and 32-bits microcontrollers highlight the possibility of achieving effective sample rate up to 50 times higher than that of the original ADC sample rate.
On the Estimation of Heritability with Family-Based and Population-Based Samples
Youngdoe Kim
2015-01-01
For a family-based sample, the phenotypic variance-covariance matrix can be parameterized to include the variance of a polygenic effect, which has then been estimated using a variance component analysis. However, with the advent of large-scale genomic data, the genetic relationship matrix (GRM) can be estimated and utilized to parameterize the variance of a polygenic effect for population-based samples. Therefore, narrow-sense heritability, which is both population and trait specific, can be estimated with both population- and family-based samples. In this study we estimate heritability from both family-based and population-based samples collected in Korea. The heritability estimates from the pooled samples were, for height, 0.60; body mass index (BMI), 0.32; log-transformed triglycerides (log TG), 0.24; total cholesterol (TCHL), 0.30; high-density lipoprotein (HDL), 0.38; low-density lipoprotein (LDL), 0.29; systolic blood pressure (SBP), 0.23; and diastolic blood pressure (DBP), 0.24. Furthermore, we found differences in how heritability is estimated (in particular, the amount of variance attributable to common environment in twins can be substantial), which indicates that heritability estimates should be interpreted with caution.
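For population-based samples, the GRM referred to above is typically built from standardized genotypes. A simple sketch of the standard GCTA-style estimator, assuming genotypes coded 0/1/2 and sample allele frequencies (real pipelines also filter SNPs and handle missing calls):

```python
def grm(genotypes):
    # Genetic relationship matrix from an individuals-by-SNPs genotype
    # matrix coded 0/1/2 (a sketch of the standard GCTA-style estimator).
    n, m = len(genotypes), len(genotypes[0])
    # sample allele frequency of each SNP
    p = [sum(g[i] for g in genotypes) / (2.0 * n) for i in range(m)]
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for k in range(n):
            s = 0.0
            for i in range(m):
                denom = 2.0 * p[i] * (1.0 - p[i])
                if denom > 0:  # skip monomorphic SNPs
                    s += ((genotypes[j][i] - 2 * p[i])
                          * (genotypes[k][i] - 2 * p[i]) / denom)
            A[j][k] = s / m
    return A

# toy data: 4 individuals, 5 SNPs (invented for illustration)
genos = [[0, 1, 2, 1, 0],
         [1, 1, 2, 0, 0],
         [2, 0, 1, 1, 1],
         [0, 2, 0, 2, 1]]
A = grm(genos)
```

Heritability is then estimated by fitting a variance component model with this matrix as the covariance structure of the polygenic effect.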
Relevant sampling applied to event-based state-estimation
Marck, J.W.; Sijs, J.
2010-01-01
To reduce the amount of data transfer in networked control systems and wireless sensor networks, measurements are usually sampled only when an event occurs, rather than synchronous in time. Today's event sampling methodologies are triggered by the current value of the sensor. State-estimators are de
Gupta, Nalini; Bhar, Vikrant; Dey, Pranab; Rajwanshi, Arvind; Suri, Vanita
2014-01-01
Cervical samples are routinely taken to identify squamous dysplastic lesions of the cervix. Glandular lesions are far less commonly reported in cervical samples. The most common glandular lesion reported on a cervical smear is endocervical adenocarcinoma, followed by endometrial adenocarcinoma. Direct sampling by Cervex brush is possible even in endometrial adenocarcinoma, if the tumor directly involves the lower uterine segment or endocervical canal. Metastases to the cervix are rare but have occasionally been reported. We highlight here a case of metastatic ovarian carcinoma directly sampled in a cervical liquid-based cytology (LBC) specimen, which cytomorphologically mimicked a well-differentiated endocervical adenocarcinoma. To the best of our knowledge, a similar case has not previously been published in a SurePath LBC sample. PMID:25538388
曹阳; 黄越辉; 袁越; 王敏; 李鹏; 郭思琪
2015-01-01
China's wind and solar power industries are developing rapidly; because their planning and construction cycles are short and their development is decoupled from regional generation and grid planning, wind and solar curtailment have become severe. Taking regional resource characteristics into account, this paper proposes a stratified optimization algorithm, based on time-sequence simulation, for the ratio of wind to solar capacity. The inner tier builds an optimization model of the annual wind and photovoltaic accommodation capacity of a provincial power grid and applies the branch and bound method to optimize the system's year-round operation, maximizing the grid's energy-saving and emission-reduction benefits and making the planning results better match actual power system operation. The outer tier takes the inner model's energy-saving and emission-reduction benefit as the fitness function and builds an optimization model for the wind-solar ratio, solved by a bacterial foraging algorithm combined with particle swarm optimization to improve computational efficiency and solution accuracy. A case study of a provincial power grid verifies that the proposed model is reasonable and the algorithm feasible. The method can guide regional wind and photovoltaic construction, actual power system dispatch, and the formulation of relevant government policies.
Inverse scattering of dispersive stratified structures
Skaar, Johannes
2012-01-01
We consider the inverse scattering problem of retrieving the structural parameters of a stratified medium consisting of dispersive materials, given knowledge of the complex reflection coefficient in a finite frequency range. It is shown that the inverse scattering problem does not have a unique solution in general. When the dispersion is sufficiently small, such that the time-domain Fresnel reflections have durations less than the round-trip time in the layers, the solution is unique and can be found by layer peeling. Numerical examples with dispersive and lossy media are given, demonstrating the usefulness of the method for e.g. THz technology.
Topological Structures in Rotating Stratified Flows
Redondo, J. M.; Carrillo, A.; Perez, E.
2003-04-01
Detailed 2D particle tracking and PIV visualizations performed in a series of large scale laboratory experiments at the Coriolis Platform of SINTEF in Trondheim have revealed several resonances which scale on the Strouhal, the Rossby and the Richardson numbers. More than 100 experiments spanned a wide range of Rossby deformation radii, and the topological structures (parabolic/elliptic/hyperbolic) of the quasi-balanced stratified-rotating flows were studied when stirring (akin to coastal mixing) occurred at a side of the tank. The strong asymmetry favored by the total vorticity produces a wealth of mixing patterns.
Zhang, Lingyi; Sheng, Xiaoling; Zhang, Runsheng; Xiong, Zhichao; Wu, Zhongping; Yan, Songmao; Zhang, Yurong; Zhang, Weibing
2014-12-07
A field sampling method based on magnetic core-shell silica nanoparticles was developed for the field sampling and enrichment of low concentrations of pesticides in aqueous samples. The magnetic nanoparticles could be easily extracted from water samples by a custom-made magnetic nanoparticle collector. The recovery of 15 mg of magnetic particles from a 500 mL water sample was 90.8%. Mixtures of seven pesticides spiked into pure water and pond water were used as marker samples to evaluate the field sampling method. The average recoveries at three spiking levels were in the range 60.0-104.7%, with acceptable relative standard deviations. The method has good linearity, with a correlation coefficient >0.9990 in the concentration range 0.5-15 μg L(-1). The results of the analysis of a sample of poisoned pond water indicate that this method is fast, convenient and efficient for the field sampling and enrichment of pesticides in aqueous samples.
ZHONG; Fengquan(仲峰泉); LIU; Nansheng(刘难生); LU; Xiyun(陆夕云); ZHUANG; Lixian(庄礼贤)
2002-01-01
In the present paper, a new dynamic subgrid-scale (SGS) model of turbulent stress and heat flux for stratified shear flow is proposed. Based on our calculated results of stratified channel flow, the dynamic subgrid-scale model developed in this paper is shown to be effective for large eddy simulation (LES) of stratified turbulent shear flows. The new SGS model is then applied to the LES of the stratified turbulent channel flow to investigate the coupled shear and buoyancy effects on the behavior of turbulent statistics, turbulent heat transfer and flow structures at different Richardson numbers.
Peter Potapov
2013-04-01
Insular Southeast Asia is a hotspot of humid tropical forest cover loss. A sample-based monitoring approach quantifying forest cover loss from Landsat imagery was implemented to estimate gross forest cover loss for two eras, 1990–2000 and 2000–2005. For each time interval, a probability sample of 18.5 km × 18.5 km blocks was selected, and pairs of Landsat images acquired per sample block were interpreted to quantify forest cover area and gross forest cover loss. Stratified random sampling was implemented for 2000–2005, with MODIS-derived forest cover loss used to define the strata. A probability proportional to x (πpx) design was implemented for 1990–2000, with AVHRR-derived forest cover loss used as the x variable to increase the likelihood of including forest loss area in the sample. The estimated annual gross forest cover loss for Malaysia was 0.43 Mha/yr (SE = 0.04) during 1990–2000 and 0.64 Mha/yr (SE = 0.055) during 2000–2005. Our use of the πpx sampling design represents a first practical trial of this design for sampling satellite imagery. Although the design performed adequately in this study, a thorough comparative investigation of the πpx design relative to other sampling strategies is needed before general design recommendations can be put forth.
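The stratified estimator underlying such a design combines per-stratum sample means weighted by stratum sizes, with a standard error built from the within-stratum sample variances. A minimal sketch (stratum sizes and per-block loss values invented for illustration; no finite-population correction):

```python
import math
from statistics import mean, variance

def stratified_estimate(strata):
    # strata: list of (N_h, sample_values) pairs. Returns the stratified
    # estimate of the population total and its standard error
    # (no finite-population correction, for brevity).
    total = sum(N * mean(y) for N, y in strata)
    var = sum(N ** 2 * variance(y) / len(y) for N, y in strata)
    return total, math.sqrt(var)

# invented illustration: a high-loss stratum (100 blocks) and a
# low-loss stratum (400 blocks), three sampled blocks each
total, se = stratified_estimate([(100, [1.0, 1.2, 0.8]),
                                 (400, [0.1, 0.2, 0.0])])
```

Placing more of the sample in the high-loss stratum, as the MODIS-based stratification does, shrinks the dominant variance terms.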
Buchholz, B; Paquet, V; Punnett, L; Lee, D; Moir, S
1996-06-01
A high prevalence and incidence of work-related musculoskeletal disorders have been reported in construction work. Unlike industrial production-line activity, construction work, as well as work in many other occupations (e.g. agriculture, mining), is non-repetitive in nature; job tasks are non-cyclic, or consist of long or irregular cycles. PATH (Posture, Activity, Tools and Handling), a work sampling-based approach, was developed to characterize the ergonomic hazards of construction and other non-repetitive work. The posture codes in the PATH method are based on the Ovako Work Posture Analysing System (OWAS), with other codes included for describing worker activity, tool use, loads handled and grasp type. For heavy highway construction, observations are stratified by construction stage and operation, using a taxonomy developed specifically for this purpose. Observers can code the physical characteristics of the job reliably after about 30 h of training. A pilot study of six construction laborers during four road construction operations suggests that laborers spend large proportions of time in non-neutral trunk postures and spend approximately 20% of their time performing manual material handling tasks. These results demonstrate how the PATH method can be used to identify specific construction operations and tasks that are ergonomically hazardous.
A BRDF Measurement Apparatus for Lab-based Samples
Gunderson, K.; Whitby, J.; Thomas, N.
2004-03-01
We have constructed an instrument to make full-hemisphere bidirectional reflectance distribution function (BRDF) laboratory measurements of terrestrial samples in order to validate BRDF models and help interpret data of solar system objects.
Stratified spin-up in a sliced, square cylinder
Munro, R. J. [Faculty of Engineering, University of Nottingham, Nottingham NG7 2RD (United Kingdom); Foster, M. R. [Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States)
2014-02-15
We previously reported experimental and theoretical results on the linear spin-up of a linearly stratified, rotating fluid in a uniform-depth square cylinder [M. R. Foster and R. J. Munro, “The linear spin-up of a stratified, rotating fluid in a square cylinder,” J. Fluid Mech. 712, 7–40 (2012)]. Here we extend that analysis to a “sliced” square cylinder, which has a base-plane inclined at a shallow angle α. Asymptotic results are derived that show the spin-up phase is achieved by a combination of Ekman-layer eruptions (from the perimeter region of the cylinder's lid and base) and cross-slope-propagating stratified Rossby waves. The final, steady state limit for this spin-up phase is identical to that found previously for the uniform-depth cylinder, but is reached somewhat more rapidly, on a time scale of order E^(-1/2)Ω^(-1)/log(α/E^(1/2)) (compared to E^(-1/2)Ω^(-1) for the uniform-depth cylinder), where Ω is the rotation rate and E the Ekman number. Experiments were performed for Burger numbers, S, between 0.4 and 16, and showed that for S ≳ O(1) the Rossby modes are severely damped, and it is only at small S, and during the early stages, that the presence of these wave modes was evident. These observations are supported by the theory, which shows the damping factors increase with S and are numerically large for S ≳ O(1).
Wei Lin Teoh
Designs of the double sampling (DS) X chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shift, ranging from highly skewed when the process is in control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and fair representation of the central tendency, especially for the right-skewed run length distribution. Since the DS X chart can effectively reduce the sample size without reducing statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X and Shewhart X charts demonstrate the superiority of the proposed optimal MRL-based DS X chart, as the latter requires a smaller sample size on average while maintaining the same detection speed as the two former charts. An example involving added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X chart in reducing the sample size needed.
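The central point, that the ARL and the MRL can disagree badly for a skewed run-length distribution, is easy to see for a chart whose run length is geometric: a 3-sigma Shewhart-type chart has an in-control ARL of about 370 but an MRL of only about 257. A small sketch (not the DS chart's own run-length distribution, which is more involved):

```python
import math

def median_run_length(p):
    # median of a geometric run-length distribution with per-sample
    # signal probability p: smallest n with P(RL <= n) >= 0.5
    return math.ceil(math.log(0.5) / math.log(1.0 - p))

# in-control 3-sigma Shewhart-type chart: p ~ 0.0027 per sample
p = 0.0027
arl = 1.0 / p               # average run length, about 370
mrl = median_run_length(p)  # median run length, about 257
```

The median sits well below the mean precisely because the in-control run-length distribution is strongly right-skewed, which is the motivation for MRL-based design.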
Gruijter, de J.J.; Braak, ter C.J.F.
1992-01-01
Two fundamentally different sources of randomness exist on which design and inference in spatial sampling can be based: (a) variation that would occur on resampling the same spatial population with other sampling configurations generated by the same design, and (b) variation occurring on sampling
Gao, Xingwen; Tang, Dewei; Yue, Honghao; Jiang, Shengyuan; Deng, Zongquan
2017-06-01
The direct push sampling method is one of the most commonly used sampling methods in lunar regolith exploration. However, the disturbance of in situ bedding information during the sampling process has remained an unresolved problem. In this paper, the discrete element method is used to establish a numerical lunar soil simulant based on the Hertz-Mindlin contact model. The result of a simulated triaxial test shows that the macro mechanical parameters of the simulant accurately reproduce those of most known lunar soil samples. The friction coefficient between the simulant and the wall of the sampling tube is also tested and used as the key variable in the subsequent simulation and study. The disturbance phenomenon is evaluated by the displacement of marked layers, and a swirling structure is observed. The influence of the friction coefficient on the void ratio and stress distribution of the soil simulant is also determined.
Efficiency of Event-Based Sampling According to Error Energy Criterion
Marek Miskowicz
2010-01-01
The paper belongs to the studies that deal with the effectiveness of the particular event-based sampling scheme compared to the conventional periodic sampling as a reference. In the present study, the event-based sampling according to a constant energy of sampling error is analyzed. This criterion is suitable for applications where the energy of sampling error should be bounded (i.e., in building automation, or in greenhouse climate monitoring and control). Compared to the integral sampling c...
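The error-energy criterion can be sketched as follows: between samples the receiver holds the last sent value, and a new sample is triggered once the integral of the squared error since the last sample reaches a fixed threshold. The signal, step size and threshold below are invented for illustration; this is a sketch of the criterion, not the paper's implementation.

```python
import math

def energy_triggered_samples(signal, dt, threshold):
    # Event-based sampling by error energy: hold the last sent value and
    # emit a new sample once the integral of the squared error since the
    # last sample reaches `threshold`.
    samples = [(0.0, signal[0])]
    last, energy = signal[0], 0.0
    for k in range(1, len(signal)):
        energy += (signal[k] - last) ** 2 * dt
        if energy >= threshold:
            samples.append((k * dt, signal[k]))
            last, energy = signal[k], 0.0
    return samples

# invented test signal: a slow sine, uniformly discretized
sig = [math.sin(2 * math.pi * t / 100) for t in range(400)]
events = energy_triggered_samples(sig, 0.01, 1e-3)
```

Compared to periodic sampling of the same signal, samples cluster where the signal changes fast and thin out where it is nearly constant, which is the efficiency the paper analyzes.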
Paula Costa Mosca Macedo
2009-06-01
OBJECTIVE: To evaluate the quality of life of medical residents during the three years of training and to identify its association with sociodemographic-occupational characteristics, leisure time, and health habits. METHOD: A cross-sectional study was conducted with a random sample of 128 residents stratified by year of training. The Medical Outcome Study Short Form 36 was administered. Mann-Whitney tests were carried out to compare percentile distributions of the eight quality-of-life domains according to sociodemographic variables, and a multiple linear regression analysis was performed, followed by validity checking of the resulting models. RESULTS: The physical component presented higher quality-of-life medians than the mental component. Comparisons between the three years showed that in almost all domains the quality-of-life scores of second-year residents were higher than those of first-year residents (p < 0.01); for the mental component, scores were higher in the third year than in the other years (p < 0.01). Predictors of higher quality of life were: being in the second or
Traffic Prediction Based on SVM Training Sample Divided by Time
Lingli Li
2013-07-01
In recent years, traffic volume has been increasing rapidly. When vehicles running through a tunnel are dense or move slowly, the tunnel environment deteriorates sharply, which affects the normal operation of vehicles in the tunnel. This paper uses the results of previous association rule mining to select feature items and to establish four training samples divided by time. The training samples are then used to build an SVM classification model, and the trained SVM model is used to predict the tunnel traffic situation. Through traffic situation prediction, effective decisions can be made before traffic jams occur, ensuring that tunnel traffic remains normal.
Jones Suzanne P
2007-08-01
Background Historically there has been wide variation in the proportion of inadequate smears between general practices. Cervical screening in the UK is undergoing a fundamental change by moving from conventional to liquid based cytology (LBC). The main driver for this change has been a predicted reduction in the proportion of inadequate samples. This study investigates the effect of LBC on the variation in the proportion of inadequate samples between general practices using Shewhart's theory of variation and control charts. Methods Routinely collected cervical cytology data were obtained for all general practices in two localities in South Staffordshire for periods before and after the introduction of liquid based cytology. Control charts of the proportion of inadequate smears were plotted for the practices stratified by laboratory. A standardised measure of variation for all of the practices in each laboratory and each time period was also calculated. Results Following the introduction of liquid based cytology the overall proportion of inadequate samples in the two localities fell from 11.8 to 1.3% (p Conclusion A reduction in the proportion of inadequate samples has been realised in these localities. The reduction in the overall proportion of inadequate samples has also been accompanied by a reduction in variation between GP practices.
Experimental Study of Fluorine Transport Rules in Unsaturated Stratified Soil
ZHANG Hong-mei; SU Bao-yu; LIU Peng-hua; ZHANG Wei
2007-01-01
With the aid of soil column test models, the transport rules of fluorine contaminants in unsaturated stratified soils are discussed. Curves of F- concentrations at different times and sites in the unsaturated stratified soil were obtained under conditions of continuous injection of fluoride contaminants and water. Based on analysis of the actual observation data, computed results and observed data were compared. It is shown that the chemical properties of fluorine ions are active and that the migration process of fluorine ions in soils is complex. Because of the effects of adsorption and desorption, the fluorine ion breakthrough curve is not symmetric, and its concentration peak value at each measuring point gradually decays. The tail of the breakthrough curve is long, and the process of leaching and purifying with water requires considerable time. Along with the release of OH- in the process of fluorine absorption, the pH value of the soil solution changed from neutral to alkaline during the test. The first part of the breakthrough curve fitted better than the second part. The main reason is that fluorine does not always exist in the form of fluorine ions in groundwater: given the long test time, fluorine ions possibly react with other ions in the soil solution to form complex water-soluble fluorine compounds. Only the retardation factor and the source-sink term were considered in our numerical model, which may lead to errors in the computed values. On the whole, however, the derived migration rules for fluorine ions are basically correct, which indicates that the established numerical model can be used to simulate the transport of fluorine contaminants in unsaturated stratified soils.
Stratified growth in Pseudomonas aeruginosa biofilms
Werner, E.; Roe, F.; Bugnicourt, A.;
2004-01-01
In this study, stratified patterns of protein synthesis and growth were demonstrated in Pseudomonas aeruginosa biofilms. Spatial patterns of protein synthetic activity inside biofilms were characterized by the use of two green fluorescent protein (GFP) reporter gene constructs. One construct carried an isopropyl-beta-D-thiogalactopyranoside (IPTG)-inducible gfpmut2 gene encoding a stable GFP. The second construct carried a GFP derivative, gfp-AGA, encoding an unstable GFP under the control of the growth-rate-dependent rrnBp(1) promoter. Both GFP reporters indicated that active protein … of oxygen limitation in the biofilm. Oxygen microelectrode measurements showed that oxygen only penetrated approximately 50 μm into the biofilm. P. aeruginosa was incapable of anaerobic growth in the medium used for this investigation. These results show that while mature P. aeruginosa biofilms contain
Clustering of floating particles in stratified turbulence
Boffetta, Guido; de Lillo, Filippo; Musacchio, Stefano; Sozza, Alessandro
2016-11-01
We study the dynamics of small floating particles transported by stratified turbulence in presence of a mean linear density profile as a simple model for the confinement and the accumulation of plankton in the ocean. By means of extensive direct numerical simulations we investigate the statistical distribution of floaters as a function of the two dimensionless parameters of the problem. We find that vertical confinement of particles is mainly ruled by the degree of stratification, with a weak dependency on the particle properties. Conversely, small scale fractal clustering, typical of non-neutral particles in turbulence, depends on the particle relaxation time and is only weakly dependent on the flow stratification. The implications of our findings for the formation of thin phytoplankton layers are discussed.
On turbulence in a stratified environment
Sarkar, Sutanu
2015-11-01
John Lumley, motivated by atmospheric observations, made seminal contributions to the statistical theory (Lumley and Panofsky 1964, Lumley 1964) and second-order modeling (Zeman and Lumley 1976) of turbulence in the environment. Turbulent processes in the ocean share many features with the atmosphere, e.g., shear, stratification, rotation and rough topography. Results from direct and large eddy simulations of two model problems will be used to illustrate some of the features of turbulence in a stratified environment. The first problem concerns a shear layer in nonuniform stratification, a situation typical of both the atmosphere and the ocean. The second problem, considered to be responsible for much of the turbulent mixing that occurs in the ocean interior, concerns topographically generated internal gravity waves. Connections will be made to data taken during observational campaigns in the ocean.
Accumulator-Based Deep-Sea Microbe Gastight Sampling Technique
HUANG Zhong-hua; LIU Shao-jun; JIN Bo
2006-01-01
The accumulator is used as a pressure compensation device to realize gastight sampling of deep-sea microbes. Four key states of the accumulator are proposed to describe the pressure compensation process, and a corresponding mathematical model is established to investigate the relationship between the results of pressure compensation and the parameters of the accumulator. Simulation results show that during the descent of the sampler, the accumulator's actual opening pressure is greater than its precharge pressure; when the sampling depth is 6000 m and the accumulator's precharge pressure is less than 30 MPa, increasing the precharge pressure noticeably improves pressure compensation. Laboratory experiments at 60 MPa show that the accumulator is an effective and reliable pressure compensation device for deep-sea microbe samplers. The success of a sea trial at a depth of 2000 m in the South China Sea shows that the mathematical model and laboratory experiment results are reliable.
The Performance Analysis Based on SAR Sample Covariance Matrix
Esra Erten
2012-03-01
Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. Complex images acquired over natural media present, in general, zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, and moving target indication. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of multi-channel SAR images is presented in a form simplified for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well.
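The maximum-eigenvalue statistic discussed above is easy to illustrate with a small simulation, assuming zero-mean circular complex Gaussian channels with an identity true covariance (so the sample covariance is complex-Wishart distributed); the function name and parameter values are illustrative, not from the paper:

```python
import numpy as np

def sample_covariance_max_eig(n_looks=64, p=3, seed=0):
    """Estimate a p-channel sample covariance matrix from n_looks zero-mean
    circular complex Gaussian samples and return it with its largest
    eigenvalue. The sample covariance follows a complex Wishart law."""
    rng = np.random.default_rng(seed)
    # circular complex Gaussian: independent real/imag parts, variance 1/2 each
    z = (rng.standard_normal((p, n_looks))
         + 1j * rng.standard_normal((p, n_looks))) / np.sqrt(2)
    C_hat = z @ z.conj().T / n_looks      # Hermitian sample covariance
    eigvals = np.linalg.eigvalsh(C_hat)   # real eigenvalues, ascending order
    return C_hat, eigvals[-1]
```

Repeating this over many draws would trace out the distribution of the maximum eigenvalue that the paper characterizes analytically.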
Towards Cost-efficient Sampling Methods
Peng, Luo; Chong, Wu
2014-01-01
The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper presents two new sampling methods based on the observation that a small number of vertices with high degree can carry most of the structural information of a network. The two proposed sampling methods are efficient in sampling high-degree nodes. The first is an improvement on stratified random sampling: it selects high-degree nodes with higher probability by classifying nodes according to the degree distribution. The second improves the existing snowball sampling method so that it can sample the targeted nodes selectively in every sampling step. Besides, the two proposed methods sample not only the nodes but also the edges directly connected to these nodes. In order to demonstrate the two methods' availability and accuracy, we compare them with the existing sampling methods in...
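A minimal sketch of the first idea, degree-stratified node sampling. The abstract does not give the exact classification rule, so this version assumes a simple two-stratum split at the median degree and a hypothetical high-stratum allocation fraction:

```python
import random
from collections import defaultdict

def degree_stratified_sample(edges, n_samples, high_frac=0.8, seed=0):
    """Split nodes into a low- and a high-degree stratum at the median degree,
    then draw high_frac of the sample from the high-degree stratum, so that
    hubs are over-represented.  Edges incident to sampled nodes are kept too.
    The two-stratum split and high_frac value are illustrative assumptions."""
    random.seed(seed)
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    nodes = sorted(degree, key=degree.get)     # ascending degree
    cut = len(nodes) // 2
    low, high = nodes[:cut], nodes[cut:]
    k_high = min(len(high), round(n_samples * high_frac))
    k_low = min(len(low), n_samples - k_high)
    sampled = set(random.sample(high, k_high) + random.sample(low, k_low))
    incident = [e for e in edges if e[0] in sampled or e[1] in sampled]
    return sampled, incident
```

A finer stratification (more degree classes, allocation proportional to stratum information content) would follow the same pattern.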
Efficiency of event-based sampling according to error energy criterion.
Miskowicz, Marek
2010-01-01
This paper belongs to the line of studies on the effectiveness of particular event-based sampling schemes compared to conventional periodic sampling as a reference. In the present study, event-based sampling according to a constant energy of the sampling error is analyzed. This criterion is suitable for applications where the energy of the sampling error should be bounded (e.g., in building automation, or in greenhouse climate monitoring and control). Compared to integral sampling criteria, the error energy criterion gives more weight to extreme sampling error values. The proposed sampling principle extends the range of event-based sampling schemes and makes the choice of a particular sampling criterion more flexible with respect to application requirements. In the paper, it is proved analytically that the proposed event-based sampling criterion is more effective than periodic sampling by a factor defined by the ratio of the maximum to the mean of the cubic root of the squared signal time-derivative in the analyzed time interval. Furthermore, it is shown that sampling according to the energy criterion is less effective than the send-on-delta scheme but more effective than sampling according to the integral criterion. On the other hand, it is indicated that the higher effectiveness of the selected event-based criterion is obtained at the cost of an increase in the total sampling error, defined as the sum of errors over all samples taken.
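The send-on-delta scheme used above as a point of comparison is the simplest member of this family and is easy to sketch. This toy example (signal and threshold are hypothetical) shows why an event-based criterion takes far fewer samples than periodic sampling on a slowly varying signal while keeping the held-value error bounded:

```python
import math

def send_on_delta(signal, delta):
    """Event-based sampling: emit a sample whenever the signal deviates from
    the last transmitted value by at least delta (send-on-delta scheme).
    Returns (index, value) pairs of the transmitted samples."""
    out = [(0, signal[0])]
    last = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        if abs(x - last) >= delta:
            out.append((i, x))
            last = x
    return out

# slowly varying test signal: 1000 periodic samples vs. ~40 events
sig = [math.sin(2 * math.pi * t / 1000) for t in range(1000)]
events = send_on_delta(sig, delta=0.1)
```

The number of events is roughly the signal's total variation divided by delta (about 4/0.1 = 40 here), versus 1000 periodic samples for the same worst-case error bound; the paper's error-energy criterion bounds the energy rather than the amplitude of the error.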
Li, Tiandong
2012-01-01
In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…
IMAGE PROFILE AREA CALCULATION BASED ON CIRCULAR SAMPLE MEASUREMENT CALIBRATION
Anonymous
2005-01-01
A practical approach to measurement calibration is presented for obtaining the true area of photographed objects projected onto the 2-D image scene. The calibration is performed using three circular samples with given diameters. The process first obtains the mm/pixel ratio in two orthogonal directions, and then uses the obtained ratios, together with the total number of pixels scanned within the projected area of the object of interest, to compute the desired area. Comparing the optically measured areas with their corresponding true areas shows that the proposed method is quite encouraging, and the relevant application also proves the approach adequately accurate.
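The two-step procedure (calibrate mm/pixel ratios from a circular sample of known diameter, then multiply the object's pixel count by the per-pixel area) can be sketched as follows; the numbers are illustrative only:

```python
def calibrate_mm_per_pixel(diameter_mm, width_px, height_px):
    """Derive mm/pixel ratios in two orthogonal directions from the imaged
    bounding box of a circular sample of known diameter."""
    return diameter_mm / width_px, diameter_mm / height_px

def object_area_mm2(pixel_count, rx, ry):
    """True area = pixels inside the projection times the per-pixel
    area rx * ry (mm^2 per pixel)."""
    return pixel_count * rx * ry

# hypothetical example: 20 mm circle imaged as 200 px wide and 100 px tall
rx, ry = calibrate_mm_per_pixel(20.0, 200, 100)   # 0.1 and 0.2 mm/pixel
area = object_area_mm2(5000, rx, ry)              # object covering 5000 px
```

Averaging the ratios over the three circular samples, as the paper does, reduces sensitivity to any single calibration measurement.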
Fugel, Hans-Joerg; Nuijten, Mark; Postma, Maarten
2016-01-01
RATIONALE: Stratified Medicine (SM) is becoming a natural result of advances in biomedical science and a promising path for the innovation-based biopharmaceutical industry to create new investment opportunities. While the use of biomarkers to improve R&D efficiency and productivity is very much
Adaptive sampling for nonlinear dimensionality reduction based on manifold learning
Franz, Thomas; Zimmermann, Ralf; Goertz, Stefan
2017-01-01
We make use of the non-intrusive dimensionality reduction method Isomap in order to emulate nonlinear parametric flow problems that are governed by the Reynolds-averaged Navier-Stokes equations. Isomap is a manifold learning approach that provides a low-dimensional embedding space...... that is approximately isometric to the manifold that is assumed to be formed by the high-fidelity Navier-Stokes flow solutions under smooth variations of the inflow conditions. The focus of the work at hand is the adaptive construction and refinement of the Isomap emulator: We exploit the non-Euclidean Isomap metric...... to detect and fill up gaps in the sampling in the embedding space. The performance of the proposed manifold filling method will be illustrated by numerical experiments, where we consider nonlinear parameter-dependent steady-state Navier-Stokes flows in the transonic regime....
Learning algorithms for feedforward networks based on finite samples
Rao, N.S.V.; Protopopescu, V.; Mann, R.C.; Oblow, E.M.; Iyengar, S.S.
1994-09-01
Two classes of convergent algorithms for learning continuous functions (and also regression functions) that are represented by feedforward networks, are discussed. The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro style stochastic approximation methods. Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.
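The Robbins-Monro style stochastic approximation mentioned above can be illustrated in its classical scalar form, with step sizes a_n = a/n satisfying the usual conditions (sum of a_n diverges, sum of a_n² converges); the target function below is a hypothetical example, not from the paper:

```python
import random

def robbins_monro(noisy_grad, theta0=0.0, n_steps=5000, a=1.0):
    """Robbins-Monro stochastic approximation: theta <- theta - (a/n) * Y_n,
    where Y_n is a noisy observation of the gradient at theta.  The a/n
    schedule gives large early corrections and vanishing late-stage noise."""
    theta = theta0
    for n in range(1, n_steps + 1):
        theta -= (a / n) * noisy_grad(theta)
    return theta

# illustrative target: find the root of E[theta - X] = 0 for X ~ N(2, 1),
# i.e. estimate the mean 2.0 from noisy samples
random.seed(0)
def noisy_grad(theta):
    return theta - (2.0 + random.gauss(0, 1))

est = robbins_monro(noisy_grad)
```

With a = 1 this recursion reduces exactly to the running sample mean, which makes the almost-sure convergence easy to see; in the paper the same scheme updates network weights using martingale-type inequalities to relate sample size to error bounds.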
Sample Size Requirements for Traditional and Regression-Based Norms
Oosterhuis, H.E.M.; van der Ark, L.A.; Sijtsma, K.
2016-01-01
Test norms enable determining the position of an individual test taker in the group. The most frequently used approach to obtain test norms is traditional norming. Regression-based norming may be more efficient than traditional norming and is rapidly growing in popularity, but little is known about
The Risk-Stratified Osteoporosis Strategy Evaluation study (ROSE)
Rubin, Katrine Hass; Holmberg, Teresa; Rothmann, Mette Juel
2015-01-01
The risk-stratified osteoporosis strategy evaluation study (ROSE) is a randomized prospective population-based study investigating the effectiveness of a two-step screening program for osteoporosis in women. This paper reports the study design and baseline characteristics of the study population....... 35,000 women aged 65-80 years were selected at random from the population in the Region of Southern Denmark and-before inclusion-randomized to either a screening group or a control group. As first step, a self-administered questionnaire regarding risk factors for osteoporosis based on FRAX......(®) was issued to both groups. As second step, subjects in the screening group with a 10-year probability of major osteoporotic fractures ≥15 % were offered a DXA scan. Patients diagnosed with osteoporosis from the DXA scan were advised to see their GP and discuss pharmaceutical treatment according to Danish...
Feature-Based Digital Modulation Recognition Using Compressive Sampling
Zhuo Sun
2016-01-01
Compressive sensing theory can be applied to reconstruct a signal with far fewer measurements than what is usually considered necessary, while in many scenarios, such as spectrum detection and modulation recognition, we only need to acquire useful characteristics rather than the original signals, so selecting a feature with sparsity becomes the main challenge. With the aim of digital modulation recognition, this paper constructs two features which can be recovered directly from compressive samples: the spectrum of the received data and its nonlinear transformation, and the compositional feature of multiple high-order moments of the received data; both have the sparsity required for reconstruction from subsamples. Recognition of multiple frequency shift keying, multiple phase shift keying, and multiple quadrature amplitude modulation is considered and implemented in a unified procedure. Simulation shows that the two identification features work effectively in digital modulation recognition, even at relatively low signal-to-noise ratios.
Integrated microdroplet-based system for enzyme synthesis and sampling
Lapierre, Florian; Best, Michel; Stewart, Robert; Oakeshott, John; Peat, Thomas; Zhu, Yonggang
2013-12-01
Microdroplet-based microfluidic devices are emerging as powerful tools for a wide range of biochemical screenings and analyses. Monodispersed aqueous microdroplets from picoliters to nanoliters in volume are generated inside microfluidic channels within an immiscible oil phase. This results in the formation of emulsions which can contain various reagents for chemical reactions and can be considered discrete bioreactors. In this paper, an integrated microfluidic platform for the synthesis, screening and sorting of libraries of an organophosphate-degrading enzyme is presented. The variants of the selected enzyme are synthesized from a DNA source using an in-vitro transcription and translation method. The synthesis occurs inside water-in-oil emulsion droplets acting as bioreactors. Through a fluorescence-based detection system, only the most efficient enzymes are selected. All the necessary steps, from enzyme synthesis to selection of the best genes (those producing the highest enzyme activity), are thus integrated inside a single device. In the second part of the paper, an innovative design of the microfluidic platform is presented, integrating an electronic prototyping board to ensure communication between the various components of the platform (camera, syringe pumps and high-voltage power supply), resulting in a future handheld, user-friendly, fully automated device for enzyme synthesis, screening and selection. An overview of the capabilities as well as future perspectives of this new microfluidic platform is provided.
Magnetic flux concentrations from turbulent stratified convection
Käpylä, P J; Kleeorin, N; Käpylä, M J; Rogachevskii, I
2015-01-01
(abridged) Context: The mechanisms that cause the formation of sunspots are still unclear. Aims: We study the self-organisation of initially uniform sub-equipartition magnetic fields by highly stratified turbulent convection. Methods: We perform simulations of magnetoconvection in Cartesian domains that are $8.5$-$24$ Mm deep and $34$-$96$ Mm wide. We impose either a vertical or a horizontal uniform magnetic field in a convection-driven turbulent flow. Results: We find that super-equipartition magnetic flux concentrations are formed near the surface with domain depths of $12.5$ and $24$ Mm. The size of the concentrations increases as the box size increases and the largest structures ($20$ Mm horizontally) are obtained in the 24 Mm deep models. The field strength in the concentrations is in the range of $3$-$5$ kG. The concentrations grow approximately linearly in time. The effective magnetic pressure measured in the simulations is positive near the surface and negative in the bulk of the convection zone. Its ...
Strongly Stratified Turbulence Wakes and Mixing Produced by Fractal Wakes
Dimitrieva, Natalia; Redondo, Jose Manuel; Chashechkin, Yuli; Fraunie, Philippe; Velascos, David
2017-04-01
Non-stationary dynamics and structure of stratified fluid flows around a wedge were also studied based on the fundamental equation set using numerical modeling. Because the naturally existing background diffusion flux of the stratifying agent is interrupted by the impermeable surface of the wedge, a complex multi-level vortex system of compensatory fluid motions forms around the obstacle. The flow is characterized by a wide range of internal scales that are absent in a homogeneous liquid. A numerical solution of the fundamental system with the boundary conditions is constructed using a solver, stratifiedFoam, developed within the open-source computational package OpenFOAM using the finite volume method. The computations were performed in parallel using the computing resources of the Scientific Research Supercomputer Complex of MSU (SRCC MSU) and the technological platform UniHUB. The evolution of the flow pattern around the wedge in stratified flow is demonstrated, and the complex structure of the fields of physical quantities and their gradients is shown. Multiple flow components observed in experiment, including upstream disturbances, internal waves and the downstream wake with submerged transient vortices, are well reproduced. Structural elements of the flow differ in size and in their laws of variation in space and time. A rich fine flow structure is visualized both in the vicinity of and far from the obstacle. The global efficiency of the mixing process is measured and compared with previous estimates of mixing efficiency.
Simulation model of stratified thermal energy storage tank using finite difference method
Waluyo, Joko
2016-06-01
A stratified TES tank is normally used in cogeneration plants. Stratified TES tanks are simple, low cost, and equal or superior in thermal performance. The advantage of a TES tank is that it enables shifting energy usage from off-peak to on-peak demand periods. To increase energy utilization in a stratified TES tank, a simulation model is required that can precisely simulate the charging phenomenon in the tank. This paper aims to develop a novel model addressing the aforementioned problem. The model incorporates a chiller into the charging of the stratified TES tank in a closed system. The model is one-dimensional and involves heat transfer, covering the main factors affecting degradation of the temperature distribution, namely conduction through the tank wall, conduction between cool and warm water, the mixing effect during the initial flow of charging, and heat loss to the surroundings. The simulation model is developed based on the finite difference method utilizing the buffer concept theory and solved with an explicit method. Validation of the simulation model is carried out using observed data obtained from an operating stratified TES tank in a cogeneration plant. The temperature distribution of the model is capable of representing the S-curve pattern as well as simulating the decrease in charging temperature after reaching the full condition. The coefficient of determination between the observed data and the model is higher than 0.88, meaning that the model is capable of simulating the charging phenomenon in the stratified TES tank. The model not only generates the temperature distribution but can also be enhanced to represent transient conditions during charging of the stratified TES tank. This model can be applied to address the temperature limitation occurring in charging of the stratified TES tank with the absorption chiller. Further, the stratified TES tank can be
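The conduction-between-cool-and-warm-water component of such a model reduces to a one-dimensional heat equation along the tank axis, whose explicit finite-difference solution produces the S-shaped thermocline the abstract mentions. A minimal sketch (all parameter values hypothetical; the paper's model additionally includes wall conduction, inlet mixing, heat loss, and the chiller loop):

```python
import numpy as np

def charge_tes_tank(n_nodes=50, height=4.0, alpha=1.4e-7, dt=60.0,
                    n_steps=600, t_charge=7.0, t_init=13.0):
    """Explicit 1-D finite-difference model of thermocline decay in a
    stratified TES tank: axial conduction between the charged (cool) bottom
    half and the warm top half.  End nodes are held fixed for simplicity."""
    dz = height / n_nodes
    T = np.full(n_nodes, t_init)
    T[: n_nodes // 2] = t_charge          # cool charged water at the bottom
    r = alpha * dt / dz**2                # explicit stability requires r <= 0.5
    assert r <= 0.5, "time step too large for explicit scheme"
    for _ in range(n_steps):
        T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])  # conduction update
    return T
```

Over time the initially sharp step between 7 °C and 13 °C diffuses into the S-curve temperature profile; validating against measured tank profiles, as the paper does, would then quantify the additional degradation mechanisms.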
The generalization ability of online SVM classification based on Markov sampling.
Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang
2015-03-01
In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies of its learning ability on benchmark repositories. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of the training sample grows.
Development of an evaporation-based microfluidic sample concentrator
Sharma, Nigel R.; Lukyanov, Anatoly; Bardell, Ron L.; Seifried, Lynn; Shen, Mingchao
2008-02-01
MicroPlumbers Microsciences LLC has developed a relatively simple concentrator device based on isothermal evaporation. The device allows rapid concentration of dissolved or dispersed substances or microorganisms (e.g. bacteria, viruses, proteins, toxins, enzymes, antibodies) under conditions gentle enough to preserve their specific activity or viability. It is capable of removing 0.8 ml of water per minute at 37°C, and has dimensions compatible with typical microfluidic devices. The concentrator can be used as a stand-alone device or integrated into various processes and analytical instruments, substantially increasing their sensitivity while decreasing processing time. The evaporative concentrator can find applications in many areas such as biothreat detection, environmental monitoring, forensic medicine, pathogen analysis, and agricultural and industrial monitoring. In our presentation, we describe the design, fabrication, and testing of the concentrator. We discuss multiphysics simulations of the heat and mass transport in the device that we used to select the design of the concentrator and the protocol of performance testing. We present the results of experiments evaluating water removal performance.
Hydrodynamics of stratified epithelium: steady state and linearized dynamics
Yeh, Wei-Ting
2015-01-01
A theoretical model for stratified epithelium is presented. The viscoelastic properties of the tissue are assumed to depend on the spatial distribution of proliferative and differentiated cells. Based on this assumption, a hydrodynamic description of tissue dynamics in the long-wavelength, long-time limit is developed, and the analysis reveals important insights into the dynamics of an epithelium close to its steady state. When the proliferative cells occupy a thin region close to the basal membrane, the relaxation rate towards the steady state is enhanced by cell division and cell apoptosis. On the other hand, when the region where proliferative cells reside becomes sufficiently thick, a flow induced by cell apoptosis close to the apical surface could enhance small perturbations. This destabilizing mechanism is general for continuously self-renewing multi-layered tissues; it could be related to the origin of certain tissue morphologies and developing patterns.
Hydrodynamics of stratified epithelium: Steady state and linearized dynamics
Yeh, Wei-Ting; Chen, Hsuan-Yi
2016-05-01
A theoretical model for stratified epithelium is presented. The viscoelastic properties of the tissue are assumed to be dependent on the spatial distribution of proliferative and differentiated cells. Based on this assumption, a hydrodynamic description of tissue dynamics at the long-wavelength, long-time limit is developed, and the analysis reveals important insights into the dynamics of an epithelium close to its steady state. When the proliferative cells occupy a thin region close to the basal membrane, the relaxation rate towards the steady state is enhanced by cell division and cell apoptosis. On the other hand, when the region where proliferative cells reside becomes sufficiently thick, a flow induced by cell apoptosis close to the apical surface enhances small perturbations. This destabilizing mechanism is general for continuous self-renewal multilayered tissues; it could be related to the origin of certain tissue morphology, tumor growth, and the development pattern.
A dynamic subgrid-scale model for the large eddy simulation of stratified flow
刘宁宇; 陆夕云; 庄礼贤
2000-01-01
A new dynamic subgrid-scale (SGS) model, including subgrid turbulent stress and heat flux models for stratified shear flow, is proposed using Yoshizawa's eddy viscosity model as a base model. Based on our calculated results, the dynamic subgrid-scale model developed here is effective for the large eddy simulation (LES) of stratified turbulent channel flows. The new SGS model is then applied to the large eddy simulation of stratified turbulent channel flow under gravity to investigate the coupled shear and buoyancy effects on the near-wall turbulent statistics and the turbulent heat transfer at different Richardson numbers. The critical Richardson number predicted by the present calculation is in good agreement with the value from theoretical analysis.
A NONHYDROSTATIC NUMERICAL MODEL FOR DENSITY STRATIFIED FLOW AND ITS APPLICATIONS
Anonymous
2008-01-01
A modular numerical model was developed for simulating density-stratified flow in domains with irregular bottom topography. The model was designed for examining interactions between stratified flow and topography, e.g., tidally driven flow over two-dimensional sills or internal solitary waves propagating over a shoaling bed. The model was based on the non-hydrostatic vorticity-stream function equations for a continuously stratified fluid in a rotating frame. A self-adaptive grid was adopted in the vertical coordinate, the Alternating Direction Implicit (ADI) scheme was used for the time marching equations, and the Poisson equation for the stream function was solved by Successive Over Relaxation (SOR) iteration with Chebyshev acceleration. The numerical techniques are described and three applications of the model are presented.
Whole arm manipulation planning based on feedback velocity fields and sampling-based techniques.
Talaei, B; Abdollahi, F; Talebi, H A; Omidi Karkani, E
2013-09-01
Changing the configuration of a cooperative whole arm manipulator is not easy while it encloses an object, mainly because of the risk of jamming caused by kinematic constraints. To reduce this risk, this paper proposes a feedback manipulation planning algorithm that takes grasp kinematics into account. The idea is based on a vector field that imposes perturbations in object-motion-inducing directions when the movement is considerably along the manipulator's redundant directions. The obstacle avoidance problem is then addressed by combining the algorithm with sampling-based techniques. As experimental results confirm, the proposed algorithm is effective in avoiding jamming as well as obstacles for a 6-DOF dual-arm whole arm manipulator. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Stratified spaces constitute a Fraïssé category
Mijares, José Gregorio
2010-01-01
We prove that stratified spaces and stratified pseudomanifolds satisfy categorical Fraïssé properties. This result was presented at the First Meeting of Logic and Algebra in Bogotá, in September 2010. This article has been submitted to the Revista Colombiana de Matemáticas.
A case-base sampling method for estimating recurrent event intensities.
Saarela, Olli
2016-10-01
Case-base sampling provides an alternative to methods based on risk set sampling for estimating hazard regression models, in particular when absolute hazards are of interest in addition to hazard ratios. The case-base sampling approach results in a likelihood expression of the logistic regression form, but instead of categorizing time, such an expression is obtained by sampling a discrete set of person-time coordinates from all follow-up data. In this paper, in the context of a time-dependent exposure such as vaccination and a potentially recurrent adverse event outcome, we show that the resulting partial likelihood for the outcome event intensity has the asymptotic properties of a likelihood. We contrast this approach with self-matched case-base sampling, which involves only within-individual comparisons. The efficiency of the case-base methods is compared to that of standard methods through simulations, suggesting that the information loss due to sampling is minimal.
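The sampling step itself, drawing a "base series" of person-time coordinates uniformly from all follow-up alongside the "case series" of observed events, can be sketched as follows. Names and data are illustrative, and fitting the logistic-form likelihood (with an offset for the person-time to base-size ratio) is omitted:

```python
import random

def case_base_sample(follow_up, events, base_size, seed=0):
    """Case-base sampling sketch.  follow_up maps person id -> follow-up time;
    events is a list of (person, event_time) pairs.  Base points (y=0) are
    person-time coordinates drawn uniformly from total follow-up; case
    points (y=1) are the observed events themselves."""
    random.seed(seed)
    persons = list(follow_up)
    total = sum(follow_up.values())
    weights = [follow_up[p] / total for p in persons]
    base = []
    for _ in range(base_size):
        p = random.choices(persons, weights)[0]          # person ∝ follow-up
        base.append((p, random.uniform(0.0, follow_up[p]), 0))
    cases = [(p, t, 1) for p, t in events]
    return base + cases
```

Drawing the person proportionally to follow-up time and then a time uniformly within it makes the base series uniform over total person-time, which is what gives the likelihood its logistic form.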
Faraker, C A; Greenfield, J
2013-08-01
To investigate the sampling performance of individual cervical cytology practitioners using the transformation zone sampling rate (TZSR) as a performance indicator, and to assess the impact of dedicated on-site training for those identified with a low TZSR. The TZSR was calculated for all practitioners submitting ThinPrep® cervical cytology specimens to the Conquest laboratory between January 2010 and November 2011. After excluding those with fewer than 30 qualifying samples, the 10th percentile of the TZSR was calculated. Practitioners with a TZSR below the 10th percentile were visited by a specialist cervical cytology screening facilitator, after which their TZSRs were closely monitored. After exclusions, 175 practitioners had collected 24,358 qualifying liquid-based cytology (LBC) samples. The average TZSR was 70% (range 12-96%). The 10th percentile was 44%; 18 practitioners scored below it. Failure to apply sufficient pressure when sampling was identified as the most common reason for a low TZSR; in some cases there was suspicion that the cervix was not always adequately visualized. Continuous monitoring after assessment identified improvement in the TZSRs of 13/18 practitioners. Identifying practitioners with low TZSRs compared with their peers allows these individuals to be selected for personalized observation and training by a specialist in cervical cytology, which can lead to an improvement in TZSR. As previous studies show a significant correlation between the TZSR and the detection rate of cytological abnormality, it is useful to investigate low TZSRs. © 2013 John Wiley & Sons Ltd.
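The screening rule described (compute each practitioner's TZSR, then flag those below the 10th percentile among practitioners with at least 30 qualifying samples) can be sketched as below; the percentile convention here is a simple nearest-rank choice, not necessarily the one used in the study:

```python
def flag_low_samplers(tz_counts, totals, min_samples=30):
    """Compute each practitioner's transformation zone sampling rate (TZSR)
    and flag those below the 10th percentile among practitioners with at
    least min_samples qualifying samples."""
    rates = {p: tz_counts[p] / totals[p]
             for p in totals if totals[p] >= min_samples}
    ordered = sorted(rates.values())
    p10 = ordered[int(0.1 * len(ordered))]  # simple nearest-rank 10th percentile
    flagged = [p for p, r in rates.items() if r < p10]
    return p10, flagged
```

In the study's terms, the flagged practitioners are the ones selected for a facilitator visit and subsequent TZSR monitoring.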
Visualization of periodic flows in a continuously stratified fluid.
Bardakov, R.; Vasiliev, A.
2012-04-01
To visualize the flow patterns of a viscous continuously stratified fluid, both experimental and computational methods were developed. Computational procedures were based on exact solutions of the set of fundamental equations. Solutions of the problems of flows produced by a periodically oscillating disk (linear and torsional oscillations) were visualized at high resolution to distinguish the small-scale singular components against the background of strong internal waves. The numerical visualization algorithm can represent both scalar and vector fields, such as velocity, density, pressure, vorticity and stream function. The effects of source size, buoyancy and oscillation frequency, and kinematic viscosity of the medium were traced in 2D and 3D problem formulations. A precision schlieren instrument was used to visualize the flow pattern produced by linear and torsional oscillations of a strip and a disk in a continuously stratified fluid. Uniform stratification was created by the continuous displacement method. The buoyancy period ranged from 7.5 to 14 s. In the experiments, disks with diameters from 9 to 30 cm and thicknesses of 1 mm to 10 mm were used. Different schlieren methods, namely the conventional vertical slit with Foucault knife, vertical slit with filament (Maksoutov's method), and horizontal slit with horizontal grating (natural "rainbow" schlieren method), help to produce complementary flow patterns. Both internal wave beams and fine flow components were visualized near and far from the source. The intensity of high-gradient envelopes increased proportionally to the amplitude of the source. In domains of envelope convergence, isolated small-scale vortices and extended mushroom-like jets were formed. Experiments have shown that in the case of torsional oscillations the pattern of currents is more complicated than for forced linear oscillations. Comparison with a known theoretical model shows that nonlinear interactions between the regular and singular flow components must be taken
Population-based estimates of pesticide intake are needed to characterize exposure for particular demographic groups based on their dietary behaviors. Regression modeling performed on measurements of selected pesticides in composited duplicate diet samples allowed (1) estimation ...
Hameed, Omar; Humphrey, Peter A
2006-07-01
Typically glands of prostatic adenocarcinoma have a single cell lining, although stratification can be seen in invasive carcinomas with a cribriform architecture, including ductal carcinoma. The presence and diagnostic significance of stratified cells within non-cribriform carcinomatous prostatic glands has not been well addressed. The histomorphological features and immunohistochemical profile of cases of non-cribriform prostatic adenocarcinoma with stratified malignant glandular epithelium were analyzed. These cases were identified from needle biopsy cases from the consultation files of one of the authors and from a review of 150 consecutive in-house needle biopsy cases of prostatic adenocarcinoma. Immunohistochemistry was performed utilizing antibodies reactive against high molecular weight cytokeratin (34betaE12), p63 and alpha-methylacyl-coenzyme-A racemase (AMACR). A total of 8 cases were identified, including 2 from the 150 consecutive in-house cases (1.3%). In 4 cases, the focus with glands having stratified epithelium was the sole carcinomatous component in the biopsy, while such a component represented 5-30% of the invasive carcinoma seen elsewhere in the remaining cases. The main attribute in all these foci was the presence of glandular profiles lined by several layers of epithelial cells with cytological and architectural features resembling flat or tufted high-grade prostatic intraepithelial neoplasia, but lacking basal cells as confirmed by negative 34betaE12 and/or p63 immunostains in all cases. The AMACR staining profile of the stratified foci was variable, with 4 foci showing positivity, and 3 foci being negative, including two cases that displayed AMACR positivity in adjacent non-stratified prostatic adenocarcinoma. Prostatic adenocarcinoma with stratified malignant glandular epithelium can be identified in prostate needle biopsy samples harboring non-cribriform prostatic adenocarcinoma and resembles glands with high-grade prostatic
RF Sub-sampling Receiver Architecture based on Milieu Adapting Techniques
Behjou, Nastaran; Larsen, Torben; Jensen, Ole Kiel
2012-01-01
A novel sub-sampling-based architecture is proposed which has the ability of reducing the problem of image distortion and improving the signal-to-noise ratio significantly. The technique is based on sensing the environment and adapting the sampling rate of the receiver to the best possible ... selection. The proposed technique is applied to an RF sub-sampling receiver and has revealed great improvements in the SNIR of the receiver. Measurements on an experimental sub-sampling receiver show that the presented method provides up to 85.9 dB improvement in the SNIR of the receiver when comparing ... it for best and worst choice of sampling rate.
A Pilot Sampling Design for Estimating Outdoor Recreation Site Visits on the National Forests
Stanley J. Zarnoch; S.M. Kocis; H. Ken Cordell; D.B.K. English
2002-01-01
A pilot sampling design is described for estimating site visits to National Forest System lands. The three-stage sampling design consisted of national forest ranger districts, site days within ranger districts, and last-exiting recreation visitors within site days. Stratification was used at both the primary and secondary stages. Ranger districts were stratified based...
Advances in paper-based sample pretreatment for point-of-care testing.
Tang, Rui Hua; Yang, Hui; Choi, Jane Ru; Gong, Yan; Feng, Shang Sheng; Pingguan-Murphy, Belinda; Huang, Qing Sheng; Shi, Jun Ling; Mei, Qi Bing; Xu, Feng
2017-06-01
In recent years, paper-based point-of-care testing (POCT) has been widely used in medical diagnostics, food safety and environmental monitoring. However, a high-cost, time-consuming and equipment-dependent sample pretreatment technique is generally required for raw sample processing, which is impractical for low-resource and disease-endemic areas. Therefore, there is an escalating demand for a cost-effective, simple and portable pretreatment technique to be coupled with the commonly used paper-based assays (e.g. lateral flow assays) in POCT. In this review, we focus on the importance of using paper as a platform for sample pretreatment. We first discuss the beneficial use of paper for sample pretreatment, including sample collection and storage, separation, extraction, and concentration. We highlight the working principle and fabrication of each sample pretreatment device, the existing challenges, and future perspectives for developing paper-based sample pretreatment techniques.
廖全全; 邹红梅; 王从华; 廖月红
2009-01-01
Objective: To utilize existing human resources and establish a stratified management mode based on the responsibility of the nursing team leader, in order to reduce nursing defects and nursing disputes and improve the quality of emergency care. Methods: Assessment criteria were formulated; nurses in the emergency department were stratified in both assignment and management, and the stratified management mode based on team-leader responsibility was implemented. The results were then analyzed and fed back through assessment. Results: Quality of care and the satisfaction of patients, doctors, and related departments before and after implementation differed significantly by χ2 test. Nursing defects decreased from 5 cases in 2005 to 1 in 2007, and nursing disputes decreased from 5 in 2005 to 0 in 2007. The intact rate of first-aid materials increased from 98% to 100%. Conclusions: The stratified management mode ensures that the quality of emergency care is maintained in every time period and at every step; it reduces nursing defects and disputes, increases patient satisfaction, and improves the quality of care, and is worth adopting in clinical emergency departments.
Behtani, A.; Bouazzouni, A.; Khatir, S.; Tiachacht, S.; Zhou, Y.-L.; Abdel Wahab, M.
2017-05-01
In this paper, the problem of using measured modal parameters to detect and locate damage in composite stratified beam structures with four layers of graphite/epoxy [0°/90°₂/0°] is investigated. A technique based on the residual force method is applied to composite stratified structures with different boundary conditions. The results of damage detection for several damage cases demonstrate that, using the residual force method as a damage index, the damage location can be identified correctly and the damage extent can be estimated as well.
The Universal Aspect Ratio of Vortices in Rotating Stratified Flows: Experiments and Observations
Aubert, Oriane; Gal, Patrice Le; Marcus, Philip S
2012-01-01
We validate a new law for the aspect ratio $\alpha = H/L$ of vortices in a rotating, stratified flow, where $H$ and $L$ are the vertical half-height and horizontal length scale of the vortices. The aspect ratio depends not only on the Coriolis parameter $f$ and buoyancy (or Brunt-Vaisala) frequency $\bar{N}$ of the background flow, but also on the buoyancy frequency $N_c$ within the vortex and on the Rossby number $Ro$ of the vortex, such that $\alpha = f \sqrt{Ro (1 + Ro)/(N_c^2 - \bar{N}^2)}$. This law for $\alpha$ is obeyed precisely by the exact equilibrium solution of the inviscid Boussinesq equations that we show to be a useful model of our laboratory vortices. The law is valid for both cyclones and anticyclones. Our anticyclones are generated by injecting fluid into a rotating tank filled with linearly stratified salt water. The vortices are far from the top and bottom boundaries of the tank, so there is no Ekman circulation. In one set of experiments, the vortices viscously decay, but as they do, they c...
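The aspect-ratio law quoted in the abstract can be evaluated directly. A minimal sketch in Python follows; the parameter values are illustrative, not taken from the experiments:

```python
import math

def vortex_aspect_ratio(f, Ro, N_c, N_bar):
    """Aspect ratio H/L of a vortex in a rotating, stratified flow:
    alpha = f * sqrt(Ro * (1 + Ro) / (N_c**2 - N_bar**2))."""
    radicand = Ro * (1.0 + Ro) / (N_c ** 2 - N_bar ** 2)
    if radicand < 0:
        # e.g. an anticyclone with -1 < Ro < 0 requires N_c < N_bar
        raise ValueError("Ro*(1+Ro) and (N_c^2 - N_bar^2) must share a sign")
    return f * math.sqrt(radicand)

# Illustrative mid-latitude values (hypothetical, not the lab settings).
alpha = vortex_aspect_ratio(f=1.0e-4, Ro=0.2, N_c=2.0e-3, N_bar=1.0e-3)
print(alpha)
```

Note that the law couples the sign of `Ro * (1 + Ro)` to the sign of `N_c**2 - N_bar**2`, which is why the sketch guards against a negative radicand.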
Filter-based assay for Escherichia coli in aqueous samples using bacteriophage-based amplification.
Derda, Ratmir; Lockett, Matthew R; Tang, Sindy K Y; Fuller, Renee C; Maxwell, E Jane; Breiten, Benjamin; Cuddemi, Christine A; Ozdogan, Aysegul; Whitesides, George M
2013-08-01
This paper describes a method to detect the presence of bacteria in aqueous samples, based on the capture of bacteria on a syringe filter, and the infection of targeted bacterial species with a bacteriophage (phage). The use of phage as a reagent provides two opportunities for signal amplification: (i) the replication of phage inside a live bacterial host and (ii) the delivery and expression of the complementing gene that turns on enzymatic activity and produces a colored or fluorescent product. Here we demonstrate a phage-based amplification scheme with an M13KE phage that delivers a small peptide motif to an F(+), α-complementing strain of Escherichia coli K12, which expresses the ω-domain of β-galactosidase (β-gal). The result of this complementation, an active form of β-gal, was detected colorimetrically, and the high level of expression of the ω-domain of β-gal in the model K12 strains allowed us to detect, on average, five colony-forming units (CFUs) of this strain in 1 L of water with an overnight culture-based assay. We also detected 50 CFUs of the model K12 strain in 1 L of water (or 10 mL of orange juice, or 10 mL of skim milk) in less than 4 h with a solution-based assay with visual readout. The solution-based assay does not require specialized equipment or access to a laboratory, and is more rapid than existing tests that are suitable for use at the point of access. This method could potentially be extended to detect many different bacteria with bacteriophages that deliver genes encoding a full-length enzyme that is not natively expressed in the target bacteria.
Sampling in interview-based qualitative research: A theoretical and practical guide
Robinson, Oliver
2014-01-01
Sampling is central to the practice of qualitative methods, but compared with data collection and analysis, its processes are discussed relatively little. A four-point approach to sampling in qualitative interview-based research is presented and critically discussed in this article, which integrates theory and process for the following: (1) Defining a sample universe, by way of specifying inclusion and exclusion criteria for potential participation; (2) Deciding upon a sample size, through th...
Sampling Theory of Food Safety System
LI, Bing; Chen, Guohua; Zhu, Ning
2009-01-01
We introduce the stratified sampling method and put forward an unbiased estimator for a stratified sampling program, as well as the model and statistics of the experimental design test. We also discuss the establishment of a dietary exposure model, a pollutant distribution model, and a risk evaluation model. Finally, we present some methods for sampling design in China.
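The unbiased estimator for stratified sampling referred to here is, in its standard form, the stratum-size-weighted mean of the stratum sample means. A minimal sketch with hypothetical contaminant measurements:

```python
import random

def stratified_mean(strata, n_per_stratum, seed=0):
    """Unbiased stratified estimate of a population mean:
    sum over strata h of (N_h / N) * (sample mean within stratum h)."""
    rng = random.Random(seed)
    N = sum(len(s) for s in strata)
    estimate = 0.0
    for stratum, n_h in zip(strata, n_per_stratum):
        sample = rng.sample(stratum, n_h)   # simple random sample in stratum
        estimate += (len(stratum) / N) * (sum(sample) / n_h)
    return estimate

# Hypothetical measurements grouped into a low and a high stratum.
low = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8]
high = [5.0, 5.5, 4.8, 5.2]
print(stratified_mean([low, high], n_per_stratum=[3, 2]))
```

When each stratum is sampled in full, the estimator reproduces the population mean exactly, which is one way to sanity-check an implementation.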
40 CFR 761.298 - Decisions based on PCB concentration measurements resulting from sampling.
2010-07-01
40 Protection of Environment, 2010-07-01. § 761.298 Decisions based on PCB concentration measurements resulting from sampling. (a) For grid samples which are chemically analyzed individually, the PCB concentration applies to the area of...
Fujita, Hajime; Ishii, Shin
2007-11-01
Games constitute a challenging domain of reinforcement learning (RL) for acquiring strategies because many of them include multiple players and many unobservable variables in a large state space. The difficulty of solving such realistic multiagent problems with partial observability arises mainly from the fact that the computational cost for the estimation and prediction in the whole state space, including unobservable variables, is too heavy. To overcome this intractability and enable an agent to learn in an unknown environment, an effective approximation method is required with explicit learning of the environmental model. We present a model-based RL scheme for large-scale multiagent problems with partial observability and apply it to a card game, hearts. This game is a well-defined example of an imperfect information game and can be approximately formulated as a partially observable Markov decision process (POMDP) for a single learning agent. To reduce the computational cost, we use a sampling technique in which the heavy integration required for the estimation and prediction can be approximated by a plausible number of samples. Computer simulation results show that our method is effective in solving such a difficult, partially observable multiagent problem.
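The sampling technique described above replaces an intractable expectation over unobservable states with an average over a plausible number of sampled states. A toy sketch of that core idea; the card-valued belief and payoff below are illustrative, not the paper's hearts model:

```python
import random

def mc_expectation(sampler, value, n=10000, seed=0):
    """Monte Carlo approximation of E[value(s)] over a belief
    distribution on hidden states: average value() over n states
    drawn from sampler(), instead of summing over the whole space."""
    rng = random.Random(seed)
    return sum(value(sampler(rng)) for _ in range(n)) / n

# Toy belief: the hidden state is a card index uniform over 0..51;
# the value function pays 1 if the card is a heart (13 of 52).
est = mc_expectation(lambda rng: rng.randrange(52),
                     lambda s: 1.0 if s < 13 else 0.0)
print(est)  # close to the exact expectation 0.25
```

The estimate converges at a rate of roughly 1/sqrt(n), which is why a "plausible number of samples" suffices even when the underlying state space is enormous.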
Ha, Ji Won; Hahn, Jong Hoon
2017-02-01
Acupuncture sample injection is a simple method to deliver well-defined nanoliter-scale sample plugs into PDMS microfluidic channels. This acupuncture injection method in microchip CE has several advantages, including minimal sample consumption, the capability of serial injections of different sample solutions into the same microchannel, and the capability of injecting sample plugs at any desired position of a microchannel. Herein, we demonstrate that the simple and cost-effective acupuncture sample injection method can be used for PDMS microchip-based field-amplified sample stacking in a maximally simplified straight channel by applying a single potential. We achieved increased electropherogram signals in the case of sample stacking. Furthermore, we show that microchip CGE of a ΦX174 DNA-HaeIII digest can be performed with the acupuncture injection method on a glass microchip while minimizing sample loss and voltage-control hardware. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-02-23
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model.
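The ratio-reweighting approach can be sketched as a classical ratio estimator driven by the auxiliary variable (here, a gene-flow model prediction known over the whole field). All numbers below are hypothetical, not taken from the maize data:

```python
def ratio_estimate(y_sample, x_sample, x_pop_mean):
    """Ratio-reweighted estimate of the mean of y using an auxiliary
    variable x measured at the sampled locations and whose population
    mean is known: y_bar * (x_pop_mean / x_bar)."""
    y_bar = sum(y_sample) / len(y_sample)
    x_bar = sum(x_sample) / len(x_sample)
    return y_bar * (x_pop_mean / x_bar)

# Hypothetical observed transgene rates (%) and model predictions at
# the sampled points; the model mean over the field is 0.4%.
y = [0.2, 0.5, 1.1, 0.1]
x = [0.3, 0.6, 1.2, 0.2]
print(ratio_estimate(y, x, x_pop_mean=0.4))
```

The gain over plain random sampling comes from the correlation between y and x: if the sample happens to over-represent high-exposure locations, x_bar exceeds x_pop_mean and the estimate is scaled down accordingly.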
Leung, Chi-Keung; Chang, Hua-Hua; Hau, Kit-Tai
The multistage alpha-stratified computerized adaptive testing (CAT) design advocated a new philosophy of pool management and item selection using low discriminating items first. It has been demonstrated through simulation studies to be effective both in reducing item overlap rate and enhancing pool utilization with certain pool types. Based on…
Sampling guidelines for oral fluid-based surveys of group-housed animals.
Rotolo, Marisa L; Sun, Yaxuan; Wang, Chong; Giménez-Lirola, Luis; Baum, David H; Gauger, Phillip C; Harmon, Karen M; Hoogland, Marlin; Main, Rodger; Zimmerman, Jeffrey J
2017-02-17
Formulas and software for calculating sample size for surveys based on individual animal samples are readily available. However, sample size formulas are not available for oral fluids and other aggregate samples that are increasingly used in production settings. Therefore, the objective of this study was to develop sampling guidelines for oral fluid-based porcine reproductive and respiratory syndrome virus (PRRSV) surveys in commercial swine farms. Oral fluid samples were collected in 9 weekly samplings from all pens in 3 barns on one production site beginning shortly after placement of weaned pigs. Samples (n=972) were tested by real-time reverse-transcription PCR (RT-rtPCR) and the binary results analyzed using a piecewise exponential survival model for interval-censored, time-to-event data with misclassification. Thereafter, simulation studies were used to study the barn-level probability of PRRSV detection as a function of sample size, sample allocation (simple random sampling vs fixed spatial sampling), assay diagnostic sensitivity and specificity, and pen-level prevalence. These studies provided estimates of the probability of detection by sample size and within-barn prevalence. Detection using fixed spatial sampling was as good as, or better than, simple random sampling. Sampling multiple barns on a site increased the probability of detection with the number of barns sampled. These results are relevant to PRRSV control or elimination projects at the herd, regional, or national levels, but the results are also broadly applicable to contagious pathogens of swine for which oral fluid tests of equivalent performance are available.
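The relationship between sample size and barn-level probability of detection can be illustrated with a simplified closed form that assumes independent pens; the paper itself uses a misclassification-aware survival model and simulation, so this is only a back-of-the-envelope sketch:

```python
def detection_probability(n_samples, prevalence, sensitivity):
    """Probability that at least one of n sampled pens tests positive,
    assuming pens are independent (a simplification):
    1 - (1 - prevalence * sensitivity) ** n."""
    return 1.0 - (1.0 - prevalence * sensitivity) ** n_samples

def samples_needed(target, prevalence, sensitivity):
    """Smallest n achieving at least the target detection probability."""
    n = 1
    while detection_probability(n, prevalence, sensitivity) < target:
        n += 1
    return n

# Hypothetical survey: 10% pen-level prevalence, 95% assay sensitivity.
print(samples_needed(0.95, prevalence=0.10, sensitivity=0.95))
```

The same formula explains why sampling multiple barns raises the probability of detection: each additional barn contributes another independent chance of drawing a positive sample.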
Nakamura, Munehiro; Kajiwara, Yusuke; Otsuka, Atsushi; Kimura, Haruhiko
2013-10-02
Over-sampling methods based on the Synthetic Minority Over-sampling Technique (SMOTE) have been proposed for classification problems with imbalanced biomedical data. However, the existing over-sampling methods achieve slightly better, or sometimes worse, results than the simplest SMOTE. In order to improve the effectiveness of SMOTE, this paper presents a novel over-sampling method using codebooks obtained by learning vector quantization. In general, even when an existing SMOTE variant is applied to a biomedical dataset, its empty feature space is still so large that most classification algorithms do not perform well in estimating borderlines between classes. To tackle this problem, our over-sampling method generates synthetic samples that occupy more feature space than the other SMOTE algorithms. Briefly, our over-sampling method generates useful synthetic samples by referring to actual samples taken from real-world datasets. Experiments on eight real-world imbalanced datasets demonstrate that our proposed over-sampling method performs better than the simplest SMOTE on four of five standard classification algorithms. Moreover, the performance of our method increases if the latest SMOTE variant, called MWMOTE, is used in our algorithm. Experiments on datasets for β-turn type prediction show some important patterns that have not been seen in previous analyses. The proposed over-sampling method generates useful synthetic samples for the classification of imbalanced biomedical data. Furthermore, the proposed over-sampling method is basically compatible with basic classification algorithms and the existing over-sampling methods.
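The baseline SMOTE step that such methods build on can be sketched as follows; this is a minimal, illustrative implementation (the paper's method additionally uses learning-vector-quantization codebooks, which are not shown here):

```python
import random

def smote(minority, n_synthetic, k=3, seed=0):
    """Minimal SMOTE sketch: each synthetic sample lies at a random
    point on the segment between a minority sample and one of its
    k nearest minority-class neighbours (Euclidean distance)."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    synthetic = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        neighbours = sorted((m for m in minority if m is not x),
                            key=lambda m: dist2(x, m))[:k]
        nn = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(u + gap * (v - u) for u, v in zip(x, nn)))
    return synthetic

# Hypothetical 2-D minority class; generate 5 synthetic points.
minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
print(smote(minority, 5))
```

Because every synthetic point is a convex combination of two real minority samples, all generated points stay inside the convex hull of the minority class, which is exactly the "empty feature space" limitation the abstract criticizes.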
Sampling uncertainty evaluation for data acquisition board based on Monte Carlo method
Ge, Leyi; Wang, Zhongyu
2008-10-01
Evaluating the sampling uncertainty of a data acquisition board is a difficult problem in the field of signal sampling. This paper first analyzes the sources of data acquisition board sampling uncertainty, then introduces a simulation theory for evaluating data acquisition board sampling uncertainty based on the Monte Carlo method, and puts forward a relational model of sampling uncertainty results, sample numbers, and simulation times. For different sample numbers and different signal scopes, the authors establish a random sampling uncertainty evaluation program for a PCI-6024E data acquisition board to execute the simulation. The results of the proposed Monte Carlo simulation method are in good agreement with the GUM ones, and the validity of the Monte Carlo method is demonstrated.
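The Monte Carlo evaluation procedure can be sketched as follows: simulate many acquisitions of a noisy, quantized signal and take the spread of the trial means as the standard uncertainty. The board parameters below are illustrative, not the PCI-6024E specification:

```python
import random
import statistics

def mc_sampling_uncertainty(true_value, noise_sd, lsb, n_samples,
                            n_trials=2000, seed=0):
    """Monte Carlo evaluation of the uncertainty of an averaged
    acquisition: each trial adds Gaussian noise and quantizes to the
    board's least significant bit (LSB); the standard deviation of the
    trial means over many trials is the standard uncertainty."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_trials):
        samples = [round((true_value + rng.gauss(0, noise_sd)) / lsb) * lsb
                   for _ in range(n_samples)]
        means.append(sum(samples) / n_samples)
    return statistics.stdev(means)

u = mc_sampling_uncertainty(true_value=1.0, noise_sd=0.01,
                            lsb=0.005, n_samples=100)
print(u)  # roughly noise_sd / sqrt(n_samples), plus quantization
```

This mirrors the GUM comparison in the abstract: the analytic (GUM) result combines the noise term with the lsb²/12 quantization variance, and the Monte Carlo estimate should agree with it within its own statistical scatter.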
[Phylogenetic diversity of bacteria in soda lake stratified sediments].
Tourova, T P; Grechnikova, M A; Kuznetsov, V V; Sorokin, D Yu
2014-01-01
Various previously developed techniques for DNA extraction from samples with a complex physicochemical structure (soils, silts, and sediments), as well as modifications of these techniques developed in the present work, were tested. Their usability for DNA extraction from the sediments of the Kulunda Steppe hypersaline soda lakes was assessed, and the most efficient procedure for indirect (two-stage) DNA extraction was proposed. Almost complete separation of the cell fraction was shown, as well as the inefficiency of nested PCR for analysis of the clone libraries obtained from washed sediments by amplification of 16S rRNA gene fragments. Analysis of the clone library obtained from the cell fractions of stratified sediments (upper, middle, and lower layers) revealed that in the sediments of Lake Gorchina-3 most eubacterial phylotypes belonged to the class Clostridia, phylum Firmicutes. They were probably specific for this habitat and formed a new, presently unknown high-rank taxon. The data obtained revealed no pronounced stratification of the species diversity of the eubacterial component of the microbial community inhabiting the sediments (0-20 cm) in the inshore zone of Lake Gorchina-3.
Methodology series module 5: Sampling strategies
Maninder Singh Setia
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: (1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and (2) non-probability sampling, based on the researcher's choice and the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
Theunissen, Raf; Kadosh, Jesse S.; Allen, Christian B.
2015-06-01
Spatially varying signals are typically sampled by collecting uniformly spaced samples irrespective of the signal content. For signals with inhomogeneous information content, this leads to unnecessarily dense sampling in regions of low interest or insufficient sample density at important features, or both. A new adaptive sampling technique is presented directing sample collection in proportion to local information content, capturing adequately the short-period features while sparsely sampling less dynamic regions. The proposed method incorporates a data-adapted sampling strategy on the basis of signal curvature, sample space-filling, variable experimental uncertainty and iterative improvement. Numerical assessment has indicated a reduction in the number of samples required to achieve a predefined uncertainty level overall while improving local accuracy for important features. The potential of the proposed method has been further demonstrated on the basis of Laser Doppler Anemometry experiments examining the wake behind a NACA0012 airfoil and the boundary layer characterisation of a flat plate.
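One refinement pass of curvature-proportional sampling can be sketched in one dimension. This is a simplification: the paper's strategy also weighs space-filling and the variable experimental uncertainty of each sample, which are omitted here:

```python
def refine_by_curvature(xs, ys, n_new):
    """One pass of a curvature-adapted sampler: estimate local
    curvature by second differences of the existing (x, y) samples,
    then bisect the intervals to the right of the n_new
    highest-curvature interior points."""
    curv = [0.0] + [abs(ys[i - 1] - 2 * ys[i] + ys[i + 1])
                    for i in range(1, len(ys) - 1)] + [0.0]
    # Rank interior points by curvature, highest first.
    order = sorted(range(1, len(xs) - 1), key=lambda i: -curv[i])
    return [0.5 * (xs[i] + xs[i + 1]) for i in order[:n_new]]

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [0.0, 0.1, 1.0, 0.2, 0.1]   # sharp feature near x = 1.0
print(refine_by_curvature(xs, ys, 2))
```

Iterating this pass concentrates new samples around the short-period feature at x = 1.0 while leaving the flat regions sparsely sampled, which is the behaviour the abstract describes.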
Sample injection and electrophoretic separation on a simple laminated paper based analytical device.
Xu, Chunxiu; Zhong, Minghua; Cai, Longfei; Zheng, Qingyu; Zhang, Xiaojun
2016-02-01
We describe a strategy to perform multistep operations on a simple laminated paper-based separation device by using electrokinetic flow to manipulate the fluids. A laminated crossed-channel paper-based separation device was fabricated by cutting a filter-paper sheet followed by lamination. Multiple functional units, including sample loading, sample injection, and electrophoretic separation, were integrated on a single paper-based analytical device for the first time by applying potentials at different reservoirs for sample, sample waste, buffer, and buffer waste. As a proof-of-concept demonstration, a mixed sample solution containing carmine and sunset yellow was loaded into the sampling channel and then injected into the separation channel, followed by electrophoretic separation, by adjusting the potentials applied at the four terminals of the sampling and separation channels. The effects of buffer pH, buffer concentration, channel width, and separation time on the resolution of the electrophoretic separation were studied. This strategy may be used to perform multistep operations such as reagent dilution, sample injection, mixing, reaction, and separation on a single microfluidic paper-based analytical device, which is very attractive for building micro total analysis systems on microfluidic paper-based analytical devices.
Tangling clustering instability for small particles in temperature stratified turbulence
Elperin, Tov; Liberman, Michael; Rogachevskii, Igor
2013-01-01
We study particle clustering in temperature-stratified turbulence with a small finite correlation time. It is shown that temperature-stratified turbulence strongly increases the degree of compressibility of the particle velocity field. This results in a strong decrease of the threshold for the excitation of the tangling clustering instability, even for small particles. The tangling clustering instability in temperature-stratified turbulence is essentially different from the inertial clustering instability that occurs in non-stratified isotropic and homogeneous turbulence. While the inertial clustering instability is caused by the centrifugal effect of turbulent eddies, the mechanism of the tangling clustering instability is related to temperature fluctuations generated by the tangling of the mean temperature gradient by velocity fluctuations. Temperature fluctuations produce pressure fluctuations and cause particle clustering in regions with increased pressure fluctuations. It is shown that t...
Effects of rotation on turbulent buoyant plumes in stratified environments
Fabregat Tomàs, Alexandre; Poje, Andrew C; Özgökmen, Tamay M; Dewar, William K
2016-01-01
We numerically investigate the effects of rotation on the turbulent dynamics of thermally driven buoyant plumes in stratified environments at the large Rossby numbers characteristic of deep oceanic releases...
Stratified flows with variable density: mathematical modelling and numerical challenges.
Murillo, Javier; Navas-Montilla, Adrian
2017-04-01
Stratified flows appear in a wide variety of fundamental problems in the hydrological and geophysical sciences. They may range from hyperconcentrated floods carrying sediment, causing collapse, landslides, and debris flows, to suspended material in turbidity currents where turbulence is a key process. In stratified flows, variable horizontal density is also present. Depending on the case, density varies according to the volumetric concentration of different components or species that can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may differ strongly from one case to another. As a consequence, in order to provide accurate solutions, very-high-order methods of proven quality are demanded. Under these complex scenarios it is necessary to verify that the numerical solution provides the expected order of accuracy but also converges to the physically based solution, which is not an easy task. To this purpose, this work will focus on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro, A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux
Numerical Study on Saltwater Instrusion in a Heterogeneous Stratified Aquifer
2000-01-01
In a coastal aquifer, saltwater intrusion is frequently observed due to excessive exploitation. Many studies have focused on saltwater intrusion; however, few take into consideration the mixing processes in a stratified heterogeneous aquifer. In the present study, a laboratory experiment and a numerical simulation were carried out in order to understand the phenomena in a stratified heterogeneous aquifer. The result of the numerical analysis agrees well with the m...
Large eddy simulation of unsteady lean stratified premixed combustion
Duwig, C. [Division of Fluid Mechanics, Department of Energy Sciences, Lund University, SE 221 00 Lund (Sweden); Fureby, C. [Division of Weapons and Protection, Warheads and Propulsion, The Swedish Defense Research Agency, FOI, SE 147 25 Tumba (Sweden)
2007-10-15
Premixed turbulent flame-based technologies are rapidly growing in importance, with applications to modern clean combustion devices for both power generation and aeropropulsion. However, the gain in decreasing harmful emissions might be canceled by rising combustion instabilities. Unwanted unsteady flame phenomena that might even destroy the whole device have been widely reported and are the subject of intensive study. In the present paper, we use unsteady numerical tools to simulate an unsteady and well-documented flame. Computations were performed for nonreacting, perfectly premixed, and stratified premixed cases using two different numerical codes and different large-eddy-simulation-based flamelet models. The nonreacting simulations are shown to agree well with experimental data, with the LES results capturing the mean features (symmetry breaking) as well as the fluctuation level of the turbulent flow. For the reacting cases, the uncertainty induced by the time-averaging technique limited the comparisons. Given an estimate of this uncertainty, the numerical results were found to reproduce the experimental data well in terms of both the mean flow field and the fluctuation levels. In addition, it was found that despite relying on different assumptions and simplifications, both numerical tools lead to similar predictions, giving confidence in the results. Moreover, we studied the flame dynamics, particularly the response to a periodic pulsation. We found that above a certain excitation level the flame dynamics change and become rather insensitive to the excitation/instability amplitude. Conclusions regarding the self-growth of thermoacoustic waves were drawn. (author)
Microwave spectrum sensing based on photonic time stretch and compressive sampling.
Chi, Hao; Chen, Ying; Mei, Yuan; Jin, Xiaofeng; Zheng, Shilie; Zhang, Xianmin
2013-01-15
An approach to realizing microwave spectrum sensing based on photonic time stretch and compressive sampling is proposed. The time stretch system slows down the input high-speed signal, and compressive sampling based on random demodulation further decreases the sampling rate. A spectrally sparse signal in a wide bandwidth can thus be captured with a sampling rate far below the Nyquist rate, thanks to both time stretch and compressive sampling. It is demonstrated that a system with a time stretch factor of 5 and a compression factor of 8 can capture a signal with multiple tones in a 50 GHz bandwidth, which means a sampling rate 40 times lower than the Nyquist rate. In addition, the time stretch of the microwave signal greatly decreases the data rate of the random data sequence and therefore the required speed of the mixer in the random demodulator.
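The rate reduction reported in this abstract is simply the product of the two factors. A minimal sketch (not the authors' code; the 100 GS/s Nyquist rate is inferred from the stated 50 GHz bandwidth):

```python
# Hedged sketch: combined rate reduction from photonic time stretch
# plus compressive sampling, as reported in the abstract above.

def effective_rate(nyquist_rate_hz, stretch_factor, compression_factor):
    """Physical ADC rate required after both reduction stages."""
    return nyquist_rate_hz / (stretch_factor * compression_factor)

nyquist = 2 * 50e9                    # Nyquist rate for a 50 GHz bandwidth
rate = effective_rate(nyquist, 5, 8)  # stretch factor 5, compression factor 8
print(rate / 1e9)                     # 2.5 GS/s, i.e. 40x below Nyquist
```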
Sampling for evaluation. Issues and strategies for community-based HIV prevention programs.
O'Connell, A A
2000-06-01
Sampling methods are an important issue in the evaluation of community-based HIV prevention initiatives because it is through responsible sampling procedures that a valid model of the population is produced and reliable estimates of behavior change determined. This article provides an overview on sampling with particular focus on the needs of community-based organizations (CBOs). As these organizations continue to improve their capacity for sampling and program evaluation activities, comparisons across CBOs can become more rigorous, resulting in valuable information collectively regarding the effectiveness of particular HIV prevention initiatives. The author reviews several probability and non-probability sampling designs; discusses bias, cost, and feasibility factors in design selection; and presents six guidelines designed to encourage community organizations to consider these important sampling issues as they plan their program evaluations.
Hoelzer, Karin; Pouillot, Régis
2013-11-01
While adequate, statistically designed sampling plans should be used whenever feasible, inference about the presence of pathogens in food occasionally has to be made based on smaller numbers of samples. To help the interpretation of such results, we reviewed the impact of small sample sizes on pathogen detection and prevalence estimation. In particular, we evaluated four situations commonly encountered in practice. The first two examples evaluate the combined impact of sample size and pathogen prevalence (i.e., fraction of contaminated food items in a given lot) on pathogen detection and prevalence estimation. The latter two examples extend the previous example to consider the impact of pathogen concentration and imperfect test sensitivity. The provided examples highlight the difficulties of making inference based on small numbers of samples, and emphasize the importance of using appropriate statistical sampling designs whenever possible.
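The "combined impact of sample size and pathogen prevalence on detection" described above follows a standard binomial argument; a hedged sketch (this is the textbook formula, not necessarily the authors' exact model):

```python
# Hedged sketch: probability of detecting at least one contaminated item
# among n independent random samples, for prevalence p and test sensitivity s.

def detection_probability(n, prevalence, sensitivity=1.0):
    """P(at least one positive result) assuming independent samples."""
    return 1.0 - (1.0 - prevalence * sensitivity) ** n

# With 10% prevalence and a perfect test, five samples detect the
# pathogen less than half the time:
print(round(detection_probability(5, 0.1), 3))   # 0.41
```

This illustrates why inference from small sample numbers is difficult: even a moderately contaminated lot is more likely missed than detected at n = 5.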
Observer-Based Stabilization of Spacecraft Rendezvous with Variable Sampling and Sensor Nonlinearity
Zhuoshi Li
2013-01-01
This paper addresses the observer-based control problem of spacecraft rendezvous with nonuniform sampling period. The relative dynamic model is based on the classical Clohessy-Wiltshire equation, and sensor nonlinearity and sampling are considered together in a unified framework. The purpose of this paper is to perform an observer-based controller synthesis by using sampled and saturated output measurements, such that the resulting closed-loop system is exponentially stable. A time-dependent Lyapunov functional is developed which depends on time and the upper bound of the sampling period and also does not grow along the input update times. The controller design problem is solved in terms of the linear matrix inequality method, and the obtained results are less conservative than using the traditional Lyapunov functionals. Finally, a numerical simulation example is built to show the validity of the developed sampled-data control strategy.
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with an aim to improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated compared with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty with a case study of flood forecasting uncertainty evaluation based on Xinanjiang model (XAJ) for Qing River reservoir, China. Results obtained demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former performs more effective and efficient than LHS, for example the simulation time required to generate 1000 behavioral parameter sets is shorter by 9 times; (2) The Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII based sampling, also their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of ɛ-NSGAII parameter sets; (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, also parameter uncertainties are reduced significantly; (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced a lot with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
Sports and energy drink consumption among a population-based sample of young adults
Larson, Nicole; Laska, Melissa N.; Story, Mary; Neumark-Sztainer, Dianne
2017-01-01
Objective National data for the U.S. show increases in sports and energy drink consumption over the past decade, with the largest increases among young adults ages 20-34. This study aimed to identify sociodemographic factors and health risk behaviors associated with sports and energy drink consumption among young adults. Design Cross-sectional analysis of survey data from the third wave of a cohort study (Project EAT-III: Eating and Activity in Teens and Young Adults). Regression models stratified on gender and adjusted for sociodemographic characteristics were used to examine associations of sports and energy drink consumption with eating behaviors, physical activity, media use, weight-control behaviors, sleep patterns, and substance use. Setting Participants completed baseline surveys in 1998-1999 as students at public secondary schools in Minneapolis/St. Paul, Minnesota and the EAT-III surveys online or by mail in 2008-2009. Subjects The sample consisted of 2,287 participants (55% female, mean age = 25.3). Results Results showed 31.0% of young adults consumed sports drinks and 18.8% consumed energy drinks at least weekly. Among men and women, sports drink consumption was associated with higher sugar-sweetened soda and fruit juice intake, video game use, and use of muscle-enhancing substances like creatine. Energy drink consumption was associated with lower breakfast frequency and higher sugar-sweetened soda intake, video game use, use of unhealthy weight-control behaviors, trouble sleeping, and substance use among men and women. Conclusions Findings suggest the importance of considering the associations of sports and energy drink consumption with other unhealthy behaviors in the design of programs and services for young adults. PMID:25683863
Haralambous, Yannis
2012-01-01
We present an extension of the Explicit Semantic Analysis method of Gabrilovich and Markovitch. Using their semantic relatedness measure, we weight the Wikipedia category graph. Then we extract a minimal spanning tree using the Chu-Liu-Edmonds algorithm. We define a notion of stratified tf-idf where the strata, for a given Wikipedia page and a given term, are the classical tf-idf and the categorical tf-idfs of the term in the ancestor categories of the page (ancestors in the sense of the minimal spanning tree). Our method is based on this stratified tf-idf, which adds extra weight to terms that "survive" when climbing up the category tree. We evaluate our method by text classification on the WikiNews corpus: it increases precision by 18%. Finally, we provide hints for future research.
PROBABILITY SAMPLING DESIGNS FOR VETERINARY EPIDEMIOLOGY
Xhelil Koleci; Coryn, Chris L.S.; Kristin A. Hobson; Rruzhdi Keci
2011-01-01
The objective of sampling is to estimate population parameters, such as incidence or prevalence, from information contained in a sample. In this paper, the authors describe sources of error in sampling; basic probability sampling designs, including simple random sampling, stratified sampling, systematic sampling, and cluster sampling; estimating a population size if unknown; and factors influencing sample size determination for epidemiological studies in veterinary medicine.
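The stratified design described above estimates a population parameter by weighting each stratum's sample mean by the stratum's share of the population. A minimal sketch with made-up numbers (not from the paper):

```python
# Hedged illustration of the stratified estimator of a population mean:
# each stratum mean is weighted by N_h / N, the stratum's population share.

def stratified_mean(strata):
    """strata: list of (N_h, samples) pairs, N_h = stratum population size."""
    N = sum(N_h for N_h, _ in strata)
    return sum((N_h / N) * (sum(ys) / len(ys)) for N_h, ys in strata)

# Hypothetical example: two herds-per-region strata of unequal size.
herds = [(100, [2.0, 4.0]),    # small stratum, sample mean 3
         (300, [10.0, 10.0])]  # large stratum, sample mean 10
print(stratified_mean(herds))  # 0.25*3 + 0.75*10 = 8.25
```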
Deepak Swami; P K Sharma; C S P Ojha
2014-12-01
In this paper, we study the behaviour of reactive solute transport through a stratified porous medium under the influence of a multi-process non-equilibrium transport model. Various experiments were carried out in the laboratory, and experimental breakthrough curves were observed at spatially distributed sampling points in the stratified porous medium. Batch sorption studies were also performed to estimate the sorption parameters of the material used in the stratified aquifer system. The effects of distance-dependent dispersion and tailing are visible in the experimental breakthrough curves. The presence of physical and chemical non-equilibrium is evident from the pattern of the breakthrough curves. The multi-process non-equilibrium model represents the combined effect of physical and chemical non-ideality in the stratified aquifer system. The results show that incorporating distance-dependent dispersivity in the multi-process non-equilibrium model provides the best fit to the observed data for stratified porous media. Also, exponential distance-dependent dispersivity is more suitable for large distances, while at small distances a linear or constant dispersivity function can be considered for simulating reactive solute transport in a stratified porous medium.
Eating Disorders among a Community-Based Sample of Chilean Female Adolescents
Granillo, M. Teresa; Grogan-Kaylor, Andrew; Delva, Jorge; Castillo, Marcela
2011-01-01
The purpose of this study was to explore the prevalence and correlates of eating disorders among a community-based sample of female Chilean adolescents. Data were collected through structured interviews with 420 female adolescents residing in Santiago, Chile. Approximately 4% of the sample reported ever being diagnosed with an eating disorder.…
Credit-based accept-zero sampling schemes for the control of outgoing quality
Baillie, David H.; Klaassen, Chris A.J.
2000-01-01
A general procedure is presented for switching between accept-zero attributes or variables sampling plans to provide acceptance sampling schemes with a specified limit on the (suitably defined) average outgoing quality (AOQ). The switching procedure is based on credit, defined as the total number of
苗志敏; 赵世华; 王颜刚; 李长贵; 王忠超; 陈颖; 陈新焰; 阎胜利
2007-01-01
old in Shandong coastal area. DESIGN: A randomized, stratified cluster sampling survey. SETTING: D
National Aeronautics and Space Administration — This project will develop a multi-functional, automated sample preparation instrument for biological wet-lab workstations on the ISS. The instrument is based on a...
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in samples sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
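The "stratified random sampling based on optimal allocation" recommended above is conventionally Neyman allocation, which assigns samples in proportion to stratum size times stratum standard deviation. A hedged sketch of that rule (the depths and SDs below are invented for illustration):

```python
# Hedged sketch of Neyman (optimal) allocation: the sample size for
# stratum h is proportional to N_h * S_h (size times standard deviation).

def neyman_allocation(n_total, strata):
    """strata: list of (N_h, S_h); returns per-stratum sample sizes (floats)."""
    weights = [N_h * S_h for N_h, S_h in strata]
    total = sum(weights)
    return [n_total * w / total for w in weights]

# Two equal-size soil-depth strata; the more variable layer gets
# three times as many samples:
print(neyman_allocation(100, [(500, 2.0), (500, 6.0)]))  # [25.0, 75.0]
```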
A methodological approach based on indirect sampling to survey the homeless people
Claudia De Vitiis; Stefano Falorsi; Francesca Inglese; Alessandra Masi; Nicoletta Pannuzi; Monica Russo
2014-01-01
The Italian National Institute of Statistics carried out the first survey on the homeless population. The survey aims at estimating the unknown size and some demographic and social characteristics of this population. The methodological strategy used to investigate the homeless population could not follow the standard approaches of official statistics, which are usually based on the use of population lists. The sampling strategy for the homeless survey refers to the theory of indirect sampling, based on the use of...
Lin, Wei; Tian, Lanxiang; Li, Jinhua; Pan, Yongxin
2008-02-01
The racetrack-based PCR approach is widely used in phylogenetic analysis of magnetotactic bacteria (MTB), which are isolated from environmental samples using the capillary racetrack method. To evaluate whether the capillary racetrack-based enrichment can truly reflect the diversity of MTB in the targeted environmental sample, phylogenetic diversity studies of MTB enriched from the Miyun lake near Beijing were carried out, using both the capillary racetrack-based PCR and a modified metagenome-based PCR approach. Magnetotactic cocci were identified in the studied sample using both approaches. Comparative studies showed that three clusters of magnetotactic cocci were revealed by the modified metagenome-based PCR approach, while only one of them (e.g. MYG-22 sequence) was detected by the racetrack-based PCR approach from the studied sample. This suggests that the result of capillary racetrack-based enrichment might have been biased by the magnetotaxis of magnetotactic bacteria. It appears that the metagenome-based PCR approach better reflects the original diversity of MTB in the environmental sample.
Fishing and the oceanography of a stratified shelf sea
Sharples, Jonathan; Ellis, Jim R.; Nolan, Glenn; Scott, Beth E.
2013-10-01
Fishing vessel position data from the Vessel Monitoring System (VMS) were used to investigate fishing activity in the Celtic Sea, a seasonally-stratifying, temperate region on the shelf of northwest Europe. The spatial pattern of fishing showed that three main areas are targeted: (1) the Celtic Deep (an area of deeper water with fine sediments), (2) the shelf edge, and (3) an area covering several large seabed banks in the central Celtic Sea. Data from each of these regions were analysed to examine the contrasting seasonality of fishing activity, and to highlight where the spring-neap tidal cycle appears to be important to fishing. The oceanographic characteristics of the Celtic Sea were considered alongside the distribution and timing of fishing, illustrating likely contrasts in the underlying environmental drivers of the different fished regions. In the central Celtic Sea, fishing mainly occurred during the stratified period between April and August. Based on evidence provided in other papers of this Special Issue, we suggest that the fishing in this area is supported by (1) a broad increase in primary production caused by lee-waves generated by seabed banks around spring tides driving large supplies of nutrients into the photic zone, and (2) greater concentrations of zooplankton within the region influenced by the seabed banks and elevated primary production. In contrast, while the shelf edge is a site of elevated surface chlorophyll, previous work has suggested that the periodic mixing generated by an internal tide at the shelf edge alters the size-structure of the phytoplankton community which fish larvae from the spawning stocks along the shelf edge are able to exploit. The fishery for Nephrops norvegicus in the Celtic Deep was the only one to show a significant spring-neap cycle, possibly linked to Nephrops foraging outside their burrows less during spring tides. More tentatively, the fishery for Nephrops correlated most strongly with a localised shift in
Suxiang He
2014-01-01
An implementable nonlinear Lagrange algorithm for stochastic minimax problems is presented, based on the sample average approximation method. In its second step, the algorithm minimizes a nonlinear Lagrange function built from sample average approximations of the original functions, and the sample average approximation of the Lagrange multiplier is adopted. Under a set of mild assumptions, it is proven that the sequences of solutions and multipliers obtained by the proposed algorithm converge to the Kuhn-Tucker pair of the original problem with probability one as the sample size increases. Finally, numerical experiments on five test examples are performed, and the results indicate that the algorithm is promising.
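The core sample average approximation idea used in this abstract replaces an expectation by an empirical average over a fixed random sample. A minimal sketch; the objective and distribution below are illustrative, not from the paper:

```python
# Hedged sketch of sample average approximation (SAA): approximate
# E[f(x, xi)] by the average of f over n draws of the random variable xi.

import random

def saa_objective(f, x, draw_xi, n_samples, seed=0):
    """Monte Carlo estimate of E[f(x, xi)] using a fixed (seeded) sample."""
    rng = random.Random(seed)
    return sum(f(x, draw_xi(rng)) for _ in range(n_samples)) / n_samples

f = lambda x, xi: (x + xi) ** 2                        # toy objective
est = saa_objective(f, 1.0, lambda r: r.uniform(-1, 1), 20000)
# E[(1 + U)^2] with U ~ Uniform(-1, 1) equals 1 + 1/3 ~ 1.333
```

As the abstract's convergence result suggests, the estimate approaches the true expectation as the sample size grows.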
Scaling up the DBSCAN Algorithm for Clustering Large Spatial Databases Based on Sampling Technique
(Author unknown)
2001-01-01
Clustering, in data mining, is a useful technique for discovering interesting data distributions and patterns in the underlying data, and has many application fields, such as statistical data analysis, pattern recognition, image processing, etc. We combine a sampling technique with the DBSCAN algorithm to cluster large spatial databases, and two sampling-based DBSCAN (SDBSCAN) algorithms are developed. One algorithm introduces the sampling technique inside DBSCAN, and the other uses a sampling procedure outside DBSCAN. Experimental results demonstrate that our algorithms are effective and efficient in clustering large-scale spatial databases.
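The "sampling procedure outside DBSCAN" variant can be read as: run DBSCAN on a random subset, then assign each remaining point the label of its nearest sampled point. A hedged sketch of that reading (a plausible reconstruction, not the paper's algorithm; the O(n^2) neighbor search is for clarity only):

```python
# Hedged sketch: a minimal DBSCAN plus a sampling-outside-DBSCAN wrapper.

import math
import random

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN (brute-force neighbor search); label -1 marks noise."""
    n = len(points)
    labels = [None] * n

    def neighbors(i):
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise; may become border
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:     # noise reached from a core point: border
                labels[j] = cluster
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbors(j)
            if len(nb) >= min_pts:  # j is a core point: keep expanding
                queue.extend(nb)
    return labels

def sdbscan(points, eps, min_pts, sample_frac=0.5, seed=0):
    """Cluster a random sample, then give every remaining point the
    label of its nearest sampled point."""
    rng = random.Random(seed)
    k = max(min_pts, int(sample_frac * len(points)))
    sample = sorted(rng.sample(range(len(points)), k))
    sample_labels = dbscan([points[i] for i in sample], eps, min_pts)
    labels = [None] * len(points)
    for s, lab in zip(sample, sample_labels):
        labels[s] = lab
    for i in range(len(points)):
        if labels[i] is None:
            m = min(range(k), key=lambda t: math.dist(points[i], points[sample[t]]))
            labels[i] = sample_labels[m]
    return labels
```

The speedup comes from running the quadratic clustering step on k sampled points instead of all n, at the cost of possible label errors near cluster boundaries.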
Designing a multiple dependent state sampling plan based on the coefficient of variation.
Yan, Aijun; Liu, Sanyang; Dong, Xiaojuan
2016-01-01
A multiple dependent state (MDS) sampling plan is developed based on the coefficient of variation of the quality characteristic which follows a normal distribution with unknown mean and variance. The optimal plan parameters of the proposed plan are solved by a nonlinear optimization model, which satisfies the given producer's risk and consumer's risk at the same time and minimizes the sample size required for inspection. The advantages of the proposed MDS sampling plan over the existing single sampling plan are discussed. Finally an example is given to illustrate the proposed plan.
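The single sampling plan used as the baseline above accepts a lot when at most c defectives appear among n inspected items; its operating characteristic (OC) curve is binomial. A hedged sketch of that baseline (the standard attributes formula, not the paper's coefficient-of-variation criterion; the plan (n=50, c=1) and quality levels are invented for illustration):

```python
# Hedged sketch: OC curve of a single attributes sampling plan (n, c).

from math import comb

def accept_probability(n, c, p):
    """P(accept lot) = P(at most c defectives in n items), defect fraction p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Hypothetical plan (n=50, c=1), good quality p=0.01, poor quality p=0.10:
producer_risk = 1 - accept_probability(50, 1, 0.01)   # rejecting a good lot
consumer_risk = accept_probability(50, 1, 0.10)       # accepting a poor lot
```

Plan design then amounts to choosing (n, c) so both risks stay below their targets, which is the same constraint structure the MDS optimization satisfies with a smaller n.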
The influence of sampling design on tree-ring-based quantification of forest growth.
Nehrbass-Ahles, Christoph; Babst, Flurin; Klesse, Stefan; Nötzli, Magdalena; Bouriaud, Olivier; Neukom, Raphael; Dobbertin, Matthias; Frank, David
2014-09-01
Tree-rings offer one of the few possibilities to empirically quantify and reconstruct forest growth dynamics over years to millennia. Contemporaneously with the growing scientific community employing tree-ring parameters, recent research has suggested that commonly applied sampling designs (i.e. how and which trees are selected for dendrochronological sampling) may introduce considerable biases in quantifications of forest responses to environmental change. To date, a systematic assessment of the consequences of sampling design on dendroecological and dendroclimatological conclusions has not yet been performed. Here, we investigate potential biases by sampling a large population of trees and replicating diverse sampling designs. This is achieved by retroactively subsetting the population and specifically testing for biases emerging for climate reconstruction, growth response to climate variability, long-term growth trends, and quantification of forest productivity. We find that commonly applied sampling designs can impart systematic biases of varying magnitude to any type of tree-ring-based investigation, independent of the total number of samples considered. Quantifications of forest growth and productivity are particularly susceptible to biases, whereas growth responses to short-term climate variability are less affected by the choice of sampling design. The world's most frequently applied sampling design, focusing on dominant trees only, can bias absolute growth rates by up to 459% and trends in excess of 200%. Our findings challenge paradigms where a subset of samples is typically considered to be representative for the entire population. The only two sampling strategies meeting the requirements for all types of investigations are the (i) sampling of all individuals within a fixed area; and (ii) fully randomized selection of trees. This result advertises the consistent implementation of a widely applicable sampling design to simultaneously reduce uncertainties in
DNS of stratified spatially-developing turbulent thermal boundary layers
Araya, Guillermo; Castillo, Luciano; Jansen, Kenneth
2012-11-01
Direct numerical simulations (DNS) of spatially-developing turbulent thermal boundary layers under stratification are performed. It is well known that the transport phenomena of the flow are significantly affected by buoyancy, particularly in urban environments where stable and unstable atmospheric boundary layers are encountered. In the present investigation, the Dynamic Multi-scale approach of Araya et al. (JFM, 670, 2011) for turbulent inflow generation is extended to thermally stratified boundary layers. The proposed Dynamic Multi-scale approach is based on the original rescaling-recycling method by Lund et al. (1998). The two major improvements are: (i) the utilization of two different scaling laws in the inner and outer parts of the boundary layer to better absorb external conditions such as inlet Reynolds numbers, streamwise pressure gradients, buoyancy effects, etc.; (ii) the implementation of a dynamic approach to compute scaling parameters from the flow solution without the need for empirical correlations as in Lund et al. (1998). Numerical results are shown for ZPG flows at high momentum thickness Reynolds numbers (~3,000), and a comparison with experimental data is also carried out.
Economic evaluation in stratified medicine: methodological issues and challenges
Hans-Joerg eFugel
2016-05-01
Background: Stratified medicine (SM) is becoming a practical reality with the targeting of medicines by using a biomarker- or genetic-based diagnostic to identify the eligible patient sub-population. Like any healthcare intervention, SM interventions have costs and consequences that must be considered by reimbursement authorities with limited resources. Methodological standards and guidelines exist for economic evaluations in clinical pharmacology and are an important component of health technology assessments (HTAs) in many countries. However, these guidelines were initially developed for traditional pharmaceuticals, not for complex interventions with multiple components. This raises the issue of whether these guidelines are adequate for SM interventions or whether new specific guidance and methodology are needed to avoid inconsistencies and contradictory findings when assessing economic value in SM. Objective: This article describes specific methodological challenges in conducting health economic (HE) evaluations of SM interventions and outlines potential modifications to existing evaluation guidelines/principles that would promote consistent economic evaluations for SM. Results/Conclusions: Specific methodological aspects for SM comprise considerations on the choice of comparator, measuring effectiveness and outcomes, appropriate modelling structure, and the scope of sensitivity analyses. Although current HE methodology can be applied to SM, its greater complexity requires further methodological development and modifications to the guidelines.
Mixing efficiency of turbulent patches in stably stratified flows
Garanaik, Amrapalli; Venayagamoorthy, Subhas Karan
2016-11-01
A key quantity essential for estimating turbulent diapycnal (irreversible) mixing in stably stratified flow is the mixing efficiency Rf*, a measure of the amount of turbulent kinetic energy that is irreversibly converted into background potential energy. In particular, there is an ongoing debate in the oceanographic mixing community regarding the utility of the buoyancy Reynolds number (Reb), particularly with regard to how mixing efficiency and diapycnal diffusivity vary with Reb. Specifically, is there a universal relationship between the intensity of turbulence and the strength of the stratification that supports an unambiguous description of mixing efficiency based on Reb? The focus of the present study is to investigate the variability of Rf* by considering oceanic turbulence data obtained from microstructure profiles in conjunction with data from laboratory experiments and DNS. The field data analysis was performed by identifying turbulent patches using the Thorpe sorting method for potential density. The analysis clearly shows that high mixing efficiencies can persist at high buoyancy Reynolds numbers. This contradicts previous studies, which predict that mixing efficiency should decrease universally for Reb greater than O(100). Funded by NSF and ONR.
Sample Entropy-Based Approach to Evaluate the Stability of Double-Wire Pulsed MIG Welding
Ping Yao
2014-01-01
Based on sample entropy, this paper presents a quantitative method to evaluate current stability in double-wire pulsed MIG welding. First, the sample entropy of current signals with different stability but the same parameters is calculated. The results show that the more stable the current, the smaller the value and standard deviation of the sample entropy. Second, four parameters (pulse width, peak current, base current, and frequency) are selected for a four-level, three-factor orthogonal experiment. The calculation and analysis of the desired signals indicate that sample entropy values are affected by the welding current parameters. A quantitative method based on sample entropy is then proposed. The experimental results show that the method can effectively quantify the stability of the welding current.
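Sample entropy SampEn(m, r) is a standard, well-defined statistic: count pairs of length-m templates that match within tolerance r, do the same for length m+1, and take the negative log of the ratio. A hedged reference implementation (the paper's preprocessing of the current signal is not specified here, and r is treated as an absolute tolerance, often chosen as 0.2 times the signal's standard deviation):

```python
# Hedged sketch of sample entropy SampEn(m, r) for a 1-D series.
# SampEn = -ln(A / B), where B counts length-m template matches and
# A counts length-(m+1) matches (Chebyshev distance <= r, no self-matches).

import math

def sample_entropy(x, m=2, r=0.2):
    def matches(length):
        templates = [x[i:i + length] for i in range(len(x) - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = matches(m)
    a = matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")   # undefined for too-short or too-irregular series
    return -math.log(a / b)

# A perfectly regular signal has near-zero sample entropy, consistent
# with the abstract's finding that stabler currents give smaller values:
regular = [1.0, 2.0] * 20
print(sample_entropy(regular, m=2, r=0.1))   # ~0.054
```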
Modified Limiting Equilibrium Method for Stability Analysis of Stratified Rock Slopes
Rui Yong
2016-01-01
The stratified rock of the Jurassic strata is widely distributed in the Three Gorges Reservoir Region. The limit equilibrium method is generally utilized in the stability analysis of rock slopes with a single failure plane. However, a stratified rock slope cannot be accurately assessed by this method because of its different bedding planes and their variable shear strength parameters. Based on an idealized model of a rock slope with bedding planes, a modified limiting equilibrium method is presented to determine the potential sliding surface and the factor of safety for a stratified rock slope. In this method, the S-curve model is established to define the spatial variations of the shear strength parameters c and φ of the bedding plane and the tensile strength of the rock mass. This method was applied in the stability evaluation of a typical stratified rock slope in the Three Gorges Reservoir Region, China. The result shows that the factor of safety of the case study is 0.973, the critical sliding surface for the potential slip appears at bedding plane C, and tension-controlled failure occurs at 10.5 m from the slope face.
Kemp, Alan E. S.; Villareal, Tracy A.
2013-12-01
It is widely held that increased stratification and reduced vertical mixing in the ocean driven by global warming will promote the replacement of diatoms by smaller phytoplankton and lead to an overall decrease in productivity and carbon export. Here we present contrary evidence from a synergy of modern observations and palaeo-records that reveal high diatom production and export from stratified waters. Diatom adaptations to stratified waters include the ability to grow in low light conditions in deep chlorophyll maxima; vertical migrations between nutricline depths and the surface; and symbioses with N2-fixing cyanobacteria in diatom-diazotroph associations (DDA). These strategies foster the maintenance of seed populations that may then exploit mixing events induced by storms or eddies, but may also inherently promote blooms. Recent oceanographic observations in the subtropical gyres, at increasingly high temporal and spatial resolutions, have monitored short-lived but often substantial blooms and export of stratified-adapted diatoms including rhizosolenids and the diazotroph-associated Hemiaulus hauckii. Aggregate formation by such diatoms is common and promotes rapid settling, thereby minimizing water column remineralization and optimizing carbon flux. Convergence zones associated with oceanic fronts or mesoscale features may also generate substantial flux of stratified-adapted diatom species. Conventional oceanographic observing strategies and sampling techniques under-represent such activity due to the lack of adequate capability to sample the large-sized diatoms and colonies involved, the subsurface location of many of these blooms, and their common development in thin global warming. However, the key genera involved in such potential feedbacks are underrepresented in both laboratory and field studies and are poorly represented in models. Our findings suggest that a reappraisal is necessary of the way diatoms are represented as plankton functional types (PFTs) in
Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.
2015-01-01
We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
Dynamics of Vorticity Defects in Stratified Shear
2010-10-19
Salmon). Woods Hole Oceanographic Institution Technical Report. [16] A. E. Gill, A mechanism for instability of plane Couette flow and of Poiseuille flow … airfoil, Gill [16] modelled the base state as a Couette flow with slight distortions. This severely simplified the linear stability calculations and … provided integral dispersion relationships. Next, Lerner and Knobloch [24] performed long-wavelength stability studies on a distorted Couette flow.
Differentials-Based Segmentation and Parameterization for Point-Sampled Surfaces
Yong-Wei Miao; Jie-Qing Feng; Chun-Xia Xiao; Qun-Sheng Peng; A.R. Forrest
2007-01-01
Efficient parameterization of point-sampled surfaces is a fundamental problem in the field of digital geometry processing. In order to parameterize a given point-sampled surface for minimal distance distortion, a differentials-based segmentation and parameterization approach is proposed in this paper. Our approach partitions the point-sampled geometry based on two criteria: variation of Euclidean distance between sample points, and angular difference between surface differential directions. According to the analysis of normal curvatures for some specified directions, a new projection approach is adopted to estimate the local surface differentials. Then a k-means clustering (k-MC) algorithm is used to partition the model into a set of charts based on the estimated local surface attributes. Finally, each chart is parameterized with a statistical method, the multidimensional scaling (MDS) approach, and the parameterization results of all charts form an atlas for compact storage.
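The chart-partitioning step above relies on standard k-means clustering of per-point surface attributes. A minimal, self-contained sketch of that step follows; the synthetic 2-D attribute vectors and the cluster count are illustrative assumptions, not data or parameters from the paper:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on attribute vectors (sequences of floats)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize from distinct data points
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        # update step: move each center to its cluster mean
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = [sum(coord) / len(cl) for coord in zip(*cl)]
    return centers, clusters

# two well-separated synthetic "charts" of 5 sample points each
pts = [(0.1 * i, 0.0) for i in range(5)] + [(10 + 0.1 * i, 0.0) for i in range(5)]
centers, clusters = kmeans(pts, 2)
```

Each chart produced by such a partition would then be flattened independently (here by MDS in the paper's pipeline), which is what keeps the per-chart distance distortion small.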
DSP based lunar sampling control system for the coiling-type sampler
Ling, Yun; Song, Aiguo; Lu, Wei
2011-12-01
The paper develops a control system based on a DSP28334 for lunar sampling and describes its main structure. The critical hardware and software design of the system is introduced in detail. Emphasis is placed on the design and realization of the vibration control of the coiling-type sampler during lunar sampling. A control strategy combining manual control with local autonomous control is applied for lunar sampling control. The controlled sampling mechanism allows multiple motor units to operate on a time-sharing basis, which greatly reduces power consumption and increases the stability of the sampling system. The practical application of the control strategy used for the coiling-type sampler is verified by finite element analysis. The experimental results show that the system works with low power consumption and high efficiency, and that the proposed strategy enables greater depth and better efficiency during sampling.
Application of ATP-based bioluminescence for bioaerosol quantification: effect of sampling method.
Han, Taewon; Wren, Melody; DuBois, Kelsey; Therkorn, Jennifer; Mainelis, Gediminas
2015-12-01
Adenosine triphosphate (ATP)-based bioluminescence has the potential to offer a quick and affordable method for quantifying bioaerosol samples. Here we report on our investigation into how different bioaerosol aerosolization parameters and sampling methods affect bioluminescence output per bacterium, and the implications of that effect for bioaerosol research. Bacillus atrophaeus and Pseudomonas fluorescens bacteria were aerosolized using a Collison nebulizer (BGI Inc., Waltham, MA) with a glass or polycarbonate jar and then collected for 15 and 60 min with: (1) a Button Aerosol Sampler (SKC Inc., Eighty Four, PA) with polycarbonate, PTFE, and cellulose nitrate filters; (2) a BioSampler (SKC Inc.) with 5 and 20 mL of collection liquid; and (3) our newly developed Electrostatic Precipitator with Superhydrophobic Surface (EPSS). For all aerosolization and sampling parameters we compared the ATP bioluminescence output per bacterium relative to that before aerosolization and sampling. In addition, we determined the ATP reagent storage and preparation conditions that do not affect the bioluminescence signal intensity. Our results show that aerosolization by a Collison nebulizer with a polycarbonate jar yields higher bioluminescence output per bacterium than with the glass jar. Interestingly, the bioluminescence output by P. fluorescens increased substantially after aerosolization compared to the fresh liquid suspension. For both test microorganisms, the bioluminescence intensity per bacterium after sampling was significantly lower than before sampling, suggesting a negative effect of sampling stress on bioluminescence output. The decrease in bioluminescence intensity was more pronounced for longer sampling times and depended significantly and substantially on the sampling method. Among the investigated methods, the EPSS was the least injurious for both microorganisms and sampling times. While the ATP-based bioluminescence offers a quick bioaerosol …
Data processing of small samples based on grey distance information approach
Ke Hongfa; Chen Yongguang; Liu Yi
2007-01-01
Data processing of small samples is an important and valuable research problem in electronic equipment testing. Because it is difficult and complex to determine the probability distribution of small samples, it is difficult to use traditional probability theory to process the samples and assess the degree of uncertainty. Using grey relational theory and norm theory, the grey distance information approach, which is based on the grey distance information quantity of a sample and the average grey distance information quantity of the samples, is proposed in this article. The definitions of the grey distance information quantity of a sample and the average grey distance information quantity of the samples, with their characteristics and algorithms, are introduced. The correlative problems, including the algorithm of the estimated value, the standard deviation, and the acceptance and rejection criteria for the samples and estimated results, are also proposed. Moreover, the information whitening ratio is introduced to select the weight algorithm and to compare different samples. Several examples are given to demonstrate the application of the proposed approach. The examples show that the proposed approach, which makes no demand on the probability distribution of small samples, is feasible and effective.
Protocol Improvements for Low Concentration DNA-Based Bioaerosol Sampling and Analysis.
Irvan Luhung
As bioaerosol research attracts increasing attention, there is a need for additional efforts focused on method development for different environmental samples. Bioaerosol environmental samples typically have very low biomass concentrations in the air, which often leaves researchers with limited options for downstream analysis, especially when culture-independent methods are intended. This study investigates the impact of three factors that can influence the performance of culture-independent DNA-based analysis of the bioaerosol environmental samples examined here: (1) enhanced high-temperature sonication during DNA extraction; (2) the effect of sampling duration on DNA recoverability; and (3) an alternative method for concentrating composite samples. DNA extracted from the samples was analysed with a Qubit fluorometer (for direct total DNA measurement) and by quantitative polymerase chain reaction (qPCR). The findings suggest that additional lysis from high-temperature sonication is crucial: DNA yields from both high- and low-biomass samples increased by up to 600% when the protocol included 30 min of sonication at 65°C. Long air-sampling durations on a filter medium were shown to have a negative impact on DNA recoverability, with up to 98% of DNA lost over a 20-h sampling period. Pooling DNA from separate samples during extraction was proven feasible, with margins of error below 30%.
Maharrey, Sean P.; Highley, Aaron M.; Behrens, Richard, Jr.; Wiese-Smith, Deneille
2007-12-01
The objective of this short-term LDRD project was to acquire the tools needed to use our chemical imaging precision mass analyzer (ChIPMA) instrument to analyze tissue samples. This effort was an outgrowth of discussions with oncologists on the need to find the cellular origin of signals in mass spectra of serum samples, which provide biomarkers for ovarian cancer. The ultimate goal would be to collect chemical images of biopsy samples, allowing the chemical images of diseased and non-diseased sections of a sample to be compared. The equipment needed to prepare tissue samples has been acquired and built. This equipment includes a cryo-ultramicrotome for preparing thin sections of samples and a coating unit. The coating unit uses an electrospray system to deposit small droplets of a UV-photoabsorbing compound on the surface of the tissue samples. Both units are operational. The tissue sample must be coated with the organic compound to enable matrix-assisted laser desorption/ionization (MALDI) and matrix-enhanced secondary ion mass spectrometry (ME-SIMS) measurements with the ChIPMA instrument. Initial plans to test the sample preparation using human tissue samples required development of administrative procedures beyond the scope of this LDRD. Hence, it was decided to make two types of measurements: (1) testing the spatial resolution of ME-SIMS by preparing a substrate coated with a mixture of an organic matrix and a bio standard and etching a defined pattern in the coating using a liquid metal ion beam, and (2) preparing and imaging C. elegans worms. Difficulties arose in sectioning the C. elegans for analysis, and funds and time to overcome these difficulties were not available in this project. The facilities are now available for preparing biological samples for analysis with the ChIPMA instrument. Some further investment of time and resources in sample preparation should make this a useful tool for chemical imaging applications.
Toward stratified treatments for bipolar disorders.
Hasler, Gregor; Wolf, Andreas
2015-03-01
In bipolar disorders, there are unclear diagnostic boundaries with unipolar depression and schizophrenia, inconsistency of treatment guidelines, relatively long trial-and-error phases of treatment optimization, and increasing use of complex combination therapies lacking empirical evidence. These suggest that the current definition of bipolar disorders based on clinical symptoms reflects a clinically and etiologically heterogeneous entity. Stratification of treatments for bipolar disorders based on biomarkers and improved clinical markers are greatly needed to increase the efficacy of currently available treatments and improve the chances of developing novel therapeutic approaches. This review provides a theoretical framework to identify biomarkers and summarizes the most promising markers for stratification regarding beneficial and adverse treatment effects. State and stage specifiers, neuropsychological tests, neuroimaging, and genetic and epigenetic biomarkers will be discussed with respect to their ability to predict the response to specific pharmacological and psychosocial psychotherapies for bipolar disorders. To date, the most reliable markers are derived from psychopathology and history-taking, while no biomarker has been found that reliably predicts individual treatment responses. This review underlines both the importance of clinical diagnostic skills and the need for biological research to identify markers that will allow the targeting of treatment specifically to sub-populations of bipolar patients who are more likely to benefit from a specific treatment and less likely to develop adverse reactions.
Epistemic Information in Stratified M-Spaces
Mark Burgin
2011-12-01
Full Text Available Information is usually related to knowledge. However, the recent development of information theory demonstrated that information is a much broader concept, being actually present in and virtually related to everything. As a result, many unknown types and kinds of information have been discovered. Nevertheless, information that acts on knowledge, bringing new and updating existing knowledge, is of primary importance to people. It is called epistemic information, which is studied in this paper based on the general theory of information and further developing its mathematical stratum. As a synthetic approach, which reveals the essence of information, organizing and encompassing all main directions in information theory, the general theory of information provides efficient means for such a study. Different types of information dynamics representation use tools of mathematical disciplines such as the theory of categories, functional analysis, mathematical logic and algebra. Here we employ algebraic structures for exploration of information and knowledge dynamics. In Introduction (Section 1, we discuss previous studies of epistemic information. Section 2 gives a compressed description of the parametric phenomenological definition of information in the general theory of information. In Section 3, anthropic information, which is received, exchanged, processed and used by people is singled out and studied based on the Componential Triune Brain model. One of the basic forms of anthropic information called epistemic information, which is related to knowledge, is analyzed in Section 4. Mathematical models of epistemic information are studied in Section 5. In Conclusion, some open problems related to epistemic information are given.
Lima, Pedro; Steger, Stefan; Glade, Thomas
2017-04-01
…, composed of 14,519 shallow landslides. Within this study, we introduce the following explanatory variables to test the effect of different non-landslide strategies: lithological units, grouped by their geotechnical properties, and topographic parameters such as aspect, elevation, slope gradient and topographic position. Landslide susceptibility maps will be derived by applying logistic regression, while systematic comparisons will be carried out based on models created by different non-landslide sampling strategies. Models generated by conventional random sampling are compared against models based on stratified and clustered sampling strategies. The modelling results will be compared in terms of their prediction performance, measured by the AUROC (Area Under the Receiver Operating Characteristic Curve) obtained by means of k-fold cross-validation, and also by the spatial pattern of the maps. The outcomes of this study are intended to contribute to the understanding of how landslide-inventory-based biases may be counteracted.
Asymptotic Effectiveness of the Event-Based Sampling According to the Integral Criterion
Marek Miskowicz
2007-01-01
Rapid progress in intelligent sensing technology creates new interest in the development of analysis and design of non-conventional sampling schemes. An investigation of event-based sampling according to the integral criterion is presented in this paper. The investigated sampling scheme is an extension of the pure linear send-on-delta/level-crossing algorithm utilized for reporting the state of objects monitored by intelligent sensors. The motivation for using event-based integral sampling is outlined, and related work in adaptive sampling is summarized. Analytical closed-form formulas for the evaluation of the mean rate of event-based traffic and the asymptotic integral sampling effectiveness are derived, and simulation results verifying the analytical formulas are reported. The effectiveness of the integral sampling is compared with the related linear send-on-delta/level-crossing scheme. The calculation of the asymptotic effectiveness for common signals, which model the state evolution of dynamic systems in time, is exemplified.
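The two triggering rules compared above can be sketched in a few lines. This is an illustrative reading of the schemes, not the paper's exact formulation; the test signal, thresholds, and unit time step are assumptions:

```python
def send_on_delta(signal, delta):
    """Send-on-delta / level-crossing: emit a sample whenever the signal
    moves at least `delta` away from the last transmitted value."""
    events, last = [], None
    for t, x in enumerate(signal):
        if last is None or abs(x - last) >= delta:
            events.append((t, x))
            last = x
    return events

def send_on_integral(signal, theta, dt=1.0):
    """Integral criterion: emit when the accumulated absolute error between
    the signal and the last transmitted value reaches threshold `theta`."""
    events, last, acc = [], None, 0.0
    for t, x in enumerate(signal):
        if last is None:
            events.append((t, x))
            last = x
            continue
        acc += abs(x - last) * dt  # rectangle-rule accumulation of error
        if acc >= theta:
            events.append((t, x))
            last, acc = x, 0.0
    return events

ramp = [0.5 * t for t in range(100)]  # linearly rising test signal
```

On a ramp the integral rule fires more often for the thresholds chosen here, which is the kind of traffic-rate difference the paper's closed-form formulas quantify.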
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-08-31
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data has an upper frequency bound. To reduce upload frequency, most existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee the precision of the collected data, but they are not able to ensure that the upload frequency stays within the upper bound. Some traditional sampling-based approaches can control upload frequency directly; however, they usually suffer a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. We then propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms: an adaptive sampling probability algorithm computes sampling probabilities for different sensed values, and a multiple uniform sampling algorithm provides uniform sampling for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion explains the underlying reason for the high performance of the proposed approach.
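The paper's own probability algorithm is not reproduced in the abstract. As an illustration of the general idea of weighting samples by their information content, a value observed rarely (high self-information) can be assigned a higher keep-probability than a common one; the function name and the heart-rate history below are hypothetical:

```python
import math
from collections import Counter

def inverse_frequency_probs(history, floor=1e-6):
    """Assign each observed value a keep-probability proportional to its
    self-information -log p(v) in the history window, normalized so the
    rarest value is kept with probability 1 (illustrative sketch only)."""
    counts = Counter(history)
    n = len(history)
    info = {v: -math.log(c / n) + floor for v, c in counts.items()}
    top = max(info.values())
    return {v: i / top for v, i in info.items()}

# hypothetical heart-rate readings: 120 bpm is the rare, information-rich value
history = [70] * 90 + [120] * 10
probs = inverse_frequency_probs(history)
```

Keeping rare values preferentially is one way to approximate the uniformly distributed collected data that ASIC targets, while staying under an upload-frequency cap.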
Survey of sampling-based methods for uncertainty and sensitivity analysis.
Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)
2006-06-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
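Several of the sensitivity procedures listed above build on rank transformations. A minimal sketch of one of them, rank (Spearman) correlation between a sampled input and the propagated output, using an illustrative monotone model rather than any example from the report (ties are ignored for simplicity):

```python
def rank(xs):
    """Rank positions of values (assumes distinct values; no tie handling)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rnk, i in enumerate(order):
        r[i] = rnk
    return r

def spearman(xs, ys):
    """Rank-transform both samples, then take Pearson correlation of ranks."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# propagate sampled inputs through a monotone (but nonlinear) model: y = x^3
xs = [0.1 * i for i in range(20)]
ys = [x ** 3 for x in xs]
```

The rank transformation is what lets such coefficients detect monotone nonlinear input-output relationships that raw (Pearson) correlation understates.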
[Diagnosis of amniotic fluid embolism with blood samples by liquid-based cytology technique].
Liu, Bao-qin; Deng, Jian-qiang; Hou, An-chao; Cai, Ji-feng
2014-12-01
To establish the diagnosis of amniotic fluid embolism from blood samples by a liquid-based cytology technique and to study the validity of the method. Blood samples were collected from patients who suffered amniotic fluid embolism. The components of amniotic fluid in the blood samples were examined on blood smears by two direct smear methods (supernatant smear, sediment smear) and two liquid-based cytology methods (automatic smear, manual smear), and the positive detection rate of each method was calculated. The positive detection rates of the two liquid-based cytology methods (84.6% and 92.3%, respectively) were much higher than those of the two direct methods (53.8% and 61.5%, respectively). The liquid-based cytology technique could improve the positive detection rate of amniotic fluid embolism.
Geochemical and petrological sampling and studies at the first moon base
Haskin, L. A.; Korotev, R. L.; Lindstrom, D. J.; Lindstrom, M. M.
1985-01-01
Strategic sampling appropriate to the first-order lunar base can advance a variety of first-order lunar geochemical and petrological problems. Field observation and collection of samples would be done on the lunar surface, but detailed analysis would be done mainly in terrestrial laboratories. Among the most important areas of investigation for which field observations can be made and samples can be collected at the initial base are regolith studies, studies of mare and highlands stratigraphy, and a search for rare materials such as mantle nodules. Since the range of exploration may be limited to a radius of about 20 km from the first lunar base, locating the base near a mare-highlands boundary would enable the greatest latitude in addressing these problems.
Ariful Azad
2016-08-01
We describe algorithms for discovering immunophenotypes from large collections of flow cytometry (FC) samples, and for using them to organize the samples into a hierarchy based on phenotypic similarity. The hierarchical organization is helpful for effective and robust cytometry data mining, including the creation of collections of cell populations characteristic of different classes of samples, robust classification, and anomaly detection. We summarize a set of samples belonging to a biological class or category with a statistically derived template for the class. Whereas individual samples are represented in terms of their cell populations (clusters), a template consists of generic meta-populations (groups of homogeneous cell populations obtained from the samples in a class) that describe key phenotypes shared among all those samples. We organize an FC data collection in a hierarchical data structure that supports the identification of immunophenotypes relevant to clinical diagnosis. A robust template-based classification scheme is also developed, but our primary focus is the discovery of phenotypic signatures and inter-sample relationships in an FC data collection. This collective analysis approach is more efficient and robust since templates describe phenotypic signatures common to cell populations in several samples, while ignoring noise and small sample-specific variations. We have applied the template-based scheme to analyze several data sets, including one representing a healthy immune system and one of Acute Myeloid Leukemia (AML) samples. The last task is challenging due to the phenotypic heterogeneity of the several subtypes of AML. However, we identified thirteen immunophenotypes corresponding to subtypes of AML, and were able to distinguish Acute Promyelocytic Leukemia from other subtypes of AML.
Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems
Xinyu Tang
2010-01-25
Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end-effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom. In addition to supporting efficient sampling of configurations, we show that the RD-space formulation naturally supports planning and, in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed-chain planning with multiple loops, restricted end-effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed-chain systems with 1,000 links in time comparable to open-chain sampling, and we can generate samples for 1,000-link multi-loop systems of varying topologies in less than a second.
A new scoring system to stratify risk in unstable angina
Salzberg Simón
2003-08-01
Background: We performed this study to develop a new scoring system to stratify different levels of risk in patients admitted to hospital with a diagnosis of unstable angina (UA), a complex syndrome that encompasses different outcomes. Many prognostic variables have been described, but few efforts have been made to group them in order to enhance their individual predictive power. Methods: In a first phase, 473 patients were prospectively analyzed to determine which factors were significantly associated with the in-hospital occurrence of refractory ischemia, acute myocardial infarction (AMI) or death. A risk score ranging from 0 to 10 points was developed using multivariate analysis. In a second phase, the score was validated in a new sample of 242 patients and finally applied to the entire population (n = 715). Results: ST-segment deviation on the electrocardiogram, age ≥ 70 years, previous bypass surgery and troponin T ≥ 0.1 ng/mL were found to be independent prognostic variables. A clear distinction was shown among categories of low, intermediate and high risk, defined according to the risk score. The incidence of the triple end-point was 6%, 19.2% and 44.7%, respectively, and the figures for AMI or death were 2%, 11.4% and 27.6%, respectively (p …). Conclusions: This new scoring system is simple and easy to apply. It allows a very good stratification of risk in patients with a clinical diagnosis of UA. They may be divided into three categories, which could help in the decision-making process.
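The abstract names the four independent predictors but not their point weights or category cut-points (the p-value is also truncated in the record). The sketch below therefore uses hypothetical weights and bands purely to illustrate the additive 0-10 scoring idea; it is not the published score:

```python
def ua_risk_score(st_deviation, age, prior_cabg, troponin_t):
    """Additive 0-10 risk score from the four predictors reported in the
    abstract. Point weights here are HYPOTHETICAL placeholders; the
    published system defines its own weighting."""
    score = 0
    score += 3 if st_deviation else 0        # ST-segment deviation on ECG
    score += 2 if age >= 70 else 0           # age >= 70 years
    score += 2 if prior_cabg else 0          # previous bypass surgery
    score += 3 if troponin_t >= 0.1 else 0   # troponin T >= 0.1 ng/mL
    return score

def risk_band(score):
    """Map the score onto low/intermediate/high bands (cut-points are
    again illustrative, not those of the published system)."""
    return "low" if score <= 3 else "intermediate" if score <= 6 else "high"
```

The appeal of such additive systems is exactly what the abstract argues: a bedside sum of a few binary predictors that separates event rates across bands.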
Inferences from Genomic Models in Stratified Populations
Janss, Luc; de los Campos, Gustavo; Sheehan, Nuala
2012-01-01
Unaccounted population stratification can lead to spurious associations in genome-wide association studies (GWAS) and in this context several methods have been proposed to deal with this problem. An alternative line of research uses whole-genome random regression (WGRR) models that fit all markers … are unsatisfactory. Here we address this problem and describe a reparameterization of a WGRR model, based on an eigenvalue decomposition, for simultaneous inference of parameters and unobserved population structure. This allows estimation of genomic parameters with and without inclusion of marker-derived eigenvectors that account for stratification. The method is illustrated with grain yield in wheat typed for 1279 genetic markers, and with height, HDL cholesterol and systolic blood pressure from the British 1958 cohort study typed for 1 million SNP genotypes. Both sets of data show signs of population …
The Knowledge Economy, Gender and Stratified Migrations
Eleonore Kofman
2007-06-01
Full Text Available The promotion of knowledge economies and societies, equated with the mobile subject as bearer of technological, managerial and cosmopolitan competences, on the one hand, and insecurities about social order and national identities, on the other, have in the past few years led to increasing polarisation between skilled migrants and those deemed to lack useful skills. The former are considered to be bearers of human capital and have the capacity to assimilate seamlessly and are therefore worthy of citizenship; the latter are likely to pose problems of assimilation and dependency due to their economic and cultural ‘otherness’ and offered a transient status and partial citizenship by receiving states. In the European context this trend has been reinforced by the redrawing of European geopolitical space creating new boundaries of exclusion and social justice. The emphasis on the knowledge economy also generates gender inequalities and stratifications based on skills and types of knowledge with implications for citizenship and social justice.
Towards Stratified Medicine in Plasma Cell Myeloma
Egan, Philip; Drain, Stephen; Conway, Caroline; Bjourson, Anthony J.; Alexander, H. Denis
2016-01-01
Plasma cell myeloma is a clinically heterogeneous malignancy accounting for approximately 1 to 2% of newly diagnosed cases of cancer worldwide. Treatment options, in addition to long-established cytotoxic drugs, include autologous stem cell transplant, immune modulators, proteasome inhibitors and monoclonal antibodies, plus further targeted therapies currently in clinical trials. Whilst treatment decisions are mostly based on a patient's age, fitness, including the presence of co-morbidities, and tumour burden, significant scope exists for better risk stratification, sub-classification of disease, and predictors of response to specific therapies. Clinical staging, recurring acquired cytogenetic aberrations, and serum biomarkers such as β-2 microglobulin and free light chains are in widespread use but often fail to predict disease progression or inform treatment decision making. Recent scientific advances have provided considerable insight into the biology of myeloma. For example, gene expression profiling is already making a contribution to enhanced understanding of the biology of the disease, whilst Next Generation Sequencing has revealed great genomic complexity and heterogeneity. Pathways involved in the oncogenesis, proliferation of the tumour and its resistance to apoptosis are being unravelled. Furthermore, knowledge of the tumour cell surface and its interactions with bystander cells and the bone marrow stroma enhance this understanding and provide novel targets for cell and antibody-based therapies. This review will discuss developments in understanding the biology of the tumour cell and its environment in the bone marrow, the implementation of new therapeutic options contributing to significantly improved outcomes, and the progression towards more personalised medicine in this disorder. PMID:27775669
M.O.L.P. Costa
2015-01-01
In the present study, we compared the performance of the ThinPrep cytological method with the conventional Papanicolaou test for diagnosis of cytopathological changes, with regard to unsatisfactory results achieved at the Central Public Health Laboratory of the State of Pernambuco. A population-based, cross-sectional study was performed with women aged 18 to 65 years, who spontaneously sought gynecological services in Public Health Units in the State of Pernambuco, Northeast Brazil, between April and November 2011. All patients in the study were given a standardized questionnaire on sociodemographics, sexual characteristics, reproductive practices, and habits. A total of 525 patients were assessed by the two methods (11.05% were under the age of 25 years, 30.86% were single, 4.4% had had more than 5 sexual partners, 44% were not using contraception, 38.85% were users of alcohol, 24.38% were smokers, 3.24% had consumed drugs previously, 42.01% had gynecological complaints, and 12.19% had an early history of sexually transmitted diseases). The two methods showed poor correlation (k=0.19; 95%CI=0.11–0.26; P<0.001). The ThinPrep method reduced the rate of unsatisfactory results from 4.38% to 1.71% (χ2=5.28; P=0.02), and the number of cytopathological changes diagnosed increased from 2.47% to 3.04%. This study confirmed that adopting the ThinPrep method for diagnosis of cervical cytological samples was an improvement over the conventional method. Furthermore, this method may reduce possible losses from cytological resampling and reduce obstacles to patient follow-up, improving the quality of the public health system in the State of Pernambuco, Northeast Brazil.
A venue-based method for sampling hard-to-reach populations.
Muhib, F B; Lin, L S; Stueve, A; Miller, R L; Ford, W L; Johnson, W D; Smith, P J
2001-01-01
Constructing scientifically sound samples of hard-to-reach populations, also known as hidden populations, is a challenge for many research projects. Traditional sample survey methods, such as random sampling from telephone or mailing lists, can yield low numbers of eligible respondents while non-probability sampling introduces unknown biases. The authors describe a venue-based application of time-space sampling (TSS) that addresses the challenges of accessing hard-to-reach populations. The method entails identifying days and times when the target population gathers at specific venues, constructing a sampling frame of venue, day-time units (VDTs), randomly selecting and visiting VDTs (the primary sampling units), and systematically intercepting and collecting information from consenting members of the target population. This allows researchers to construct a sample with known properties, make statistical inference to the larger population of venue visitors, and theorize about the introduction of biases that may limit generalization of results to the target population. The authors describe their use of TSS in the ongoing Community Intervention Trial for Youth (CITY) project to generate a systematic sample of young men who have sex with men. The project is an ongoing community level HIV prevention intervention trial funded by the Centers for Disease Control and Prevention. The TSS method is reproducible and can be adapted to hard-to-reach populations in other situations, environments, and cultures.
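The frame-construction and random-selection steps of the time-space sampling method described above can be sketched as follows. This is an illustrative sketch only; the venue names, days, and time windows are placeholders, not data from the CITY project.

```python
import random

def build_vdt_frame(venues, days, times):
    """Enumerate venue, day-time units (VDTs), the primary sampling units."""
    return [(v, d, t) for v in venues for d in days for t in times]

def select_vdts(frame, n, seed=0):
    """Randomly select n VDTs without replacement from the sampling frame."""
    rng = random.Random(seed)
    return rng.sample(frame, n)

# hypothetical frame: 2 venues x 2 days x 2 time windows = 8 VDTs
frame = build_vdt_frame(["club A", "club B"], ["Fri", "Sat"], ["20-22", "22-24"])
chosen = select_vdts(frame, 3)
```

Field teams would then visit each selected VDT and systematically intercept consenting members of the target population, which is what gives the sample its known statistical properties.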
LI Yan; SHI Zhou; WU Ci-fang; LI Feng; LI Hong-yi
2007-01-01
The acquisition of precise soil data representative of the entire survey area is a critical issue for many treatments such as irrigation or fertilization in precision agriculture. The aim of this study was to investigate the spatial variability of soil bulk electrical conductivity (ECb) in a coastal saline field and design an optimized spatial sampling scheme of ECb based on a sampling design algorithm, the variance quad-tree (VQT) method. Soil ECb data were collected from the field at 20 m intervals in a regular grid scheme. A smooth contour map of the whole field was obtained by ordinary kriging interpolation; the VQT algorithm was then used to split the smooth contour map into the desired number of strata, within which sampling locations can be selected in subsequent sampling. The results indicated that the probability of choosing representative sampling sites was increased significantly by using the VQT method, with the sampling number greatly reduced compared to a grid sampling design while retaining the same prediction accuracy. The advantage of the VQT method is that this scheme samples sparsely in fields where the spatial variability is relatively uniform and more intensively where the variability is large. Thus the sampling efficiency can be improved, facilitating an assessment methodology that can be applied in a rapid, practical and cost-effective manner.
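A minimal sketch of the variance quad-tree idea follows, assuming the kriged ECb surface is stored as a 2-D list of values. The splitting rule (always quarter the most heterogeneous region) matches the description above; the stopping rule and tie handling are simplified for illustration.

```python
import statistics

def variance_quad_tree(grid, n_strata):
    """Split the 2-D region with the largest value variance into quadrants
    until at least n_strata regions exist (a VQT sketch). Each full split
    adds three regions, so reachable counts are 1, 4, 7, ..."""
    regions = [(0, len(grid), 0, len(grid[0]))]  # (row0, row1, col0, col1), half-open

    def values(region):
        r0, r1, c0, c1 = region
        return [grid[i][j] for i in range(r0, r1) for j in range(c0, c1)]

    def var(region):
        vals = values(region)
        return statistics.pvariance(vals) if len(vals) > 1 else 0.0

    while len(regions) < n_strata:
        regions.sort(key=var, reverse=True)      # most heterogeneous first
        r0, r1, c0, c1 = regions.pop(0)
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        if rm == r0 and cm == c0:                # single cell: cannot split
            regions.append((r0, r1, c0, c1))
            break
        quads = [(r0, rm, c0, cm), (r0, rm, cm, c1),
                 (rm, r1, c0, cm), (rm, r1, cm, c1)]
        regions.extend(q for q in quads if q[0] < q[1] and q[2] < q[3])
    return regions

# a 4x4 synthetic surface; four strata = one split into quadrants
grid = [[float(4 * i + j) for j in range(4)] for i in range(4)]
strata = variance_quad_tree(grid, 4)
```

In subsequent sampling rounds, one location would then be chosen within each returned stratum; a stratum covering a uniform area is large, while a heterogeneous area is carved into many small strata.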
Brus, D.J.; Gruijter, de J.J.
2012-01-01
This paper launches a hybrid sampling approach, entailing a design-based approach in space followed by a model-based approach in time, for estimating temporal trends of spatial means or totals. The underlying space–time process that generated the soil data is only partly described, viz. by a linear
Rounded Data Analysis Based on Multi-Layer Ranked Set Sampling
Wei Ming LI; Zhi Dong BAI
2011-01-01
Sampled observations are often subject to rounding, but are modeled as though they were unrounded. This paper examines the impact of rounding errors on parameter estimation with multi-layer ranked set sampling. It shows that rounding errors seriously distort the behavior of the covariance matrix estimate and lead to inconsistent estimation. Taking this into account, we present a new approach to implement the estimation for this model, and further establish the strong consistency and asymptotic normality of the proposed estimators. Simulation experiments show that our estimates based on rounded multi-layer ranked set sampling are always more efficient than those based on rounded simple random sampling.
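For reference, one cycle of single-layer ranked set sampling, the building block that the multi-layer scheme extends, can be sketched as follows. Here ranking is done on the values themselves for simplicity; in practice the ranking uses a cheap auxiliary judgment before the costly measurement.

```python
import random

def ranked_set_sample(population, m, seed=0):
    """One cycle of ranked set sampling: draw m random sets of m units,
    rank each set, and keep the i-th ranked unit of the i-th set."""
    rng = random.Random(seed)
    sample = []
    for i in range(m):
        judgment_set = rng.sample(population, m)  # i-th judgment set
        judgment_set.sort()                       # ranking step (here: exact)
        sample.append(judgment_set[i])            # measure i-th order statistic
    return sample

measurements = ranked_set_sample(list(range(100)), m=3)
```

Only m of the m*m drawn units are actually measured, which is where the efficiency gain over simple random sampling comes from.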
Interfacial instabilities in a stratified flow of two superposed fluids
Schaflinger, Uwe
1994-06-01
Here we shall present a linear stability analysis of a laminar, stratified flow of two superposed fluids, a clear liquid and a suspension of solid particles. The investigation is based upon the assumption that the concentration remains constant within the suspension layer. Even for moderate flow rates, the base-state results for a shear-induced resuspension flow justify the latter assumption. The numerical solutions display the existence of two different branches that contribute to convective instability: long and short waves, which coexist in a certain range of parameters. A range also exists where the flow is absolutely unstable. That means a convectively unstable resuspension flow can only be observed for Reynolds numbers larger than a lower, critical Reynolds number but still smaller than a second critical Reynolds number. For flow rates which give rise to a Reynolds number larger than the second critical Reynolds number, the flow is absolutely unstable. In some cases, however, there exists a third bound beyond which the flow is convectively unstable again. Experiments show the same phenomena: for small flow rates short waves were usually observed, but occasionally also the coexistence of short and long waves. These findings are qualitatively in good agreement with the linear stability analysis. Larger flow rates in the range of the second critical Reynolds number yield strong interfacial waves with wave breaking and detached particles. In this range, the measured flow parameters, like the resuspension height and the pressure drop, are far beyond the theoretical results. Evidently, a further increase of the Reynolds number indicates the transition to a less wavy interface. Finally, the linear stability analysis also predicts interfacial waves in the case of relatively small suspension heights. These results are in accordance with measurements for ripple-type instabilities as they occur under laminar and viscous conditions for a monolayer of particles.
Stability of stratified two-phase flows in horizontal channels
Barmak, Ilya; Ullmann, Amos; Brauner, Neima; Vitoshkin, Helen
2016-01-01
Linear stability of stratified two-phase flows in horizontal channels to arbitrary wavenumber disturbances is studied. The problem is reduced to Orr-Sommerfeld equations for the stream function disturbances, defined in each sublayer and coupled via boundary conditions that also account for possible interface deformation and capillary forces. Applying the Chebyshev collocation method, the equations and interface boundary conditions are reduced to generalized eigenvalue problems solved by standard means of numerical linear algebra for the entire spectrum of eigenvalues and the associated eigenvectors. Some additional conclusions concerning the nature of the instability are derived from the most unstable perturbation patterns. The results are summarized in the form of stability maps showing the operational conditions at which a stratified-smooth flow pattern is stable. It is found that for gas-liquid and liquid-liquid systems the stratified flow with a smooth interface is stable only in a confined zone of relatively lo...
Background Oriented Schlieren in a Density Stratified Fluid
Verso, Lilly
2015-01-01
Non-intrusive quantitative fluid density measurements methods are essential in stratified flow experiments. Digital imaging leads to synthetic Schlieren methods in which the variations of the index of refraction are reconstructed computationally. In this study, an important extension to one of these methods, called Background Oriented Schlieren (BOS), is proposed. The extension enables an accurate reconstruction of the density field in stratified liquid experiments. Typically, the experiments are performed by the light source, background pattern, and the camera positioned on the opposite sides of a transparent vessel. The multi-media imaging through air-glass-water-glass-air leads to an additional aberration that destroys the reconstruction. A two-step calibration and image remapping transform are the key components that correct the images through the stratified media and provide non-intrusive full-field density measurements of transparent liquids.
SINDA/FLUINT Stratified Tank Modeling for Cryogenic Propellant Tanks
Sakowski, Barbara
2014-01-01
A general-purpose SINDA/FLUINT (S/F) stratified tank model was created to simulate self-pressurization and an axial jet TVS; stratified layers in the vapor and liquid are modeled using S/F lumps. The stratified tank model was constructed to permit incorporating the following additional features: multiple or single lumps in the liquid and vapor regions of the tank; real gases (also mixtures) and compressible liquids; venting, pressurizing, and draining; condensation and evaporation/boiling; wall heat transfer; and elliptical, cylindrical, and spherical tank geometries. Extensive user logic allows detailed tailoring, so everything does not have to be rebuilt from scratch. Most code input for a specific case is done through the Registers Data Block. Lump volumes are determined through user input of geometric tank dimensions (height, width, etc.); the liquid level can be input either as a volume percentage of fill level or as the actual liquid level height.
350 keV accelerator based PGNAA setup to detect nitrogen in bulk samples
Naqvi, A. A.; Al-Matouq, Faris A.; Khiari, F. Z.; Gondal, M. A.; Rehman, Khateeb-ur; Isab, A. A.; Raashid, M.; Dastageer, M. A.
2013-11-01
Nitrogen concentration was measured in bulk samples of explosive and narcotics proxy materials, e.g. anthranilic acid, caffeine, melamine, and urea, through the thermal neutron capture reaction using a 350 keV accelerator-based prompt gamma ray neutron activation analysis (PGNAA) setup. The intensity of the 2.52, 3.53-3.68, 4.51, 5.27-5.30 and 10.38 MeV prompt gamma rays of nitrogen from the bulk samples was measured using a cylindrical 100 mm × 100 mm (diameter × height) BGO detector. In spite of interference of nitrogen gamma rays from the bulk samples with capture prompt gamma rays from the BGO detector material, excellent agreement between the experimental and calculated yields of nitrogen gamma rays was obtained, indicating the excellent performance of the PGNAA setup for detection of nitrogen in bulk samples.
Calibrating passive sampling and passive dosing techniques to lipid based concentrations
Mayer, Philipp; Schmidt, Stine Nørgaard; Annika, A.
2011-01-01
Equilibrium sampling into various formats of the silicone polydimethylsiloxane (PDMS) is increasingly used to measure the exposure of hydrophobic organic chemicals in environmental matrices, and passive dosing from silicone is increasingly used to control and maintain their exposure in laboratory ... external partitioning standards in vegetable or fish oil for the complete calibration of equilibrium sampling techniques without additional steps. Equilibrium in-tissue sampling in three different fish yielded lipid-based PCB concentrations in good agreement with those determined using total extraction and lipid normalization. These results support the validity of the in-tissue sampling technique, while at the same time confirming that the fugacity capacity of these lipid-rich fish tissues for PCBs was dominated by the lipid fraction. Equilibrium sampling of PCB contaminated lake sediments with PDMS ...
Flexible Time-Triggered Sampling in Smart Sensor-Based Wireless Control Systems
Xia, Feng
2008-01-01
Wireless control systems (WCSs) often have to operate in dynamic environments where the network traffic load may vary unpredictably over time. The sampling in sensors is conventionally time triggered with fixed periods. In this context, only worse-than-possible quality of control (QoC) can be achieved when the network is underloaded, while overloaded conditions may significantly degrade the QoC, even causing system instability. This is particularly true when the bandwidth of the wireless network is limited and shared by multiple control loops. To address these problems, a flexible time-triggered sampling scheme is presented in this work. Smart sensors are used to facilitate dynamic adjustment of sampling periods, which enhances the flexibility and resource efficiency of the system based on time-triggered sampling. Feedback control technology is exploited for adapting sampling periods in a periodic manner. The deadline miss ratio in each control loop is maintained at/around a desired level, regardless of workl...
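The period-adaptation loop described above might look like the following proportional feedback rule. This is a sketch under stated assumptions: the gain, target miss ratio, and period bounds are illustrative values, not parameters from the paper.

```python
def adapt_period(period, miss_ratio, target=0.05, gain=0.5,
                 p_min=0.01, p_max=1.0):
    """Proportional feedback on a sensor's sampling period: a deadline miss
    ratio above the target lengthens the period (sheds network load); a
    miss ratio below the target shortens it (improves quality of control).
    The period is clamped to [p_min, p_max] seconds."""
    error = miss_ratio - target
    return min(p_max, max(p_min, period * (1.0 + gain * error)))
```

Each smart sensor would apply this rule once per adaptation window, driving its observed miss ratio toward the setpoint regardless of how the shared wireless bandwidth fluctuates.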
A design-based approximation to the Bayes Information Criterion in finite population sampling
Enrico Fabrizi
2014-05-01
In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC) are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample, which is often very complex in finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.
Downs, Timothy J.; Ogneva-Himmelberger, Yelena; Aupont, Onesky; Wang, Yangyang; Raj, Ann; Zimmerman, Paula; Goble, Robert; Taylor, Octavia; Churchill, Linda; Lemay, Celeste; McLaughlin, Thomas; Felice, Marianne
2010-01-01
Background: The National Children’s Study is the most ambitious study ever attempted in the United States to assess how environmental factors impact child health and development. It aims to follow 100,000 children from gestation until 21 years of age. Success requires breaking new interdisciplinary ground, starting with how to select the sample of > 1,000 children in each of 105 study sites; no standardized protocol exists for stratification of the target population by factoring in the diverse environments it inhabits. Worcester County, Massachusetts, like other sites, stratifies according to local conditions and local knowledge, subject to probability sampling rules. Objectives: We answer the following questions: How do we divide Worcester County into viable strata that represent its health-relevant environmental and sociodemographic heterogeneity, subject to sampling rules? What potential does our approach have to inform stratification at other sites? Results: We developed a multivariable, vulnerability-based method for spatial sampling consisting of two descriptive indices: a hazards/stressors exposure index (comprising three proxy variables), and an adaptive capacity/sociodemographic character index (five variables). Multivariable, health-relevant stratification at the start of the study may improve detection power for environment–child health associations down the line. Eighteen strata capture countywide heterogeneity in the indices and have optimal relative homogeneity within each. They achieve comparable expected birth counts and conform to local concepts of space. Conclusion: The approach offers moderate to high potential to inform other sites, limited by intersite differences in data availability, geodemographics, and technical capacity. Energetic community engagement from the start promotes local stratification coherence, plus vital researcher–community trust and co-ownership for sustainability. PMID:20211802
Microcontroller-based system for collecting anaerobic blood samples from a running greyhound.
Schmalzried, R T; Toll, P W; Devore, J J; Fedde, M R
1992-04-01
Many physiological variables change rapidly in the blood during sprint exercise. To characterize the dynamics and extent of these changes, blood samples must be obtained during exercise. We describe herein a portable, microcontroller-based system used to automatically obtain repeated, anaerobic, arterial blood samples from greyhounds before, during, and following a race on a track. In addition, the system also records the blood temperature in the pulmonary artery each time a blood sample is taken. The system has been tested for more than 2 years and has proven to be reliable and effective.
SROT: Sparse representation-based over-sampling technique for classification of imbalanced dataset
Zou, Xionggao; Feng, Yueping; Li, Huiying; Jiang, Shuyu
2017-08-01
As one of the most popular research fields in machine learning, research on imbalanced datasets has received more and more attention in recent years. The imbalanced problem usually occurs when minority classes have far fewer samples than the others. Traditional classification algorithms do not take the distribution of the dataset into consideration, so they fail to deal with the problem of class-imbalanced learning, and classification performance tends to be dominated by the majority class. SMOTE is one of the most effective over-sampling methods for this problem; it changes the distribution of training sets by increasing the size of the minority class. However, SMOTE can easily result in over-fitting on account of too many repetitive data samples. To address this issue, this paper proposes an improved method based on sparse representation theory and over-sampling, named SROT (Sparse Representation-based Over-sampling Technique). SROT uses a sparse dictionary to create synthetic samples directly for solving the imbalanced problem. Experiments are performed on 10 UCI datasets using C4.5 as the learning algorithm. The experimental results show that, compared with random over-sampling techniques, SMOTE and other methods, SROT achieves better performance on AUC value.
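For context, the SMOTE-style interpolation that SROT improves upon can be sketched as below: synthetic minority points are generated on segments between a minority sample and a near minority neighbour. This sketch shows only the interpolation idea; SROT's sparse-dictionary step is not represented.

```python
import random

def smote_like_oversample(minority, n_new, seed=0):
    """SMOTE-style synthesis (sketch): interpolate between a random minority
    point and its nearest minority neighbour in Euclidean distance."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # nearest neighbour among the other minority points
        nn = min((p for p in minority if p is not base),
                 key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)))
        t = rng.random()  # random position along the segment
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(base, nn)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_points = smote_like_oversample(minority, 5)
```

Because each synthetic point is a convex combination of two existing minority samples, the new points stay inside the minority region rather than simply duplicating samples, which is how interpolation reduces the over-fitting seen with plain random over-sampling.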
The positive effects of population-based preferential sampling in environmental epidemiology.
Antonelli, Joseph; Cefalu, Matthew; Bornn, Luke
2016-10-01
In environmental epidemiology, exposures are not always available at subject locations and must be predicted using monitoring data. The monitor locations are often outside the control of researchers, and previous studies have shown that "preferential sampling" of monitoring locations can adversely affect exposure prediction and subsequent health effect estimation. We adopt a slightly different definition of preferential sampling than is typically seen in the literature, which we call population-based preferential sampling. Population-based preferential sampling occurs when the location of the monitors is dependent on the subject locations. We show the impact that population-based preferential sampling has on exposure prediction and health effect estimation using analytic results and a simulation study. A simple, one-parameter model is proposed to measure the degree to which monitors are preferentially sampled with respect to population density. We then discuss these concepts in the context of PM2.5 and the EPA Air Quality System monitoring sites, which are generally placed in areas of higher population density to capture the population's exposure.
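A one-parameter model of this kind can be sketched as drawing monitor sites with probability proportional to population density raised to a power gamma, where gamma = 0 recovers uniform (non-preferential) placement. The function below is an illustration of the idea, not the authors' exact model.

```python
import random

def preferential_monitor_sample(locations, density, gamma, n, seed=0):
    """Draw n monitor sites (with replacement) with probability proportional
    to density**gamma; gamma = 0 gives uniform, non-preferential siting."""
    rng = random.Random(seed)
    weights = [density[loc] ** gamma for loc in locations]
    return rng.choices(locations, weights=weights, k=n)

locs = ["rural", "urban"]
dens = {"rural": 1.0, "urban": 100.0}
pref = preferential_monitor_sample(locs, dens, gamma=1.0, n=1000)  # density-weighted
unif = preferential_monitor_sample(locs, dens, gamma=0.0, n=1000)  # uniform baseline
```

With gamma > 0 the dense "urban" cell dominates the drawn network, mimicking the EPA-style tendency to place monitors where people live.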
Rappertk: a versatile engine for discrete restraint-based conformational sampling of macromolecules
Karmali Anjum M
2007-03-01
Background: Macromolecular structures are modeled by conformational optimization within experimental and knowledge-based restraints. Discrete restraint-based sampling generates high-quality structures within these restraints and facilitates further refinement in a continuous all-atom energy landscape. This approach has been used successfully for protein loop modeling, comparative modeling and electron density fitting in X-ray crystallography. Results: Here we present a software toolkit (Rappertk) which generalizes discrete restraint-based sampling for use in structural biology. Modular design and multi-layered architecture enable Rappertk to sample conformations of any macromolecule at many levels of detail and within a variety of experimental restraints. Performance against a Cα-tracing benchmark shows that efficiency has not suffered despite the overhead required by this flexibility. We demonstrate the toolkit's capabilities by building high-quality β-sheets and by introducing restraint-driven sampling. RNA sampling is demonstrated by rebuilding a protein-RNA interface. The ability to construct arbitrary ligands is used in sampling protein-ligand interfaces within electron density. Finally, secondary structure and shape information derived from EM are combined to generate multiple conformations of a protein consistent with the observed density. Conclusion: Through its modular design and ease of use, Rappertk enables exploration of a wide variety of interesting avenues in structural biology. This toolkit, with illustrative examples, is freely available to academic users from http://www-cryst.bioc.cam.ac.uk/~swanand/mysite/rtk/index.html.
Linear Inviscid Damping for Couette Flow in Stratified Fluid
Yang, Jincheng
2016-01-01
We study the inviscid damping of Couette flow with an exponentially stratified density. The optimal decay rates of the velocity field and density are obtained for general perturbations with minimal regularity. For the Boussinesq approximation model, the decay rates we get are consistent with previous results in the literature. We also study the decay rates for the full equations of stratified fluids, which were not studied before. For both models, the decay rates depend on the Richardson number in a very similar way. Besides, we also study the inviscid damping of perturbations due to the exponential stratification when there is no shear.
The Role of Vertex Consistency in Sampling-based Algorithms for Optimal Motion Planning
Arslan, Oktay
2012-01-01
Motion planning problems have been studied by both the robotics and the controls research communities for a long time, and many algorithms have been developed for their solution. Among them, incremental sampling-based motion planning algorithms, such as the Rapidly-exploring Random Trees (RRTs), and the Probabilistic Road Maps (PRMs) have become very popular recently, owing to their implementation simplicity and their advantages in handling high-dimensional problems. Although these algorithms work very well in practice, the quality of the computed solution is often not good, i.e., the solution can be far from the optimal one. A recent variation of RRT, namely the RRT* algorithm, bypasses this drawback of the traditional RRT algorithm, by ensuring asymptotic optimality as the number of samples tends to infinity. Nonetheless, the convergence rate to the optimal solution may still be slow. This paper presents a new incremental sampling-based motion planning algorithm based on Rapidly-exploring Random Graphs (RRG...
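The basic RRT extend step that RRT* and the variant above build on can be sketched in an obstacle-free 2-D workspace as follows. The bounds, step size, goal tolerance, and iteration budget are illustrative assumptions; a real planner adds collision checking and, for RRT*, rewiring of the tree.

```python
import math
import random

def rrt(start, goal, n_iters=500, step=0.5, goal_tol=0.5, seed=1):
    """Minimal RRT in an obstacle-free [0,10]^2 workspace: sample a random
    point, extend the tree node nearest to it by one fixed step toward it,
    and stop when a new node lands within goal_tol of the goal."""
    rng = random.Random(seed)
    parent = {start: None}                      # tree as child -> parent map
    for _ in range(n_iters):
        q = (rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0))
        near = min(parent, key=lambda v: math.dist(v, q))
        d = math.dist(near, q)
        if d == 0.0:
            continue
        new = (near[0] + step * (q[0] - near[0]) / d,
               near[1] + step * (q[1] - near[1]) / d)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            path = []                           # walk parents back to start
            while new is not None:
                path.append(new)
                new = parent[new]
            return path[::-1]
    return None  # no path found within the iteration budget

path = rrt((0.0, 0.0), (9.0, 9.0))
```

RRT* differs from this skeleton by choosing the lowest-cost parent in a neighborhood of each new node and rewiring neighbors through it, which is what yields asymptotic optimality.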
Recommended Mass Spectrometry-Based Strategies to Identify Ricin-Containing Samples
Suzanne R. Kalb
2015-11-01
Ricin is a protein toxin produced by the castor bean plant (Ricinus communis) together with a related protein known as R. communis agglutinin (RCA120). Mass spectrometric (MS) assays have the capacity to unambiguously identify ricin and to detect ricin’s activity in samples with complex matrices. These qualitative and quantitative assays enable detection and differentiation of ricin from the less toxic RCA120 through determination of the amino acid sequence of the protein in question, and active ricin can be monitored by MS as the release of adenine from the depurination of a nucleic acid substrate. In this work, we describe the application of MS-based methods to detect, differentiate and quantify ricin and RCA120 in nine blinded samples supplied as part of the EQuATox proficiency test. Overall, MS-based assays successfully identified all samples containing ricin or RCA120 with the exception of the sample spiked with the lowest concentration (0.414 ng/mL). In fact, mass spectrometry was the most successful method for differentiation of ricin and RCA120 based on amino acid determination. Mass spectrometric methods were also successful at ranking the functional activities of the samples, successfully yielding semi-quantitative results. These results indicate that MS-based assays are excellent techniques to detect, differentiate, and quantify ricin and RCA120 in complex matrices.
Panzeri, R.; Saggin, S.; Scaccabarozzi, D.; Tarabini, M.
2016-10-01
This paper compares different data processing techniques for Fourier transform spectrometry (FTS) with the aim of assessing the feasibility of a spectrometer relying on standard DAC boards, without dedicated hardware for sampling and speed control of the moving mirrors. Fourier transform spectrometers rely on sampling of the interferogram at constant steps of the optical path difference (OPD) to evaluate spectra through the standard discrete Fourier transform. Constant OPD sampling is traditionally achieved with dedicated hardware but, recently, sampling methods based on the use of common analog-to-digital converters with large dynamic range and high sampling frequency have become viable when associated with specific data processing techniques. These methods offer advantages from the point of view of insensitivity to disturbances, in particular mechanical vibrations, and should be less sensitive to OPD speed errors. In this work, the performance of three algorithms was compared in terms of robustness against mechanical vibrations and OPD speed errors: two taken from the literature, based on phase demodulation of a reference interferogram, and one based on direct phase computation of the reference interferogram. All methods provided almost correct spectra with vibration amplitudes up to 10% of the average OPD speed and speed drifts within the scan up to 20% of the average, as long as the disturbance frequency was lower than the nominal frequency of the reference signal. The developed method based on the arccosine function keeps working also with disturbance frequencies larger than that of the reference channel, the common limit for the other two.
Sung, Yun Ju; Winkler, Thomas W; Manning, Alisa K; Aschard, Hugues; Gudnason, Vilmundur; Harris, Tamara B; Smith, Albert V; Boerwinkle, Eric; Brown, Michael R; Morrison, Alanna C; Fornage, Myriam; Lin, Li-An; Richard, Melissa; Bartz, Traci M; Psaty, Bruce M; Hayward, Caroline; Polasek, Ozren; Marten, Jonathan; Rudan, Igor; Feitosa, Mary F; Kraja, Aldi T; Province, Michael A; Deng, Xuan; Fisher, Virginia A; Zhou, Yanhua; Bielak, Lawrence F; Smith, Jennifer; Huffman, Jennifer E; Padmanabhan, Sandosh; Smith, Blair H; Ding, Jingzhong; Liu, Yongmei; Lohman, Kurt; Bouchard, Claude; Rankinen, Tuomo; Rice, Treva K; Arnett, Donna; Schwander, Karen; Guo, Xiuqing; Palmas, Walter; Rotter, Jerome I; Alfred, Tamuno; Bottinger, Erwin P; Loos, Ruth J F; Amin, Najaf; Franco, Oscar H; van Duijn, Cornelia M; Vojinovic, Dina; Chasman, Daniel I; Ridker, Paul M; Rose, Lynda M; Kardia, Sharon; Zhu, Xiaofeng; Rice, Kenneth; Borecki, Ingrid B; Rao, Dabeeru C; Gauderman, W James; Cupples, L Adrienne
2016-07-01
Studying gene-environment (G × E) interactions is important, as they extend our knowledge of the genetic architecture of complex traits and may help to identify novel variants not detected via analysis of main effects alone. The main statistical framework for studying G × E interactions uses a single regression model that includes both the genetic main and G × E interaction effects (the "joint" framework). The alternative "stratified" framework combines results from genetic main-effect analyses carried out separately within the exposed and unexposed groups. Although there have been several investigations using theory and simulation, an empirical comparison of the two frameworks is lacking. Here, we compare the two frameworks using results from genome-wide association studies of systolic blood pressure for 3.2 million low frequency and 6.5 million common variants across 20 cohorts of European ancestry, comprising 79,731 individuals. Our cohorts have sample sizes ranging from 456 to 22,983 and include both family-based and population-based samples. In cohort-specific analyses, the two frameworks provided similar inference for population-based cohorts. The agreement was reduced for family-based cohorts. In meta-analyses, agreement between the two frameworks was less than that observed in cohort-specific analyses, despite the increased sample size. In meta-analyses, agreement depended on (1) the minor allele frequency, (2) inclusion of family-based cohorts in meta-analysis, and (3) filtering scheme. The stratified framework appears to approximate the joint framework well only for common variants in population-based cohorts. We conclude that the joint framework is the preferred approach and should be used to control false positives when dealing with low-frequency variants and/or family-based cohorts.
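The two frameworks can be illustrated with a toy simulation (the data, effect sizes, and variable names below are invented for illustration; the actual study used genome-wide regression pipelines): the joint framework fits one regression containing the genetic main effect and the G × E term, while the stratified framework differences the genetic slopes estimated within the exposed and unexposed groups. For a binary exposure and ordinary least squares, the two interaction point estimates coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Simulated data: genotype g (0/1/2), binary exposure e, and a
# quantitative trait y with main effects and a G x E interaction of 0.8.
g = rng.binomial(2, 0.3, n).astype(float)
e = rng.binomial(1, 0.5, n).astype(float)
y = 0.5 * g + 1.0 * e + 0.8 * g * e + rng.normal(0, 1, n)

# "Joint" framework: one regression y ~ 1 + g + e + g*e.
X = np.column_stack([np.ones(n), g, e, g * e])
beta_joint, *_ = np.linalg.lstsq(X, y, rcond=None)
interaction_joint = beta_joint[3]

# "Stratified" framework: fit y ~ 1 + g separately in the exposed and
# unexposed groups; the interaction is the difference of the g slopes.
def slope(mask):
    Xs = np.column_stack([np.ones(mask.sum()), g[mask]])
    b, *_ = np.linalg.lstsq(Xs, y[mask], rcond=None)
    return b[1]

interaction_strat = slope(e == 1) - slope(e == 0)
print(round(interaction_joint, 2), round(interaction_strat, 2))
```

The abstract's point is that this equivalence degrades in practice (family structure, meta-analysis, low-frequency variants), which is why the joint framework is preferred.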
Research on the deep learning of the small sample data based on transfer learning
Zhao, Wei
2017-08-01
The convolutional neural network (CNN) of deep learning has been a huge success in the field of image recognition; however, training a deep network requires a large number of data samples. In practice it is often difficult to obtain a large number of training samples, and a network trained on a limited dataset easily overfits. To address this problem, we design a deep convolutional neural network based on transfer learning to handle small sample datasets. First, data augmentation is used to enlarge the sample dataset. Second, transfer learning is used to transfer a network (CNN) trained on a large sample dataset to our small sample dataset for secondary training; global average pooling is used in place of the fully connected layers, and softmax is used for classification. The method mitigates the small-sample problem in deep learning and improves operational efficiency. The experimental results show a high recognition rate for classification on the small sample dataset.
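As a concrete (hypothetical) illustration of the classification head used in such secondary training, the snippet below replaces fully connected layers with global average pooling followed by a small softmax classifier; the feature maps are random stand-ins for the output of a pre-trained CNN's convolutional base.

```python
import numpy as np

rng = np.random.default_rng(1)

def global_average_pooling(feature_maps):
    # feature_maps: (batch, channels, H, W) -> (batch, channels)
    return feature_maps.mean(axis=(2, 3))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # shift for numerical stability
    ez = np.exp(z)
    return ez / ez.sum(axis=1, keepdims=True)

# Pretend these are feature maps from a frozen, pre-trained CNN.
batch, channels, h, w = 8, 16, 7, 7
features = rng.normal(size=(batch, channels, h, w))

pooled = global_average_pooling(features)       # (8, 16)
W = rng.normal(scale=0.1, size=(channels, 3))   # tiny 3-class head to be trained
probs = softmax(pooled @ W)
print(pooled.shape, probs.shape)
```

The design point is parameter count: the GAP head has only `channels × classes` weights, which is far less prone to overfitting on a small dataset than stacked fully connected layers.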
Conflict-cost based random sampling design for parallel MRI with low rank constraints
Kim, Wan; Zhou, Yihang; Lyu, Jingyuan; Ying, Leslie
2015-05-01
In compressed sensing MRI, the design of the random sampling pattern is very important. For example, SAKE (simultaneous auto-calibrating and k-space estimation) is a parallel MRI reconstruction method using random undersampling. It formulates image reconstruction as a structured low-rank matrix completion problem. Variable-density (VD) Poisson discs are typically adopted for 2D random sampling. The basic concept of Poisson disc generation is to guarantee that samples are neither too close to nor too far away from each other. However, it is difficult to meet such a condition, especially in the high-density region, so the sampling becomes inefficient. In this paper, we present an improved random sampling pattern for SAKE reconstruction. The pattern is generated based on a conflict cost with a probability model. The conflict cost measures how many dense samples already assigned are around a target location, while the probability model adopts the generalized Gaussian distribution, which includes uniform and Gaussian-like distributions as special cases. Our method preferentially assigns a sample to the k-space location with the least conflict cost on the circle of the highest probability. To evaluate the effectiveness of the proposed random pattern, we compare the performance of SAKE using both VD Poisson discs and the proposed pattern. Experimental results for brain data show that the proposed pattern yields lower normalized mean square error (NMSE) than VD Poisson discs.
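A simplified version of the conflict-cost idea can be sketched as follows: candidate k-space locations are drawn from a Gaussian density (a special case of the generalized Gaussian model), and among the candidates the least-conflicted location, i.e. the one with the fewest already-assigned neighbors, is kept. The grid size, candidate count, and neighborhood radius below are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 64                       # k-space grid is N x N
n_samples = 400
sigma = N / 4                # Gaussian-like density: denser near the k-space center

# Sampling probability for each location.
ky, kx = np.mgrid[0:N, 0:N]
r2 = (ky - N / 2) ** 2 + (kx - N / 2) ** 2
prob = np.exp(-r2 / (2 * sigma ** 2))
prob /= prob.sum()
flat_prob = prob.ravel()

mask = np.zeros((N, N), dtype=bool)

def conflict_cost(y, x, radius=2):
    # How many samples are already assigned near (y, x).
    y0, y1 = max(0, y - radius), min(N, y + radius + 1)
    x0, x1 = max(0, x - radius), min(N, x + radius + 1)
    return mask[y0:y1, x0:x1].sum()

for _ in range(n_samples):
    # Draw a few candidates from the density, keep the least-conflicted one.
    cand = rng.choice(N * N, size=8, replace=False, p=flat_prob)
    ys, xs = np.unravel_index(cand, (N, N))
    costs = [conflict_cost(y, x) if not mask[y, x] else np.inf
             for y, x in zip(ys, xs)]
    best = int(np.argmin(costs))
    mask[ys[best], xs[best]] = True

print(mask.sum())
```

The resulting mask keeps the variable-density profile while the conflict cost discourages the tight clustering that makes plain density-weighted sampling inefficient in the high-density region.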
van Haren, H.
2015-01-01
The character of turbulent overturns in a weakly stratified deep-sea is investigated in some detail using 144 high-resolution temperature sensors at 0.7 m intervals, starting 5 m above the bottom. A 9-day, 1 Hz sampled record from the 912 m depth flat-bottom (<0.5% bottom-slope) mooring site in the
Risk-Based Sampling: I Don't Want to Weight in Vain.
Powell, Mark R
2015-12-01
Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
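The estimation-error phenomenon the abstract refers to is easy to reproduce: with a short estimation window, minimum-variance weights computed from the estimated covariance can be unstable and need not beat equal allocation out of sample. The toy numbers below are invented for illustration and are not the paper's simulation model.

```python
import numpy as np

rng = np.random.default_rng(3)

k = 5            # "producers" / assets
T_est = 24       # short estimation window -> noisy inputs
T_out = 10000    # long out-of-sample window

# True (unknown) risk model: equal means, modest correlations.
true_mean = np.full(k, 0.05)
A = rng.normal(size=(k, k))
true_cov = 0.02 * (A @ A.T / k + np.eye(k))

est = rng.multivariate_normal(true_mean, true_cov, size=T_est)
cov_hat = np.cov(est, rowvar=False)

# Minimum-variance weights from the *estimated* covariance.
w_opt = np.linalg.solve(cov_hat, np.ones(k))
w_opt /= w_opt.sum()
w_eq = np.full(k, 1.0 / k)   # simple heuristic: equal allocation

out = rng.multivariate_normal(true_mean, true_cov, size=T_out)
var_opt = (out @ w_opt).var()
var_eq = (out @ w_eq).var()
print(var_opt, var_eq)
```

Re-running this with different seeds shows how wildly `w_opt` swings for a 24-observation window, which is exactly the instability that motivates simple heuristics in risk-based sampling allocation.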
Women's experience with home-based self-sampling for human papillomavirus testing.
Sultana, Farhana; Mullins, Robyn; English, Dallas R; Simpson, Julie A; Drennan, Kelly T; Heley, Stella; Wrede, C David; Brotherton, Julia M L; Saville, Marion; Gertig, Dorota M
2015-11-04
Increasing cervical screening coverage by reaching inadequately screened groups is essential for improving the effectiveness of cervical screening programs. Offering HPV self-sampling to women who are never- or under-screened can improve screening participation; however, participation varies widely between settings. Information on women's experience with self-sampling and preferences for future self-sampling screening is essential for programs to optimize participation. The survey was conducted as part of a larger trial (“iPap”) investigating the effect of HPV self-sampling on participation of never- and under-screened women in Victoria, Australia. Questionnaires were mailed to a) most women who participated in the self-sampling to document their experience with and preference for self-sampling in future, and b) a sample of the women who did not participate, asking reasons for non-participation and suggestions for enabling participation. Reasons for not having a previous Pap test were also explored. About half the women who collected a self-sample for the iPap trial returned the subsequent questionnaire (746/1521). Common reasons for not having cervical screening were that having a Pap test performed by a doctor was embarrassing (18 %), not having the time (14 %), or that a Pap test was painful and uncomfortable (11 %). Most (94 %) found the home-based self-sampling less embarrassing, less uncomfortable (90 %) and more convenient (98 %) compared with their last Pap test experience (if they had one); however, many were unsure about the test accuracy (57 %). Women who self-sampled thought the instructions were clear (98 %), it was easy to use the swab (95 %), and were generally confident that they did the test correctly (81 %). Most preferred to take the self-sample at home in the future (88 %) because it was simple and did not require a doctor's appointment. Few women (126/1946, 7 %) who did not return a self-sample in the iPap trial returned the questionnaire. Their main
Ceylan Koydemir, Hatice; Gorocs, Zoltan; McLeod, Euan; Tseng, Derek; Ozcan, Aydogan
2015-03-01
Giardia lamblia is a waterborne parasite that causes an intestinal infection, known as giardiasis, and it is found not only in countries with inadequate sanitation and unsafe water but also in streams and lakes of developed countries. Simple, sensitive, and rapid detection of this pathogen is important for monitoring of drinking water. Here we present a cost-effective and field-portable mobile-phone-based fluorescence microscopy platform designed for automated detection of Giardia lamblia cysts in large-volume water samples (i.e., 10 ml) to be used in low-resource field settings. This fluorescence microscope is integrated with a disposable water-sampling cassette, which is based on a flow-through porous polycarbonate membrane and provides a wide surface area for fluorescence imaging and enumeration of the captured Giardia cysts on the membrane. The water sample of interest, containing fluorescently labeled Giardia cysts, is introduced into the absorbent pads that are in contact with the membrane in the cassette by capillary action, which eliminates the need for electrically driven flow for sample processing. Our fluorescence microscope weighs ~170 grams in total and has all the components of a regular microscope, capable of detecting individual fluorescently labeled cysts under light-emitting-diode (LED) based excitation. Including all the sample preparation, labeling and imaging steps, the entire measurement takes less than one hour for a sample volume of 10 ml. This mobile-phone-based compact and cost-effective fluorescent imaging platform, together with its machine learning based cyst counting interface, is easy to use and can even work in resource-limited and field settings for spatio-temporal monitoring of water quality.
Gamut Volume Index: a colour preference metric based on meta-analysis and optimized colour samples.
Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier
2017-07-10
A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.
Identification of forensic samples by using an infrared-based automatic DNA sequencer.
Ricci, Ugo; Sani, Ilaria; Klintschar, Michael; Cerri, Nicoletta; De Ferrari, Francesco; Giovannucci Uzielli, Maria Luisa
2003-06-01
We have recently introduced a new protocol for analyzing all core loci of the Federal Bureau of Investigation's (FBI) Combined DNA Index System (CODIS) with an infrared (IR) automatic DNA sequencer (LI-COR 4200). The amplicons were labeled with forward oligonucleotide primers, covalently linked to a new infrared fluorescent molecule (IRDye 800). The alleles were displayed as familiar autoradiogram-like images with real-time detection. This protocol was employed for paternity testing, population studies, and identification of degraded forensic samples. We extensively analyzed some simulated forensic samples and mixed stains (blood, semen, saliva, bones, and fixed archival embedded tissues), comparing the results with donor samples. Sensitivity studies were also performed for the four multiplex systems. Our results show the efficiency, reliability, and accuracy of the IR system for the analysis of forensic samples. We also compared the efficiency of the multiplex protocol with ultraviolet (UV) technology. Paternity tests, undegraded DNA samples, and real forensic samples were analyzed with this approach based on IR technology and with UV-based automatic sequencers in combination with commercially-available kits. The comparability of the results with the widespread UV methods suggests that it is possible to exchange data between laboratories using the same core group of markers but different primer sets and detection methods.
Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search
Guez, Arthur; Dayan, Peter
2012-01-01
Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty. In this setting, a Bayes-optimal policy captures the ideal trade-off between exploration and exploitation. Unfortunately, finding Bayes-optimal policies is notoriously taxing due to the enormous search space in the augmented belief-state MDP. In this paper we exploit recent advances in sample-based planning, based on Monte-Carlo tree search, to introduce a tractable method for approximate Bayes-optimal planning. Unlike prior work in this area, we avoid expensive applications of Bayes rule within the search tree, by lazily sampling models from the current beliefs. Our approach outperformed prior Bayesian model-based RL algorithms by a significant margin on several well-known benchmark problems.
Design-based stereology: Planning, volumetry and sampling are crucial steps for a successful study.
Tschanz, Stefan; Schneider, Jan Philipp; Knudsen, Lars
2014-01-01
Quantitative data obtained by means of design-based stereology can add valuable information to studies performed on a diversity of organs, in particular when correlated to functional/physiological and biochemical data. Design-based stereology is based on a sound statistical background and can be used to generate accurate data which are in line with principles of good laboratory practice. In addition, by adjusting the study design an appropriate precision can be achieved to find relevant differences between groups. For the success of the stereological assessment detailed planning is necessary. In this review we focus on common pitfalls encountered during stereological assessment. An exemplary workflow is included, and based on authentic examples, we illustrate a number of sampling principles which can be implemented to obtain properly sampled tissue blocks for various purposes. Copyright © 2013 Elsevier GmbH. All rights reserved.
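One sampling principle central to design-based stereology is systematic uniform random sampling (SURS) combined with the Cavalieri volume estimator. The sketch below is a minimal illustration; the section counts, thickness, and areas are made-up values, not from any study.

```python
import random

def surs_sections(n_sections, period):
    """Systematic uniform random sampling: keep every `period`-th section,
    with a random start so every section has equal sampling probability."""
    start = random.randrange(period)
    return list(range(start, n_sections, period))

def cavalieri_volume(section_areas, section_thickness, period):
    # Cavalieri estimator: total volume from the areas of the
    # systematically sampled sections.
    return sum(section_areas) * section_thickness * period

random.seed(0)
sampled = surs_sections(n_sections=120, period=10)   # 12 evenly spaced sections
volume = cavalieri_volume([2.0] * 12, section_thickness=0.005, period=10)
print(sampled, volume)
```

The random start is what makes the estimator unbiased; choosing a fixed start (e.g. always section 0) would be systematic but not uniform random, one of the pitfalls this kind of review warns against.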
Networked Estimation for Event-Based Sampling Systems with Packet Dropouts
Young Soo Suh
2009-04-01
This paper is concerned with a networked estimation problem in which sensor data are transmitted over the network. In the event-based sampling scheme known as level-crossing or send-on-delta (SOD), sensor data are transmitted to the estimator node if the difference between the current sensor value and the last transmitted one is greater than a given threshold. Event-based sampling has been shown to be more efficient than time-triggered sampling in some situations, especially in network bandwidth improvement. However, it cannot detect packet dropout situations because data transmission and reception do not use a periodical time-stamp mechanism as found in time-triggered sampling systems. Motivated by this issue, we propose a modified event-based sampling scheme called modified SOD, in which sensor data are sent when either the change of the sensor output exceeds a given threshold or more time than a given interval has elapsed. Through simulation results, we show that the proposed modified SOD sampling significantly improves estimation performance when packet dropouts happen.
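The modified SOD rule described above can be sketched in a few lines: a sample is transmitted when the value has changed by more than a threshold since the last transmission, or when a maximum inter-transmission interval has elapsed. The threshold, interval, and signal below are arbitrary illustrative values.

```python
def sod_transmissions(samples, delta, max_gap):
    """Modified send-on-delta: transmit when the value changes by more
    than `delta` since the last transmission, OR when `max_gap` samples
    have passed without one (so the estimator can treat a long silence
    as a packet dropout rather than 'no change')."""
    sent = []
    last_value, last_index = None, None
    for i, x in enumerate(samples):
        trigger = (
            last_value is None                     # always send the first sample
            or abs(x - last_value) > delta         # send-on-delta condition
            or i - last_index >= max_gap           # heartbeat condition
        )
        if trigger:
            sent.append((i, x))
            last_value, last_index = x, i
    return sent

signal = [0.0, 0.1, 0.15, 1.2, 1.25, 1.21, 1.24, 1.22, 1.23, 2.5]
print(sod_transmissions(signal, delta=0.5, max_gap=4))
# -> [(0, 0.0), (3, 1.2), (7, 1.22), (9, 2.5)]
```

The heartbeat at index 7 is the modification: under plain SOD the estimator could not tell a flat signal from a string of dropped packets.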
Sample processing for DNA chip array-based analysis of enterohemorrhagic Escherichia coli (EHEC)
Enfors Sven-Olof
2008-10-01
Background: Exploitation of DNA-based analyses of microbial pathogens, and especially simultaneous typing of several virulence-related genes in bacteria, is becoming an important public-health objective. Results: A procedure for sample processing for a confirmative analysis of enterohemorrhagic Escherichia coli (EHEC) on a single colony with a DNA chip array was developed and is reported here. The protocol includes application of fragmented genomic DNA from ultrasonicated colonies. The sample processing comprises first 2.5 min of ultrasonic treatment, DNA extraction (2×), and afterwards an additional 5 min of ultrasonication. Thus, the total sample preparation time for a confirmative analysis of EHEC is nearly 10 min. Additionally, bioinformatic revisions were performed in order to design PCR primers and array probes specific to the most conserved regions of the EHEC-associated genes. Six strains with distinct pathogenic properties were selected for this study. Finally, the EHEC chip array for parallel and simultaneous detection of the genes etpC-stx1-stx2-eae was designed and examined. This should permit detection of all currently known variants of the selected sequences in EHEC types and subtypes. Conclusion: In order to implement the DNA chip array-based analysis for direct EHEC detection, the sample processing was established in the course of this work. However, this sample preparation mode may also be applied to other types of EHEC DNA-based sensing systems.
Target and suspect screening of psychoactive substances in sewage-based samples by UHPLC-QTOF
Baz-Lomba, J.A., E-mail: jba@niva.no [Norwegian Institute for Water Research, Gaustadalléen 21, NO-0349, Oslo (Norway); Faculty of Medicine, University of Oslo, PO box 1078 Blindern, 0316, Oslo (Norway); Reid, Malcolm J.; Thomas, Kevin V. [Norwegian Institute for Water Research, Gaustadalléen 21, NO-0349, Oslo (Norway)
2016-03-31
The quantification of illicit drug and pharmaceutical residues in sewage has been shown to be a valuable tool that complements existing approaches in monitoring the patterns and trends of drug use. The present work delineates the development of a novel analytical tool and dynamic workflow for the analysis of a wide range of substances in sewage-based samples. The validated method can simultaneously quantify 51 target psychoactive substances and pharmaceuticals in sewage-based samples using an off-line automated solid phase extraction (SPE-DEX) method, using Oasis HLB disks, followed by ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF) in MS^e mode. Quantification and matrix effect corrections were achieved with the use of 25 isotopically labeled internal standards (ILIS). Recoveries were generally greater than 60% and the limits of quantification were in the low nanogram-per-liter range (0.4–187 ng L^−1). The emergence of new psychoactive substances (NPS) on the drug scene poses a specific analytical challenge, since their market is highly dynamic, with new compounds continuously entering the market. Suspect screening using high-resolution mass spectrometry (HRMS) simultaneously allowed the unequivocal identification of NPS based on a mass accuracy criterion of 5 ppm (for the molecular ion and at least two fragments) and retention time (2.5% tolerance) using the UNIFI screening platform. Applying MS^e data against a suspect screening database of over 1000 drugs and metabolites, this method becomes a broad and reliable tool to detect and confirm NPS occurrence. This was demonstrated through the HRMS analysis of three different sewage-based sample types (influent wastewater, passive sampler extracts and pooled urine samples), resulting in the concurrent quantification of known psychoactive substances and the identification of NPS and pharmaceuticals.
Analysis of photonic band-gap structures in stratified medium
Tong, Ming-Sze; Yinchao, Chen; Lu, Yilong
2005-01-01
Purpose - To demonstrate the flexibility and advantages of a non-uniform pseudo-spectral time domain (nu-PSTD) method through studies of the wave propagation characteristics on photonic band-gap (PBG) structures in stratified medium. Design/methodology/approach - A nu-PSTD method is proposed...
Plane Stratified Flow in a Room Ventilated by Displacement Ventilation
Nielsen, Peter Vilhelm; Nickel, J.; Baron, D. J. G.
2004-01-01
The air movement in the occupied zone of a room ventilated by displacement ventilation exists as a stratified flow along the floor. This flow can be radial or plane according to the number of wall-mounted diffusers and the room geometry. The paper addresses the situations where plane flow...
Bacterial production, protozoan grazing and mineralization in stratified lake Vechten.
Bloem, J.
1989-01-01
The role of heterotrophic nanoflagellates (HNAN, size 2-20 μm) in grazing on bacteria and mineralization of organic matter in stratified Lake Vechten was studied.Quantitative effects of manipulation and fixation on HNAN were checked. Considerable losses were caused by centrifugation, even at low spe
Population dynamics of sinking phytoplankton in stratified waters
Huisman, J.; Sommeijer, B.P.
2002-01-01
We analyze the predictions of a reaction-advection-diffusion model to pinpoint the necessary conditions for bloom development of sinking phytoplankton species in stratified waters. This reveals that there are two parameter windows that can sustain sinking phytoplankton, a turbulence window and a therm
Gravity-induced stresses in stratified rock masses
Amadei, B.; Swolfs, H.S.; Savage, W.Z.
1988-01-01
This paper presents closed-form solutions for the stress field induced by gravity in anisotropic and stratified rock masses. These rocks are assumed to be laterally restrained. The rock mass consists of finite mechanical units, each unit being modeled as a homogeneous, transversely isotropic or isotropic linearly elastic material. The following results are found. The nature of the gravity induced stress field in a stratified rock mass depends on the elastic properties of each rock unit and how these properties vary with depth. It is thermodynamically admissible for the induced horizontal stress component in a given stratified rock mass to exceed the vertical stress component in certain units and to be smaller in other units; this is not possible for the classical unstratified isotropic solution. Examples are presented to explore the nature of the gravity induced stress field in stratified rock masses. It is found that a decrease in rock mass anisotropy and a stiffening of rock masses with depth can generate stress distributions comparable to empirical hyperbolic distributions previously proposed in the literature. © 1988 Springer-Verlag.
Dispersion of (light) inertial particles in stratified turbulence
van Aartrijk, M.; Clercx, H.J.H.; Armenio, Vincenzo; Geurts, Bernardus J.; Fröhlich, Jochen
2010-01-01
We present a brief overview of a numerical study of the dispersion of particles in stably stratified turbulence. Three types of particles are examined: fluid particles, light inertial particles ($\rho_p/\rho_f = \mathcal{O}(1)$) and heavy inertial particles ($\rho_p/\rho_f \gg 1$). Stratification
The dynamics of small inertial particles in weakly stratified turbulence
van Aartrijk, M.; Clercx, H.J.H.
We present an overview of a numerical study on the small-scale dynamics and the large-scale dispersion of small inertial particles in stably stratified turbulence. Three types of particles are examined: fluid particles, light inertial particles (with particle-to-fluid density ratio 1 ≤ ρ_p/ρ_f ≤ 25) and
Characterization of Inlet Diffuser Performance for Stratified Thermal Storage
Cimbala, John M.; Bahnfleth, William; Song, Jing
1999-11-01
Storage of sensible heating or cooling capacity in stratified vessels has important applications in central heating and cooling plants, power production, and solar energy utilization, among others. In stratified thermal storage systems, diffusers at the top and bottom of a stratified tank introduce and withdraw fluid while maintaining a stable density gradient and causing as little mixing as possible. In chilled water storage applications, mixing during the formation of the thermocline near an inlet diffuser is the single greatest source of thermal losses. Most stratified chilled water storage tanks are cylindrical vessels with diffusers that are either circular disks that distribute flow radially outward or octagonal rings of perforated pipe that distribute flow both inward and outward radially. Both types produce gravity currents that are strongly influenced by the inlet Richardson number, but the significance of other parameters is not clear. The present investigation considers the dependence of the thermal performance of a perforated pipe diffuser on design parameters including inlet velocity, ambient and inlet fluid temperatures, and tank dimensions for a range of conditions representative of typical chilled water applications. Dimensional analysis is combined with a parametric study using results from computational fluid dynamics to obtain quantitative relationships between design parameters and expected thermal performance.
Diamessis, P.; Gurka, R.; Liberzon, A.
2008-11-01
Proper orthogonal decomposition (POD) is applied to 2-D slices of vorticity and horizontal divergence obtained from the 3-D DNS of the stratified turbulent wake of a towed sphere at Re = 5×10^3 and Fr = 4. Slices are sampled along the stream-depth (Oxz) and stream-span (Oxy) planes at 231 times during the interval Nt ∈ [12, 35]. POD was chosen amongst the available statistical tools due to its advantage in the characterization of simulated and experimentally measured velocity gradient fields, as previously demonstrated for turbulent boundary layers. In the Oxz planes, at the wake centerline, the most energetic modes reveal a structure similar to that of late-time stratified wakes. Offset from the centerline, the signature of internal waves in the form of forward-inclined coherent beams extending into the ambient becomes evident. The angle of inclination becomes progressively more vertical with increasing POD mode. Lower POD modes on the Oyz planes show a layered structure in the wake core with coherent beams radiating out into the ambient over a broad range of angles. Further insight is provided through the relative energy spectra distribution of the vorticity eigenmodes. POD analysis has provided a statistical description of the geometrical features previously observed in instantaneous flow fields of stratified turbulent wakes.
Repeating cytological preparations on liquid-based cytology samples: A methodological advantage?
Pinto, Alvaro P; Maia, Henrique Felde; di Loretto, Celso; Krunn, Patrícia; Túlio, Siumara; Collaço, Luis Martins
2007-10-01
This study investigates whether repeating cytological preparations on liquid-based cytology samples improves sample adequacy and diagnostic, microbiological, and hormonal evaluations. We reviewed 156 cases of Pap-stained preparations of exfoliated cervical cells in two slides processed by the DNA-Cytoliq System. After sample repeat/dilution, limiting factors affecting sample adequacy were removed in nine cases, and three unsatisfactory cases were reclassified as satisfactory. The diagnosis was altered in 24 cases. Of these, the original diagnosis in 15 was atypical squamous cells of undetermined significance; after examination of the second slide, the diagnosis in 5 of the 15 cases changed to low-grade squamous intraepithelial lesion, in 3 to high-grade squamous intraepithelial lesion, and in 7 to absence of lesion. The microbiological evaluation was also altered, with Candida sp. detected in two repeated slides. Repeat slide preparation or dilution of residual samples enhances cytological diagnosis and decreases the effects of limiting factors in manually processed DIGENE DCS LBC.
Automatic training sample selection for a multi-evidence based crop classification approach
Chellasamy, Menaka; Ferre, Ty; Greve, Mogens Humlekrog
An approach to use the available agricultural parcel information to automatically select training samples for crop classification is investigated. Previous research addressed the multi-evidence crop classification approach using an ensemble classifier. This first produced confidence measures using three Multi-Layer Perceptron (MLP) neural networks trained separately with spectral, texture and vegetation indices; classification labels were then assigned based on Endorsement Theory. The present study proposes an approach to feed this ensemble classifier with automatically selected training samples. Thus this approach uses the spectral, texture and indices domains in an ensemble framework to iteratively remove the mislabeled pixels from the crop clusters declared by the farmers. Once the clusters are refined, the selected border samples are used for final learning and the unknown samples...
Calibrating passive sampling and passive dosing techniques to lipid based concentrations
Mayer, Philipp; Schmidt, Stine Nørgaard; Annika, A.;
2011-01-01
towards equilibrium partitioning concentrations in lipids (Clipid,partitioning). The first approach proceeds in two steps: (i) the concentration in the PDMS (CPDMS) is determined and (ii) multiplied with recently determined lipid-to-PDMS partition ratios (Klipid,PDMS). The second approach applies external partitioning standards in vegetable or fish oil for the complete calibration of equilibrium sampling techniques without additional steps. Equilibrium in-tissue sampling in three different fish yielded lipid-based PCB concentrations in good agreement with those determined using total extraction and lipid normalization. These results support the validity of the in-tissue sampling technique, while at the same time confirming that the fugacity capacity of these lipid-rich fish tissues for PCBs was dominated by the lipid fraction. Equilibrium sampling of PCB-contaminated lake sediments with PDMS...
LU Huijuan; Qin XU; YAO Mingming; GAO Shouting
2011-01-01
By sampling perturbed state vectors from each ensemble prediction run at properly selected time levels in the vicinity of the analysis time, the recently proposed time-expanded sampling approach can enlarge the ensemble size without increasing the number of prediction runs and, hence, can reduce the computational cost of an ensemble-based filter. In this study, this approach is tested for the first time with real radar data from a tornadic thunderstorm. In particular, four assimilation experiments were performed to test the time-expanded sampling method against the conventional ensemble sampling method used by ensemble-based filters. In these experiments, the ensemble square-root filter (EnSRF) was used with 45 ensemble members generated by time-expanded sampling and conventional sampling from 15 and 45 prediction runs, respectively, and quality-controlled radar data were compressed into super-observations with properly reduced spatial resolutions to improve the EnSRF performance. The results show that the time-expanded sampling approach not only can reduce the computational cost but also can improve the accuracy of the analysis, especially when the ensemble size is severely limited due to computational constraints for real radar data assimilation. These potential merits are consistent with those previously demonstrated by assimilation experiments with simulated data.
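The core of time-expanded sampling is simply to harvest extra state vectors from each prediction run at time levels around the analysis time, enlarging the ensemble without extra runs. The sketch below uses random-walk "trajectories" as stand-ins for model forecasts; the sizes mirror the 15-run/45-member setup, but everything else is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

n_runs, n_state = 15, 40
t_analysis = 50
offsets = [-2, 0, 2]       # sample each run slightly before/at/after analysis time

# Pretend forecast trajectories from 15 prediction runs (run x time x state).
trajectories = rng.normal(size=(n_runs, 101, n_state)).cumsum(axis=1)

# Conventional ensemble: one state vector per run at the analysis time.
conventional = trajectories[:, t_analysis, :]                  # (15, 40)

# Time-expanded ensemble: states at t-2, t, t+2 from the same runs,
# tripling the ensemble size without extra prediction runs.
expanded = np.concatenate(
    [trajectories[:, t_analysis + dt, :] for dt in offsets], axis=0
)                                                              # (45, 40)

print(conventional.shape, expanded.shape)
```

The 45-member `expanded` ensemble then feeds the filter's covariance estimates in place of 45 independent runs, which is where the computational saving comes from.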
Nitrogen Detection in Bulk Samples Using a D-D Reaction-Based Portable Neutron Generator
A. A. Naqvi
2013-01-01
Nitrogen concentration was measured via the 2.52 MeV nitrogen gamma ray from melamine, caffeine, urea, and disperse orange bulk samples using a newly designed D-D portable neutron generator-based prompt gamma ray setup. In spite of the low flux of thermal neutrons produced by the D-D reaction-based portable neutron generator and the interference of 2.52 MeV gamma rays from nitrogen in bulk samples with the 2.50 MeV gamma ray from bismuth in the BGO detector material, an excellent agreement between the experimental and calculated yields of nitrogen gamma rays indicates satisfactory performance of the setup for detection of nitrogen in bulk samples.
A Study of Assimilation Bias in Name-Based Sampling of Migrants
Schnell Rainer
2014-06-01
The use of personal names for screening is an increasingly popular sampling technique for migrant populations. Although this is often an effective sampling procedure, very little is known about the properties of this method. Based on a large German survey, this article compares characteristics of respondents whose names have been correctly classified as belonging to a migrant population with respondents who are migrants and whose names have not been classified as belonging to a migrant population. Although significant differences were found for some variables, even with some large effect sizes, the overall bias introduced by name-based sampling (NBS) is small as long as procedures with small false-negative rates are employed.
Multicenter validation of PCR-based method for detection of Salmonella in chicken and pig samples
Malorny, B.; Cook, N.; D'Agostino, M.;
2004-01-01
As part of a standardization project, an interlaboratory trial including 15 laboratories from 13 European countries was conducted to evaluate the performance of a nonproprietary polymerase chain reaction (PCR)-based method for the detection of Salmonella on artificially contaminated chicken rinse...... and pig swab samples. The 3 levels were 1-10, 10-100, and 100-1000 colony-forming units (CFU)/100 mL. Sample preparations, including inoculation and pre-enrichment in buffered peptone water (BPW), were performed centrally in a German laboratory; the pre-PCR sample preparation (by a resin-based method...... specificity was 80.1% (with 85.7% accordance and 67.5% concordance) for chicken rinse, and 91.7% (with 100% accordance and 83.3% concordance) for pig swab. Thus, the interlaboratory variation due to personnel, reagents, thermal cyclers, etc., did not affect the performance of the method, which...
Marcela Vela-Amieva
2014-07-01
collected on a special filter paper (Guthrie's card). Despite its apparent simplicity, NBS laboratories commonly receive a large number of samples collected incorrectly and technically unsuitable for performing biochemical determinations. The aim of the present paper is to offer recommendations, based on scientific evidence, for proper blood collection on filter paper for NBS programs.
Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA
Taylor, Laura; Doehler, Kirsten
2015-01-01
This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
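The randomization approach such an activity builds on can be sketched as a permutation version of the one-way ANOVA F-test: compute the observed F, then shuffle the group labels many times to approximate its sampling distribution under the null. This is a generic sketch, not the authors' classroom materials; all names and data are illustrative:

```python
import numpy as np

def f_statistic(groups):
    """One-way ANOVA F statistic for a list of 1-D samples."""
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    k, n = len(groups), len(all_data)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def randomization_f_test(groups, n_perm=2000, seed=0):
    """Approximate the null sampling distribution of F by shuffling labels."""
    rng = np.random.default_rng(seed)
    sizes = [len(g) for g in groups]
    pooled = np.concatenate(groups)
    observed = f_statistic(groups)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign observations to groups
        perm_groups, start = [], 0
        for s in sizes:
            perm_groups.append(pooled[start:start + s])
            start += s
        if f_statistic(perm_groups) >= observed:
            count += 1
    # add-one correction for the permutation p-value
    return observed, (count + 1) / (n_perm + 1)

groups = [np.array([5.1, 4.8, 5.3, 5.0]),
          np.array([5.9, 6.1, 6.4, 5.8]),
          np.array([5.2, 5.0, 5.5, 5.1])]
f_obs, p_val = randomization_f_test(groups)
```

Comparing the histogram of permuted F values against the theoretical F distribution is what reinforces the sampling-distribution concept for students.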
Shafiei, M.; Gharari, S.; Pande, S.; Bhulai, S.
2014-01-01
Posterior sampling methods are increasingly being used to describe parameter and model predictive uncertainty in hydrologic modelling. This paper proposes an alternative to random walk chains (such as DREAM-zs). We propose a sampler based on independence chains with an embedded feature of standardiz
Mezger, Anja; Fock, Jeppe; Antunes, Paula Soares Martins
2015-01-01
We demonstrate a nanoparticle-based assay for the detection of bacteria causing urinary tract infections in patient samples with a total assay time of 4 h. This time is significantly shorter than the current gold standard, plate culture, which can take several days depending on the pathogen...
Studies were conducted in Honduras to determine sampling range for female-targeted food-based synthetic attractants for pest tephritid fruit flies. Field studies were conducted in shaded coffee and adults of the Mediterranean fruit fly, Ceratitis capitata (Wiedemann), were captured. Traps (38 traps ...
Tracking control and synchronization of chaotic systems based upon sampled-data feedback
Chen, Shihua; Liu, Jie; Xie, Jin; Lu, Junan
2002-01-01
A novel tracking control and synchronization method is proposed based upon sampled-data feedback. This method can make a chaotic system approach any desired smooth orbit and synchronize the driving system and the response system, both in the same structure and in diverse structures. Finally, a numerical simulation with a Lorenz system is provided for the purpose of illustration and verification.
Sampling conditions for gradient-magnitude sparsity based image reconstruction algorithms
Sidky, Emil Y.; Jørgensen, Jakob Heide; Pan, Xiaochuan
2012-01-01
in the compressive sensing (CS) community for this type of sparsity. The preliminary finding here, based on simulations using images of realistic sparsity levels, is that necessary sampling can go as low as N/4 views for an NxN pixel array. This work sets the stage for fixed-exposure studies where the number...
Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms
Fidel, Adam
2014-05-01
Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the subproblems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.
Use of sampling based correction for non-radioactivity X-ray energy calibration
CHENG Cheng; WEI Yong-Bo; JIANG Da-Zhen
2005-01-01
As the requirement for non-radioactivity measurement has increased in recent years, various energy calibration methods applied in portable X-ray fluorescence (XRF) spectrometers have been developed. In this paper, a sampling-based correction energy calibration is discussed. In this method, both the history information and the current state of the instrument are considered, and relatively high precision and reliability can be obtained.
Imaging soft samples in liquid with tuning fork based shear force microscopy
Rensen, W.H.J.; van Hulst, N.F.; Kammer, S.B.
2000-01-01
We present a study of the dynamic behavior of tuning forks and the application of tuning fork based shear force microscopy on soft samples in liquid. A shift in resonance frequency and a recovery of the tip vibration amplitude have been observed upon immersion into liquid. Conservation of the
Sociocultural Experiences of Bulimic and Non-Bulimic Adolescents in a School-Based Chinese Sample
Jackson, Todd; Chen, Hong
2010-01-01
From a large school-based sample (N = 3,084), 49 Mainland Chinese adolescents (31 girls, 18 boys) who endorsed all DSM-IV criteria for bulimia nervosa (BN) or sub-threshold BN and 49 matched controls (31 girls, 18 boys) completed measures of demographics and sociocultural experiences related to body image. Compared to less symptomatic peers, those…
Chang, Jen Jen; Theodore, Adrea D.; Martin, Sandra L.; Runyan, Desmond K.
2008-01-01
Objective: This study examined the association between partner psychological abuse and child maltreatment perpetration. Methods: This cross-sectional study examined a population-based sample of mothers with children aged 0-17 years in North and South Carolina (n = 1,149). Mothers were asked about the occurrence of potentially neglectful or abusive…
Techasrivichien, Teeranee; Darawuttimaprakorn, Niphon; Punpuing, Sureeporn; Musumari, Patou Masika; Lukhele, Bhekumusa Wellington; El-Saaidi, Christina; Suguimoto, S Pilar; Feldman, Mitchell D; Ono-Kihara, Masako; Kihara, Masahiro
2016-02-01
Thailand has undergone rapid modernization with implications for changes in sexual norms. We investigated sexual behavior and attitudes across generations and gender among a probability sample of the general population of Nonthaburi province located near Bangkok in 2012. A tablet-based survey was performed among 2,138 men and women aged 15-59 years identified through a three-stage, stratified, probability proportional to size, clustered sampling. Descriptive statistical analysis was carried out accounting for the effects of multistage sampling. Relationship of age and gender to sexual behavior and attitudes was analyzed by bivariate analysis followed by multivariate logistic regression analysis to adjust for possible confounding. Patterns of sexual behavior and attitudes varied substantially across generations and gender. We found strong evidence for a decline in the age of sexual initiation, a shift in the type of the first sexual partner, and a greater rate of acceptance of adolescent premarital sex among younger generations. The study highlighted profound changes among young women as evidenced by a higher number of lifetime sexual partners as compared to older women. In contrast to the significant gender gap in older generations, sexual profiles of Thai young women have evolved to resemble those of young men with attitudes gradually converging to similar sexual standards. Our data suggest that higher education, being never-married, and an urban lifestyle may have been associated with these changes. Our study found that Thai sexual norms are changing dramatically. It is vital to continue monitoring such changes, considering the potential impact on the HIV/STIs epidemic and unintended pregnancies.
Reconstruction of stratified steady water waves from pressure readings on the ocean bed
Chen, Robin Ming
2015-01-01
Consider a two-dimensional stratified solitary wave propagating through a body of water that is bounded below by an impermeable ocean bed. In this work, we study how such a wave can be reconstructed from data consisting of the wave speed, upstream and downstream density profile, and the trace of the pressure on the bed. First, we prove that this data uniquely determines the wave, both in the (real) analytic and Sobolev regimes. Second, for waves that consist of multiple layers of constant density immiscible fluids, we provide an exact formula describing each of the interfaces in terms of the data. Finally, for continuously stratified fluids, we detail a reconstruction scheme based on approximation by layer-wise constant density flows.
Numerical study of thermally stratified flows of a fluid overlying a highly porous material
Antoniadis, Panagiotis D.; Papalexandris, Miltiadis V.
2014-11-01
In this talk we are concerned with thermally stratified flows in domains that contain a macroscopic interface between a highly porous material and a pure-fluid domain. Our study is based on the single-domain approach according to which the same set of governing equations is employed both inside the porous medium and in the pure-fluid domain. Also, the mathematical model that we employ treats the porous skeleton as a rigid solid that is in thermal non-equilibrium with the fluid. First, we present briefly the basic steps of the derivation of the mathematical model. Then, we present and discuss numerical results for both thermally stratified shear flows and natural convection. Our discussion focuses on the role of thermal stratification on the flows of interest and on the effect of thermal non-equilibrium between the solid matrix and the fluid inside the porous medium. This work is supported by the National Fund for Scientific Research (FNRS), Belgium.
Paper membrane-based SERS platform for the determination of glucose in blood samples.
Torul, Hilal; Çiftçi, Hakan; Çetin, Demet; Suludere, Zekiye; Boyacı, Ismail Hakkı; Tamer, Uğur
2015-11-01
In this report, we present a paper membrane-based surface-enhanced Raman scattering (SERS) platform for the determination of blood glucose level using a nitrocellulose membrane as the substrate paper; the microfluidic channel was simply constructed by a wax-printing method. Gold nanorods were modified with 4-mercaptophenylboronic acid (4-MBA) and 1-decanethiol (1-DT) molecules and used as the embedded SERS probe for paper-based microfluidics. The SERS measurement area was simply constructed by dropping gold nanoparticles on the nitrocellulose membrane, and the blood sample was dropped on the membrane's hydrophilic channel. While the blood cells and proteins were held on the nitrocellulose membrane, glucose molecules moved through the channel toward the SERS measurement area. Scanning electron microscopy (SEM) was used to confirm the effective separation of the blood matrix, and the total analysis is completed in 5 min. In SERS measurements, the intensity of the band at 1070 cm(-1), which is attributed to the B-OH vibration, decreased as the glucose concentration in the blood sample rose. The glucose concentration was found to be 5.43 ± 0.51 mM in the reference blood sample by using a calibration equation, and the certified value for glucose was 6.17 ± 0.11 mM. The recovery of the glucose in the reference blood sample was about 88 %. According to these results, the developed paper-based microfluidic SERS platform has been found suitable for the detection of glucose in blood samples without any pretreatment procedure. We believe that paper-based microfluidic systems may provide a wide field of usage for paper-based applications.
Perez-Brumer, Amaya; Day, Jack K; Russell, Stephen T; Hatzenbuehler, Mark L
2017-09-01
No representative population-based studies of youth in the United States exist on gender identity-related disparities in suicidal ideation or on factors that underlie this disparity. To address this, this study examined gender identity-related disparities in the prevalence of suicidal ideation; evaluated whether established psychosocial factors explained these disparities; and identified correlates of suicidal ideation among all youth and stratified by gender identity. Data were derived from the 2013 to 2015 California Healthy Kids Survey (CHKS; N = 621,189) and a weighted subsample representative of the Californian student population (Biennial Statewide California Student Survey [CSS], N = 28,856). Prevalence of past 12-month self-reported suicidal ideation was nearly twice as high for transgender compared with non-transgender youth (33.73% versus 18.85%; χ(2) = 35.48, p youth had 2.99 higher odds (95% CI 2.25-3.98) of reporting past-year suicidal ideation compared with non-transgender youth. Among transgender youth, only depressive symptoms (adjusted odds ratio 5.44, 95% CI 1.81-16.38) and victimization (adjusted odds ratio 2.66, 95% CI 1.26-5.65) remained significantly associated with higher odds of suicidal ideation in fully adjusted models. In multiple mediation analyses, depression attenuated the association between gender identity and suicidal ideation by 17.95% and victimization by 14.71%. This study uses the first representative population-based sample of youth in the United States that includes a measurement of gender identity to report on gender identity-related disparities in suicidal ideation and to identify potential mechanisms underlying this disparity in a representative sample. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
Application of In-Segment Multiple Sampling in Object-Based Classification
Nataša Đurić
2014-12-01
When object-based analysis is applied to very high-resolution imagery, pixels within the segments reveal large spectral inhomogeneity; their distribution can be considered complex rather than normal. When normality is violated, the classification methods that rely on the assumption of normally distributed data are not as successful or accurate. It is hard to detect normality violations in small samples. The segmentation process produces segments that vary highly in size; samples can be very big or very small. This paper investigates whether the complexity within the segment can be addressed using multiple random sampling of segment pixels and multiple calculations of similarity measures. In order to analyze the effect sampling has on classification results, the statistics and probability value equations of the non-parametric two-sample Kolmogorov-Smirnov test and the parametric Student's t-test are selected as similarity measures in the classification process. The performance of both classifiers was assessed on a WorldView-2 image for four land cover classes (roads, buildings, grass and trees) and compared to two commonly used object-based classifiers: k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). Both proposed classifiers showed a slight improvement in the overall classification accuracies and produced more accurate classification maps when compared to the ground truth image.
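A minimal sketch of the in-segment multiple-sampling idea, assuming toy 1-D spectral values and using the two-sample Kolmogorov-Smirnov test as the similarity measure; the function names, draw counts, and class references are illustrative, not the paper's actual configuration:

```python
import numpy as np
from scipy.stats import ks_2samp

def classify_segment(segment_pixels, class_samples, n_draws=20,
                     draw_size=30, seed=1):
    """Assign a segment to the class with the highest mean KS-test p-value
    over multiple random subsamples of the segment's pixels.

    class_samples : dict mapping class name -> 1-D reference pixel values
    """
    rng = np.random.default_rng(seed)
    scores = {}
    for name, ref in class_samples.items():
        p_values = []
        for _ in range(n_draws):
            # repeated random sampling addresses in-segment complexity
            subset = rng.choice(segment_pixels,
                                size=min(draw_size, len(segment_pixels)),
                                replace=False)
            p_values.append(ks_2samp(subset, ref).pvalue)
        scores[name] = float(np.mean(p_values))
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(0)
class_samples = {"grass": rng.normal(0.4, 0.05, 200),
                 "road": rng.normal(0.1, 0.02, 200)}
segment = rng.normal(0.4, 0.05, 150)   # spectrally resembles grass
label, scores = classify_segment(segment, class_samples)
```

Averaging the similarity measure over many random draws is what makes the decision robust to non-normal pixel distributions within a large segment.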
Wu, Huai-Hui; Wang, Chih-Yuan; Teng, Hwa-Jen; Lin, Cheo; Lu, Liang-Chen; Jian, Shu-Wan; Chang, Niann-Tai; Wen, Tzai-Hung; Wu, Jhy-Wen; Liu, Ding-Ping; Lin, Li-Jen; Norris, Douglas E; Wu, Ho-Sheng
2013-03-01
Aedes aegypti L. is the primary dengue vector in southern Taiwan. This article is the first report on a large-scale surveillance program to study the spatial-temporal distribution of the local Ae. aegypti population using ovitraps stratified according to the human population in high dengue-risk areas. The sampling program was conducted for 1 yr and was based on weekly collections of eggs and adults in Kaohsiung City. In total, 10,380 ovitraps were placed in 5,190 households. Paired ovitraps, one indoors and one outdoors, were used per 400 people. Three treatments in these ovitraps (paddle-shaped wooden sticks, sticky plastic, or both) were assigned by stratified random sampling to two areas (i.e., metropolitan or rural, respectively). We found that the sticky plastic alone had a higher sensitivity for detecting the occurrence of indigenous dengue cases than the other treatments, with time lags of up to 14 wk. The wooden paddle alone detected the oviposition of Ae. aegypti throughout the year in this study area. Furthermore, significantly more Ae. aegypti females were collected indoors than outdoors. Therefore, our survey identified the year-round oviposition activity and spatial-temporal distribution of the local Ae. aegypti population and a 14 wk lag correlation with dengue incidence to plan effective proactive control.
Contamination of apple orchard soils and fruit trees with copper-based fungicides: sampling aspects.
Wang, Quanying; Liu, Jingshuang; Liu, Qiang
2015-01-01
Accumulations of copper in orchard soils and fruit trees due to the application of Cu-based fungicides have become research hotspots. However, information about the sampling strategies, which can affect the accuracy of the following research results, is lacking. This study aimed to determine some sampling considerations when Cu accumulations in the soils and fruit trees of apple orchards are studied. The study was conducted in three apple orchards from different sites. Each orchard included two different histories of Cu-based fungicides usage, varying from 3 to 28 years. Soil samples were collected from different locations varying with the distances from tree trunk to the canopy drip line. Fruits and leaves from the middle heights of tree canopy at two locations (outer canopy and inner canopy) were collected. The variation in total soil Cu concentrations between orchards was much greater than the variation within orchards. Total soil Cu concentrations had a tendency to increase with the increasing history of Cu-based fungicides usage. Moreover, total soil Cu concentrations had the lowest values at the canopy drip line, while the highest values were found at the half distances between the trunk and the canopy drip line. Additionally, Cu concentrations of leaves and fruits from the outer parts of the canopy were significantly higher than from the inner parts. Depending on the findings of this study, not only the between-orchard variation but also the within-orchard variation should be taken into consideration when conducting future soil and tree samplings in apple orchards.
Xue Li
2016-11-01
A simple method for microfluidic paper-based sample concentration using ion concentration polarization (ICP) with smartphone detection is developed. The concise and low-cost microfluidic paper-based ICP analytical device, which consists of a black backing layer, a nitrocellulose membrane, and two absorbent pads, is fabricated with the simple lamination method which is widely used for lateral flow strips. Sample concentration on the nitrocellulose membrane is monitored in real time by a smartphone whose camera is used to collect the fluorescence images from the ICP device. A custom image processing algorithm running on the smartphone is used to track the concentrated sample and obtain its fluorescence signal intensity for quantitative analysis. Two different methods for Nafion coating are evaluated and their performances are compared. The characteristics of the ICP analytical device, especially with intentionally adjusted physical properties, are fully evaluated to optimize its performance as well as to extend its potential applications. Experimental results show that significant concentration enhancement of a fluorescent dye sample is obtained with the developed ICP device when a fast depletion of the fluorescent dye is observed. The platform based on the simply laminated ICP device with smartphone detection is well suited to point-of-care testing in settings with poor resources.
Chen, Shaohui
2017-09-01
It is extremely important for ensemble-based actual evapotranspiration assimilation (AETA) to accurately sample the uncertainties. Traditionally, the perturbing ensemble is sampled from one prescribed multivariate normal distribution (MND). However, the MND under-represents the non-MND uncertainties caused by the nonlinear integration of land surface models, while these hypernormal uncertainties can be better characterized by the generalized Gaussian distribution (GGD), which takes the MND as a special case. In this paper, one novel GGD-based uncertainty sampling approach is outlined to create a hypernormal ensemble for the purpose of better improving land surface models with observations. With this sampling method, various assimilation methods can be tested in a common equation form. Experimental results on the Noah LSM show that the outlined method is more powerful than the MND in reducing the misfit between model forecasts and observations in terms of actual evapotranspiration, skin temperature, and soil moisture/temperature in the 1st layer, and also indicate that the energy and water balances constrain ensemble-based assimilation to simultaneously optimize all state and diagnostic variables. Overall evaluation shows that the outlined approach is a better alternative than the traditional MND method for capturing assimilation uncertainties, and it can serve as a useful tool for optimizing hydrological models with data assimilation.
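A standard way to draw from a generalized Gaussian distribution is via a gamma transform. The sketch below shows the univariate case with density proportional to exp(-(|x - mu|/alpha)**beta) and checks that beta = 2 recovers the normal distribution; it is a generic GGD sampler, not the paper's specific ensemble-generation scheme:

```python
import numpy as np

def sample_ggd(mu, alpha, beta, size, seed=0):
    """Draw from a generalized Gaussian distribution with density
    proportional to exp(-(|x - mu| / alpha)**beta).

    beta = 2 recovers the normal case; beta < 2 gives heavier
    (hypernormal) tails, beta > 2 lighter ones.
    """
    rng = np.random.default_rng(seed)
    # If G ~ Gamma(1/beta, 1), then alpha * G**(1/beta) with a random
    # sign has exactly the GGD density above.
    g = rng.gamma(shape=1.0 / beta, scale=1.0, size=size)
    signs = rng.choice([-1.0, 1.0], size=size)
    return mu + signs * alpha * g ** (1.0 / beta)

# Sanity check: beta = 2 with alpha = sqrt(2)*sigma reproduces N(0, sigma^2)
sigma = 1.0
x = sample_ggd(mu=0.0, alpha=np.sqrt(2) * sigma, beta=2.0, size=200_000)
```

Perturbing each ensemble member with such draws (with beta fitted to the model-error statistics) is one way to realize the hypernormal ensemble the abstract describes.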
Rudomin, Emily L; Carr, Steven A; Jaffe, Jacob D
2009-06-01
The ability to perform thorough sampling is of critical importance when using mass spectrometry to characterize complex proteomic mixtures. A common approach is to reinterrogate a sample multiple times by LC-MS/MS. However, the conventional data-dependent acquisition methods that are typically used in proteomics studies will often redundantly sample high-intensity precursor ions while failing to sample low-intensity precursors entirely. We describe a method wherein the masses of successfully identified peptides are used to generate an accurate mass exclusion list such that those precursors are not selected for sequencing during subsequent analyses. We performed multiple concatenated analytical runs to sample a complex cell lysate, using either accurate mass exclusion-based data-dependent acquisition (AMEx) or standard data-dependent acquisition, and found that utilization of AMEx on an ESI-Orbitrap instrument significantly increases the total number of validated peptide identifications relative to a standard DDA approach. The additional identified peptides represent precursor ions that exhibit low signal intensity in the sample. Increasing the total number of peptide identifications augmented the number of proteins identified, as well as improved the sequence coverage of those proteins. Together, these data indicate that using AMEx is an effective strategy to improve the characterization of complex proteomic mixtures.
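The exclusion-list logic behind AMEx can be sketched with a toy precursor selector: peptides identified in one run are excluded (within a ppm tolerance) from selection in the next, exposing low-intensity precursors. The ppm tolerance, data layout, and values below are assumptions for illustration, not the paper's actual parameters:

```python
def within_tolerance(mz, excluded, ppm=10.0):
    """True if m/z falls within `ppm` of any excluded precursor mass."""
    return any(abs(mz - e) / e * 1e6 <= ppm for e in excluded)

def select_precursors(precursors, excluded, top_n=5, ppm=10.0):
    """Data-dependent selection: pick the top-N most intense precursors
    that are not on the accurate-mass exclusion list.

    precursors : list of (mz, intensity) pairs (toy representation)
    """
    candidates = [p for p in precursors
                  if not within_tolerance(p[0], excluded, ppm)]
    candidates.sort(key=lambda p: p[1], reverse=True)
    return candidates[:top_n]

# Run 1: nothing excluded, so the intense peaks dominate selection.
run1 = [(500.25, 1e6), (500.251, 9e5), (623.30, 4e4), (711.40, 2e4)]
picked1 = select_precursors(run1, excluded=set(), top_n=2)

# Run 2: masses identified in run 1 go on the exclusion list,
# exposing the previously unsampled low-intensity precursors.
excluded = {mz for mz, _ in picked1}
picked2 = select_precursors(run1, excluded, top_n=2)
```

Iterating this across concatenated analytical runs is what progressively deepens the sampling of a complex mixture.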
Determination of optimal samples for robot calibration based on error similarity
Tian Wei
2015-06-01
Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on the error-similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.
A margin based approach to determining sample sizes via tolerance bounds.
Newcomer, Justin T.; Freeland, Katherine Elizabeth
2013-09-01
This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
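The sample-size criterion described above can be sketched with the one-sided normal tolerance-bound construction (noncentral-t factor): search for the smallest n whose tolerance uncertainty k·s stays below the estimated margin. The coverage/confidence levels and the margin criterion below are illustrative assumptions, not the paper's specific choices:

```python
import numpy as np
from scipy.stats import norm, nct

def tolerance_factor(n, coverage=0.95, confidence=0.90):
    """One-sided normal tolerance-bound factor k: x_bar + k*s bounds the
    `coverage` quantile with `confidence` (noncentral-t construction)."""
    delta = norm.ppf(coverage) * np.sqrt(n)
    return nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

def minimum_sample_size(margin, sigma_est, coverage=0.95,
                        confidence=0.90, n_max=200):
    """Smallest n whose tolerance-bound uncertainty k*s falls below the
    margin, so the bounded estimate stays inside requirements."""
    for n in range(3, n_max + 1):
        if tolerance_factor(n, coverage, confidence) * sigma_est < margin:
            return n
    return None  # margin too small to resolve within n_max units

# Example: estimated margin of 3 sigma-units, prior sigma estimate of 1
n_req = minimum_sample_size(margin=3.0, sigma_est=1.0)
```

As the abstract notes, when the margin is small the required k·s may never drop below it for feasible n, which is exactly the limited-sample-size risk the paper quantifies.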
A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model
Michael Pfaffermayr
2014-10-01
The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test of whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate, and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.
Mu Naushad
2008-12-01
An inorganic cation exchanger, aluminum tungstate (AT), has been synthesized by adding 0.1 M sodium tungstate gradually into 0.1 M aluminium nitrate at pH 1.2 with continuous stirring. The ion exchange capacity for the Na+ ion and the distribution coefficients of various metal ions were determined on a column of aluminium tungstate. The distribution studies of various metal ions showed that this cation exchange material is selective for Fe(III) ions. Accordingly, an Fe(III) ion-selective membrane electrode was prepared using this cation exchange material as the electroactive material. The effect of plasticizers, viz. dibutyl phthalate (DBP), dioctyl phthalate (DOP), di-(butyl) butyl phosphate (DBBP) and tris-(2-ethylhexyl) phosphate (TEHP), on the performance of the membrane sensor has also been studied. It was observed that the membrane containing the composition AT:PVC:DBP in the ratio 2:20:15 displayed a useful analytical response with excellent reproducibility, a low detection limit, a wide working pH range (1-3.5), a quick response time (15 s) and applicability over a wide concentration range of Fe(III) ions from 1 × 10^-7 M to 1 × 10^-1 M with a slope of 20 ± 1 mV per decade. The selectivity coefficients were determined by the mixed solution method and revealed that the electrode was selective for Fe(III) ions in the presence of interfering ions. The electrode was used for at least 5 months without any considerable divergence in response characteristics. The constructed sensor was used as an indicator electrode in the potentiometric titration of Fe(III) ions against EDTA and for Fe(III) determination in rock, pharmaceutical and water samples. The results are in good agreement with those obtained using conventional methods.
Development and Testing of Harpoon-Based Approaches for Collecting Comet Samples
Purves, Lloyd (Compiler); Nuth, Joseph (Compiler); Amatucci, Edward (Compiler); Wegel, Donald; Smith, Walter; Church, Joseph; Leary, James; Kee, Lake; Hill, Stuart; Grebenstein, Markus;
2017-01-01
low risk and an affordable cost. A harpoon-based approach for gathering comet samples appears to offer the most effective way of accomplishing this goal. As described below, with a decade of development, analysis, testing and refinement, the harpoon approach has evolved from a promising concept to a practical element of a realistic comet sample return mission. Note that the following material includes references to videos, all of which are contained in different sections of the video supplement identified in the references. Each video will be identified as "SS##", where "SS" means the supplement section and "##" will be the number of the section.
Diversity selection of compounds based on 'protein affinity fingerprints' improves sampling of bioactive chemical space.
Nguyen, Ha P; Koutsoukas, Alexios; Mohd Fauzi, Fazlin; Drakakis, Georgios; Maciejewski, Mateusz; Glen, Robert C; Bender, Andreas
2013-09-01
Diversity selection is a frequently applied strategy for assembling high-throughput screening libraries, making the assumption that a diverse compound set increases chances of finding bioactive molecules. Based on previous work on experimental 'affinity fingerprints', in this study, a novel diversity selection method is benchmarked that utilizes predicted bioactivity profiles as descriptors. Compounds were selected based on their predicted activity against half of the targets (training set), and diversity was assessed based on coverage of the remaining (test set) targets. Simultaneously, fingerprint-based diversity selection was performed. An original version of the method exhibited on average 5% and an improved version on average 10% increase in target space coverage compared with the fingerprint-based methods. As a typical case, bioactivity-based selection of 231 compounds (2%) from a particular data set ('Cutoff-40') resulted in 47.0% and 50.1% coverage, while fingerprint-based selection only achieved 38.4% target coverage for the same subset size. In conclusion, the novel bioactivity-based selection method outperformed the fingerprint-based method in sampling bioactive chemical space on the data sets considered. The structures retrieved were structurally more acceptable to medicinal chemists while at the same time being more lipophilic, hence bioactivity-based diversity selection of compounds would best be combined with physicochemical property filters in practice.
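The fingerprint-based baseline that the bioactivity-based method is compared against is commonly implemented as a greedy max-min pick over pairwise similarities; a minimal sketch on toy binary fingerprints (the data and the Tanimoto/max-min choices are illustrative assumptions, not the paper's exact protocol):

```python
def tanimoto(a, b):
    # Tanimoto similarity between two binary fingerprints (sets of "on" bits).
    inter = len(a & b)
    union = len(a | b)
    return inter / union if union else 1.0

def maxmin_select(fps, k):
    # Greedy max-min diversity selection: start from the first compound, then
    # repeatedly add the compound whose nearest selected neighbour is least
    # similar (i.e. maximise the minimum distance to the selected set).
    selected = [0]
    while len(selected) < k:
        best, best_score = None, -1.0
        for i in range(len(fps)):
            if i in selected:
                continue
            nearest = max(tanimoto(fps[i], fps[j]) for j in selected)
            if 1.0 - nearest > best_score:
                best, best_score = i, 1.0 - nearest
        selected.append(best)
    return selected

# Toy fingerprints: sets of "on" bit positions (illustrative only).
fps = [{0, 1, 2}, {0, 1, 3}, {7, 8, 9}, {4, 5}, {0, 2, 3}]
print(maxmin_select(fps, 3))  # → [0, 2, 3]
```

The bioactivity-based variant benchmarked in the paper would replace the fingerprint sets with predicted activity profiles while keeping the same selection loop.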
All-optical sampling and magnification based on XPM-induced focusing
Nuno, J; Guasoni, M; Finot, C; Fatome, J
2016-01-01
We theoretically and experimentally investigate the design of an all-optical noiseless magnification and sampling function free from any active gain medium and associated high-power continuous wave pump source. The proposed technique is based on the co-propagation of an arbitrary shaped signal together with an orthogonally polarized intense fast sinusoidal beating within a normally dispersive optical fiber. Basically, the strong nonlinear phase shift induced by the sinusoidal pump beam on the orthogonal weak signal through cross-phase modulation turns the defocusing regime into localized temporal focusing effects. This periodic focusing is then responsible for the generation of a high-repetition-rate temporal comb upon the incident signal whose amplitude is directly proportional to its initial shape. This internal redistribution of energy leads to a simultaneous sampling and magnification of the signal intensity profile. This process allows us to experimentally demonstrate a 40-GHz sampling operation as well ...
All-optical sampling based on quantum-dot semiconductor optical amplifier
Wu, Chen; Wang, Yongjun; Wang, Lina
2016-11-01
In recent years, all-optical signal processing has become a hot research field in optical communication. This paper focuses on basic research on the quantum-dot (QD) semiconductor optical amplifier (SOA) and studies its practical application to all-optical sampling. A multi-level dynamic physical model of the QD-SOA is established, and its ultrafast dynamic characteristics are studied through theoretical and simulation research. Building on this, an all-optical sampling scheme based on the nonlinear polarization rotation (NPR) effect of the QD-SOA is proposed. The paper analyzes the characteristics of the optical switch window and investigates the influence of different control light pulses on switch performance. The presented optical sampling method plays an important role in advancing all-optical signal processing technology.
Zhou, Liang
2013-02-01
Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user-selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices, and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality-reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine these features using segmentation brushes. Interactivity is achieved in our system and different views are tightly linked. Use cases show that our system has been successfully applied to simulation and complicated seismic data sets. © 2013 IEEE.
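The KDE step can be illustrated in a few lines: the density of the user-queried sample values is estimated, and a data value is kept as part of the feature when its density exceeds a threshold. A 1-D pure-Python sketch with an assumed fixed bandwidth and illustrative values (the paper works in higher dimensions with robust bandwidth selection):

```python
import math

def gaussian_kde(samples, bandwidth):
    # Return a function estimating density at x from the queried sample values.
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

# User-probed sample values for a feature of interest (illustrative).
queried = [0.42, 0.45, 0.47, 0.44, 0.46]
density = gaussian_kde(queried, bandwidth=0.02)

# A voxel value is classified into the feature when its density is high enough.
threshold = 0.5 * density(0.45)
print(density(0.45) > threshold, density(0.90) > threshold)  # True False
```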
Whissell, Cynthia
2003-06-01
56 samples (n > half a million phonemes) of names (e.g., men's, women's, jets'), song lyrics (e.g., Paul Simon's, rap, the Beatles'), poems (frequently anthologized English poems), and children's materials (books directed at children ages 3-10 years) were used to study a proposed new measure of English language samples, Pronounceability, based on children's mastery of some phonemes in advance of others. This measure was provisionally equated with greater "youthfulness" and "playfulness" in language samples and with less "maturity." Findings include that women's names were less pronounceable than men's and that poetry was less pronounceable than song lyrics or children's materials. In a supplementary study, 13 university student volunteers' assessments of the youth of randomly constructed names were linearly related to how pronounceable each name was (eta = .8), providing construct validity for the interpretation of Pronounceability as a measure of Youthfulness.
Information-based sample size re-estimation in group sequential design for longitudinal trials.
Zhou, Jing; Adewale, Adeniyi; Shentu, Yue; Liu, Jiajun; Anderson, Keaven
2014-09-28
Group sequential design has become more popular in clinical trials because it allows for trials to stop early for futility or efficacy to save time and resources. However, this approach is less well-known for longitudinal analysis. We have observed repeated cases of studies with longitudinal data where there is an interest in early stopping for a lack of treatment effect or in adapting sample size to correct for inappropriate variance assumptions. We propose an information-based group sequential design as a method to deal with both of these issues. Updating the sample size at each interim analysis makes it possible to maintain the target power while controlling the type I error rate. We will illustrate our strategy with examples and simulations and compare the results with those obtained using fixed design and group sequential design without sample size re-estimation.
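The core update described here is simple: the maximum information needed for the target power is fixed in advance, and the per-group sample size is re-computed from the interim variance estimate. A two-sample normal sketch (the effect size, variances, and design parameters below are illustrative, not from the paper):

```python
import math
from statistics import NormalDist

def max_information(delta, alpha=0.05, power=0.9):
    # Information needed to detect effect delta with two-sided level alpha
    # and the given power: I_max = ((z_{1-alpha/2} + z_{power}) / delta)^2.
    z = NormalDist().inv_cdf
    return ((z(1 - alpha / 2) + z(power)) / delta) ** 2

def n_per_group(delta, sigma, alpha=0.05, power=0.9):
    # For a two-sample comparison with common variance sigma^2, the
    # information from n subjects per group is I = n / (2 sigma^2),
    # so the required n per group is 2 sigma^2 * I_max, rounded up.
    return math.ceil(2 * sigma ** 2 * max_information(delta, alpha, power))

# Planned with sigma = 2; an interim analysis suggests sigma is really 2.5,
# so the sample size is re-estimated upward to maintain the target power.
print(n_per_group(delta=1.0, sigma=2.0))   # planned per-group size
print(n_per_group(delta=1.0, sigma=2.5))   # re-estimated per-group size
```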
ORPOM model for optimum distribution of tree ring sampling based on the climate observation network
[Anonymous]
2010-01-01
Tree ring dating plays an important role in obtaining past climate information. Fundamental work on obtaining tree ring samples in typical climate regions is therefore particularly essential. The optimum distribution of tree ring sampling sites based on climate information from the Climate Observation Network (the ORPOM model) is presented in this article. In this setup, the tree rings in a typical region are used for surface representation, with strong correlation with the climate information as the main selection principle. Taking the Horqin Sandy Land in the cold and arid region of China as an example, the optimum distribution range of the tree ring sampling sites was obtained through the application of the ORPOM model, which is considered a reasonably practical scheme.
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Run-times are measured on CPU and GPU platforms, and dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (≤ 5 min on GPU). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization were proven to be possible at constant time complexity.
The Single Training Sample Extraction of Visual Evoked Potentials Based on Wavelet Transform
LIU Fang; ZHANG Zhen; CHEN Wen-chao; QIN Bing
2007-01-01
Based on the good localization characteristics of the wavelet transform in both the time and frequency domains, a wavelet-based de-noising method is presented that can extract visual evoked potentials (EPs) from the EEG background noise in a single training sample, in favor of studying the changes occurring between single-sample responses. This information is probably related to different functions, appearances and pathologies of the brain. At the same time, the method can also remove those signal artifacts that do not appear with the EP within the same time or frequency scope; a traditional Fourier filter can hardly attain a similar result. The method differs from other wavelet de-noising methods in the criteria employed for choosing wavelet coefficients. Its chief virtue is that it notes the differences among single training samples and exploits the high time-frequency resolution to reduce the effect of interference factors to a maximum extent within the time scope in which the EP appears. Experimental results prove that the method is not restricted by the signal-to-noise ratio of the evoked potential and the electroencephalogram (EEG), can recognize instantaneous events even at low signal-to-noise ratio, and more easily recognizes the samples that evoke an evident response. Therefore, a more evident average evoked response can be achieved by de-noising the signals obtained by averaging the samples that evoke evident responses than by de-noising the average of the original signals. In addition, the averaging methodology can dramatically reduce the number of recorded samples needed, thus avoiding the effect of behavior change during the recording process. This methodology pays attention to the differences among single training samples and accomplishes the extraction of visual evoked potentials from a single training sample. As a result, system speed and
Krizsan, S J; Ahvenjärvi, S; Volden, H; Broderick, G A
2010-03-01
A study was conducted to compare nutrient flows determined by a reticular sampling technique with those made by sampling digesta from the omasal canal. Six lactating dairy cows fitted with ruminal cannulas were used in a design with a 3 x 2 factorial arrangement of treatments and 4 periods. Treatments were 3 grass silages differing mainly in neutral detergent fiber (NDF) concentrations: 412, 530, or 639 g/kg of dry matter, each combined with 1 of 2 levels of concentrate feed. Digesta was collected from the reticulum and the omasal canal to represent a 24-h feeding cycle. Nutrient flow was calculated using the reconstitution system based on 3 markers (Co, Yb, and indigestible NDF) and using (15)N as a microbial marker. Large and small particles and the fluid phase were recovered from digesta collected at both sampling sites. Bacterial samples from the reticulum and the omasum were separated into liquid- and particle-associated bacteria. Reticular samples were sieved through a 1-mm sieve before isolation of digesta phases and bacteria. Composition of the large particle phase differed mainly in fiber content of the digesta obtained from the 2 sampling sites. Sampling site did not affect marker concentration in any of the phases with which the markers were primarily associated. The (15)N enrichment of bacterial samples did not differ between sampling sites. The reticular and omasal canal sampling techniques gave similar estimates of marker concentrations in reconstituted digesta, estimates of ruminal flow, and ruminal digestibility values for dry matter, organic matter, starch, and N. Sampling site x diet interactions were also not significant. Concentration of NDF was 2.2% higher in reconstituted omasal digesta than in reconstituted reticular digesta. Ruminal NDF digestibility was 2.7% higher when estimated by sampling the reticulum than by sampling the omasal canal. The higher estimate of ruminal NDF digestibility with the reticular sampling technique was due to
Gonzalez-Andrades, Miguel; Alonso-Pastor, Luis; Mauris, Jérôme; Cruzat, Andrea; Dohlman, Claes H; Argüeso, Pablo
2016-01-13
The repair of wounds through collective movement of epithelial cells is a fundamental process in multicellular organisms. In stratified epithelia such as the cornea and skin, healing occurs in three phases: latent, migratory, and reconstruction. Several simple and inexpensive assays have been developed to study the biology of cell migration in vitro. However, these assays are mostly based on monolayer systems that fail to reproduce the differentiation processes associated with multilayered systems. Here, we describe a straightforward in vitro wound assay to evaluate the healing and restoration of barrier function in stratified human corneal epithelial cells. In this assay, circular punch injuries lead to the collective migration of the epithelium as coherent sheets. The closure of the wound was associated with the restoration of the transcellular barrier and the re-establishment of apical intercellular junctions. Altogether, this new model of wound healing provides an important research tool to study the mechanisms leading to barrier function in stratified epithelia and may facilitate the development of future therapeutic applications.
An affordable and accurate conductivity probe for density measurements in stratified flows
Carminati, Marco; Luzzatto-Fegiz, Paolo
2015-11-01
In stratified flow experiments, conductivity (combined with temperature) is often used to measure density. The probes typically used can provide very fine spatial scales, but can be fragile, expensive to replace, and sensitive to environmental noise. A complementary instrument, comprising a low-cost conductivity probe, would prove valuable in a wide range of applications where resolving extremely small spatial scales is not needed. We propose using micro-USB cables as the actual conductivity sensors. By removing the metallic shield from a micro-B connector, 5 gold-plated microelectrodes are exposed and available for 4-wire measurements. These have a cell constant of ~550 m^-1, an intrinsic thermal noise of at most 30 pA/√Hz, and sub-millisecond time response, making them highly suitable for many stratified flow measurements. In addition, we present the design of a custom electronic board (Arduino-based and Matlab-controlled) for simultaneous acquisition from 4 sensors, with resolution (in conductivity, and resulting density) exceeding the performance of typical existing probes. We illustrate the use of our conductivity-measuring system through stratified flow experiments, and describe plans to release simple instructions to construct our complete system for around 200.
Toske, Steven G; Morello, David R; Berger, Jennifer M; Vazquez, Etienne R
2014-01-01
Differentiating methamphetamine samples produced from ephedrine and pseudoephedrine from those produced from phenyl-2-propanone (P2P) precursors is critical for assigning synthetic route information in methamphetamine profiling. Isotope ratio mass spectrometry data are now a key component for tracking precursor information. Recent carbon (δ(13)C) isotope results from the analysis of numerous methamphetamine samples show clear differentiation between ephedrine- and pseudoephedrine-produced samples and P2P-produced samples. The carbon isotope differences were confirmed by synthetic route precursor studies.
Qin, Hui; Qiu, Xiaoyan; Zhao, Jiao; Liu, Mousheng; Yang, Yaling
2013-10-11
Glucocorticoid contamination has become a big environmental issue in China and other developing countries, due to increasing needs in medical prescription and farming. However, no highly sensitive and precise methods have been reported so far to quantify glucocorticoids. In the past several years, supramolecular solvent-based vortex-mixed microextraction (SS-BVMME) has been shown to be effective; however, its mechanism is still unknown. In this report, a novel method is proposed for rapid quantification of trace amounts of the glucocorticoids beclomethasone dipropionate (BD), hydrocortisone butyrate (HB) and nandrolone phenylpropionate (NPP) in water samples from the Green Lake. The method is simple, safe and cost-effective. It comprises two steps: supramolecular solvent-based vortex-mixed microextraction (SS-BVMME) and high performance liquid chromatography (HPLC) analysis. First, the ionic liquid 1-butyl-3-methylimidazolium tetrafluoroborate ([BMIM]BF4) and n-butanol were mixed to form the supramolecular solvent. After mixing the supramolecular solvent with an aqueous test sample, a homogeneous mixture formed immediately. BD, HB and NPP were then extracted based on their binding interactions, particularly the hydrogen bonds formed between their hydroxyl groups and the supramolecular solvent. The overall sample preparation took only 20 min, and more than 5 samples could be prepared simultaneously. The minimum detectable concentrations were 0.09925, 0.5429 and 2.428 ng mL(-1) for BD, HB and NPP, respectively. Recoveries ranged from 88% to 103% with relative standard deviations from 0.6% to 4%. For the first time, we report that hydrogen bonding plays a key role in SS-BVMME. We also significantly improve the sensitivity of glucocorticoid quantification, which may greatly benefit environmental safety management in China.
Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato
2017-07-01
An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
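The practical difference between the two sampling models can be sketched with a simple finite-size bound: under Bernoulli sampling each round is tested independently with probability p, so the observed error count is binomial and a Hoeffding-style upper confidence bound on the overall error rate is easy to compute. The numbers and the particular bound below are illustrative, not the paper's exact estimator:

```python
import math

def bernoulli_upper_bound(k, n, p, eps):
    # Each of n rounds is sampled for testing independently with probability p,
    # and k errors were observed among the sampled rounds.  Hoeffding's
    # inequality gives an upper confidence bound (failure probability eps)
    # on the error rate over all n rounds:
    #   rate <= k/(n p) + sqrt(ln(1/eps) / (2 n)) / p.
    observed_rate = k / (n * p)
    deviation = math.sqrt(math.log(1.0 / eps) / (2.0 * n)) / p
    return min(1.0, observed_rate + deviation)

# 10^6 rounds, 5% sampled, 250 errors seen among the sampled rounds.
bound = bernoulli_upper_bound(k=250, n=10**6, p=0.05, eps=1e-10)
print(round(bound, 4))
```

Under simple random sampling the count is hypergeometric instead, which is what makes the fluctuation analysis more involved.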
Application of Compressive Sampling in Computer Based Monitoring of Power Systems
Sarasij Das
2014-01-01
Shannon's Nyquist theorem has always dictated conventional signal acquisition policies, and power systems are no exception. As per this theorem, the sampling rate must be at least twice the maximum frequency present in the signal. Recently, compressive sampling (CS) theory has shown that signals can be reconstructed from samples obtained at a sub-Nyquist rate. Reconstruction is exact for "sparse" signals and near exact for compressible signals, provided certain conditions are satisfied. CS theory has already been applied in communication, medical imaging, MRI, radar imaging, remote sensing, computational biology, machine learning, geophysical data analysis, and so forth, but it is comparatively new in the area of computer-based power system monitoring. In this paper, the subareas of computer-based power system monitoring where compressive sampling theory has been applied are reviewed. An overview of CS is presented first, and then the relevant literature specific to power systems is discussed.
Shoupeng, Song; Zhou, Jiang
2017-03-01
Converting an ultrasonic signal to an ultrasonic pulse stream is the key step of finite rate of innovation (FRI) sparse sampling. At present, ultrasonic pulse-stream-forming techniques are mainly based on digital algorithms; no hardware circuit that can achieve it has been reported. This paper proposes a new quadrature demodulation (QD) based circuit implementation method for forming an ultrasonic pulse stream. Elaborating on FRI sparse sampling theory, the processing of the ultrasonic signal is explained, followed by a discussion and analysis of ultrasonic pulse-stream-forming methods. In contrast to ultrasonic signal envelope extracting techniques, a quadrature demodulation method (QDM) is proposed. Simulation experiments were performed to determine its performance at various signal-to-noise ratios (SNRs). The circuit was then designed, with a mixing module, oscillator, low pass filter (LPF), and root of square sum module. Finally, application experiments were carried out on ultrasonic flaw testing of a pipeline sample. The experimental results indicate that the QDM can accurately convert an ultrasonic signal to an ultrasonic pulse stream and recover the original signal information, such as pulse width, amplitude, and time of arrival. This technique lays the foundation for ultrasonic signal FRI sparse sampling directly with hardware circuitry.
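The QDM chain (mix with cos/sin at the carrier, low-pass filter, root of square sum) can be sketched numerically; the synthetic echo, carrier frequency, and moving-average filter below are illustrative stand-ins for the paper's hardware modules:

```python
import math

def moving_average(x, w):
    # Simple low-pass filter: w-point moving average.
    return [sum(x[max(0, i - w + 1):i + 1]) / min(w, i + 1) for i in range(len(x))]

# Synthetic ultrasonic echo: Gaussian-envelope burst on a carrier (illustrative).
N, fc = 400, 0.1             # samples; carrier frequency in cycles/sample
echo_centre = 200
signal = [math.exp(-((n - echo_centre) / 30.0) ** 2) *
          math.cos(2 * math.pi * fc * n) for n in range(N)]

# Quadrature demodulation: mix with cos/sin at fc, low-pass, then magnitude.
i_mix = [s * math.cos(2 * math.pi * fc * n) for n, s in enumerate(signal)]
q_mix = [s * math.sin(2 * math.pi * fc * n) for n, s in enumerate(signal)]
w = 10                       # one carrier period, removes the 2*fc component
i_lp, q_lp = moving_average(i_mix, w), moving_average(q_mix, w)
envelope = [2 * math.sqrt(a * a + b * b) for a, b in zip(i_lp, q_lp)]

peak = max(range(N), key=envelope.__getitem__)
print(peak)  # close to echo_centre: the pulse time of arrival is recovered
```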
Compressive Sampling based Image Coding for Resource-deficient Visual Communication.
Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen
2016-04-14
In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements and placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; 2) the result remains a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, and therefore the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from received measurements in a framework of compressive sensing. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength of recovering fine details and sharp edges at low bit-rates.
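The encoder side can be sketched directly: convolve the image with a local random ±1 kernel instead of a low-pass filter, then polyphase down-sample. A toy grayscale array is used below; the kernel size and down-sampling factor are illustrative, not the paper's settings:

```python
import random

def random_measure_downsample(img, ksize=3, factor=2, seed=0):
    # Replace the usual low-pass pre-filter with a local random +/-1 kernel,
    # then keep every `factor`-th pixel (polyphase down-sampling).  Border
    # pixels are handled by clamping indices (replicate padding).
    rng = random.Random(seed)
    kernel = [[rng.choice((-1, 1)) for _ in range(ksize)] for _ in range(ksize)]
    h, w, r = len(img), len(img[0]), ksize // 2
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            acc = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += kernel[dy + r][dx + r] * img[yy][xx]
            row.append(acc)
        out.append(row)
    return out

img = [[(x + y) % 7 for x in range(8)] for y in range(8)]
meas = random_measure_downsample(img)
print(len(meas), len(meas[0]))  # 4 4: measurements keep the spatial layout
```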
Li, Y S; Zhou, Y; Meng, X Y; Zhang, Y Y; Song, F; Lu, S Y; Ren, H L; Hu, P; Liu, Z S; Zhang, J H
2014-11-01
The traditional Kjeldahl method, used for quality evaluation of bovine milk, has the intrinsic defects of time-consuming sample preparation and the need for two analyses to determine the difference between non-protein nitrogen content and total protein nitrogen content. Herein, based upon antibody-functionalized gold nanoparticles (AuNPs), we describe a colorimetric method for β-casein (β-CN) detection in bovine milk samples. The linear dynamic range and the LOD were 0.08-250 μg mL(-1) and 0.03 μg mL(-1), respectively. In addition, the real content of β-CN in bovine milk was measured using the developed assay; the results closely correlate with those from the Kjeldahl method. The advantages of this β-CN-triggered AuNP aggregation-based colorimetric assay are simple signal generation, high sensitivity and specificity, and no need for complicated sample preparation, which make it suitable for on-site detection of β-CN in bovine milk samples.
FC-normal and extended stratified logic program
许道云; 丁德成
2002-01-01
This paper investigates the consistency property of FC-normal logic programs and presents an equivalent deciding condition for whether a logic program P is an FC-normal program. The deciding condition describes the characterizations of FC-normal programs. Via the Petri-net presentation of a logic program, the characterizations of stratification of FC-normal programs are investigated. The stratification of FC-normal programs motivates us to introduce a new kind of stratification, extended stratification, over logic programs. It is shown that an extended (locally) stratified logic program is an FC-normal program; thus, an extended (locally) stratified logic program has at least one stable model. Finally, we present algorithms for computation of the consistency property and a few equivalent deciding methods for finite FC-normal programs.
Turbulent thermal diffusion in strongly stratified turbulence: theory and experiments
Amir, G; Eidelman, A; Elperin, T; Kleeorin, N; Rogachevskii, I
2016-01-01
Turbulent thermal diffusion is a combined effect of temperature-stratified turbulence and the inertia of small particles. It causes the appearance of a non-diffusive turbulent flux of particles in the direction of the turbulent heat flux. This non-diffusive turbulent flux of particles is proportional to the product of the mean particle number density and the effective velocity of inertial particles. The theory of this effect has been previously developed only for small temperature gradients and small Stokes numbers (Phys. Rev. Lett. 76, 224, 1996). In this study a generalized theory of turbulent thermal diffusion for arbitrary temperature gradients and Stokes numbers has been developed. Laboratory experiments in oscillating grid turbulence and in multi-fan produced turbulence have been performed to validate the theory of turbulent thermal diffusion in strongly stratified turbulent flows. It has been shown that the ratio of the effective velocity of inertial particles to the characteristic ve...
Numerical Simulation of Wakes in a Weakly Stratified Fluid
Rottman, James W; Innis, George E; O'Shea, Thomas T; Novikov, Evgeny
2014-01-01
This paper describes some preliminary numerical studies, using large eddy simulation, of full-scale submarine wakes. Submarine wakes are a combination of the wake generated by a smooth slender body and a number of superimposed vortex pairs generated by various control surfaces and other body appendages. For this preliminary study, we attempt to gain some insight into the behavior of full-scale submarine wakes by computing separately the evolution of the self-propelled wake of a slender body and the motion of a single vortex pair, in both non-stratified and stratified environments. An important aspect of the simulations is the use of an iterative procedure to relax the initial turbulence field so that turbulent production and dissipation are in balance.
Helicity dynamics in stratified turbulence in the absence of forcing
Rorai, C; Pouquet, A; Mininni, P D
2012-01-01
A numerical study of decaying stably-stratified flows is performed. Relatively high stratification and moderate Reynolds numbers are considered, and a particular emphasis is placed on the role of helicity (velocity-vorticity correlations). The problem is tackled by integrating the Boussinesq equations in a periodic cubical domain using different initial conditions: a non-helical Taylor-Green (TG) flow, a fully helical Beltrami (ABC) flow, and random flows with a tunable helicity. We show that for stratified ABC flows helicity undergoes a substantially slower decay than for unstratified ABC flows. This fact is likely associated with the combined effect of stratification and large-scale coherent structures. Indeed, when the latter are missing, as in random flows, helicity is rapidly destroyed by the onset of gravitational waves. A type of large-scale dissipative "cyclostrophic" balance can be invoked to explain this behavior. When helicity survives in the system it strongly affects the temporal energy decay and t...
Axisymmetric modes in vertically stratified self-gravitating discs
Mamatsashvili, George
2010-01-01
We perform linear analysis of axisymmetric vertical normal modes in stratified compressible self-gravitating polytropic discs in the shearing box approximation. We study specific dynamics for subadiabatic, adiabatic and superadiabatic vertical stratifications. In the absence of self-gravity, four well-known principal modes can be identified in a stratified disc: acoustic p-, surface gravity f-, buoyancy g- and inertial r-modes. After characterizing modes in the non-self-gravitating case, we include self-gravity and investigate how it modifies the properties of these modes. We find that self-gravity, to a certain degree, reduces their frequencies and changes the structure of the dispersion curves and eigenfunctions at radial wavelengths comparable to the disc height. Its influence on the basic branch of the r-mode, in the case of subadiabatic and adiabatic stratifications, and on the basic branch of the g-mode, in the case of superadiabatic stratification (which in addition exhibits convective instability), do...
Elementary stratified flows with stability at low Richardson number
Barros, Ricardo [Mathematics Applications Consortium for Science and Industry (MACSI), Department of Mathematics and Statistics, University of Limerick, Limerick (Ireland); Choi, Wooyoung [Department of Mathematical Sciences, New Jersey Institute of Technology, Newark, New Jersey 07102-1982 (United States)
2014-12-15
We revisit the stability analysis for three classical configurations of multiple fluid layers proposed by Goldstein [“On the stability of superposed streams of fluids of different densities,” Proc. R. Soc. A. 132, 524 (1931)], Taylor [“Effect of variation in density on the stability of superposed streams of fluid,” Proc. R. Soc. A 132, 499 (1931)], and Holmboe [“On the behaviour of symmetric waves in stratified shear layers,” Geophys. Publ. 24, 67 (1962)] as simple prototypes to understand stability characteristics of stratified shear flows with sharp density transitions. When such flows are confined in a finite domain, it is shown that a large shear across the layers that is often considered a source of instability plays a stabilizing role. Presented are simple analytical criteria for stability of these low Richardson number flows.
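The governing parameter in these configurations is the gradient Richardson number; a minimal sketch of its computation from local density and velocity gradients (the profile values below are illustrative):

```python
g = 9.81          # gravitational acceleration, m/s^2
rho0 = 1000.0     # reference density, kg/m^3

def richardson(drho_dz, du_dz):
    # Gradient Richardson number Ri = N^2 / (dU/dz)^2, with buoyancy
    # frequency N^2 = -(g / rho0) * drho/dz (z measured positive upward).
    n_squared = -(g / rho0) * drho_dz
    return n_squared / du_dz ** 2

# Stable stratification (density decreasing upward) under moderate shear.
ri = richardson(drho_dz=-0.5, du_dz=0.2)
print(round(ri, 3))  # below 1/4, the classical Miles-Howard threshold
```

The point of the paper is precisely that such low-Ri flows, when confined, can nonetheless be stable, with the shear playing a stabilizing role.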
An Investigation of the Sampling-Based Alignment Method and its Contributions
Juan Luo
2013-07-01
By investigating the distribution of phrase pairs in phrase translation tables, this paper describes an approach to increase the number of n-gram alignments in phrase translation tables output by a sampling-based alignment method. The approach consists in enforcing the alignment of n-grams in distinct translation subtables so as to increase the number of n-grams. A standard normal distribution is used to allot alignment time among translation subtables, which adjusts the distribution of n-grams. This leads to better evaluation results on statistical machine translation tasks than the original sampling-based alignment approach. Furthermore, the translation quality obtained by merging phrase translation tables computed from the sampling-based alignment method and from MGIZA++ is examined
A Novel Sampling Method for Satellite-Based Offshore Wind Resource Estimation
Badger, Merete; Badger, Jake; Hasager, Charlotte Bay
Synthetic aperture radar (SAR) measurements from satellites can be used to estimate the spatial wind speed variation offshore in great detail. The radar senses cm-scale roughness at the sea surface which can be translated to wind speed at the height 10 m using an empirical geophysical model......-based wind climatology have improved gradually as more data were collected. The satellite scenes have been treated as random samples and weighted equally in our previous analyses. Here we introduce a novel sampling strategy based on the wind class methodology that is normally applied in numerical modeling...... climatologically representative large-scale meteorological conditions for the region of interest. The wind classes are used to make the most representative selection of satellite images from the ENVISAT image catalogue. A minimum of one satellite image is chosen per wind class. The frequency of occurrence of each...
GARN: Sampling RNA 3D Structure Space with Game Theory and Knowledge-Based Scoring Strategies.
Boudard, Mélanie; Bernauer, Julie; Barth, Dominique; Cohen, Johanne; Denise, Alain
2015-01-01
Cellular processes involve large numbers of RNA molecules. The functions of these RNA molecules and their binding to molecular machines are highly dependent on their 3D structures. One of the key challenges in RNA structure prediction and modeling is predicting the spatial arrangement of the various structural elements of RNA. As RNA folding is generally hierarchical, methods involving coarse-grained models hold great promise for this purpose. We present here a novel coarse-grained method for sampling, based on game theory and knowledge-based potentials. This strategy, GARN (Game Algorithm for RNa sampling), is often much faster than previously described techniques and generates large sets of solutions closely resembling the native structure. GARN is thus a suitable starting point for the molecular modeling of large RNAs, particularly those with experimental constraints. GARN is available from: http://garn.lri.fr/.
Adaptive list sequential sampling method for population-based observational studies
2014-01-01
Background In population-based observational studies, non-participation and delayed response to the invitation to participate are complications that often arise during the recruitment of a sample. When both are not properly dealt with, the composition of the sample can be different from the desired composition. Inviting too many individuals or too few individuals from a particular subgroup could lead to unnecessary costs or decreased precision. Another problem is that there is frequently no or only partial information available about the willingness to participate. In this situation, we cannot adjust the recruitment procedure for non-participation before the recruitment period starts. Methods We have developed an adaptive list sequential sampling method that can deal with unknown participation probabilities and delayed responses to the invitation to participate in the study. In a sequential way, we evaluate whether we should invite a person from the population or not. During this evaluation, we correct for the fact that this person could decline to participate using an estimated participation probability. We use the information from all previously invited persons to estimate the participation probabilities for the non-evaluated individuals. Results The simulations showed that the adaptive list sequential sampling method can be used to estimate the participation probability during the recruitment period, and that it can successfully recruit a sample with a specific composition. Conclusions The adaptive list sequential sampling method can successfully recruit a sample with a specific desired composition when we have partial or no information about the willingness to participate before we start the recruitment period and when individuals may have a delayed response to the invitation. PMID:24965316
Adaptive list sequential sampling method for population-based observational studies.
Hof, Michel H; Ravelli, Anita C J; Zwinderman, Aeilko H
2014-06-25
In population-based observational studies, non-participation and delayed response to the invitation to participate are complications that often arise during the recruitment of a sample. When both are not properly dealt with, the composition of the sample can be different from the desired composition. Inviting too many individuals or too few individuals from a particular subgroup could lead to unnecessary costs or decreased precision. Another problem is that there is frequently no or only partial information available about the willingness to participate. In this situation, we cannot adjust the recruitment procedure for non-participation before the recruitment period starts. We have developed an adaptive list sequential sampling method that can deal with unknown participation probabilities and delayed responses to the invitation to participate in the study. In a sequential way, we evaluate whether we should invite a person from the population or not. During this evaluation, we correct for the fact that this person could decline to participate using an estimated participation probability. We use the information from all previously invited persons to estimate the participation probabilities for the non-evaluated individuals. The simulations showed that the adaptive list sequential sampling method can be used to estimate the participation probability during the recruitment period, and that it can successfully recruit a sample with a specific composition. The adaptive list sequential sampling method can successfully recruit a sample with a specific desired composition when we have partial or no information about the willingness to participate before we start the recruitment period and when individuals may have a delayed response to the invitation.
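The sequential idea can be sketched in a few lines of Python. This is a hypothetical simplification: the acceptance model, neutral prior, and stopping rule below are illustrative assumptions, and the published method additionally handles delayed responses and subgroup compositions.

```python
import random

def adaptive_recruit(population, target, true_p=0.6, seed=42):
    """Invite candidates one at a time until `target` participants are
    recruited, re-estimating the participation probability after every
    response (simplified sketch, not the authors' estimator)."""
    rng = random.Random(seed)
    invited = accepted = 0
    sample = []
    p_hat = 0.5  # neutral prior before any responses are seen
    for person in population:
        if len(sample) >= target:
            break
        invited += 1
        # the true willingness to participate is unknown; simulated here
        if rng.random() < true_p:
            accepted += 1
            sample.append(person)
        p_hat = accepted / invited  # running participation estimate
    return sample, invited, p_hat

sample, invited, p_hat = adaptive_recruit(range(1000), target=50)
```

Because each decision uses `p_hat` estimated from all previously invited persons, the number of invitations adapts to the observed willingness to participate.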
Introducing a rainfall compound distribution model based on weather patterns sub-sampling
F. Garavaglia
2010-06-01
This paper presents a probabilistic model for daily rainfall, using sub-sampling based on meteorological circulation. We classified eight typical but contrasted synoptic situations (weather patterns) for France and surrounding areas, using a "bottom-up" approach, i.e. from the shape of the rain field to the synoptic situations described by geopotential fields. These weather patterns (WP) provide a discriminating variable that is consistent with French climatology, and allow seasonal rainfall records to be split into more homogeneous sub-samples in terms of meteorological genesis.
First results show how the combination of seasonal and WP sub-sampling strongly influences the identification of the asymptotic behaviour of rainfall probabilistic models. Furthermore, with this level of stratification, an asymptotic exponential behaviour of each sub-sample appears as a reasonable hypothesis. This first part is illustrated with two daily rainfall records from SE of France.
The distribution of the multi-exponential weather patterns (MEWP is then defined as the composition, for a given season, of all WP sub-sample marginal distributions, weighted by the relative frequency of occurrence of each WP. This model is finally compared to Exponential and Generalized Pareto distributions, showing good features in terms of robustness and accuracy. These final statistical results are computed from a wide dataset of 478 rainfall chronicles spread on the southern half of France. All these data cover the 1953–2005 period.
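The composition step described above can be written down directly. A minimal sketch, where the weather-pattern frequencies and exponential scales are illustrative placeholders, not fitted values from the paper:

```python
import math

def mewp_cdf(x, wp_params):
    """MEWP cumulative distribution for one season: exponential
    marginals per weather pattern, weighted by each pattern's relative
    frequency of occurrence (parameters are hypothetical)."""
    return sum(freq * (1.0 - math.exp(-x / scale))
               for freq, scale in wp_params)

# eight illustrative weather patterns: (frequency, exponential scale in mm)
wp = [(0.25, 5.0), (0.20, 8.0), (0.15, 12.0), (0.12, 20.0),
      (0.10, 30.0), (0.08, 45.0), (0.06, 60.0), (0.04, 90.0)]
```

Since the frequencies sum to one and each marginal is a proper CDF, the mixture is itself a proper CDF.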
LI Tao; ZHANG JiFeng
2009-01-01
In this paper, sampled-data based average-consensus control is considered for networks consisting of continuous-time first-order integrator agents in a noisy distributed communication environment. The impact of the sampling size and the number of network nodes on the system performance is analyzed. The control input of each agent can only use information measured at the sampling instants from its neighborhood rather than the complete continuous process, and the measurements of its neighbors' states are corrupted by random noises. By probability limit theory and the property of the graph Laplacian matrix, it is shown that for a connected network, the static mean square error between the individual state and the average of the initial states of all agents can be made arbitrarily small, provided the sampling size is sufficiently small. Furthermore, by properly choosing the consensus gains, almost sure consensus can be achieved. It is worth pointing out that an uncertainty principle of Gaussian networks is obtained, which implies that in the case of white Gaussian noises, no matter what the sampling size is, the product of the steady-state and transient performance indices is always equal to or larger than a constant depending on the noise intensity, network topology and the number of network nodes.
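A toy discrete-time simulation of noisy consensus on a four-node ring illustrates the role of the consensus gains. The decreasing gain sequence 1/(k+2) and the noise level are illustrative choices, not the paper's exact conditions:

```python
import random

def noisy_consensus(x0, adj, steps=3000, sigma=0.1, seed=0):
    """Average consensus where each agent only sees noisy sampled
    measurements of its neighbours' states; a vanishing gain
    a_k = 1/(k+2) attenuates the measurement noise over time."""
    rng = random.Random(seed)
    x = list(x0)
    for k in range(steps):
        a = 1.0 / (k + 2)
        new = x[:]
        for i, nbrs in enumerate(adj):
            for j in nbrs:
                y = x[j] + rng.gauss(0.0, sigma)  # corrupted measurement
                new[i] += a * (y - x[i]) / len(nbrs)
        x = new
    return x

ring = [(3, 1), (0, 2), (1, 3), (2, 0)]  # neighbours of nodes 0..3
final = noisy_consensus([0.0, 1.0, 2.0, 3.0], ring)
```

With a summable-square gain sequence, the accumulated noise stays bounded while the states contract toward the average of the initial values.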
Spotlight SAR sparse sampling and imaging method based on compressive sensing
XU HuaPing; YOU YaNan; LI ChunSheng; ZHANG LvQian
2012-01-01
Spotlight synthetic aperture radar (SAR) emits a chirp signal, and the echo bandwidth can be reduced through dechirp processing, where the A/D sampling rate at the receiver decreases accordingly. Compressive sensing allows a compressible signal to be reconstructed with high probability from only a few samples by solving a linear programming problem. This paper presents a novel signal sampling and imaging method for spotlight SAR based on compressive sensing. The signal is randomly sampled after dechirp processing to form a low-dimensional sample set, and the dechirp basis is imported to reconstruct the dechirp signal. Matching pursuit (MP) is used as the reconstruction algorithm. The reconstructed signal is imaged with the polar format algorithm (PFA). Although the mechanism increases system complexity to an extent, the data storage requirements can be compressed considerably. Several simulations verify the feasibility and accuracy of spotlight SAR signal processing via compressive sensing, and the method still obtains acceptable imaging results with 10% of the original echo data.
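Matching pursuit, the reconstruction algorithm named above, is a short greedy loop. A generic sketch on plain lists with a unit-norm dictionary (the paper's dechirp basis is not reproduced here):

```python
def matching_pursuit(signal, dictionary, n_iter=3):
    """Greedily pick the atom most correlated with the residual and
    subtract its projection; `dictionary` atoms must be unit-norm."""
    residual = list(signal)
    coeffs = [0.0] * len(dictionary)
    for _ in range(n_iter):
        # correlation of the current residual with every atom
        corrs = [sum(r * a for r, a in zip(residual, atom))
                 for atom in dictionary]
        k = max(range(len(corrs)), key=lambda i: abs(corrs[i]))
        coeffs[k] += corrs[k]
        residual = [r - corrs[k] * a
                    for r, a in zip(residual, dictionary[k])]
    return coeffs, residual

# a signal exactly sparse in an orthonormal 2-atom dictionary
coeffs, residual = matching_pursuit([2.0, 3.0], [[1.0, 0.0], [0.0, 1.0]])
```

For signals that are exactly sparse in the dictionary, the residual vanishes after as many iterations as there are active atoms.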
Automated liver sampling using a gradient dual-echo Dixon-based technique.
Bashir, Mustafa R; Dale, Brian M; Merkle, Elmar M; Boll, Daniel T
2012-05-01
Magnetic resonance spectroscopy of the liver requires input from a physicist or physician at the time of acquisition to ensure proper voxel selection, while in multiecho chemical shift imaging, numerous regions of interest must be manually selected in order to ensure analysis of a representative portion of the liver parenchyma. A fully automated technique could improve workflow by selecting representative portions of the liver prior to human analysis. Complete volumes from three-dimensional gradient dual-echo acquisitions with two-point Dixon reconstruction acquired at 1.5 and 3 T were analyzed in 100 subjects, using an automated liver sampling algorithm, based on ratio pairs calculated from signal intensity image data as fat-only/water-only and log(in-phase/opposed-phase) on a voxel-by-voxel basis. Using different gridding variations of the algorithm, the average correct liver volume samples ranged from 527 to 733 mL. The average percentage of sample located within the liver ranged from 95.4 to 97.1%, whereas the average incorrect volume selected was 16.5-35.4 mL (2.9-4.6%). Average run time was 19.7-79.0 s. The algorithm consistently selected large samples of the hepatic parenchyma with small amounts of erroneous extrahepatic sampling, and run times were feasible for execution on an MRI system console during exam acquisition. Copyright © 2011 Wiley Periodicals, Inc.
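The voxel-by-voxel ratio-pair test can be sketched as a simple filter. The thresholds and voxel values below are illustrative assumptions, not the published cut-offs:

```python
import math

def liver_like_voxels(voxels, fat_water_max=0.1, log_ratio_max=0.15):
    """Select voxel indices whose two ratio pairs, fat-only/water-only
    and log(in-phase/opposed-phase), fall inside liver-like bounds
    (hypothetical thresholds for illustration)."""
    keep = []
    for idx, v in enumerate(voxels):
        fw = v["fat"] / v["water"]
        log_ip_op = math.log(v["in_phase"] / v["opposed_phase"])
        if fw < fat_water_max and abs(log_ip_op) < log_ratio_max:
            keep.append(idx)
    return keep

voxels = [
    {"fat": 5.0, "water": 100.0, "in_phase": 100.0, "opposed_phase": 95.0},
    {"fat": 80.0, "water": 100.0, "in_phase": 150.0, "opposed_phase": 60.0},
]
```

The first (water-dominant) voxel passes both tests; the second, fat-dominant voxel is rejected by the fat/water ratio.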
A Programmable Look-Up Table-Based Interpolator with Nonuniform Sampling Scheme
Élvio Carlos Dutra e Silva Júnior
2012-01-01
Interpolation is a useful technique for storage of complex functions on limited memory space: some few sampling values are stored on a memory bank, and the function values in between are calculated by interpolation. This paper presents a programmable Look-Up Table-based interpolator, which uses a reconfigurable nonuniform sampling scheme: the sampled points are not uniformly spaced. Their distribution can also be reconfigured to minimize the approximation error on specific portions of the interpolated function’s domain. Switching from one set of configuration parameters to another set, selected on the fly from a variety of precomputed parameters, and using different sampling schemes allow for the interpolation of a plethora of functions, achieving memory saving and minimum approximation error. As a study case, the proposed interpolator was used as the core of a programmable noise generator—output signals drawn from different Probability Density Functions were produced for testing FPGA implementations of chaotic encryption algorithms. As a result of the proposed method, the interpolation of a specific transformation function on a Gaussian noise generator reduced the memory usage to 2.71% when compared to the traditional uniform sampling scheme method, while keeping the approximation error below a threshold equal to 0.000030518.
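The lookup-and-interpolate core for a nonuniform node set reduces to a binary search plus one linear blend. A sketch (the node placement and target function are illustrative, not the paper's configuration):

```python
import bisect

def lut_interp(x, xs, ys):
    """Piecewise-linear interpolation over a nonuniformly spaced
    look-up table; more nodes can be placed where the target
    function needs them to meet an error budget."""
    i = bisect.bisect_right(xs, x) - 1
    i = max(0, min(i, len(xs) - 2))        # clamp to the table range
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

# nonuniform nodes for an example function f(x) = x*x
xs = [0.0, 0.5, 1.0, 2.0, 4.0]
ys = [x * x for x in xs]
```

Only the sampled pairs are stored; everything in between is reconstructed on demand, which is the memory-saving trade-off the paper exploits.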
Royle, J. Andrew; Converse, Sarah J.
2014-01-01
Capture–recapture studies are often conducted on populations that are stratified by space, time or other factors. In this paper, we develop a Bayesian spatial capture–recapture (SCR) modelling framework for stratified populations – when sampling occurs within multiple distinct spatial and temporal strata. We describe a hierarchical model that integrates distinct models for both the spatial encounter history data from capture–recapture sampling, and also for modelling variation in density among strata. We use an implementation of data augmentation to parameterize the model in terms of a latent categorical stratum or group membership variable, which provides a convenient implementation in popular BUGS software packages. We provide an example application to an experimental study involving small-mammal sampling on multiple trapping grids over multiple years, where the main interest is in modelling a treatment effect on population density among the trapping grids. Many capture–recapture studies involve some aspect of spatial or temporal replication that requires some attention to modelling variation among groups or strata. We propose a hierarchical model that allows explicit modelling of group or strata effects. Because the model is formulated for individual encounter histories and is easily implemented in the BUGS language and other free software, it also provides a general framework for modelling individual effects, such as are present in SCR models.
Hassanzadeh, Pedram
infer the height and internal stratification of some astrophysical and geophysical vortices because direct measurements of their vertical structures are difficult. In Chapter 3, we show numerically and experimentally that localized suction in rotating continuously stratified flows produces three-dimensional baroclinic cyclones. As expected from Chapter 2, the interiors of these cyclones are super-stratified. Suction, modeled as a small spherical sink in the simulations, creates an anisotropic flow toward the sink with directional dependence changing with the ratio of the Coriolis parameter to the Brunt-Vaisala frequency. Around the sink, this flow generates cyclonic vorticity and deflects isopycnals so that the interior of the cyclone becomes super-stratified. The super-stratified region is visualized in the companion experiments that we helped to design and analyze using the synthetic schlieren technique. Once the suction stops, the cyclones decay due to viscous dissipation in the simulations and experiments. The numerical results show that the vertical velocity of viscously decaying cyclones flows away from the cyclone's midplane, while the radial velocity flows toward the cyclone's center. This observation is explained based on the cyclo-geostrophic balance. This vertical velocity mixes the flow inside and outside of cyclone and reduces the super-stratification. We speculate that the predominance of anticyclones in geophysical and astrophysical flows is due to the fact that anticyclones require sub-stratification, which occurs naturally by mixing, while cyclones require super-stratification. In Chapter 4, we show that a previously unknown instability creates space-filling lattices of 3D turbulent baroclinic vortices in linearly-stable, rotating, stratified shear flows. The instability starts from a newly discovered family of easily-excited critical layers. 
This new family, named the baroclinic critical layer, has singular vertical velocities; the traditional family
Sample-Based Motion Planning in High-Dimensional and Differentially-Constrained Systems
2010-02-01
tasks, such as the piano-mover problem, has been shown to be PSPACE-hard [Reif, 1979]. Despite this time complexity, sample-based methods have proven...point. See [Berkemeier et al., 1999] for one such technique, and for an overview of related...Alternative approaches such as [Stojic et al., 2000] study...the angular momentum. The technique was extended to 3 DOF by ignoring non-linear terms, and used to control landing in a gymnastic robot using feedback
Target and suspect screening of psychoactive substances in sewage-based samples by UHPLC-QTOF.
Baz-Lomba, J A; Reid, Malcolm J; Thomas, Kevin V
2016-03-31
The quantification of illicit drug and pharmaceutical residues in sewage has been shown to be a valuable tool that complements existing approaches in monitoring the patterns and trends of drug use. The present work delineates the development of a novel analytical tool and dynamic workflow for the analysis of a wide range of substances in sewage-based samples. The validated method can simultaneously quantify 51 target psychoactive substances and pharmaceuticals in sewage-based samples using an off-line automated solid phase extraction (SPE-DEX) method, using Oasis HLB disks, followed by ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF) in MS(e). Quantification and matrix effect corrections were overcome with the use of 25 isotopic labeled internal standards (ILIS). Recoveries were generally greater than 60% and the limits of quantification were in the low nanogram-per-liter range (0.4-187 ng L(-1)). The emergence of new psychoactive substances (NPS) on the drug scene poses a specific analytical challenge since their market is highly dynamic with new compounds continuously entering the market. Suspect screening using high-resolution mass spectrometry (HRMS) simultaneously allowed the unequivocal identification of NPS based on a mass accuracy criteria of 5 ppm (of the molecular ion and at least two fragments) and retention time (2.5% tolerance) using the UNIFI screening platform. Applying MS(e) data against a suspect screening database of over 1000 drugs and metabolites, this method becomes a broad and reliable tool to detect and confirm NPS occurrence. This was demonstrated through the HRMS analysis of three different sewage-based sample types; influent wastewater, passive sampler extracts and pooled urine samples resulting in the concurrent quantification of known psychoactive substances and the identification of NPS and pharmaceuticals.
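The identification criteria stated above (mass accuracy within 5 ppm, retention time within a 2.5% tolerance) translate into a short matching rule. A sketch; the fragment-ion check (at least two fragments within 5 ppm) is omitted, and the numbers below are illustrative:

```python
def ppm_error(observed_mz, theoretical_mz):
    """Mass accuracy expressed in parts per million."""
    return abs(observed_mz - theoretical_mz) / theoretical_mz * 1e6

def suspect_hit(obs_mz, obs_rt, cand_mz, cand_rt,
                ppm_tol=5.0, rt_tol=0.025):
    """Flag a suspect-screening hit using the abstract's criteria:
    <= 5 ppm on the molecular ion and a 2.5% retention-time
    tolerance (fragment matching omitted in this sketch)."""
    return (ppm_error(obs_mz, cand_mz) <= ppm_tol
            and abs(obs_rt - cand_rt) <= rt_tol * cand_rt)
```

Each detected feature is compared against every entry of the suspect database with this rule; only candidates passing both gates proceed to fragment confirmation.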
Sampling-Based RBDO Using Stochastic Sensitivity and Dynamic Kriging for Broader Army Applications
2011-08-09
K.K. Choi, Ikjin Lee, Liang Zhao, and Yoojeong Noh, Department of Mechanical and Industrial... Thus, for broader Army applications, a sampling-based RBDO method using a surrogate model has been developed recently. The Dynamic Kriging (DKG) method...
Sample processing for DNA chip array-based analysis of enterohemorrhagic Escherichia coli (EHEC)
Enfors Sven-Olof; Wegrzyn Grzegorz; Basselet Pascal; Gabig-Ciminska Magdalena
2008-01-01
Background Exploitation of DNA-based analyses of microbial pathogens, and especially simultaneous typing of several virulence-related genes in bacteria, is becoming an important public health objective. Results A procedure for sample processing for a confirmative analysis of enterohemorrhagic Escherichia coli (EHEC) on a single colony with a DNA chip array was developed and is reported here. The protocol includes application of fragmented genomic DNA from ultrasonicated co...
Experiments on the dryout behavior of stratified debris beds
Leininger, Simon; Kulenovic, Rudi; Laurien, Eckart [Stuttgart Univ. (Germany). Inst. of Nuclear Technology and Energy Systems (IKE)
2015-10-15
In case of a severe accident with loss of coolant and core meltdown a particle bed (debris) can be formed. The removal of decay heat from the debris bed is of prime importance for the bed's long-term coolability to guarantee the integrity of the RPV. In contrast to previous experiments, the focus is on stratified beds. The experiments have pointed out that the bed's coolability is significantly affected.
A statistical mechanics approach to mixing in stratified fluids
Venaille, A.; Gostiaux, L.; Sommeria, J.
2017-01-01
Predicting how much mixing occurs when a given amount of energy is injected into a Boussinesq fluid is a longstanding problem in stratified turbulence. The huge number of degrees of freedom involved in these processes makes a deterministic approach to the problem extremely difficult. Here we present a statistical mechanics approach yielding predictions for a cumulative, global mixing efficiency as a function of a global Richardson number and the background buoyancy profile.
Dust particle charge distribution in a stratified glow discharge
Sukhinin, Gennady I [Institute of Thermophysics, Siberian Branch, Russian Academy of Sciences, Lavrentyev Ave., 1, Novosibirsk 630090 (Russian Federation); Fedoseev, Alexander V [Institute of Thermophysics, Siberian Branch, Russian Academy of Sciences, Lavrentyev Ave., 1, Novosibirsk 630090 (Russian Federation); Ramazanov, Tlekkabul S [Institute of Experimental and Theoretical Physics, Al Farabi Kazakh National University, Tole Bi, 96a, Almaty 050012 (Kazakhstan); Dzhumagulova, Karlygash N [Institute of Experimental and Theoretical Physics, Al Farabi Kazakh National University, Tole Bi, 96a, Almaty 050012 (Kazakhstan); Amangaliyeva, Rauan Zh [Institute of Experimental and Theoretical Physics, Al Farabi Kazakh National University, Tole Bi, 96a, Almaty 050012 (Kazakhstan)
2007-12-21
The influence of a highly pronounced non-equilibrium characteristic of the electron energy distribution function in a stratified dc glow discharge on the process of dust particle charging in a complex plasma is taken into account for the first time. The calculated particle charge spatial distribution is essentially non-homogeneous and it can explain the vortex motion of particles at the periphery of a dusty cloud obtained in experiments.
Xu, Peng; Yao, Dezhong; Luo, Fen
2005-08-01
The registration method based on mutual information is currently a popular technique for medical image registration, but computing the mutual information is complex and registration is slow. In engineering practice, a subsampling technique is used to accelerate registration at the cost of accuracy. In this paper a new method based on statistical sampling theory is developed, which has both higher speed and higher accuracy than the usual subsampling method, and simulation results confirm the validity of the new method.
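The similarity measure itself is straightforward to sketch; subsampling simply means evaluating it on a subset of voxel pairs. A histogram estimator, assuming intensities scaled to [0, 1) (binning is an illustrative choice):

```python
import math

def mutual_information(a, b, bins=8):
    """Histogram estimate of mutual information between two intensity
    sequences in [0, 1) -- the similarity measure maximised in
    MI-based registration. Pass a voxel subset to subsample."""
    n = len(a)
    joint, pa, pb = {}, {}, {}
    for x, y in zip(a, b):
        i = min(int(x * bins), bins - 1)
        j = min(int(y * bins), bins - 1)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        pa[i] = pa.get(i, 0) + 1
        pb[j] = pb.get(j, 0) + 1
    mi = 0.0
    for (i, j), c in joint.items():
        # p_ij * log(p_ij / (p_i * p_j)) with counts substituted
        mi += (c / n) * math.log(c * n / (pa[i] * pb[j]))
    return mi
```

MI is maximal when one image determines the other bin-for-bin and zero when one image is constant, which is why it serves as a registration objective.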
Thermal stratification built up in hot water tank with different inlet stratifiers
Dragsted, Janne; Furbo, Simon; Dannemand, Mark
2017-01-01
H is a rigid plastic pipe with holes for each 30 cm. The holes are designed with flaps preventing counter flow into the pipe. The inlet stratifier from EyeCular Technologies ApS is made of a flexible polymer with openings all along the side and in the full length of the stratifier. The flexibility...... in order to elucidate how well thermal stratification is established in the tank with differently designed inlet stratifiers under different controlled laboratory conditions. The investigated inlet stratifiers are from Solvis GmbH & Co KG and EyeCular Technologies ApS. The inlet stratifier from Solvis Gmb...... of the stratifier prevents counterflow. The tests have shown that both types of inlet stratifiers had an ability to create stratification in the test tank under the different test conditions. The stratifier from EyeCular Technologies ApS had a better performance at low flows of 1-2 l/min and the stratifier...
Acrylamide exposure among Turkish toddlers from selected cereal-based baby food samples.
Cengiz, Mehmet Fatih; Gündüz, Cennet Pelin Boyacı
2013-10-01
In this study, acrylamide exposure from selected cereal-based baby food samples was investigated among toddlers aged 1-3 years in Turkey. The study contained three steps. The first step was collecting food consumption data and toddlers' physical properties, such as gender, age and body weight, using a questionnaire given to parents by a trained interviewer between January and March 2012. The second step was determining the acrylamide levels in food samples that were reported on by the parents in the questionnaire, using a gas chromatography-mass spectrometry (GC-MS) method. The last step was combining the determined acrylamide levels in selected food samples with individual food consumption and body weight data using a deterministic approach to estimate the acrylamide exposure levels. The mean acrylamide levels of baby biscuits, breads, baby bread-rusks, crackers, biscuits, breakfast cereals and powdered cereal-based baby foods were 153, 225, 121, 604, 495, 290 and 36 μg/kg, respectively. The minimum, mean and maximum acrylamide exposures were estimated to be 0.06, 1.43 and 6.41 μg/kg BW per day, respectively. The foods contributing to acrylamide exposure, ranked from high to low, were bread, crackers, biscuits, baby biscuits, powdered cereal-based baby foods, baby bread-rusks and breakfast cereals.
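The deterministic combination step is a simple weighted sum. A sketch using the abstract's mean concentrations for bread and biscuits; the portion sizes and body weight are hypothetical:

```python
def acrylamide_exposure(intakes, body_weight_kg):
    """Deterministic exposure estimate in ug per kg body weight per day:
    sum of concentration (ug/kg food) x daily intake (kg food/day),
    divided by body weight."""
    return sum(conc * kg_day for conc, kg_day in intakes) / body_weight_kg

# a 12 kg toddler eating 50 g bread (225 ug/kg) and 20 g biscuits (495 ug/kg)
daily = acrylamide_exposure([(225, 0.050), (495, 0.020)], 12)
```

The resulting value (about 1.76 ug/kg BW per day) falls between the reported mean and maximum exposures, which is consistent with the ranges in the abstract.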
Van der Auwera, Sandra; Wittfeld, Katharina; Shumskaya, Elena; Bralten, Janita; Zwiers, Marcel P; Onnink, A Marten H; Usberti, Niccolo; Hertel, Johannes; Völzke, Henry; Völker, Uwe; Hosten, Norbert; Franke, Barbara; Grabe, Hans J
2017-04-01
Schizophrenia is associated with brain structural abnormalities including gray and white matter volume reductions. Whether these alterations are caused by genetic risk variants for schizophrenia is unclear. Previous attempts to detect associations between polygenic factors for schizophrenia and structural brain phenotypes in healthy subjects have been negative or remain non-replicated. In this study, we used genetic risk scores that were based on the accumulated effect of selected risk variants for schizophrenia belonging to specific biological systems like synaptic function, neurodevelopment, calcium signaling, and glutamatergic neurotransmission. We hypothesized that this "biologically informed" approach would provide the missing link between genetic risk for schizophrenia and brain structural phenotypes. We applied whole-brain voxel-based morphometry (VBM) analyses in two population-based target samples and subsequent regions of interest (ROIs) analyses in an independent replication sample (total N = 2725). No consistent association between the genetic scores and brain volumes was observed in the investigated samples. These results suggest that in healthy subjects with a higher genetic risk for schizophrenia additional factors apart from common genetic variants (e.g., infection, trauma, rare genetic variants, or gene-gene interactions) are required to induce structural abnormalities of the brain. Further studies are recommended to test for possible gene-gene or gene-environment effects. © 2017 Wiley Periodicals, Inc.
Vincze, Miklos; Harlander, Uwe; Gal, Patrice Le
2016-01-01
A water-filled differentially heated rotating annulus with initially prepared stable vertical salinity profiles is studied in the laboratory. Based on two-dimensional horizontal particle image velocimetry (PIV) data, and infrared camera visualizations, we describe the appearance and the characteristics of the baroclinic instability in this original configuration. First, we show that when the salinity profile is linear and confined between two non-stratified layers at top and bottom, only two separate shallow fluid layers can be destabilized. These unstable layers appear near the top and the bottom of the tank with a stratified motionless zone between them. This laboratory arrangement is thus particularly interesting to model geophysical or astrophysical situations where stratified regions are often juxtaposed to convective ones. Then, for more general but stable initial density profiles, statistical measures are introduced to quantify the extent of the baroclinic instability at given depths and to analyze t...
Stability of stratified two-phase flows in horizontal channels
Barmak, I.; Gelfgat, A.; Vitoshkin, H.; Ullmann, A.; Brauner, N.
2016-04-01
Linear stability of stratified two-phase flows in horizontal channels to arbitrary wavenumber disturbances is studied. The problem is reduced to Orr-Sommerfeld equations for the stream function disturbances, defined in each sublayer and coupled via boundary conditions that also account for possible interface deformation and capillary forces. Applying the Chebyshev collocation method, the equations and interface boundary conditions are reduced to generalized eigenvalue problems solved by standard means of numerical linear algebra for the entire spectrum of eigenvalues and the associated eigenvectors. Some additional conclusions concerning the nature of the instability are derived from the most unstable perturbation patterns. The results are summarized in the form of stability maps showing the operational conditions at which a stratified-smooth flow pattern is stable. It is found that for gas-liquid and liquid-liquid systems, the stratified flow with a smooth interface is stable only in a confined zone of relatively low flow rates, which is in agreement with experiments, but is not predicted by long-wave analysis. Depending on the flow conditions, the critical perturbations can originate mainly at the interface (so-called "interfacial modes of instability") or in the bulk of one of the phases (i.e., "shear modes"). The present analysis revealed that there is no definite correlation between the type of instability and the perturbation wavelength.
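For reference, the Orr-Sommerfeld equation solved in each sublayer for the stream-function disturbance amplitude takes the standard single-layer form (base velocity $U(y)$, wavenumber $k$, complex wave speed $c$, Reynolds number $\mathrm{Re}$; the two-phase problem couples one such equation per phase through the interface conditions):

$$ (U - c)\left(\varphi'' - k^{2}\varphi\right) - U''\varphi \;=\; \frac{1}{i k\,\mathrm{Re}}\left(\varphi'''' - 2k^{2}\varphi'' + k^{4}\varphi\right) $$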
Continuous Dependence on the Density for Stratified Steady Water Waves
Chen, Robin Ming; Walsh, Samuel
2016-02-01
There are two distinct regimes commonly used to model traveling waves in stratified water: continuous stratification, where the density is smooth throughout the fluid, and layer-wise continuous stratification, where the fluid consists of multiple immiscible strata. The former is the more physically accurate description, but the latter is frequently more amenable to analysis and computation. By the conservation of mass, the density is constant along the streamlines of the flow; the stratification can therefore be specified by prescribing the value of the density on each streamline. We call this the streamline density function. Our main result states that, for every smoothly stratified periodic traveling wave in a certain small-amplitude regime, there is an L ∞ neighborhood of its streamline density function such that, for any piecewise smooth streamline density function in that neighborhood, there is a corresponding traveling wave solution. Moreover, the mapping from streamline density function to wave is Lipschitz continuous in a certain function space framework. As this neighborhood includes piecewise smooth densities with arbitrarily many jump discontinuities, this theorem provides a rigorous justification for the ubiquitous practice of approximating a smoothly stratified wave by a layered one. We also discuss some applications of this result to the study of the qualitative features of such waves.
Andrea Paz; Andrew J Crawford
2012-11-01
Molecular markers offer a universal source of data for quantifying biodiversity. DNA barcoding uses a standardized genetic marker and a curated reference database to identify known species and to reveal cryptic diversity within well-sampled clades. Rapid biological inventories, e.g. rapid assessment programs (RAPs), unlike most barcoding campaigns, are focused on particular geographic localities rather than on clades. Because of the potentially sparse phylogenetic sampling, the addition of DNA barcoding to RAPs may present a greater challenge for the identification of named species or for revealing cryptic diversity. In this article we evaluate the use of DNA barcoding for quantifying lineage diversity within a single sampling site as compared to clade-based sampling, and present examples from amphibians. We compared algorithms for identifying DNA barcode clusters (e.g. species, cryptic species or Evolutionarily Significant Units) using previously published DNA barcode data obtained from geography-based sampling at a site in Central Panama, and from clade-based sampling in Madagascar. We found that clustering algorithms based on genetic distance performed similarly on sympatric as well as clade-based barcode data, while a promising coalescent-based method performed poorly on sympatric data. The various clustering algorithms were also compared in terms of speed and software implementation. Although each method has its shortcomings in certain contexts, we recommend the use of the ABGD method, which not only performs fairly well under either sampling method, but does so in a few seconds and with a user-friendly Web interface.
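The distance-based clustering idea can be made concrete with a small sketch. This is not the ABGD algorithm itself (which infers the threshold from the "barcode gap" in the pairwise distance distribution); it is a minimal single-linkage stand-in with a fixed, assumed threshold and made-up toy sequences.

```python
def p_distance(a, b):
    # proportion of mismatched sites between two aligned sequences
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def cluster_by_threshold(seqs, threshold):
    # single-linkage clustering via union-find: join any two sequences
    # whose p-distance falls below `threshold` (a crude stand-in for
    # barcode-gap partitioning)
    parent = list(range(len(seqs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(seqs)):
        for j in range(i + 1, len(seqs)):
            if p_distance(seqs[i], seqs[j]) < threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(seqs)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

seqs = ["ACGTACGT", "ACGTACGA",   # putative lineage 1 (1 mismatch apart)
        "TTTTCCCC", "TTTTCCCG"]   # putative lineage 2
clusters = cluster_by_threshold(seqs, threshold=0.2)
# two clusters: indices {0, 1} and {2, 3}
```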
Optical biosensor system with integrated microfluidic sample preparation and TIRF based detection
Gilli, Eduard; Scheicher, Sylvia R.; Suppan, Michael; Pichler, Heinz; Rumpler, Markus; Satzinger, Valentin; Palfinger, Christian; Reil, Frank; Hajnsek, Martin; Köstler, Stefan
2013-05-01
There is a steadily growing demand for miniaturized bioanalytical devices allowing for on-site or point-of-care detection of biomolecules or pathogens in applications like diagnostics, food testing, or environmental monitoring. These so-called labs-on-a-chip or micro total analysis systems (μ-TAS) should ideally enable convenient sample-in, result-out operation. Therefore, the entire process from sample preparation, metering, reagent incubation, etc. to detection should be performed on a single disposable device (on-chip). Early devices were mainly fabricated using glass or silicon substrates, adapting established fabrication technologies from the electronics and semiconductor industry. More recently, development has focused on thermoplastic polymers, as they allow low-cost, high-volume fabrication of disposables. Among the most promising materials for plastic-based lab-on-a-chip systems are cyclic olefin polymers and copolymers (COP/COC), owing to their excellent optical properties (high transparency and low autofluorescence) and ease of processing. We present a bioanalytical system for whole blood samples comprising a disposable plastic chip based on TIRF (total internal reflection fluorescence) optical detection. The chips were fabricated by compression moulding of COP, and microfluidic channels were structured by hot embossing. These microfluidic structures integrate several sample pretreatment steps: separation of erythrocytes, metering of sample volume using passive valves, and reagent incubation for competitive bioassays. The surface of the subsequent optical detection zone is functionalized with specific capture probes in an array format. The plastic chips comprise dedicated structures for simple and effective coupling of excitation light from low-cost laser diodes. This enables TIRF excitation of fluorescently labeled probes selectively bound to detection spots at the microchannel surface.
Influenza A Virus Surveillance Based on Pre-Weaning Piglet Oral Fluid Samples.
Panyasing, Y; Goodell, C; Kittawornrat, A; Wang, C; Levis, I; Desfresne, L; Rauh, R; Gauger, P C; Zhang, J; Lin, X; Azeem, S; Ghorbani-Nezami, S; Yoon, K-J; Zimmerman, J
2016-10-01
Influenza A virus (IAV) surveillance using pre-weaning oral fluid samples from litters of piglets was evaluated in four ~12,500-sow, IAV-vaccinated breeding herds. Oral fluid samples were collected from 600 litters, and serum samples from their dams at weaning. Litter oral fluid samples were tested for IAV by virus isolation, quantitative reverse transcription-polymerase chain reaction (qRT-PCR), RT-PCR subtyping and sequencing. Commercial nucleoprotein (NP) enzyme-linked immunosorbent assay (ELISA) kits and NP isotype-specific assays (IgM, IgA and IgG) were used to characterize NP antibody in litter oral fluid and sow serum. All litter oral fluid specimens (n = 600) were negative by virus isolation. Twenty-five oral fluid samples (25/600 = 4.2%) were qRT-PCR positive based on screening (Laboratory 1) and confirmatory testing (Laboratory 2). No hemagglutinin (HA) or neuraminidase (NA) gene sequences were obtained, but matrix (M) gene sequences were obtained for all qRT-PCR-positive samples submitted for sequencing (n = 18). Genetic analysis revealed that all M gene sequences were identical (GenBank accession no. KF487544) and belonged to the triple reassortant influenza A virus M gene (TRIG M) previously identified in swine. The proportion of IgM- and IgA-positive samples was significantly higher in sow serum and litter oral fluid samples, respectively (P oral fluids. This study supported the use of oral fluid sampling as a means of conducting IAV surveillance in pig populations and demonstrated the inapparent circulation of IAV in piglets. Future work on IAV oral fluid diagnostics should focus on improved procedures for virus isolation, subtyping and sequencing of HA and NA genes. The role of antibody in IAV surveillance remains to be elucidated, but longitudinal assessment of specific antibody has the potential to provide information regarding patterns of infection, vaccination status and herd immunity.
Lounsbury, Jenny A; Coult, Natalie; Miranian, Daniel C; Cronk, Stephen M; Haverstick, Doris M; Kinnon, Paul; Saul, David J; Landers, James P
2012-09-01
Extraction of DNA from forensic samples typically uses either an organic extraction protocol or solid phase extraction (SPE) and these methods generally involve numerous sample transfer, wash and centrifugation steps. Although SPE has been successfully adapted to the microdevice, it can be problematic because of lengthy load times and uneven packing of the solid phase. A closed-tube enzyme-based DNA preparation method has recently been developed which uses a neutral proteinase to lyse cells and degrade proteins and nucleases [14]. Following a 20 min incubation of the buccal or whole blood sample with this proteinase, DNA is polymerase chain reaction (PCR)-ready. This paper describes the optimization and quantitation of DNA yield using this method, and application to forensic biological samples, including UV- and heat-degraded whole blood samples on cotton or blue denim substrates. Results demonstrate that DNA yield can be increased from 1.42 (±0.21)ng/μL to 7.78 (±1.40)ng/μL by increasing the quantity of enzyme per reaction by 3-fold. Additionally, there is a linear relationship between the amount of starting cellular material added and the concentration of DNA in the solution, thereby allowing DNA yield estimations to be made. In addition, short tandem repeat (STR) profile results obtained using DNA prepared with the enzyme method were comparable to those obtained with a conventional SPE method, resulting in full STR profiles (16 of 16 loci) from liquid samples (buccal swab eluate and whole blood), dried buccal swabs and bloodstains and partial profiles from UV or heat-degraded bloodstains on cotton or blue denim substrates. Finally, the DNA preparation method is shown to be adaptable to glass or poly(methyl methacrylate) (PMMA) microdevices with little impact on STR peak height but providing a 20-fold reduction in incubation time (as little as 60 s), leading to a ≥1 h reduction in DNA preparation time.
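The linear relationship reported above is what makes yield estimation possible: fit a line to (input material, measured DNA) calibration pairs, then read off a predicted yield. A minimal least-squares sketch with purely illustrative numbers (the paper's actual calibration values are not reproduced here):

```python
# Hypothetical calibration pairs: amount of starting cellular material (x)
# versus measured DNA concentration in ng/uL (y). Illustrative values only.
xs = [0.5, 1.0, 2.0, 4.0]
ys = [1.1, 2.1, 3.9, 8.2]

# Ordinary least-squares line through the calibration points.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def predict(x):
    # estimated DNA yield for a given amount of starting material
    return slope * x + intercept
```

Given such a fit, the inverse mapping (measured concentration back to input quantity) is what allows "DNA yield estimations to be made" from a sample of unknown cellularity.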
B. G. Martinsson
2014-04-01
An inter-comparison of results from two kinds of aerosol systems in the CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container) passenger-aircraft-based observatory, operating during intercontinental flights at 9–12 km altitude, is presented. Aerosol from the lowermost stratosphere (LMS), the extra-tropical upper troposphere (UT) and the tropical mid troposphere (MT) was investigated. Aerosol particle volume concentration measured with an optical particle counter (OPC) is compared with analytical results for the sum of masses of all major and several minor constituents from aerosol samples collected with an impactor. Analyses were undertaken with the accelerator-based methods particle-induced X-ray emission (PIXE) and particle elastic scattering analysis (PESA). Data from 48 flights during one year are used, leading to a total of 106 individual comparisons. The ratios of the particle volume from the OPC to the total mass from the analyses fell within a relatively narrow interval in 84% of cases. Data points outside this interval are connected with inlet-related effects in clouds, large variability in aerosol composition, particle size distribution effects and some cases of non-ideal sampling. Overall, the comparison of these two CARIBIC measurements based on vastly different methods shows good agreement, implying that the chemical and size information can be combined in studies of the MT/UT/LMS aerosol.
Validation of genetic algorithm-based optimal sampling for ocean data assimilation
Heaney, Kevin D.; Lermusiaux, Pierre F. J.; Duda, Timothy F.; Haley, Patrick J.
2016-08-01
Regional ocean models are capable of forecasting conditions for usefully long intervals of time (days) provided that initial and ongoing conditions can be measured. In resource-limited circumstances, the placement of sensors in optimal locations is essential. Here, a nonlinear optimization approach to determine optimal adaptive sampling that uses the genetic algorithm (GA) method is presented. The method determines sampling strategies that minimize a user-defined physics-based cost function. The method is evaluated using identical twin experiments, comparing hindcasts from an ensemble of simulations that assimilate data selected using the GA adaptive sampling and other methods. For skill metrics, we employ the reduction of the ensemble root mean square error (RMSE) between the "true" data-assimilative ocean simulation and the different ensembles of data-assimilative hindcasts. A five-glider optimal sampling study is set up for a 400 km × 400 km domain in the Middle Atlantic Bight region, along the New Jersey shelf-break. Results are compared for several ocean and atmospheric forcing conditions.
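The GA-based placement loop can be sketched in a few lines. The cost function below is a crude 1-D coverage stand-in for the paper's physics-based, RMSE-driven cost, and all parameters (grid, population size, number of sensors) are illustrative assumptions:

```python
import random
random.seed(0)

GRID = list(range(100))   # candidate sampling locations (toy 1-D domain)
K = 5                     # number of gliders/sensors to place

def cost(plan):
    # toy stand-in for the physics-based cost: mean distance from every
    # grid point to its nearest sensor (smaller = better coverage)
    return sum(min(abs(g - s) for s in plan) for g in GRID) / len(GRID)

def mutate(plan):
    p = plan[:]
    p[random.randrange(K)] = random.choice(GRID)   # relocate one sensor
    return p

def crossover(a, b):
    cut = random.randrange(1, K)                   # single-point crossover
    return a[:cut] + b[cut:]

# elitist GA: keep the best plans, breed and mutate the rest
pop = [random.sample(GRID, K) for _ in range(40)]
for gen in range(60):
    pop.sort(key=cost)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]
best = min(pop, key=cost)
```

In the paper's setting the fitness evaluation is far heavier (an ensemble of data-assimilative model runs per candidate plan), but the selection/crossover/mutation skeleton is the same.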
Frenzel, Wolfgang; Markeviciute, Inga
2017-01-06
Sample preparation is the bottleneck of many analytical methods, including ion chromatography (IC). Procedures based on the application of membranes are important, yet not well appreciated, means for clean-up and analyte preconcentration of liquid samples. Filtration, ultrafiltration, the variety of dialysis techniques (passive dialysis, Donnan dialysis and electrodialysis), as well as gas diffusion are reviewed here with respect to their application in combination with IC. Instrumental aspects including hardware requirements, configuration of membrane separation units and membrane characteristics are presented. Operation in batch and flow-through mode is described, with emphasis on the latter for in-line coupling with IC, permitting fully automated operation. Attention is also drawn to dialysis probes and microdialysis, both providing options for in-situ measurements with inherent selective sampling of analytes and sample preparation. The respective features of the various techniques are outlined with respect to the possibilities of matrix removal and selectivity enhancement. We provide examples of application of the diverse membrane separation techniques and discuss their benefits and limitations.
An Evaluation of Incentive Experiments in a Two-Phase Address-Based Sample Mail Survey
Daifeng Han
2013-12-01
Address-based sampling (ABS) with a two-phase data collection approach has emerged as a promising alternative to random digit dial (RDD) surveys for studying specific subpopulations in the United States. In 2011, the National Household Education Surveys Program Field Test used a two-phase ABS design with a postal (mail) screener to identify households with eligible children and a mail topical questionnaire administered to parents of sampled children to collect the measures of interest. Experiments with prepaid cash incentives and special mail delivery methods were conducted in both phases. For the screener, sampled addresses were randomly designated to receive either $2 or $5 in the initial mailing. During the topical phase, incentives (ranging from $0 to $20) and delivery methods (First Class Mail or Priority Mail) were assigned randomly but depended on how quickly the household had responded to the screener. The paper first evaluates the effects of incentives on response rates, then examines incentive levels for attracting hard-to-reach groups and improving sample composition. The impact of incentives on data collection cost is also examined.
Double-antibody based immunoassay for the detection of β-casein in bovine milk samples.
Zhou, Y; Song, F; Li, Y S; Liu, J Q; Lu, S Y; Ren, H L; Liu, Z S; Zhang, Y Y; Yang, L; Li, Z H; Zhang, J H; Wang, X R
2013-11-01
The concentration of casein (CN) is one of the most important parameters for measuring the quality of bovine milk. The traditional approach to CN concentration determination is the Kjeldahl method, an indirect method based on total nitrogen content. Here, we describe a double-antibody-based direct immunoassay for the detection of β-CN in bovine milk samples. A monoclonal antibody (McAb) was used as the capture antibody and a polyclonal antibody (PcAb) labelled with horseradish peroxidase (HRP) as the detection antibody. With the direct immunoassay format, the linear range of the detection was 0.1-10.0 μg mL(-1), and the detection limit was 0.04 μg mL(-1). In addition, the concentration of β-CN in real bovine milk samples was determined with the developed immunoassay. There was good correlation between the results obtained by the developed technique and the Kjeldahl method for commercial samples. Compared to the traditional approach, the assay requires no time-consuming sample pretreatment.
Recommended Mass Spectrometry-Based Strategies to Identify Botulinum Neurotoxin-Containing Samples
Suzanne R. Kalb
2015-05-01
Botulinum neurotoxins (BoNTs) cause the disease called botulism, which can be lethal. BoNTs are proteins secreted by some species of clostridia and are known to cause paralysis by interfering with nerve impulse transmission. Although the human lethal dose of BoNT is not accurately known, it is estimated to be between 0.1 μg and 70 μg, so it is important to enable detection of small amounts of these toxins. Our laboratory previously reported on the development of Endopep-MS, a mass-spectrometry-based endopeptidase method to detect, differentiate, and quantify BoNT immunoaffinity-purified from complex matrices. In this work, we describe the application of Endopep-MS to the analysis of thirteen blinded samples supplied as part of the EQuATox proficiency test. The method successfully identified the presence or absence of BoNT in all thirteen samples and was able to differentiate the serotype of BoNT present in the samples, which included matrices such as buffer, milk, meat extract, and serum. Furthermore, the method yielded quantitative results with z-scores in the range of −3 to +3 for quantification of BoNT/A-containing samples. These results indicate that Endopep-MS is an excellent technique for detection, differentiation, and quantification of BoNT in complex matrices.
Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples
Liu Xin
2015-09-01
This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single-tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectrum leakage and the picket-fence effect, which lead to low estimation accuracy. The proposed method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and low resolution. It first uses three FFT samples to determine the frequency searching scope; then, besides the frequency, the estimated values of amplitude, phase and DC component are obtained by minimizing the least squares (LS) fitting error of a three-parameter sine fit. By setting reasonable stopping conditions or a number of iterations, accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated by different methods with respect to the unbiased Cramér-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimation curve is consistent with the tendency of the CRLB as SNR increases, even for a small number of samples. The average RMSE of the frequency estimation is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
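The core numerical loop, a three-parameter LS sine fit nested inside a golden-section search over frequency, can be sketched as follows. The search interval below is assumed to come from the coarse three-FFT-sample step, and the noise-free demo signal is illustrative:

```python
import numpy as np

def sine_fit_residual(freq, t, y):
    # three-parameter LS fit at a fixed trial frequency:
    # y ≈ a*cos(2πft) + b*sin(2πft) + c (amplitude, phase, dc follow from a,b,c)
    M = np.column_stack([np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(M, y, rcond=None)
    r = y - M @ coef
    return float(r @ r)

def golden_search(f_lo, f_hi, t, y, tol=1e-9):
    # golden-section search for the frequency minimizing the LS residual
    g = (np.sqrt(5) - 1) / 2
    a, b = f_lo, f_hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if sine_fit_residual(c, t, y) < sine_fit_residual(d, t, y):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

# demo: noiseless 100 Hz tone sampled at 1 kHz, N = 512
fs, f0 = 1000.0, 100.0
t = np.arange(512) / fs
y = 1.3 * np.sin(2 * np.pi * f0 * t + 0.7) + 0.5
# search scope roughly one FFT bin wide, as fixed by the coarse FFT step
f_est = golden_search(99.0, 101.0, t, y)
```

Keeping the search interval within one FFT bin of the spectral peak matters: the LS residual has sidelobes farther out, and golden-section search assumes a unimodal objective.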
Three-dimensional cathodoluminescence characterization of a semipolar GaInN based LED sample
Hocker, Matthias; Maier, Pascal; Tischer, Ingo; Meisch, Tobias; Caliebe, Marian; Scholz, Ferdinand; Mundszinger, Manuel; Kaiser, Ute; Thonke, Klaus
2017-02-01
A semipolar GaInN based light-emitting diode (LED) sample is investigated by three-dimensionally resolved cathodoluminescence (CL) mapping. Similar to conventional depth-resolved CL spectroscopy (DRCLS), the spatial resolution perpendicular to the sample surface is obtained by calibration of the CL data with Monte-Carlo-simulations (MCSs) of the primary electron beam scattering. In addition to conventional MCSs, we take into account semiconductor-specific processes like exciton diffusion and the influence of the band gap energy. With this method, the structure of the LED sample under investigation can be analyzed without additional sample preparation, like cleaving of cross sections. The measurement yields the thickness of the p-type GaN layer, the vertical position of the quantum wells, and a defect analysis of the underlying n-type GaN, including the determination of the free charge carrier density. The layer arrangement reconstructed from the DRCLS data is in good agreement with the nominal parameters defined by the growth conditions.
Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)
2015-04-15
Sequential surrogate-model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques while ensuring the accuracy of the optimization. However, earlier approaches have drawbacks: the optimization loop contains three phases and relies on empirical parameters. We propose a united sampling criterion that simplifies the algorithm and achieves the global optimum of constrained problems without any empirical parameters. It can select points located in the feasible region with high model uncertainty, as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model because sample points are not concentrated within extremely small regions, unlike super-EGO. The performance of the proposed method, including solvability, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
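For background, the classical expected-improvement (EI) infill criterion that EGO-family methods build on can be written in a few lines. Note this is standard EI, not the paper's united criterion; the numerical inputs are illustrative:

```python
import math

def expected_improvement(mu, sigma, y_best):
    # EI infill criterion for minimization: expected amount by which a
    # candidate with GP posterior mean `mu` and std `sigma` beats the
    # incumbent best observed value `y_best`
    if sigma <= 0.0:
        return 0.0
    z = (y_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))      # normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal PDF
    return (y_best - mu) * Phi + sigma * phi

# a candidate predicted below the incumbent, with some uncertainty, scores high
ei_good = expected_improvement(mu=0.5, sigma=0.2, y_best=1.0)
# a candidate predicted far above the incumbent scores near zero
ei_bad = expected_improvement(mu=2.0, sigma=0.2, y_best=1.0)
```

EI trades off exploitation (low predicted mean) against exploration (high predicted uncertainty); the paper's contribution is to replace the hand-tuned balancing of such criteria with a parameter-free switch driven by the surrogate's mean squared error.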
Seroincidence of non-typhoid Salmonella infections: convenience vs. random community-based sampling.
Emborg, H-D; Simonsen, J; Jørgensen, C S; Harritshøj, L H; Krogfelt, K A; Linneberg, A; Mølbak, K
2016-01-01
The incidence of reported infections of non-typhoid Salmonella is affected by biases inherent to passive laboratory surveillance, whereas analysis of blood sera may provide a less biased alternative to estimate the force of Salmonella transmission in humans. We developed a mathematical model that enabled a back-calculation of the annual seroincidence of Salmonella based on measurements of specific antibodies. The aim of the present study was to determine the seroincidence in two convenience samples from 2012 (Danish blood donors, n = 500, and pregnant women, n = 637) and a community-based sample of healthy individuals from 2006 to 2007 (n = 1780). The lowest antibody levels were measured in the samples from the community cohort and the highest in pregnant women. The annual Salmonella seroincidences were 319 infections/1000 pregnant women [90% credibility interval (CrI) 210-441], 182/1000 in blood donors (90% CrI 85-298) and 77/1000 in the community cohort (90% CrI 45-114). Although the differences between study populations decreased when accounting for different age distributions the estimates depend on the study population. It is important to be aware of this issue and define a certain population under surveillance in order to obtain consistent results in an application of serological measures for public health purposes.
Morphometric analysis of endometrial cells in liquid-based cervical cytology samples.
Gupta, P; Gupta, N; Dey, P
2017-04-01
Exfoliated endometrial cells can be seen in cervical smears in association with a wide variety of conditions, ranging from normal proliferative endometrium to endometrial malignancies. It is often difficult to differentiate between benign, atypical and malignant endometrial cells using cytomorphology alone. This study was conducted to evaluate whether morphometric analysis of endometrial nuclei in liquid-based cervical samples could help differentiate between these endometrial cells. Three groups of cervical samples with histopathological correlation were selected: Group A, showing benign endometrial cells; Group B, showing atypical endometrial cells; and Group C, showing malignant endometrial cells. There were 30 cases each in Groups A and B and 39 cases in Group C. ImageJ (NIH, USA) was used for selecting the endometrial nuclei and performing the morphometric measurements, and MANOVA was used for statistical analysis. The mean nuclear area and nuclear perimeter were significantly different between the three groups of endometrial cells with a P-value liquid-based cervical cytology samples. © 2016 John Wiley & Sons Ltd.
Gaussian process based intelligent sampling for measuring nano-structure surfaces
Sun, L. J.; Ren, M. J.; Yin, Y. H.
2016-09-01
Nanotechnology is the science and engineering of manipulating matter at the nano scale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for manipulation and quality control in nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to relatively small areas of hundreds of micrometers, with very low efficiency. Therefore, intelligent sampling strategies are required to improve the scanning efficiency for measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method uses Gaussian process based Bayesian regression as the mathematical foundation for representing the surface geometry, and the posterior estimation of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then adaptively selected by determining the position that is most likely to lie outside the required tolerance zone among the candidates, and is inserted to update the model iteratively. Simulations on both nominal and manufactured nano-structure surfaces have been conducted to verify the validity of the proposed method. The results imply that the proposed method significantly improves the measurement efficiency for large-area structured surfaces.
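The adaptive loop, fit a GP to the points measured so far, then probe where the model is least certain, can be sketched with a tiny numpy GP. The kernel, length scale, and 1-D "surface" are illustrative assumptions (the paper selects points against a tolerance zone on real 2-D surfaces):

```python
import numpy as np

def rbf(a, b, ell=0.1):
    # squared-exponential kernel between two sets of 1-D points
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP posterior mean and variance at candidate points Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mean = sol.T @ y
    var = 1.0 - np.sum(Ks * sol, axis=0)   # prior variance is rbf(x,x)=1
    return mean, var

surface = lambda x: np.sin(8 * x)      # stand-in for the measured profile
cands = np.linspace(0.0, 1.0, 201)     # candidate probe positions
X = np.array([0.0, 1.0])               # two seed measurements
y = surface(X)
for _ in range(15):
    # adaptively measure wherever the posterior is most uncertain
    _, var = gp_posterior(X, y, cands)
    xn = cands[np.argmax(var)]
    X = np.append(X, xn)
    y = np.append(y, surface(xn))
mean, var = gp_posterior(X, y, cands)
rmse = np.sqrt(np.mean((mean - surface(cands)) ** 2))
```

After only 17 probes the posterior mean reconstructs the toy profile closely; a fixed uniform scan of the same budget gives no uncertainty estimate to guide where the next probe is most valuable.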
Baba, Justin S [ORNL; John, Dwayne O [ORNL; Koju, Vijay [ORNL
2015-01-01
The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computation-based modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10 million) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model, which works well for isotropic but fails for anisotropic scatter, the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples at the shallow depths where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman et al. [1] to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
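The statistical random-walk picture is easy to sketch in scalar (unpolarized, isotropic) form; the Berry-phase, polarization-tracking machinery of the paper is far richer, and the optical coefficients below are assumed values:

```python
import math
import random
random.seed(1)

MU_S = 10.0       # scattering coefficient (1/mm), assumed value
MU_A = 0.1        # absorption coefficient (1/mm), assumed value
N_PHOTONS = 20000

def propagate():
    # isotropic-scatter random walk in a semi-infinite medium (z >= 0);
    # returns the max depth reached and whether the photon escaped back
    # through the surface (diffuse reflectance) or was absorbed
    z, uz, zmax = 0.0, 1.0, 0.0
    while True:
        step = -math.log(random.random()) / (MU_S + MU_A)  # exponential free path
        z += uz * step
        if z < 0.0:
            return zmax, True                  # escaped through the surface
        zmax = max(zmax, z)
        if random.random() < MU_A / (MU_S + MU_A):
            return zmax, False                 # absorbed
        uz = random.uniform(-1.0, 1.0)         # isotropic: cos(theta) uniform

depths, escaped = zip(*(propagate() for _ in range(N_PHOTONS)))
reflectance = sum(escaped) / N_PHOTONS
mean_depth = sum(depths) / N_PHOTONS
```

The paper's method augments each scattering event in such a walk with polarization-state rotation and trajectory bookkeeping (the ingredients of the Berry phase), which is why it parallelizes the per-photon loop with OpenMP.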
Yamamoto, Takashi; Noma, Yukio; Yasuhara, Akio; Sakai, Shin-ichi
2003-10-31
We present the first study on the analytical methods of phenyltin compounds (PTs) in polychlorinated biphenyl (PCB)-based transformer oil samples. Tetraphenyltin (TePhT) has been used as stabilizer for some kinds of PCBs-based transformer oil formulations. Monophenyltin (MPhT), diphenyltin (DPhT) and triphenyltin (TrPhT) could have been formed from TePhT during long-term use. TePhT was directly measured by gas chromatograph (GC) connected with three types of detectors, a mass spectrometer (MS), a flame photometric detector (FPD) and an atomic emission detector (AED) after dilution with hexane. MPhT, DPhT and TrPhT were propylated with Grignard reagent before measurement. The MS was the most sensitive of the detectors, with detection limits of phenyltin compounds of 30 ng/ml (MPhT), 9.8 ng/ml (DPhT), 5.5 ng/ml (TrPhT) and 0.60 ng/ml (TePhT), respectively. From the viewpoint of selectivity, MS was slightly worse than other detectors, but interference from PCBs matrices was not significant under ordinary analytical conditions. Two used transformer oil samples were analyzed using the analytical methods developed in this study. TePhT and TrPhT were found in both samples.
Wang, Shu-min; Zhang, Ai-wu; Hu, Shao-xing; Wang, Jing-meng; Meng, Xian-gang; Duan, Yi-hao; Sun, Wei-dong
2015-02-01
When the rotation speed of a ground-based hyperspectral imaging system is too fast during image collection, exceeding the speed limit, data are missed in the rectified image and appear as black lines. At the same time, the collected raw images are seriously distorted, which affects feature classification and identification. To solve these problems, this paper first introduces each component of the ground-based hyperspectral imaging system and gives the general data collection process. The rotation speed during collection is controlled according to the image coverage of each frame and the image collection speed of the system. The spatial orientation model is then derived in detail, combining the start scanning angle, the stop scanning angle, the minimum distance between the sensor and the scanned object, etc. The oriented image is divided into grids and resampled with new spectra. The general workflow of distortion correction is presented. Since the spatial resolution differs between adjacent frames, and in order to preserve the highest resolution in the corrected image, the minimum ground sampling distance is employed as the grid unit for dividing the geo-referenced image. To account for the spectral distortion caused by direct sampling when the new uniform grids and the old uneven grids are superimposed to take pixel values, a precise spectral sampling method based on the position distribution is proposed. A distorted image collected at the Lao Si Cheng ruin in Zhangjiajie, Hunan province, was corrected with the proposed algorithm; the features keep their original geometric characteristics, verifying the validity of the algorithm. Spectra of different features were extracted to compute correlation coefficients, and the results show that the improved spectral sampling method is
A Venue-Based Method for Sampling Hard-to-Reach Populations
Farzana B. Muhib; Lillian S. Lin; Ann Stueve; Robin L. Miller; Wesley L. Ford; Wayne D. Johnson; Philip J. Smith
2001-01-01
… Traditional sample survey methods, such as random sampling from telephone or mailing lists, can yield low numbers of eligible respondents, while non-probability sampling introduces unknown biases …
杨春华; 杨玲
2016-01-01
To equip a combine harvester with automatic yield measurement, a new output-forecasting error-elimination model based on variable-weight stratified activation diffusion is proposed. The measuring system allows the harvester, while operating in the field, to measure its current speed, the harvested area, and the total grain output. Data are collected by a Hall sensor and a capacitive pressure sensor with high accuracy. An ADC0804 differential A/D conversion chip is selected, which effectively suppresses systematic error, and the data are transmitted to a single-chip microcomputer for processing. Each conversion is checked, and the variable-weight stratified activation diffusion model rejects data with large errors; the results are shown on an LCD display. The system was installed on a harvester, measured grain output values were obtained in tests, and the reliability of the system was verified by comparison with the true values.
NMR-based metabolomics: from sample preparation to applications in nutrition research.
Brennan, Lorraine
2014-11-01
Metabolomics is the study of metabolites present in biological samples such as biofluids, tissue/cellular extracts and culture media. Measurement of these metabolites is achieved through use of analytical techniques such as NMR and mass spectrometry coupled to liquid chromatography. Combining metabolomic data with multivariate data analysis tools allows the elucidation of alterations in metabolic pathways under different physiological conditions. Applications of NMR-based metabolomics have grown in recent years and it is now widely used across a number of disciplines. The present review gives an overview of the developments in the key steps involved in an NMR-based metabolomics study. Furthermore, there will be a particular emphasis on the use of NMR-based metabolomics in nutrition research. Copyright © 2014 Elsevier B.V. All rights reserved.
Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning
Julian Ricardo Diaz Posada
2017-01-01
Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which also depends on the robot's positioning in Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process modeled on its components: product, process and resource; and by automatically configuring a sample-based motion problem and the transition-based rapidly-exploring random tree algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, revealing its functionality and outlining future potential for optimal motion generation in robotic machining processes.
杨建宇; 岳彦利; 宋海荣; 叶思菁; 赵龙; 朱德海
2015-01-01
As a large agricultural country, China has a large population but not enough cultivated land. In 2011, the cultivated land per capita was 0.09 hm2, only 40% of the world average level; and the situation is worsening with the rapid development of the economy, industrialization and urbanization. Through a monitoring network for cultivated land quality in a county area, the distribution and change trend of cultivated land quality can be reflected. Besides, the quality of non-sampled locations should also be estimated from the data of the sampling points. Therefore, this paper proposes a new sampling method for monitoring the quality of arable land in a county area based on spatially balanced sampling, with a pre-processing method to determine the number of sampling points: preprocessing the cultivated land quality data before sampling, exploring the spatial correlation and spatial distribution pattern of cultivated land quality, and computing the appropriate number of sampling points by analyzing the trend of sampling number against sampling precision. The spatially balanced sampling method is aimed at optimizing the spatial sampling design for setting up the monitoring network. Sampling of a population is required to understand trends and patterns in natural resource management because of financial and time constraints. Spatially balanced sampling provides the mathematical foundation for statistical inference, and is efficient but remains flexible to inevitable logistical or practical constraints during field data collection. Integrated factors affect arable land quality inventory and monitoring, such as geomorphic conditions, altitude, gradient and transport cost. Factors are commonly used to modify sampling intensity; some factors, such as category, gradient, or accessibility, can be readily incorporated into the spatially balanced sampling design. In this paper, we take the distance between the sampling points and the main roads, the
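One simple route to the spatial balance described in this abstract is to partition the study area into a grid and draw at most one point per cell. This is an illustrative sketch of the goal of spreading samples evenly in space, not the specific design used in the study; the points and grid size are invented.

```python
# Sketch: grid-stratified selection as a minimal form of spatial balance.
# Illustrative only -- real spatially balanced designs (e.g. GRTS) are
# more sophisticated.
import random

def spatially_balanced_sample(points, s, rng):
    """Partition the unit square into an s x s grid and draw one point
    per non-empty cell, so the sample spreads evenly in space."""
    cells = {}
    for (x, y) in points:
        key = (min(int(x * s), s - 1), min(int(y * s), s - 1))
        cells.setdefault(key, []).append((x, y))
    return [rng.choice(pts) for pts in cells.values()]

rng = random.Random(1)
pts = [(0.1, 0.1), (0.15, 0.12), (0.9, 0.9), (0.6, 0.2)]
sample = spatially_balanced_sample(pts, s=2, rng=rng)
```

Clustered candidate points (the first two here) can contribute at most one sample, which is exactly the anti-clustering behaviour a monitoring network needs.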
Chauffert, Nicolas; Boucher, Marianne; Mériaux, Sébastien; Ciuciu, Philippe
2015-01-01
Performing k-space variable density sampling is a popular way of reducing scanning time in Magnetic Resonance Imaging (MRI). Unfortunately, given a sampling trajectory, it is not clear how to traverse it using gradient waveforms. In this paper, we show that existing methods [1, 2] can yield large traversal times if the trajectory contains high-curvature areas. Therefore, we consider here a new method for gradient waveform design based on the projection of an unrealistic initial trajectory onto the set of hardware constraints. Next, we show on realistic simulations that this algorithm allows implementing, in a reasonable time, variable density trajectories resulting from the piecewise-linear solution of the Travelling Salesman Problem. Finally, we demonstrate the application of this approach to 2D MRI reconstruction and 3D angiography in the mouse brain.
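The hardware constraints this abstract refers to, maximum gradient amplitude and maximum slew rate, can be illustrated with a toy 1D sketch. This is not the authors' projection algorithm; it only shows, with invented values and a naive clipping scheme, how a discrete k-space trajectory maps to gradient samples that must respect both limits.

```python
# Sketch: gradients g[i] are proportional to k-space steps, and hardware
# bounds both |g| (amplitude, gmax) and |g[i+1]-g[i]|/dt (slew, smax).
# 1D, gyromagnetic factor dropped; values are illustrative.

def constrain_gradients(k, dt, gmax, smax):
    """Return gradient samples for trajectory k, clipped to the limits."""
    g = [(k[i + 1] - k[i]) / dt for i in range(len(k) - 1)]
    out = []
    prev = 0.0
    for gi in g:
        gi = max(-gmax, min(gmax, gi))           # amplitude limit
        lo, hi = prev - smax * dt, prev + smax * dt
        gi = max(lo, min(hi, gi))                # slew-rate limit
        out.append(gi)
        prev = gi
    return out

g = constrain_gradients([0.0, 1.0, 3.0, 3.0], dt=1.0, gmax=1.5, smax=1.0)
```

Note that clipping distorts the trajectory actually traversed; the projection method the paper proposes instead searches for the realizable trajectory closest to the desired one, which is why it handles high-curvature regions better.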
A level set based segmentation approach for point-sampled surfaces
MIAO Yong-wei; FENG Jie-qing; ZHENG Guo-xian; PENG Qun-sheng
2007-01-01
Segmenting a complex 3D surface model into visually meaningful sub-parts is one of the fundamental problems in digital geometry processing. In this paper, a novel segmentation approach for point-sampled surfaces is proposed, based on the level set evolution scheme. To segment the model so as to align the patch boundaries with high-curvature zones, the driving speed function for the zero level set inside the narrow band is defined by the extended curvature field, which approaches zero speed as the propagating front approaches a high-curvature zone. The effectiveness of the proposed approach is demonstrated by our experimental results. Furthermore, two applications of model segmentation are illustrated, namely piecewise parameterization and local editing for point-sampled geometry.
Byrd, J.S.; Sand, R.J.
1976-10-01
A microcomputer-based pneumatic controller for neutron activation analysis was designed and built at the Savannah River Laboratory for analysis of large numbers of geologic samples for locating potential supplies of uranium ore for the National Uranium Resource Evaluation program. In this system, commercially available microcomputer logic modules are used to transport sample capsules through a network of pressurized air lines. The logic modules are interfaced to pneumatic valves, solenoids, and photo-optical detectors. The system operates from programs stored in firmware (permanent software). It also commands a minicomputer and a hard-wired pulse height analyzer for data collection and bookkeeping tasks. The advantage of the system is that major system changes can be implemented in the firmware with no hardware changes. This report describes the hardware, firmware, and software for the electronics system.
An inversion method based on random sampling for real-time MEG neuroimaging
Pascarella, Annalisa
2016-01-01
MagnetoEncephaloGraphy (MEG) has gained great interest in neurorehabilitation training due to its high temporal resolution. The challenge is to localize the active regions of the brain in a fast and accurate way. In this paper we use an inversion method based on random spatial sampling to solve the real-time MEG inverse problem. Several numerical tests on synthetic but realistic data show that the method takes just a few hundredths of a second on a laptop to produce an accurate map of the electric activity inside the brain. Moreover, it requires very little memory storage. For these reasons the random sampling method is particularly attractive in real-time MEG applications.
Analyzing EEG of quasi-brain-death based on dynamic sample entropy measures.
Ni, Li; Cao, Jianting; Wang, Rubin
2013-01-01
Giving a more definite criterion for brain death determination using the electroencephalograph (EEG) approach is vital both for reducing risks and for preventing medical misdiagnosis. This paper presents several novel adaptive computable entropy methods based on approximate entropy (ApEn) and sample entropy (SampEn) to monitor the varying symptoms of patients and to determine brain death. The proposed method is a dynamic extension of the standard ApEn and SampEn obtained by introducing a shifted time window. The main advantages of the developed dynamic approximate entropy (DApEn) and dynamic sample entropy (DSampEn) are real-time computation and practical use. Results from the analysis of 35 patients (63 recordings) show that the proposed methods are effective and perform well in evaluating brain consciousness states.
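The sliding-window construction this abstract describes is easy to sketch: compute SampEn on each shifted window of the signal. The following is a hedged, simplified SampEn (the template counting differs slightly from the standard definition) with illustrative parameters m, r and a toy signal.

```python
# Sketch: sample entropy over a shifted time window ("dynamic" SampEn).
# Simplified for clarity; not the authors' exact implementation.
import math

def sampen(x, m=2, r=0.2):
    """Simplified SampEn(m, r) of sequence x using Chebyshev distance."""
    def count(mm):
        n = len(x) - mm + 1
        t = [x[i:i + mm] for i in range(n)]
        return sum(
            1
            for i in range(n)
            for j in range(i + 1, n)
            if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r
        )
    b, a = count(m), count(m + 1)
    return math.inf if a == 0 or b == 0 else -math.log(a / b)

def dynamic_sampen(x, win, step, m=2, r=0.2):
    """SampEn over windows of length `win` shifted by `step`."""
    return [sampen(x[s:s + win], m, r) for s in range(0, len(x) - win + 1, step)]
```

A perfectly periodic signal yields a low, constant entropy across windows; a sudden flattening of EEG activity would show up as a drop in the dynamic entropy trace.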
SPR based immunosensor for detection of Legionella pneumophila in water samples
Enrico, De Lorenzis; Manera, Maria G.; Montagna, Giovanni; Cimaglia, Fabio; Chiesa, Maurizio; Poltronieri, Palmiro; Santino, Angelo; Rella, Roberto
2013-05-01
Detection of legionellae by water sampling is an important factor in epidemiological investigations of Legionnaires' disease and its prevention. To avoid the labor-intensive problems of conventional methods, an alternative, highly sensitive and simple method is proposed for detecting L. pneumophila in aqueous samples. A compact Surface Plasmon Resonance (SPR) instrumentation prototype, provided with proper microfluidic tools, is built. The developed immunosensor is capable of dynamically following the binding between antigens and the corresponding antibody molecules immobilized on the SPR sensor surface. The immobilization strategy used in this work includes an efficient step aimed at orienting the antibodies on the sensor surface. The feasibility of integrating SPR-based biosensing setups with microfluidic technologies, resulting in a low-cost and portable biosensor, is demonstrated.
Zhou, Shuai; Zhou, Ze-Quan; Zhao, Xuan-Xuan; Xiao, Yu-Hao; Xi, Gang; Liu, Jin-Ting; Zhao, Bao-Xiang
2015-09-01
We have developed a novel fluorescent chemosensor (DAM) based on dansyl and morpholine units for the detection of the mercury ion with excellent selectivity and sensitivity. In the presence of Hg2+ in a mixed solution of HEPES buffer (pH 7.5, 20 mM) and MeCN (2/8, v/v) at room temperature, the fluorescence of DAM was almost completely quenched from green to colorless with a fast response time. Moreover, DAM also showed excellent anti-interference capability even in the presence of large amounts of interfering ions. It is worth noting that DAM could be used to detect Hg2+ specifically in Yellow River samples, which implies potential applications of DAM in complicated environmental samples.
Majaron, B; Milanic, M, E-mail: boris.majaron@ijs.s [Jozef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia)
2010-03-01
Pulsed photothermal profiling involves reconstruction of temperature depth profile induced in a layered sample by single-pulse laser exposure, based on transient change in mid-infrared (IR) emission from its surface. Earlier studies have indicated that in watery tissues, featuring a pronounced spectral variation of mid-IR absorption coefficient, analysis of broadband radiometric signals within the customary monochromatic approximation adversely affects profiling accuracy. We present here an experimental comparison of pulsed photothermal profiling in layered agar gel samples utilizing a spectrally composite kernel matrix vs. the customary approach. By utilizing a custom reconstruction code, the augmented approach reduces broadening of individual temperature peaks to 14% of the absorber depth, in contrast to 21% obtained with the customary approach.
Performance improvement of classifier fusion for batch samples based on upper integral.
Feng, Hui-Min; Wang, Xi-Zhao
2015-03-01
The generalization ability of an extreme learning machine (ELM) can be improved by fusing a number of individual ELMs. This paper proposes a new scheme for fusing ELMs based on upper integrals, which differs from all the existing fuzzy integral models of classifier fusion. The new scheme uses the upper integral to reasonably assign tested samples to different ELMs for maximizing the classification efficiency. By solving an optimization problem of upper integrals, we obtain the proportions of samples assigned to different ELMs and their combinations. The definition of the upper integral guarantees, theoretically, that the classification accuracy of the fused ELM is not less than that of any individual ELM. Numerical simulations demonstrate that most existing fusion methodologies such as Bagging and Boosting can be improved by our upper integral model. Copyright © 2014 Elsevier Ltd. All rights reserved.
Agarwalla, Swapna; Sarma, Kandarpa Kumar
2016-06-01
Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that the former possess a natural ability to mimic biological behavior and thereby aid ASR modeling and processing. The current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. Here, in this paper, we report certain approaches based on machine learning (ML) used for the extraction of relevant samples from a big data space, and apply them to ASR using certain soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features and frequency-domain forms, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, from a large storage, relevant samples are selected and assimilated. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time
Effects of unstratified and centre-stratified randomization in multi-centre clinical trials.
Anisimov, Vladimir V
2011-01-01
This paper deals with the analysis of randomization effects in multi-centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre-stratified block-permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period accounting for the stochastic nature of the recruitment and effects of multiple centres is investigated. A new analytic approach using a Poisson-gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma distributed population) and its further extensions is proposed. Closed-form expressions for corresponding distributions of the predicted number of the patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on treatment arms caused by using centre-stratified randomization are investigated and for a large number of centres a normal approximation of imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed.
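The Poisson-gamma recruitment model this abstract analyses can be simulated directly: each centre's recruitment rate is a gamma draw from the population, and its patient count over a period t is Poisson with mean rate × t (marginally, a negative binomial). The parameter values below are illustrative, not from the paper, and the simulation is a sanity-check sketch, not the paper's closed-form analysis.

```python
# Sketch: simulate per-centre recruitment under a Poisson-gamma model.
# alpha/beta parameterize the rate population (mean alpha/beta).
import math
import random

def poisson(lam, rng):
    """Knuth's method for a Poisson draw (fine for moderate lam)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def simulate_recruitment(n_centres, t, alpha, beta, rng):
    """Per-centre patient counts over duration t; rates ~ Gamma(alpha, beta)."""
    counts = []
    for _ in range(n_centres):
        rate = rng.gammavariate(alpha, 1.0 / beta)  # scale = 1/beta
        counts.append(poisson(rate * t, rng))
    return counts

rng = random.Random(42)
counts = simulate_recruitment(n_centres=20, t=3.0, alpha=2.0, beta=1.0, rng=rng)
```

Repeating such simulations under centre-stratified block randomization would let one empirically check the paper's claim that treatment-arm imbalance has a negligible effect on power.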
Jet-mixing of initially-stratified liquid-liquid pipe flows: experiments and numerical simulations
Wright, Stuart; Ibarra-Hernandes, Roberto; Xie, Zhihua; Markides, Christos; Matar, Omar
2016-11-01
Low pipeline velocities lead to stratification and so-called 'phase slip' in horizontal liquid-liquid flows due to differences in liquid densities and viscosities. Stratified flows have no suitable single point for sampling, from which average phase properties (e.g. fractions) can be established. Inline mixing, achieved by static mixers or jets in cross-flow (JICF), is often used to overcome liquid-liquid stratification by establishing unstable two-phase dispersions for sampling. Achieving dispersions in liquid-liquid pipeline flows using JICF is the subject of this experimental and modelling work. The experimental facility involves a matched-refractive-index liquid-liquid-solid system, featuring an ETFE test section, with silicone oil and a 51-wt% glycerol solution as the experimental liquids. The matching allows the dispersed-phase fractions and velocity fields to be established through advanced optical techniques, namely PLIF (for phase) and PTV or PIV (for velocity fields). CFD codes using the volume of fluid (VOF) method are then used to demonstrate JICF breakup and dispersion in stratified pipeline flows. A number of simple jet configurations are described and their dispersion effectiveness is compared with the experimental results. Funding from Cameron for a Ph.D. studentship (SW) is gratefully acknowledged.
Geochemical signature of land-based activities in Caribbean coral surface samples
Prouty, N.G.; Hughen, K.A.; Carilli, J.
2008-01-01
Anthropogenic threats, such as increased sedimentation, agrochemical run-off, coastal development, tourism, and overfishing, are of great concern to the Mesoamerican Caribbean Reef System (MACR). Trace metals in corals can be used to quantify and monitor the impact of these land-based activities. Surface coral samples from the MACR were investigated for trace metal signatures resulting from relative differences in water quality. Samples were analyzed at three spatial scales (colony, reef, and regional) as part of a hierarchical multi-scale survey. A primary goal of the paper is to elucidate the extrapolation of information between fine-scale variation at the colony or reef scale and broad-scale patterns at the regional scale. Of the 18 metals measured, five yielded statistical differences at the colony and/or reef scale, suggesting fine-scale spatial heterogeneity not conducive to regional interpretation. Five metals yielded a statistical difference at the regional scale with an absence of a statistical difference at either the colony or reef scale. These metals are barium (Ba), manganese (Mn), chromium (Cr), copper (Cu), and antimony (Sb). The most robust geochemical indicators of land-based activities are coral Ba and Mn concentrations, which are elevated in samples from the southern region of the Gulf of Honduras relative to those from the Turneffe Islands. These findings are consistent with the occurrence of the most significant watersheds in the MACR from southern Belize to Honduras, which contribute sediment-laden freshwater to the coastal zone primarily as a result of human alteration to the landscape (e.g., deforestation and agricultural practices). Elevated levels of Cu and Sb were found in samples from Honduras and may be linked to industrial shipping activities where copper-antimony additives are commonly used in antifouling paints. Results from this study strongly demonstrate the impact of terrestrial runoff and anthropogenic activities on coastal water
Ahola Kohut, Sara; Stinson, Jennifer; Davies-Chalmers, Cleo; Ruskin, Danielle; van Wyk, Margaret
2017-08-01
Mindfulness-based interventions (MBIs) have emerged as a promising strategy for individuals with a chronic illness, given their versatility in targeting both physical and mental health outcomes. However, research to date has focused on adult or community-based populations. To systematically review and critically appraise MBIs in clinical pediatric samples living with chronic physical illness. Electronic searches were conducted by a Library Information Specialist familiar with the field by using EMBASE, PsycINFO, MEDLINE, CINAHL, Web of Science, and EBM Reviews databases. Study Eligibility, Participants, and Interventions: Published English peer-reviewed articles of MBIs in clinical samples of children and adolescents (3-18 years) with chronic physical illness. Two reviewers independently selected articles for review and extracted data. Results are narratively described, and the reporting quality of each study was assessed via the STROBE Checklist. Of a total 4710 articles, 8 articles met inclusion criteria. All studies were small (n < 20, except 1 study of n = 59), included only outpatient adolescent samples, and focused on feasibility and acceptability of MBI; only 1 study included a comparison group (n = 1). No studies included online components or remote attendance. All studies found that MBI was acceptable to adolescents, whereas feasibility and implementation outcomes were mixed. Many studies were underpowered to detect significant differences post-MBI, but MBI did demonstrate improvements in emotional distress in several studies. Conclusions and Implications of Key Findings: The literature on MBIs is preliminary in nature, focusing on adapting and developing MBI for adolescents. Although MBIs appear to be a promising approach to coping with symptoms related to chronic illness in adolescents, future research with adequate sample sizes and rigorous research designs is warranted.
Causon, Tim J; Hann, Stephan
2016-09-28
Fermentation and cell culture biotechnology in the form of so-called "cell factories" now play an increasingly significant role in production of both large (e.g. proteins, biopharmaceuticals) and small organic molecules for a wide variety of applications. However, associated metabolic engineering optimisation processes relying on genetic modification of organisms used in cell factories, or alteration of production conditions remain a challenging undertaking for improving the final yield and quality of cell factory products. In addition to genomic, transcriptomic and proteomic workflows, analytical metabolomics continues to play a critical role in studying detailed aspects of critical pathways (e.g. via targeted quantification of metabolites), identification of biosynthetic intermediates, and also for phenotype differentiation and the elucidation of previously unknown pathways (e.g. via non-targeted strategies). However, the diversity of primary and secondary metabolites and the broad concentration ranges encompassed during typical biotechnological processes means that simultaneous extraction and robust analytical determination of all parts of interest of the metabolome is effectively impossible. As the integration of metabolome data with transcriptome and proteome data is an essential goal of both targeted and non-targeted methods addressing production optimisation goals, additional sample preparation steps beyond necessary sampling, quenching and extraction protocols including clean-up, analyte enrichment, and derivatisation are important considerations for some classes of metabolites, especially those present in low concentrations or exhibiting poor stability. This contribution critically assesses the potential of current sample preparation strategies applied in metabolomic studies of industrially-relevant cell factory organisms using mass spectrometry-based platforms primarily coupled to liquid-phase sample introduction (i.e. flow injection, liquid
3D nanoscale imaging of biological samples with laboratory-based soft X-ray sources
Dehlinger, Aurélie; Blechschmidt, Anne; Grötzsch, Daniel; Jung, Robert; Kanngießer, Birgit; Seim, Christian; Stiel, Holger
2015-09-01
In microscopy, where the theoretical resolution limit depends on the wavelength of the probing light, radiation in the soft X-ray regime can be used to analyze samples that cannot be resolved with visible-light microscopes. In the case of soft X-ray microscopy in the water window, the energy range of the radiation lies between the absorption edges of carbon (at 284 eV, 4.36 nm) and oxygen (543 eV, 2.34 nm). As a result, carbon-based structures, such as biological samples, possess a strong absorption, whereas e.g. water is more transparent to this radiation. Microscopy in the water window therefore allows the structural investigation of aqueous samples with resolutions of a few tens of nanometers and a penetration depth of up to 10 μm. The development of highly brilliant laser-produced plasma sources has enabled the transfer of X-ray microscopy, formerly bound to synchrotron sources, to the laboratory, which opens access to this method for a broader scientific community. The Laboratory Transmission X-ray Microscope at the Berlin Laboratory for innovative X-ray technologies (BLiX) runs with a laser-produced nitrogen plasma that emits radiation in the soft X-ray regime. This high penetration depth can be exploited to analyze biological samples in their natural state and at several projection angles. The obtained tomogram is the key to a more precise and global analysis of samples originating from various fields of life science.
Erener, Arzu; Sivas, A. Abdullah; Selcuk-Kestel, A. Sevtap; Düzgün, H. Sebnem
2017-07-01
All of the quantitative landslide susceptibility mapping (QLSM) methods require two basic data types, namely, a landslide inventory and factors that influence landslide occurrence (landslide influencing factors, LIF). Depending on the type of landslides, the nature of the triggers and the LIF, the accuracy of the QLSM methods differs. Moreover, how to balance the number of 0's (non-occurrence) and 1's (occurrence) in the training set obtained from the landslide inventory, and how to select which of the 1's and 0's to include in QLSM models, play a critical role in the accuracy of the QLSM. Although the performance of various QLSM methods has been widely investigated in the literature, the challenge of training-set construction has not been adequately investigated for the QLSM methods. In order to tackle this challenge, in this study three different training-set selection strategies, along with the original data set, are used for testing the performance of three different regression methods, namely Logistic Regression (LR), Bayesian Logistic Regression (BLR) and Fuzzy Logistic Regression (FLR). The first sampling strategy is proportional random sampling (PRS), which takes into account a weighted selection of landslide occurrences in the sample set. The second method, non-selective nearby sampling (NNS), includes randomly selected sites and their surrounding neighboring points at certain preselected distances to include the impact of clustering. Selective nearby sampling (SNS) is the third method, which concentrates on the group of 1's and their surrounding neighborhood. A randomly selected group of landslide sites and their neighborhood are considered in the analyses, similar to the NNS parameters. It is found that the LR-PRS, FLR-PRS and BLR-whole-data set-ups, in that order, yield the best fits among the alternatives. The results indicate that in QLSM based on regression models, avoidance of spatial correlation in the data set is critical for the model's performance.
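The first strategy above, proportional random sampling, can be sketched directly: draw a training set in which the 1's and 0's appear in a chosen proportion. The 50/50 ratio and the toy cells are illustrative; the actual weighting used in the study may differ.

```python
# Sketch: proportional random sampling (PRS) of a binary training set.
# Illustrative only -- cell features and the class ratio are invented.
import random

def proportional_random_sample(cells, n, pos_frac, rng):
    """cells: list of (features, label) with label in {0, 1}.
    Draw n cells, of which round(n * pos_frac) carry label 1."""
    pos = [c for c in cells if c[1] == 1]
    neg = [c for c in cells if c[1] == 0]
    n_pos = round(n * pos_frac)
    sample = rng.sample(pos, n_pos) + rng.sample(neg, n - n_pos)
    rng.shuffle(sample)
    return sample

rng = random.Random(0)
cells = [((i,), 1) for i in range(10)] + [((i,), 0) for i in range(90)]
sample = proportional_random_sample(cells, n=20, pos_frac=0.5, rng=rng)
```

Balancing the classes this way prevents a logistic regression from being dominated by the overwhelming majority of non-landslide cells, which is the motivation the abstract gives for weighted selection.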