WorldWideScience

Sample records for select samples based

  1. Patch-based visual tracking with online representative sample selection

    Science.gov (United States)

    Ou, Weihua; Yuan, Di; Li, Donghao; Liu, Bin; Xia, Daoxun; Zeng, Wu

    2017-05-01

    Occlusion is one of the most challenging problems in visual object tracking. Recently, many discriminative methods have been proposed to deal with this problem. For discriminative methods, it is difficult to select representative samples for updating the target template. In general, the holistic bounding boxes that contain the tracked results are selected as positive samples. However, when the object is occluded, this simple strategy easily introduces noise into the training data set and the target template, which causes the tracker to drift severely away from the target. To address this problem, we propose a robust patch-based visual tracker with online representative sample selection. Different from previous works, we divide the object and the candidates uniformly into several patches and propose a score function to calculate the score of each patch independently. Then, the average score is adopted to determine the optimal candidate. Finally, we utilize the non-negative least-squares method to find the representative samples, which are used to update the target template. The experimental results on the object tracking benchmark 2013 and on 13 challenging sequences show that the proposed method is robust to occlusion and achieves promising results.
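
    The non-negative least-squares selection step lends itself to a short sketch. Below is a minimal, hypothetical reading of it in Python; the dictionary-of-candidates setup, the keep parameter, and the toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import nnls

def select_representative_samples(candidates, target, keep=5):
    """Pick the candidate patches that best reconstruct the target under a
    non-negative least-squares fit; large coefficients mark representative samples."""
    coeffs, _ = nnls(candidates, target)        # coeffs >= 0 by construction
    order = np.argsort(coeffs)[::-1]            # largest weights first
    return order[:keep], coeffs

# toy usage: 100-pixel flattened patches, 20 candidates
rng = np.random.default_rng(0)
cands = rng.random((100, 20))
tgt = cands[:, :3] @ np.array([0.5, 0.3, 0.2])  # target mixes three candidates
idx, w = select_representative_samples(cands, tgt)
print(idx)  # indices of the selected representative samples
```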

  2. Selection bias in population-based cancer case-control studies due to incomplete sampling frame coverage.

    Science.gov (United States)

    Walsh, Matthew C; Trentham-Dietz, Amy; Gangnon, Ronald E; Nieto, F Javier; Newcomb, Polly A; Palta, Mari

    2012-06-01

    Increasing numbers of individuals are choosing to opt out of population-based sampling frames due to privacy concerns. This is especially a problem in the selection of controls for case-control studies, as the cases often arise from relatively complete population-based registries, whereas control selection requires a sampling frame. If opting out is also related to risk factors, bias can arise. We linked breast cancer cases who reported having a valid driver's license from the 2004-2008 Wisconsin women's health study (N = 2,988) with a master list of licensed drivers from the Wisconsin Department of Transportation (WDOT). This master list excludes Wisconsin drivers who requested that their information not be sold by the state. Multivariate-adjusted selection probability ratios (SPRs) were calculated to estimate potential bias when using this driver's license sampling frame to select controls. A total of 962 cases (32%) had opted out of the WDOT sampling frame. Cases aged <40 (SPR = 0.90), with income either unreported (SPR = 0.89) or greater than $50,000 (SPR = 0.94), with lower parity (SPR = 0.96 per one-child decrease), and with hormone use (SPR = 0.93) were significantly less likely to be covered by the WDOT sampling frame (at the α = 0.05 level). Our results indicate the potential for selection bias due to differential opt out between various demographic and behavioral subgroups of controls. As selection bias may differ by exposure and study base, the assessment of potential bias needs to be ongoing. SPRs can be used to predict the direction of bias when cases and controls stem from different sampling frames in population-based case-control studies.
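
    A crude, unadjusted selection probability ratio can be computed directly as the coverage probability in a subgroup relative to a reference group; the toy sketch below uses invented data and does not reproduce the study's multivariate adjustment.

```python
import pandas as pd

# toy case data: whether each case appears in the sampling frame
df = pd.DataFrame({
    "covered":   [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
    "age_lt_40": [1, 1, 0, 0, 1, 0, 0, 1, 0, 0],
})

# unadjusted SPR: P(covered | subgroup) / P(covered | reference)
p_sub = df.loc[df.age_lt_40 == 1, "covered"].mean()
p_ref = df.loc[df.age_lt_40 == 0, "covered"].mean()
print(f"SPR = {p_sub / p_ref:.2f}")  # < 1 means the subgroup is under-covered
```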

  3. Sample selection based on kernel-subclustering for the signal reconstruction of multifunctional sensors

    International Nuclear Information System (INIS)

    Wang, Xin; Wei, Guo; Sun, Jinwei

    2013-01-01

    The signal reconstruction methods based on inverse modeling for the signal reconstruction of multifunctional sensors have been widely studied in recent years. To improve the accuracy, the reconstruction methods have become more and more complicated because of the increase in the number of model parameters and sample points. However, another factor that affects the reconstruction accuracy, the position of the sample points, has not been studied. A reasonable selection of the sample points could improve the signal reconstruction quality in at least two ways: improved accuracy with the same number of sample points, or the same accuracy obtained with a smaller number of sample points. Both ways are valuable for improving the accuracy and decreasing the workload, especially for large batches of multifunctional sensors. In this paper, we propose a sample selection method based on kernel-subclustering that distills groupings of the sample data and produces a representation of the data set for inverse modeling. The method calculates the distance between two data points based on the kernel-induced distance instead of the conventional distance. The kernel function is a generalization of the distance metric by mapping the data that are non-separable in the original space into homogeneous groups in the high-dimensional space. The method obtained the best results compared with the other three methods in the simulation. (paper)
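
    The kernel-induced distance mentioned here has a standard closed form, d(x, y)^2 = k(x, x) - 2 k(x, y) + k(y, y); a minimal sketch with an RBF kernel (the kernel choice and its parameter are assumptions):

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_distance(x, y, k=rbf):
    """Distance induced by the kernel's implicit feature map:
    d(x, y)^2 = k(x, x) - 2 k(x, y) + k(y, y)."""
    return np.sqrt(max(k(x, x) - 2 * k(x, y) + k(y, y), 0.0))

a, b = np.array([0.0, 1.0]), np.array([1.5, 0.5])
print(kernel_distance(a, b))  # used in place of the Euclidean distance
```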

  4. Acrylamide exposure among Turkish toddlers from selected cereal-based baby food samples.

    Science.gov (United States)

    Cengiz, Mehmet Fatih; Gündüz, Cennet Pelin Boyacı

    2013-10-01

    In this study, acrylamide exposure from selected cereal-based baby food samples was investigated among toddlers aged 1-3 years in Turkey. The study comprised three steps. The first step was collecting food consumption data and the toddlers' physical characteristics, such as gender, age and body weight, using a questionnaire administered to parents by a trained interviewer between January and March 2012. The second step was determining the acrylamide levels in the food samples that were reported by the parents in the questionnaire, using a gas chromatography-mass spectrometry (GC-MS) method. The last step was combining the determined acrylamide levels in the selected food samples with the individual food consumption and body weight data, using a deterministic approach to estimate acrylamide exposure levels. The mean acrylamide levels of baby biscuits, breads, baby bread-rusks, crackers, biscuits, breakfast cereals and powdered cereal-based baby foods were 153, 225, 121, 604, 495, 290 and 36 μg/kg, respectively. The minimum, mean and maximum acrylamide exposures were estimated to be 0.06, 1.43 and 6.41 μg/kg BW per day, respectively. The foods contributing to acrylamide exposure, ranked from high to low, were bread, crackers, biscuits, baby biscuits, powdered cereal-based baby foods, baby bread-rusks and breakfast cereals. Copyright © 2013 Elsevier Ltd. All rights reserved.
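
    The deterministic approach described amounts to exposure = Σ(concentration_f × daily intake_f) / body weight. A worked toy example (the intake figures and body weight are invented for illustration; the concentrations echo the reported means):

```python
# toy deterministic exposure estimate for one toddler (illustrative values only)
body_weight_kg = 12.0
# food: (mean acrylamide level in ug/kg, assumed daily intake in kg/day)
foods = {
    "bread":         (225, 0.050),
    "baby biscuits": (153, 0.020),
    "crackers":      (604, 0.010),
}

exposure = sum(level * intake for level, intake in foods.values()) / body_weight_kg
print(f"{exposure:.2f} ug/kg BW per day")  # ~1.70 for these toy figures
```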

  5. A genetic algorithm-based framework for wavelength selection on sample categorization.

    Science.gov (United States)

    Anzanello, Michel J; Yamashita, Gabrielli; Marcelo, Marcelo; Fogliatto, Flávio S; Ortiz, Rafael S; Mariotti, Kristiane; Ferrão, Marco F

    2017-08-01

    In forensic and pharmaceutical scenarios, the application of chemometrics and optimization techniques has unveiled common and peculiar features of seized medicine and drug samples, helping investigative forces to track illegal operations. This paper proposes a novel framework aimed at identifying relevant subsets of attenuated total reflectance Fourier transform infrared (ATR-FTIR) wavelengths for classifying samples into two classes, for example authentic or forged categories in case of medicines, or salt or base form in cocaine analysis. In the first step of the framework, the ATR-FTIR spectra were partitioned into equidistant intervals and the k-nearest neighbour (KNN) classification technique was applied to each interval to insert samples into proper classes. In the next step, selected intervals were refined through the genetic algorithm (GA) by identifying a limited number of wavelengths from the intervals previously selected aimed at maximizing classification accuracy. When applied to Cialis®, Viagra®, and cocaine ATR-FTIR datasets, the proposed method substantially decreased the number of wavelengths needed to categorize, and increased the classification accuracy. From a practical perspective, the proposed method provides investigative forces with valuable information towards monitoring illegal production of drugs and medicines. In addition, focusing on a reduced subset of wavelengths allows the development of portable devices capable of testing the authenticity of samples during police checking events, avoiding the need for later laboratorial analyses and reducing equipment expenses. Theoretically, the proposed GA-based approach yields more refined solutions than the current methods relying on interval approaches, which tend to insert irrelevant wavelengths in the retained intervals. Copyright © 2016 John Wiley & Sons, Ltd.
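
    A compact genetic algorithm over binary wavelength masks, scored by KNN cross-validation accuracy, conveys the core idea. This is a simplified sketch on synthetic spectra; the population size, mutation rate, and the omission of the interval-partitioning first step are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

def fitness(mask, X, y):
    """Cross-validated KNN accuracy restricted to the selected wavelengths."""
    if mask.sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(knn, X[:, mask], y, cv=3).mean()

def ga_select(X, y, pop=20, gens=30, p_mut=0.02):
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.1                  # sparse initial masks
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in population])
        parents = population[np.argsort(scores)[-pop // 2:]]  # keep the best half
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut                    # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    scores = np.array([fitness(m, X, y) for m in population])
    return population[scores.argmax()]

# toy spectra: 60 samples x 120 "wavelengths", two classes, one informative band
X = rng.random((60, 120))
y = rng.integers(0, 2, 60)
X[y == 1, 40:45] += 0.8
best = ga_select(X, y)
print("selected wavelengths:", np.where(best)[0])
```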

  6. Sample Selection for Training Cascade Detectors.

    Science.gov (United States)

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.
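
    The "most informative false positives" can be read as hard-negative mining: keep the background windows the current stage scores highest. A minimal illustration (the zero decision threshold and the toy scores are assumptions):

```python
import numpy as np

def select_informative_negatives(stage_scores, labels, n_keep):
    """Keep the false positives the current stage is most confident about;
    these feed the training of the next cascade stage."""
    fp = np.where((labels == 0) & (stage_scores > 0))[0]   # misfired windows
    hardest = fp[np.argsort(stage_scores[fp])[::-1]]       # highest score first
    return hardest[:n_keep]

scores = np.array([2.1, -0.5, 0.8, 1.7, -1.2, 0.3])
labels = np.array([0,    0,   0,   1,   0,    0])
print(select_informative_negatives(scores, labels, n_keep=2))  # -> [0 2]
```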

  7. Sample Selection for Training Cascade Detectors.

    Directory of Open Access Journals (Sweden)

    Noelia Vállez

    Full Text Available Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.

  8. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.

  9. Transfer function design based on user selected samples for intuitive multivariate volume exploration

    KAUST Repository

    Zhou, Liang

    2013-02-01

    Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine these features using segmentation brushes. Interactivity is achieved in our system and different views are tightly linked. Use cases show that our system has been successfully applied to simulation and complicated seismic data sets. © 2013 IEEE.

  10. Transfer function design based on user selected samples for intuitive multivariate volume exploration

    KAUST Repository

    Zhou, Liang; Hansen, Charles

    2013-01-01

    Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine these features using segmentation brushes. Interactivity is achieved in our system and different views are tightly linked. Use cases show that our system has been successfully applied to simulation and complicated seismic data sets. © 2013 IEEE.

  11. Selecting Sample Preparation Workflows for Mass Spectrometry-Based Proteomic and Phosphoproteomic Analysis of Patient Samples with Acute Myeloid Leukemia.

    Science.gov (United States)

    Hernandez-Valladares, Maria; Aasebø, Elise; Selheim, Frode; Berven, Frode S; Bruserud, Øystein

    2016-08-22

    Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.

  12. A quick method based on SIMPLISMA-KPLS for simultaneously selecting outlier samples and informative samples for model standardization in near infrared spectroscopy

    Science.gov (United States)

    Li, Li-Na; Ma, Chang-Ming; Chang, Ming; Zhang, Ren-Cheng

    2017-12-01

    A novel method based on SIMPLe-to-use Interactive Self-modeling Mixture Analysis (SIMPLISMA) and Kernel Partial Least Squares (KPLS), named SIMPLISMA-KPLS, is proposed in this paper for the simultaneous selection of outlier samples and informative samples. It is a quick algorithm for model standardization (also called model transfer) in near infrared (NIR) spectroscopy. NIR data from corn samples, analyzed for protein content, are used to evaluate the proposed method. Piecewise direct standardization (PDS) is employed for model transfer, and SIMPLISMA-PDS-KPLS is compared with KS-PDS-KPLS in terms of the prediction accuracy for protein content and the calculation speed of each algorithm. The conclusions are that SIMPLISMA-KPLS can be used as an alternative sample selection method for model transfer. Although its accuracy is similar to that of Kennard-Stone (KS), it differs from KS in that it employs concentration information in the selection program. This ensures that analyte information is involved in the analysis and that the spectra (X) of the selected samples are interrelated with the concentrations (y). It can also be used for outlier sample elimination simultaneously, through validation of the calibration. The running-time statistics make clear that the sample selection process is more rapid when using KPLS. The quick SIMPLISMA-KPLS algorithm is beneficial for improving the speed of online measurement using NIR spectroscopy.

  13. Selecting Sample Preparation Workflows for Mass Spectrometry-Based Proteomic and Phosphoproteomic Analysis of Patient Samples with Acute Myeloid Leukemia

    Directory of Open Access Journals (Sweden)

    Maria Hernandez-Valladares

    2016-08-01

    Full Text Available Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.

  14. A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model

    Directory of Open Access Journals (Sweden)

    Michael Pfaffermayr

    2014-10-01

    Full Text Available The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test on whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.

  15. Polymeric membrane sensors based on Cd(II) Schiff base complexes for selective iodide determination in environmental and medicinal samples.

    Science.gov (United States)

    Singh, Ashok Kumar; Mehtab, Sameena

    2008-01-15

    The two cadmium chelates of Schiff bases, N,N'-bis(salicylidene)-1,4-diaminobutane (Cd-S(1)) and N,N'-bis(salicylidene)-3,4-diaminotoluene (Cd-S(2)), have been synthesized and explored as ionophores for preparing PVC-based membrane sensors selective to the iodide (I-) ion. Potentiometric investigations indicate a high affinity of these receptors for the iodide ion. Polyvinyl chloride (PVC)-based membranes of Cd-S(1) and Cd-S(2), using hexadecyltrimethylammonium bromide (HTAB) as a cation discriminator and o-nitrophenyloctyl ether (o-NPOE), dibutylphthalate (DBP), acetophenone (AP) and tributylphosphate (TBP) as plasticizing solvent mediators, were prepared and investigated as iodide-selective sensors. The best performance was shown by the membrane of composition (w/w) Cd-S(1) (7%):PVC (31%):DBP (60%):HTAB (2%). The sensor works well over a wide concentration range, 5.3 x 10^-7 to 1.0 x 10^-2 M, with Nernstian compliance (59.2 mV decade^-1 of activity) within the pH range 2.5-9.0, has a response time of 11 s, and shows good selectivity for iodide over a number of anions. The sensor exhibits an adequate lifetime (3 months) with good reproducibility (S.D. +/- 0.24 mV) and could be used successfully for the determination of iodide content in environmental water samples and mouthwash samples.
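
    The quoted Nernstian slope implies roughly a 59.2 mV drop in potential per tenfold increase in iodide activity. A toy calculation (the standard-potential value E0 is invented for illustration):

```python
import math

E0, S = 400.0, 59.2   # mV; E0 is a hypothetical standard potential, S the slope
for activity in (1e-6, 1e-5, 1e-4):
    E = E0 - S * math.log10(activity)   # anion electrode: E falls as activity rises
    print(f"a = {activity:.0e}  ->  E = {E:.1f} mV")
```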

  16. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The more precision required, the greater the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over the nonprobability sampling techniques, because the results of the study can then be generalized to the target population.
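
    These quantities map onto the classic sample-size formula for estimating a proportion, n0 = z^2 p(1 - p) / d^2, optionally shrunk by a finite-population correction. A small sketch:

```python
import math

def sample_size_proportion(p, margin, z=1.96, population=None):
    """n0 = z^2 * p * (1 - p) / d^2, with optional finite-population correction."""
    n0 = z ** 2 * p * (1 - p) / margin ** 2
    if population:
        n0 = n0 / (1 + (n0 - 1) / population)   # finite-population correction
    return math.ceil(n0)

# expected prevalence 30%, +/-5% precision, 95% confidence
print(sample_size_proportion(0.30, 0.05))                   # 323
print(sample_size_proportion(0.30, 0.05, population=2000))  # 278
```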

  17. Study on the effects of sample selection on spectral reflectance reconstruction based on the algorithm of compressive sensing

    International Nuclear Information System (INIS)

    Zhang, Leihong; Liang, Dong

    2016-01-01

    To address the problem of low reconstruction efficiency and precision, in this paper different samples are selected to reconstruct spectral reflectance, and a new spectral reflectance reconstruction method based on the compressive sensing algorithm is provided. Four matte color cards with different numbers of colors, namely the ColorChecker Color Rendition Chart, the ColorChecker SG, the Pantone copperplate-paper spot color card, and the Munsell color card, are chosen as training samples; the spectral image is reconstructed with the compressive sensing, pseudo-inverse, and Wiener algorithms, and the results are compared. These spectral reconstruction methods are evaluated by root mean square error and color difference accuracy. The experiments show that the cumulative contribution rate and color difference of the Munsell color card are better than those of the other three color cards under the same reconstruction conditions, and that the accuracy of the spectral reconstruction is affected by which training sample is used. The key point is that the uniformity and representativeness of the training sample selection have an important influence on the reconstruction. In this paper, the influence of sample selection on spectral image reconstruction is studied. The precision of the spectral reconstruction based on the compressive sensing algorithm is higher than that of the traditional spectral reconstruction algorithms. The MATLAB simulation results show that spectral reconstruction precision and efficiency are affected by the different numbers of colors in the training sample. (paper)

  18. 40 CFR 89.507 - Sample selection.

    Science.gov (United States)

    2010-07-01

    ... Auditing § 89.507 Sample selection. (a) Engines comprising a test sample will be selected at the location...). However, once the manufacturer ships any test engine, it relinquishes the prerogative to conduct retests...

  19. 40 CFR 90.507 - Sample selection.

    Science.gov (United States)

    2010-07-01

    ... Auditing § 90.507 Sample selection. (a) Engines comprising a test sample will be selected at the location... manufacturer ships any test engine, it relinquishes the prerogative to conduct retests as provided in § 90.508...

  20. Field-based random sampling without a sampling frame: control selection for a case-control study in rural Africa.

    Science.gov (United States)

    Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E

    2001-01-01

    Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.

  1. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    Science.gov (United States)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image size is basically limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount in which uncertainties in the translational and rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, which quantifies the constellations of, and measures the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during translational motion in the sample mount is also discussed.

  2. Robust online tracking via adaptive samples selection with saliency detection

    Science.gov (United States)

    Yan, Jia; Chen, Xi; Zhu, QiuPing

    2013-12-01

    Online tracking has been shown to be successful in tracking previously unknown objects. However, there are two important factors which lead to the drift problem of online tracking: one is how to select correctly labeled samples even when the target locations are inaccurate, and the other is how to handle confusors which have features similar to the target. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To deal with the problem of degrading the classifiers using misaligned samples, we introduce the saliency detection method to our tracking problem. Saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as negative samples, we propose a reasonable selection criterion, in which both the saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as a binary classification via an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.
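
    The spectral-residual saliency computation (Hou and Zhang's formulation, which this kind of tracker builds on) is short enough to sketch; the filter sizes and the toy image below are assumptions rather than the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(image):
    """Saliency map from the image's spectral residual: keep what remains of
    the log-amplitude spectrum after removing its locally smoothed trend."""
    f = np.fft.fft2(image)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(saliency, sigma=2.5)

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0                       # a bright blob on a dark background
sal = spectral_residual_saliency(img)
print(np.unravel_index(sal.argmax(), sal.shape))  # peak lies on/near the blob
```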

  3. Colorimetric biomimetic sensor systems based on molecularly imprinted polymer membranes for highly-selective detection of phenol in environmental samples

    Directory of Open Access Journals (Sweden)

    Sergeyeva T. A.

    2014-05-01

    Full Text Available Aim. Development of an easy-to-use colorimetric sensor system for fast and accurate detection of phenol in environmental samples. Methods. Technique of molecular imprinting, method of in situ polymerization of molecularly imprinted polymer membranes. Results. The proposed sensor is based on free-standing molecularly imprinted polymer (MIP) membranes, synthesized by in situ polymerization, and having in their structure artificial binding sites capable of selective phenol recognition. The quantitative detection of phenol, selectively adsorbed by the MIP membranes, is based on its reaction with 4-aminoantipyrine, which gives a pink-colored product. The intensity of staining of the MIP membrane is proportional to phenol concentration in the analyzed sample. Phenol can be detected within the range 50 nM–10 mM with limit of detection 50 nM, which corresponds to the concentrations that have to be detected in natural and waste waters in accordance with environmental protection standards. Stability of the MIP-membrane-based sensors was assessed during 12 months of storage at room temperature. Conclusions. The sensor system provides highly-selective and sensitive detection of phenol in both model and real (drinking, natural, and waste) water samples. As compared to traditional methods of phenol detection, the proposed system is characterized by simplicity of operation and can be used in non-laboratory conditions.

  4. A novel heterogeneous training sample selection method on space-time adaptive processing

    Science.gov (United States)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The ground-target detection performance of space-time adaptive processing (STAP) decreases when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. In order to solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance, so as to reject contaminated training samples. Thirdly, the cell under test (CUT) and the residual training samples are projected into the orthogonal subspace of the target in the CUT, and the mean-Hausdorff distances between the projected CUT and the training samples are calculated. Fourthly, the distances are sorted by value and the training samples with the larger values are preferentially selected to realize the dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
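
    The mean-Hausdorff distance used here to score sample similarity can be sketched directly: for each point of one set take the distance to the nearest point of the other set, average, and symmetrize (this averaging convention is one common variant and an assumption):

```python
import numpy as np

def mean_hausdorff(A, B):
    """Mean (average) Hausdorff distance between point sets A and B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.1], [1.0, 0.2], [3.0, 3.0]])
print(mean_hausdorff(a, b))  # small values indicate similar samples
```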

  5. Selective Distance-Based K+ Quantification on Paper-Based Microfluidics.

    Science.gov (United States)

    Gerold, Chase T; Bakker, Eric; Henry, Charles S

    2018-04-03

    In this study, paper-based microfluidic devices (μPADs) capable of K+ quantification in aqueous samples, as well as in human serum, using both colorimetric and distance-based methods are described. A lipophilic phase containing potassium ionophore I (valinomycin) was utilized to achieve highly selective quantification of K+ in the presence of Na+, Li+, and Mg2+ ions. Successful addition of a suspended lipophilic phase to a wax-printed paper-based device is described and offers a solution to current approaches that rely on organic solvents, which damage wax barriers. The approach provides an avenue for future alkali/alkaline quantification utilizing μPADs. Colorimetric spot tests allowed for K+ quantification from 0.1-5.0 mM using only 3.00 μL of sample solution. Selective distance-based quantification required small sample volumes (6.00 μL) and gave responses sensitive enough to distinguish between 1.0 and 2.5 mM of sample K+. μPADs using distance-based methods were also capable of differentiating between 4.3 and 6.9 mM K+ in human serum samples. Distance-based methods required no digital analysis, electronic hardware, or pumps; any steps required for quantification could be carried out using the naked eye.

  6. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Full Text Available Feature extraction plays a key role in the classification of hyperspectral images. Using unlabeled samples, which are often available in unlimited numbers, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting appropriate unlabeled samples for use in feature extraction methods, and also proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification, and sample selection. As the hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled sample selection in unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.

  7. Selective information sampling

    Directory of Open Access Journals (Sweden)

    Peter A. F. Fraser-Mackenzie

    2009-06-01

    Full Text Available This study investigates the amount and valence of information selected during single-item evaluation. One hundred and thirty-five participants evaluated a cell phone by reading hypothetical customer reports. Some participants were first asked to provide a preliminary rating based on a picture of the phone and some technical specifications. The participants who were given the customer reports only after they made a preliminary rating exhibited valence bias in their selection of customer reports. In contrast, the participants who did not make an initial rating sought subsequent information in a more balanced, albeit still selective, manner. The preliminary raters used the least amount of information in their final decision, resulting in faster decision times. The study appears to support the notion that selective exposure is utilized in order to develop cognitive coherence.

  8. A simple highly sensitive and selective aptamer-based colorimetric sensor for environmental toxins microcystin-LR in water samples.

    Science.gov (United States)

    Li, Xiuyan; Cheng, Ruojie; Shi, Huijie; Tang, Bo; Xiao, Hanshuang; Zhao, Guohua

    2016-03-05

    A simple and highly sensitive aptamer-based colorimetric sensor was developed for the selective detection of microcystin-LR (MC-LR). The aptamer (ABA) was employed as the recognition element, which binds MC-LR with high affinity, while gold nanoparticles (AuNPs) worked as the sensing material, whose plasmon resonance absorption peaks red-shift upon binding of the targets at a high concentration of sodium chloride. With the addition of MC-LR, the random-coil aptamer adsorbed on the AuNPs changed into a regulated structure to form MC-LR-aptamer complexes and broke away from the surface of the AuNPs, leading to the aggregation of the AuNPs, and the color converted from red to blue due to interparticle plasmon coupling. Results showed that our aptamer-based colorimetric sensor exhibited rapid and sensitive detection performance for MC-LR, with a linear range from 0.5 nM to 7.5 μM and a detection limit of 0.37 nM. Meanwhile, the pollutants usually coexisting with MC-LR in polluted water samples did not interfere with the detection of MC-LR. A mechanism was also proposed, suggesting that the high-affinity interaction between the aptamer and MC-LR significantly enhanced the sensitivity and selectivity of MC-LR detection. Besides, the established method was utilized to analyze real water samples, and excellent sensitivity and selectivity were obtained as well. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Selection of the Sample for Data-Driven $Z \to \nu\bar{\nu}$ Background Estimation

    CERN Document Server

    Krauss, Martin

    2009-01-01

    The topic of this study was to improve the selection of the sample for data-driven Z → νν̄ background estimation, which is a major background contribution in supersymmetric searches in a no-lepton search mode. The estimation is based on Z → ℓ+ℓ− samples, using data created with the ATLAS simulation software. This method works if two leptons are reconstructed, but with cuts that are typical for SUSY searches the reconstruction efficiency for electrons and muons is rather low. For this reason it was tried to enhance the data sample. Events were therefore considered where only one electron was reconstructed. In this case the invariant mass of the electron and each jet was computed, to select the jet with the best match to the Z boson mass as the non-reconstructed electron. This way the sample can be extended, but it loses significant purity because background events are also reconstructed. To improve this method other variables have to be considered, which were not available for this study. Applying a similar method to muons using ...
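
    The jet-matching step described here reduces to picking the jet whose invariant mass with the lone electron lies closest to the Z mass. A minimal sketch with hand-made four-vectors (the (E, px, py, pz) convention and the toy values are assumptions):

```python
import numpy as np

M_Z = 91.19  # GeV

def inv_mass(p1, p2):
    """Invariant mass of the sum of two four-vectors (E, px, py, pz)."""
    s = p1 + p2
    return np.sqrt(max(s[0]**2 - s[1]**2 - s[2]**2 - s[3]**2, 0.0))

def best_z_partner(electron, jets):
    """Pick the jet that, paired with the lone electron, gives a mass closest
    to the Z boson mass; that jet stands in for the lost second electron."""
    masses = [inv_mass(electron, j) for j in jets]
    i = int(np.argmin([abs(m - M_Z) for m in masses]))
    return i, masses[i]

ele = np.array([50.0, 30.0, 20.0, 30.0])
jets = [np.array([45.0, -28.0, -18.0, 25.0]), np.array([20.0, 5.0, -10.0, 15.0])]
print(best_z_partner(ele, jets))  # (index of the chosen jet, its pair mass)
```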

  10. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2015-01-01

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
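
    Heckman's two-stage estimator, whose robustness the paper analyzes, is brief to sketch: a probit selection equation yields the inverse Mills ratio, which then enters the outcome regression as an extra regressor. A minimal simulation (the data-generating values, including the 0.5 error correlation and the exclusion restriction z, are illustrative):

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                       # exclusion restriction (selection only)
x = rng.normal(size=n)
u, e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n).T
selected = (0.5 + 1.0 * z + u) > 0           # selection equation
y = 1.0 + 2.0 * x + e                        # outcome, observed only if selected

# Stage 1: probit of selection on z, then the inverse Mills ratio
Z = sm.add_constant(z)
probit = sm.Probit(selected.astype(float), Z).fit(disp=0)
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Stage 2: OLS of the observed outcomes on x plus the inverse Mills ratio
X2 = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
ols = sm.OLS(y[selected], X2).fit()
print(ols.params)  # approx [1.0, 2.0, 0.5]; the last entry estimates rho*sigma
```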

  11. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.

  12. The Impact of Selection, Gene Conversion, and Biased Sampling on the Assessment of Microbial Demography.

    Science.gov (United States)

    Lapierre, Marguerite; Blin, Camille; Lambert, Amaury; Achaz, Guillaume; Rocha, Eduardo P C

    2016-07-01

    Recent studies have linked demographic changes and epidemiological patterns in bacterial populations using coalescent-based approaches. We identified 26 studies using skyline plots and found that 21 inferred overall population expansion. This surprising result led us to analyze the impact of natural selection, recombination (gene conversion), and sampling biases on demographic inference using skyline plots and site frequency spectra (SFS). Forward simulations based on biologically relevant parameters from Escherichia coli populations showed that theoretical arguments on the detrimental impact of recombination and especially natural selection on the reconstructed genealogies cannot be ignored in practice. In fact, both processes systematically lead to spurious interpretations of population expansion in skyline plots (and in SFS for selection). Weak purifying selection, and especially positive selection, had important effects on skyline plots, showing patterns akin to those of population expansions. State-of-the-art techniques to remove recombination further amplified these biases. We simulated three common sampling biases in microbiological research: uniform, clustered, and mixed sampling. Alone, or together with recombination and selection, they further mislead demographic inference, producing almost any possible skyline shape or SFS. Interestingly, sampling sub-populations also affected skyline plots and SFS, because the coalescent rates of populations and their sub-populations have different distributions. This study suggests that extreme caution is needed when inferring demographic changes solely from reconstructed genealogies. We suggest that the development of novel sampling strategies and the joint analysis of diverse population genetic methods are strictly necessary to estimate demographic changes in populations where selection, recombination, and biased sampling are present. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  13. 40 CFR 205.171-3 - Test motorcycle sample selection.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Test motorcycle sample selection. 205... ABATEMENT PROGRAMS TRANSPORTATION EQUIPMENT NOISE EMISSION CONTROLS Motorcycle Exhaust Systems § 205.171-3 Test motorcycle sample selection. A test motorcycle to be used for selective enforcement audit testing...

  14. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs and the simulator is then used as a black box; no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.
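
    The outer nested-sampling loop is compact enough to sketch. The version below replaces the paper's HMC/SEM constrained-sampling machinery with naive rejection sampling from the prior; the unit-Gaussian likelihood and uniform prior on [-5, 5] are toy assumptions, and the remaining live-point mass at termination is neglected for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta):
    """Toy unit-Gaussian likelihood."""
    return -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)

def nested_sampling(n_live=100, n_iter=600):
    live = rng.uniform(-5, 5, n_live)            # live points drawn from the prior
    logL = log_likelihood(live)
    logZ = -np.inf                               # running log-evidence
    for i in range(n_iter):
        worst = np.argmin(logL)
        # prior-mass shell between successive likelihood contours: X_i = e^(-i/N)
        log_w = -i / n_live + np.log(1 - np.exp(-1 / n_live))
        logZ = np.logaddexp(logZ, log_w + logL[worst])
        # constrained step: resample the prior subject to L > L_worst (rejection)
        while True:
            theta = rng.uniform(-5, 5)
            if log_likelihood(theta) > logL[worst]:
                break
        live[worst], logL[worst] = theta, log_likelihood(theta)
    return logZ

# analytic check: Z = (1/10) * integral of N(0,1) over [-5, 5] ~= 0.1
print(nested_sampling(), np.log(0.1))            # both close to -2.30
```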

  15. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Wheeler, Mary F. [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Hoteit, Ibrahim [Department of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs and the simulator is then used as a black box; no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.

  16. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs and the simulator is then used as a black box; no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems. © 2013 Elsevier Inc.

  17. Does self-selection affect samples' representativeness in online surveys? An investigation in online video game research.

    Science.gov (United States)

    Khazaal, Yasser; van Singer, Mathias; Chatton, Anne; Achab, Sophia; Zullino, Daniele; Rothen, Stephane; Khan, Riaz; Billieux, Joel; Thorens, Gabriel

    2014-07-07

    The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Our objective was to explore the representativeness of a self-selected sample of online gamers using online players' virtual characters (avatars). All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars' characteristics were defined using various games' scores, reported on the WoW's official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Our results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of sample of online surveys is warranted.

  18. Risk Attitudes, Sample Selection and Attrition in a Longitudinal Field Experiment

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Lau, Morten Igel

    with respect to risk attitudes. Our design builds in explicit randomization on the incentives for participation. We show that there are significant sample selection effects on inferences about the extent of risk aversion, but that the effects of subsequent sample attrition are minimal. Ignoring sample selection leads to inferences that subjects in the population are more risk averse than they actually are. Correcting for sample selection and attrition affects utility curvature, but does not affect inferences about probability weighting. Properly accounting for sample selection and attrition effects leads to findings of temporal stability in overall risk aversion. However, that stability is around different levels of risk aversion than one might naively infer without the controls for sample selection and attrition we are able to implement. This evidence of “randomization bias” from sample selection

  19. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
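
    Without selection the genealogy reduces to Kingman's coalescent, in which, while k lineages remain, the next merger arrives after an exponential waiting time with rate k(k-1)/2 (in coalescent time units). A minimal simulation of that neutral baseline:

```python
import numpy as np

rng = np.random.default_rng(5)

def kingman_coalescence_times(n):
    """Successive coalescence waiting times for a sample of n lineages:
    with k lineages present, the next merger is Exp(k * (k - 1) / 2)."""
    times = []
    for k in range(n, 1, -1):
        rate = k * (k - 1) / 2
        times.append(rng.exponential(1 / rate))
    return np.array(times)

t = kingman_coalescence_times(10)
print("TMRCA ~", t.sum())   # expectation is 2 * (1 - 1/n) = 1.8 for n = 10
```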

  20. Alpha Matting with KL-Divergence Based Sparse Sampling.

    Science.gov (United States)

    Karacan, Levent; Erdem, Aykut; Erdem, Erkut

    2017-06-22

    In this paper, we present a new sampling-based alpha matting approach for the accurate estimation of the foreground and background layers of an image. Previous sampling-based methods typically rely on certain heuristics in collecting representative samples from known regions, and thus their performance deteriorates if the underlying assumptions are not satisfied. To alleviate this, we take an entirely new approach and formulate sampling as a sparse subset selection problem where we propose to pick a small set of candidate samples that best explains the unknown pixels. Moreover, we describe a new dissimilarity measure for comparing two samples which is based on the KL-divergence between the distributions of features extracted in the vicinity of the samples. The proposed framework is general and could be easily extended to video matting by additionally taking temporal information into account in the sampling process. Evaluation on standard benchmark datasets for image and video matting demonstrates that our approach provides more accurate results compared to the state-of-the-art methods.
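
    The KL-based dissimilarity can be sketched as a symmetrized KL-divergence between feature histograms collected around two samples; the binning, the symmetrization, and the smoothing constant below are assumptions, not the paper's exact measure.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """Discrete KL-divergence D(p || q) between two normalized histograms."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def sample_dissimilarity(feat_a, feat_b, bins=16):
    """Symmetrized KL between feature distributions in two samples' vicinities."""
    lo = min(feat_a.min(), feat_b.min())
    hi = max(feat_a.max(), feat_b.max())
    p, _ = np.histogram(feat_a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(feat_b, bins=bins, range=(lo, hi))
    p, q = p.astype(float), q.astype(float)
    return kl_divergence(p, q) + kl_divergence(q, p)

rng = np.random.default_rng(3)
a = rng.normal(0.3, 0.05, 200)     # e.g. intensities near a foreground sample
b = rng.normal(0.7, 0.05, 200)     # intensities near a background sample
print(sample_dissimilarity(a, b))  # large value = dissimilar samples
```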

  1. 40 CFR 205.160-2 - Test sample selection and preparation.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Test sample selection and preparation... sample selection and preparation. (a) Vehicles comprising the sample which are required to be tested... maintained in any manner unless such preparation, tests, modifications, adjustments or maintenance are part...

  2. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.

    2013-12-01

    Calibration of subsurface flow models is an essential step for managing groundwater aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to different models of different levels of complexity. In this work, we report the first successful application of nested sampling for the calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
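
    Once each parameterization's evidence is estimated, the prior-model weights follow directly under equal prior odds, w_i = Z_i / Σ_j Z_j, computed stably in log space. A tiny illustration with invented log-evidences:

```python
import numpy as np

# log-evidences of three competing model parameterizations (illustrative values)
logZ = np.array([-120.4, -118.9, -125.0])

# posterior model weights under equal prior odds, evaluated in log space
w = np.exp(logZ - logZ.max())
w /= w.sum()
print(w.round(3))  # the better-supported parameterization dominates
```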

  3. Effective traffic features selection algorithm for cyber-attacks samples

    Science.gov (United States)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

To support defense against network attacks, this paper proposes an effective traffic-feature selection algorithm based on k-means++ clustering, addressing the high dimensionality of the features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack-traffic feature set and a background-traffic feature set by clustering. It then calculates the change in clustering performance after removing each feature, and evaluates the distinctiveness of the feature vector from the result; a feature vector is considered effective if its distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set, reducing the dimensionality of the features and hence the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has advantages over other selection algorithms.
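A sketch of the leave-one-feature-out idea described above, with the silhouette score standing in for the paper's (unspecified here) clustering-performance measure, might look like this:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def effective_features(X, threshold=0.01, n_clusters=2, seed=0):
    """Keep features whose removal degrades the attack/background
    clustering by more than `threshold` (silhouette is our stand-in
    for the paper's clustering-performance measure)."""
    labels = KMeans(n_clusters=n_clusters, init="k-means++",
                    n_init=10, random_state=seed).fit_predict(X)
    base = silhouette_score(X, labels)
    effective = []
    for j in range(X.shape[1]):
        X_minus = np.delete(X, j, axis=1)
        labels_j = KMeans(n_clusters=n_clusters, init="k-means++",
                          n_init=10, random_state=seed).fit_predict(X_minus)
        if base - silhouette_score(X_minus, labels_j) > threshold:
            effective.append(j)       # removing feature j hurts separation
    return effective
```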

  4. Measurement of radioactivity in the environment - Soil - Part 2: Guidance for the selection of the sampling strategy, sampling and pre-treatment of samples

    International Nuclear Information System (INIS)

    2007-01-01

This part of ISO 18589 specifies the general requirements, based on ISO 11074 and ISO/IEC 17025, for all steps in the planning (desk study and area reconnaissance) of the sampling and the preparation of samples for testing. It includes the selection of the sampling strategy, the outline of the sampling plan, the presentation of general sampling methods and equipment, as well as the methodology for the pre-treatment of samples, adapted to the measurement of the activity of radionuclides in soil. This part of ISO 18589 is addressed to the people responsible for determining the radioactivity present in soil for the purpose of radiation protection. It is applicable to soil from gardens, farmland, urban or industrial sites, as well as soil not affected by human activities. This part of ISO 18589 is applicable to all laboratories regardless of the number of personnel or the range of testing performed. When a laboratory does not undertake one or more of the activities covered by this part of ISO 18589, such as planning, sampling or testing, the corresponding requirements do not apply. Information is provided on scope, normative references, terms and definitions and symbols, principle, sampling strategy, sampling plan, sampling process, pre-treatment of samples and recorded information. Five annexes give information on the selection of the sampling strategy according to the objectives and the radiological characterization of the site and sampling areas, a diagram of the evolution of the sample characteristics from the sampling site to the laboratory, an example of a sampling plan for a site divided into three sampling areas, an example of a sampling record for a single/composite sample, and an example of a sample record for a soil profile with soil description. A bibliography is provided.

  5. The quasar luminosity function from a variability-selected sample

    Science.gov (United States)

    Hawkins, M. R. S.; Veron, P.

    1993-01-01

    A sample of quasars is selected from a 10-yr sequence of 30 UK Schmidt plates. Luminosity functions are derived in several redshift intervals, which in each case show a featureless power-law rise towards low luminosities. There is no sign of the 'break' found in the recent UVX sample of Boyle et al. It is suggested that reasons for the disagreement are connected with biases in the selection of the UVX sample. The question of the nature of quasar evolution appears to be still unresolved.

  6. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

In a regression analysis, a sample-selection bias arises when a dependent variable is partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function, and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR) model as a special case. This special MaxEnt process regression model, i.e., the GPR model, is then generalized to obtain a robust sample-selection Gaussian process regression (RSGPR) model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used in the paper to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model, in terms of sample-selection bias correction, robustness to non-normality, and prediction, is demonstrated through simulation results that attest to its good finite-sample performance.
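To see why uncorrected sample selection biases a regression, here is a small self-contained simulation (our own illustration, not from the paper): the outcome is observed only when a correlated selection equation is positive, and a naive fit on the observed subsample misses the true slope.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=n)
# Correlated errors couple the outcome and selection equations.
e1, e2 = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=n).T
y = 1.0 + 2.0 * x + e1                 # outcome equation (true slope 2.0)
observed = (0.5 * x + e2) > 0          # y is recorded only when this holds

slope = np.polyfit(x[observed], y[observed], 1)[0]
print(f"naive slope on the selected sample: {slope:.2f} (true value 2.0)")
```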

  7. Approaches to sampling and case selection in qualitative research: examples in the geography of health.

    Science.gov (United States)

    Curtis, S; Gesler, W; Smith, G; Washburn, S

    2000-04-01

This paper focuses on the question of sampling (or selection of cases) in qualitative research. Although the literature includes some very useful discussions of qualitative sampling strategies, the question of sampling often seems to receive less attention in methodological discussion than questions of how data are collected or analysed. Decisions about sampling are likely to be important in many qualitative studies (although it may not be an issue in some research). There are varying accounts of the principles applicable to sampling or case selection. Those who espouse 'theoretical sampling', based on a 'grounded theory' approach, are in some ways opposed to those who promote forms of 'purposive sampling' suitable for research informed by an existing body of social theory. Diversity also results from the many different methods for drawing purposive samples which are applicable to qualitative research. We explore the value of a framework suggested by Miles and Huberman [Miles, M., Huberman, A., 1994. Qualitative Data Analysis, Sage, London.] to evaluate the sampling strategies employed in three examples of research by the authors. Our examples comprise three studies which respectively involve selection of: 'healing places'; rural places which incorporated national anti-malarial policies; young male interviewees, identified as either chronically ill or disabled. The examples are used to show how in these three studies the (sometimes conflicting) requirements of the different criteria were resolved, as well as the potential of, and constraints placed on, the research by the selection decisions which were made. We also consider how far the criteria Miles and Huberman suggest seem helpful for planning 'sample' selection in qualitative research.

  8. Electromembrane extraction as a rapid and selective miniaturized sample preparation technique for biological fluids

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Pedersen-Bjergaard, Stig; Seip, Knut Fredrik

    2015-01-01

This special report discusses the sample preparation method electromembrane extraction, which was introduced in 2006 as a rapid and selective miniaturized extraction method. The extraction principle is based on the isolation of charged analytes extracted from an aqueous sample, across a thin film [...]. Technical aspects of electromembrane extraction, important extraction parameters, as well as a handful of examples of applications from different biological samples and bioanalytical areas are discussed in the paper.

  9. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    International Nuclear Information System (INIS)

    Cong Khanh Huynh; Trinh Vu Duc

    2009-01-01

As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a program of scientific collaboration to develop one or more prototypes of European personal samplers for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use polyurethane plastic foam (PUF), selected according to its porosity, as both the sampling substrate and the particle-size separator. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.

  10. Design-based estimators for snowball sampling

    OpenAIRE

    Shafie, Termeh

    2010-01-01

Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes d...

  11. Passive sampling of selected endocrine disrupting compounds using polar organic chemical integrative samplers

    International Nuclear Information System (INIS)

    Arditsoglou, Anastasia; Voutsa, Dimitra

    2008-01-01

Two types of polar organic chemical integrative samplers (pharmaceutical POCIS and pesticide POCIS) were examined for their sampling efficiency of selected endocrine disrupting compounds (EDCs). Laboratory-based calibration of the POCISs was conducted by exposing them to high and low concentrations of 14 EDCs (4-alkyl-phenols, their ethoxylate oligomers, bisphenol A, selected estrogens and synthetic steroids) for different time periods. The kinetic studies showed an integrative uptake for up to 28 days. The sampling rates for the individual compounds were obtained. The use of POCISs could provide an integrative approach to the quality status of aquatic systems, especially in the case of high variation in the water concentrations of EDCs. The sampling efficiency of POCISs under various field conditions was assessed after their deployment in different aquatic environments. - Calibration and field performance of polar organic integrative samplers for monitoring EDCs in aquatic environments

  12. HOT-DUST-POOR QUASARS IN MID-INFRARED AND OPTICALLY SELECTED SAMPLES

    International Nuclear Information System (INIS)

    Hao Heng; Elvis, Martin; Civano, Francesca; Lawrence, Andy

    2011-01-01

We show that the hot-dust-poor (HDP) quasars, originally found in the X-ray-selected XMM-COSMOS type 1 active galactic nucleus (AGN) sample, are just as common in two samples selected at optical/infrared wavelengths: the Richards et al. Spitzer/SDSS sample (8.7% ± 2.2%) and the Palomar-Green-quasar-dominated sample of Elvis et al. (9.5% ± 5.0%). The properties of the HDP quasars in these two samples are consistent with the XMM-COSMOS sample, except that, at the 99% (∼2.5σ) significance level, a larger proportion of the HDP quasars in the Spitzer/SDSS sample have weak host galaxy contributions, probably due to the selection criteria used. Either the host dust is destroyed (dynamically or by radiation) or it is offset from the central black hole due to recoiling. Alternatively, the universality of HDP quasars in samples with different selection methods, and the continuous distribution of dust covering factors in type 1 AGNs, suggest that the range of spectral energy distributions could be related to the range of tilts in warped fueling disks, as in the model of Lawrence and Elvis, with HDP quasars having relatively small warps.

  13. THE zCOSMOS-SINFONI PROJECT. I. SAMPLE SELECTION AND NATURAL-SEEING OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mancini, C.; Renzini, A. [INAF-OAPD, Osservatorio Astronomico di Padova, Vicolo Osservatorio 5, I-35122 Padova (Italy); Foerster Schreiber, N. M.; Hicks, E. K. S.; Genzel, R.; Tacconi, L.; Davies, R. [Max-Planck-Institut fuer Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Cresci, G. [Osservatorio Astrofisico di Arcetri (OAF), INAF-Firenze, Largo E. Fermi 5, I-50125 Firenze (Italy); Peng, Y.; Lilly, S.; Carollo, M.; Oesch, P. [Institute of Astronomy, Department of Physics, Eidgenossische Technische Hochschule, ETH Zurich CH-8093 (Switzerland); Vergani, D.; Pozzetti, L.; Zamorani, G. [INAF-Bologna, Via Ranzani, I-40127 Bologna (Italy); Daddi, E. [CEA-Saclay, DSM/DAPNIA/Service d' Astrophysique, F-91191 Gif-Sur Yvette Cedex (France); Maraston, C. [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, PO1 3HE Portsmouth (United Kingdom); McCracken, H. J. [IAP, 98bis bd Arago, F-75014 Paris (France); Bouche, N. [Department of Physics, University of California, Santa Barbara, CA 93106 (United States); Shapiro, K. [Aerospace Research Laboratories, Northrop Grumman Aerospace Systems, Redondo Beach, CA 90278 (United States); and others

    2011-12-10

The zCOSMOS-SINFONI project is aimed at studying the physical and kinematical properties of a sample of massive z ≈ 1.4-2.5 star-forming galaxies, through SINFONI near-infrared integral field spectroscopy (IFS), combined with the multiwavelength information from the zCOSMOS (COSMOS) survey. The project is based on one hour of natural-seeing observations per target, and adaptive optics (AO) follow-up for a major part of the sample, which includes 30 galaxies selected from the zCOSMOS/VIMOS spectroscopic survey. This first paper presents the sample selection, and the global physical characterization of the target galaxies from multicolor photometry, i.e., star formation rate (SFR), stellar mass, age, etc. The Hα integrated properties, such as flux, velocity dispersion, and size, are derived from the natural-seeing observations, while the follow-up AO observations will be presented in the next paper of this series. Our sample appears to be well representative of star-forming galaxies at z ≈ 2, covering a wide range in mass and SFR. The Hα integrated properties of the 25 Hα-detected galaxies are similar to those of other IFS samples at the same redshifts. Good agreement is found among the SFRs derived from the Hα luminosity and other diagnostic methods, provided the extinction affecting the Hα luminosity is about twice that affecting the continuum. A preliminary kinematic analysis, based on the maximum observed velocity difference across the source and on the integrated velocity dispersion, indicates that the sample splits nearly 50-50 into rotation-dominated and velocity-dispersion-dominated galaxies, in good agreement with previous surveys.

  14. 6. Label-free selective plane illumination microscopy of tissue samples

    Directory of Open Access Journals (Sweden)

    Muteb Alharbi

    2017-10-01

Conclusion: Overall, this method meets the current needs for 3D imaging of tissue samples in a label-free manner. Label-free selective plane illumination microscopy directly provides excellent information about the structure of tissue samples, and this work has highlighted its superiority over current approaches to label-free 3D imaging of tissue.

  15. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
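The iterative "pick the most dissimilar site" loop can be sketched as a greedy procedure over standardised environmental factors. This is a stand-in for the paper's MaxEnt-driven selection, not the Maxent software itself:

```python
import numpy as np

def select_sites(env, n_sites=8):
    """Greedily pick mutually dissimilar sample sites.
    env: (n_candidates, n_factors) standardised environmental factors
    (e.g., temperature, precipitation, elevation, vegetation class)."""
    chosen = [int(np.argmin(np.abs(env - env.mean(0)).sum(1)))]  # start near the centroid
    while len(chosen) < n_sites:
        # Distance from every candidate to its nearest already-chosen site.
        d = np.min(np.linalg.norm(env[:, None, :] - env[chosen][None, :, :],
                                  axis=2), axis=1)
        d[chosen] = -np.inf          # never re-pick a chosen site
        chosen.append(int(np.argmax(d)))
    return chosen
```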

  16. Risk-based audit selection of dairy farms.

    Science.gov (United States)

    van Asseldonk, M A P M; Velthuis, A G J

    2014-02-01

    Dairy farms are audited in the Netherlands on numerous process standards. Each farm is audited once every 2 years. Increasing demands for cost-effectiveness in farm audits can be met by introducing risk-based principles. This implies targeting subpopulations with a higher risk of poor process standards. To select farms for an audit that present higher risks, a statistical analysis was conducted to test the relationship between the outcome of farm audits and bulk milk laboratory results before the audit. The analysis comprised 28,358 farm audits and all conducted laboratory tests of bulk milk samples 12 mo before the audit. The overall outcome of each farm audit was classified as approved or rejected. Laboratory results included somatic cell count (SCC), total bacterial count (TBC), antimicrobial drug residues (ADR), level of butyric acid spores (BAB), freezing point depression (FPD), level of free fatty acids (FFA), and cleanliness of the milk (CLN). The bulk milk laboratory results were significantly related to audit outcomes. Rejected audits are likely to occur on dairy farms with higher mean levels of SCC, TBC, ADR, and BAB. Moreover, in a multivariable model, maxima for TBC, SCC, and FPD as well as standard deviations for TBC and FPD are risk factors for negative audit outcomes. The efficiency curve of a risk-based selection approach, on the basis of the derived regression results, dominated the current random selection approach. To capture 25, 50, or 75% of the population with poor process standards (i.e., audit outcome of rejected), respectively, only 8, 20, or 47% of the population had to be sampled based on a risk-based selection approach. Milk quality information can thus be used to preselect high-risk farms to be audited more frequently. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
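The efficiency-curve comparison reported here is easy to reproduce for any risk score. A small sketch follows; the logistic-regression score itself is assumed, not reproduced from the paper:

```python
import numpy as np

def audit_efficiency_curve(risk_score, rejected):
    """Fraction of rejected audits captured versus share of farms audited,
    visiting farms in order of predicted risk (highest first).
    risk_score: per-farm scores from, e.g., a regression on bulk-milk
    statistics; rejected: 0/1 audit outcomes per farm."""
    order = np.argsort(-risk_score)
    captured = np.cumsum(rejected[order]) / rejected.sum()
    sampled = np.arange(1, len(order) + 1) / len(order)
    return sampled, captured

# Share of farms to audit to capture half of all rejections:
# sampled, captured = audit_efficiency_curve(score, outcome)
# sampled[np.searchsorted(captured, 0.5)]
```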

  17. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    Science.gov (United States)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valued information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we build a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on the selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert's cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model for each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models; the outside-layer SEN is then constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, realizing selective information fusion based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
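Of the components listed above, the adaptive weighted fusion step is the simplest to illustrate. One common form weights each sub-model inversely to its historical error; this is our simplification, not the paper's exact scheme:

```python
import numpy as np

def adaptive_weighted_fusion(preds, errors):
    """Fuse sub-model predictions with weights inversely proportional to
    each sub-model's historical error; the paper pairs a step like this
    with branch-and-bound selection of which sub-models to include."""
    w = 1.0 / (np.asarray(errors, dtype=float) + 1e-12)
    w /= w.sum()                       # normalise to a convex combination
    return w @ np.asarray(preds, dtype=float)
```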

  18. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    International Nuclear Information System (INIS)

    Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2015-01-01

Sample size and computational uncertainty were varied in order to investigate the sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 n-sample replicates was adopted as the convergence criterion for the method. An uncertainty of 75 pcm on the reactor k-eff was estimated by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)
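The replicate-based convergence check described above translates directly into code. A generic sketch, with `model` and `sample_input` as hypothetical stand-ins for the transport-code run and the input-uncertainty draw:

```python
import numpy as np

def propagate(model, sample_input, n_samples=93, n_replicates=10, seed=0):
    """Sampling-based uncertainty propagation: the spread of model outputs
    over random input draws estimates the propagated 1-sigma uncertainty,
    and the scatter over replicates indicates convergence (the paper used
    a 5 pcm standard deviation over 10 replicates as its criterion)."""
    rng = np.random.default_rng(seed)
    sigmas = []
    for _ in range(n_replicates):
        outputs = np.array([model(sample_input(rng)) for _ in range(n_samples)])
        sigmas.append(outputs.std(ddof=1))
    sigmas = np.array(sigmas)
    return sigmas.mean(), sigmas.std(ddof=1)  # propagated sigma and its scatter
```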

  19. Selective removal of phosphate for analysis of organic acids in complex samples.

    Science.gov (United States)

    Deshmukh, Sandeep; Frolov, Andrej; Marcillo, Andrea; Birkemeyer, Claudia

    2015-04-03

Accurate quantitation of compounds in samples of biological origin is often hampered by matrix interferences, one of which occurs in GC-MS analysis from the presence of highly abundant phosphate. Consequently, high concentrations of phosphate need to be removed before sample analysis. Within this context, we screened 17 anion exchange solid-phase extraction (SPE) materials for selective phosphate removal using different protocols, to meet the challenge of simultaneously recovering six common organic acids in aqueous samples prior to derivatization for GC-MS analysis. Up to 75% recovery was achieved for most of the organic acids; only the low-pKa tartaric and citric acids were poorly recovered. Compared to the traditional approach of phosphate removal by precipitation, SPE had a broader compatibility with common detection methods and performed more selectively among the organic acids under investigation. Based on the results of this study, it is recommended that phosphate removal strategies for the analysis of biologically relevant small-molecular-weight organic acids consider the respective pKa of the anticipated analytes and the detection method of choice. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Data Quality Objectives For Selecting Waste Samples For Bench-Scale Reformer Treatability Studies

    International Nuclear Information System (INIS)

    Banning, D.L.

    2011-01-01

This document describes the data quality objectives for selecting archived samples located at the 222-S Laboratory for bench-scale reformer testing. The type, quantity, and quality of the data required to select the samples for fluidized bed steam reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluidized bed steam reformer. A determination of the adequacy of the fluidized bed steam reformer process to treat Hanford tank waste is required. The initial step in determining this adequacy is to select archived waste samples from the 222-S Laboratory that will be used in bench-scale tests. Analyses of the selected samples will be required to confirm that the samples meet the shipping requirements and for comparison to the bench-scale reformer (BSR) test sample selection requirements.

  1. 40 CFR 205.57-2 - Test vehicle sample selection.

    Science.gov (United States)

    2010-07-01

    ... pursuant to a test request in accordance with this subpart will be selected in the manner specified in the... then using a table of random numbers to select the number of vehicles as specified in paragraph (c) of... with the desig-nated AQL are contained in Appendix I, -Table II. (c) The appropriate batch sample size...

  2. The Multi-Template Molecularly Imprinted Polymer Based on SBA-15 for Selective Separation and Determination of Panax notoginseng Saponins Simultaneously in Biological Samples

    Directory of Open Access Journals (Sweden)

    Chenghong Sun

    2017-11-01

Feasible, reliable and selective multi-template molecularly imprinted polymers (MT-MIPs) based on SBA-15 (SBA-15@MT-MIPs) were developed for the selective separation and determination of trace levels of ginsenoside Rb1 (Rb1), ginsenoside Rg1 (Rg1) and notoginsenoside R1 (R1) simultaneously from biological samples. The polymers were constructed with SBA-15 as support, Rb1, Rg1 and R1 as multi-template, acrylamide (AM) as functional monomer and ethylene glycol dimethacrylate (EGDMA) as cross-linker. The new synthetic SBA-15@MT-MIPs were satisfactorily applied to solid-phase extraction (SPE) coupled with high performance liquid chromatography (HPLC) for the separation and determination of trace Rb1, Rg1 and R1 in plasma samples. Under the optimized conditions, the limits of detection (LODs) and quantitation (LOQs) of the proposed method for Rb1, Rg1 and R1 were in the range of 0.63-0.75 ng·mL−1 and 2.1-2.5 ng·mL−1, respectively. The recoveries of R1, Rb1 and Rg1 were between 93.4% and 104.3%, with relative standard deviations (RSDs) in the range of 3.3-4.2%. All results show that the obtained SBA-15@MT-MIPs are a promising prospect for practical application in the selective separation and enrichment of trace Panax notoginseng saponins (PNS) in biological samples.

  3. Failure Probability Estimation Using Asymptotic Sampling and Its Dependence upon the Selected Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Martinásková Magdalena

    2017-12-01

The article examines the use of Asymptotic Sampling (AS) for the estimation of failure probability. The AS algorithm requires samples of multidimensional Gaussian random vectors, which may be obtained by many alternative means that influence the performance of the AS method. Several reliability problems (test functions) have been selected in order to test AS with various sampling schemes: (i) Monte Carlo designs; (ii) LHS designs optimized using the Periodic Audze-Eglājs (PAE) criterion; (iii) designs prepared using Sobol' sequences. All results are compared with the exact failure probability value.
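A compact sketch of the AS idea: inflate the input standard deviations by f ≥ 1 so that failures become observable, estimate the safety index at several f, then extrapolate back to f = 1 with Bucher's support-point model β(f)/f = A + B/f² (the regression form is our assumption about the standard AS formulation; the limit-state function `g` is assumed to be vectorised over sample rows):

```python
import numpy as np
from scipy.stats import norm

def asymptotic_sampling(g, dim, scales=(2.0, 2.5, 3.0), n=10_000, seed=0):
    """Estimate a small failure probability P[g(X) <= 0] for standard-normal
    inputs X by Monte Carlo at inflated standard deviations f >= 1,
    then extrapolating beta(f)/f = A + B/f**2 back to f = 1."""
    rng = np.random.default_rng(seed)
    f_used, beta_over_f = [], []
    for f in scales:
        x = rng.standard_normal((n, dim)) * f     # scaled Gaussian samples
        pf = np.mean(g(x) <= 0.0)
        if 0.0 < pf < 1.0:                        # keep usable support points
            f_used.append(f)
            beta_over_f.append(-norm.ppf(pf) / f)
    A_mat = np.column_stack([np.ones(len(f_used)),
                             1.0 / np.asarray(f_used) ** 2])
    A, B = np.linalg.lstsq(A_mat, np.asarray(beta_over_f), rcond=None)[0]
    return norm.cdf(-(A + B))                     # beta(1) = A + B
```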

  4. Learning from Past Classification Errors: Exploring Methods for Improving the Performance of a Deep Learning-based Building Extraction Model through Quantitative Analysis of Commission Errors for Optimal Sample Selection

    Science.gov (United States)

    Swan, B.; Laverdiere, M.; Yang, L.

    2017-12-01

In the past five years, deep Convolutional Neural Networks (CNN) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function, and in turn how they may be optimized, are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as its mathematical implications, presents open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with high rates of commission errors at the image-tile level and grouped these tiles using affinity propagation. Highly representative members of each commission-error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow for an assessment of how the addition of different types of samples affects model performance, such as precision and recall rates. By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of training process
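The clustering step lends itself to a short sketch with scikit-learn's AffinityPropagation; the tile feature layout assumed here (band means and variances over the mis-classified pixels in each tile) is a hypothetical stand-in for the paper's spectral summaries:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def pick_training_sites(tile_stats, tile_ids):
    """Group tiles by the spectral summaries of their commission-error
    areas and return one representative (exemplar) tile per cluster.
    tile_stats: (n_tiles, n_features) summary array; tile_ids: tile names."""
    ap = AffinityPropagation(random_state=0).fit(tile_stats)
    exemplars = ap.cluster_centers_indices_   # indices of exemplar tiles
    return [tile_ids[i] for i in exemplars]
```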

  5. New competitive dendrimer-based and highly selective immunosensor for determination of atrazine in environmental, feed and food samples: the importance of antibody selectivity for discrimination among related triazinic metabolites.

    Science.gov (United States)

    Giannetto, Marco; Umiltà, Eleonora; Careri, Maria

    2014-01-02

A new voltammetric competitive immunosensor selective for atrazine, based on the immobilization of an atrazine-bovine serum albumin conjugate on a nanostructured gold substrate previously functionalized with polyamidoamine dendrimers, was realized, characterized, and validated on different real samples of environmental and food concern. The response of the sensor was reliable, highly selective and suitable for the detection and quantification of atrazine at trace levels in complex matrices such as territorial waters, corn-cultivated soils, corn-containing poultry and bovine feeds, and corn flakes for human use. Selectivity studies focused on desethylatrazine, the principal metabolite generated by long-term microbiological degradation of atrazine, terbutylazine-2-hydroxy, and simazine as potential interferents. The response of the developed immunosensor for atrazine was explored over the 10^-2 to 10^3 ng mL^-1 range. Good sensitivity was proved, as a limit of detection and a limit of quantitation of 1.2 and 5 ng mL^-1, respectively, were estimated for atrazine. RSD values <5% over the entire explored range attested to the good precision of the device. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. [Electroencephalogram Feature Selection Based on Correlation Coefficient Analysis].

    Science.gov (United States)

    Zhou, Jinzhi; Tang, Xiaofang

    2015-08-01

In order to improve the accuracy of classification with a small amount of motor imagery training data in the development of brain-computer interface (BCI) systems, we proposed an analysis method that automatically selects the characteristic parameters based on correlation coefficient analysis. Using the five sample datasets of dataset IVa from the 2005 BCI Competition, we applied the short-time Fourier transform (STFT) and correlation coefficient calculation to reduce the dimensionality of the raw electroencephalogram data, then performed feature extraction based on common spatial patterns (CSP) and classified with linear discriminant analysis (LDA). Simulation results showed that the average classification accuracy could be improved by using the correlation coefficient feature selection method, compared with not using it. Compared with a support vector machine (SVM) feature optimization algorithm, correlation coefficient analysis can select better parameters and improve the accuracy of classification.
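A sketch of the selection step follows, with mean band power as a simple STFT-derived feature (the paper's exact STFT parameters are not reproduced here):

```python
import numpy as np
from scipy.signal import stft

def select_channels(trials, labels, fs=100.0, top_k=20):
    """Rank EEG channels by |correlation| between an STFT band-power
    feature and the class label; keep the top_k channels before CSP/LDA.
    trials: (n_trials, n_channels, n_samples) raw EEG; labels: (n_trials,)."""
    n_trials, n_channels, _ = trials.shape
    feats = np.empty((n_trials, n_channels))
    for i in range(n_trials):
        for c in range(n_channels):
            _, _, Z = stft(trials[i, c], fs=fs, nperseg=64)
            feats[i, c] = np.mean(np.abs(Z) ** 2)   # mean spectral power
    r = np.array([np.corrcoef(feats[:, c], labels)[0, 1]
                  for c in range(n_channels)])
    return np.argsort(-np.abs(r))[:top_k]
```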

  7. 40 CFR 205.171-2 - Test exhaust system sample selection and preparation.

    Science.gov (United States)

    2010-07-01

    ... Systems § 205.171-2 Test exhaust system sample selection and preparation. (a)(1) Exhaust systems... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Test exhaust system sample selection and preparation. 205.171-2 Section 205.171-2 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...

  8. Forecasting Urban Air Quality via a Back-Propagation Neural Network and a Selection Sample Rule

    Directory of Open Access Journals (Sweden)

    Yonghong Liu

    2015-07-01

In this paper, based on a sample selection rule and a Back Propagation (BP) neural network, a new model for forecasting daily SO2, NO2, and PM10 concentrations at seven sites in Guangzhou was developed using data from January 2006 to April 2012. A meteorological similarity principle was applied in the development of the sample selection rule. The key meteorological factors influencing daily SO2, NO2, and PM10 concentrations, as well as the weight and threshold matrices, were determined. A basic model was then developed based on the improved BP neural network. To improve the basic model, identification of factor-variation consistency was added to the rule, and seven sets of sensitivity experiments at one of the seven sites were conducted to obtain the selected model. A comparison with the basic model from May 2011 to April 2012 at one site showed that the selected model for PM10 displayed better forecasting performance, with Mean Absolute Percentage Error (MAPE) values decreasing by 4% and R2 values increasing from 0.53 to 0.68. Evaluations conducted at the six other sites revealed a similar performance. On the whole, the analysis showed that the models presented here could provide local authorities with reliable and precise predictions and alarms about air quality if used at an operational scale.
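The meteorological-similarity sample selection rule can be approximated as a nearest-neighbour filter in front of the network. In this sketch, the scaling and Euclidean similarity are our assumptions, and the paper's factor-variation-consistency check is omitted:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def forecast_pm10(met_today, met_hist, pm10_hist, n_neighbors=200):
    """Train a small BP-style network only on the historical days most
    meteorologically similar to today, then predict today's PM10.
    met_today: (n_factors,); met_hist: (n_days, n_factors); pm10_hist: (n_days,)."""
    mu, sd = met_hist.mean(0), met_hist.std(0) + 1e-12
    z_hist = (met_hist - mu) / sd          # standardise the factors
    z_today = (met_today - mu) / sd
    idx = np.argsort(np.linalg.norm(z_hist - z_today, axis=1))[:n_neighbors]
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(z_hist[idx], pm10_hist[idx])   # fit only on similar days
    return net.predict(z_today[None, :])[0]
```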

  9. ARTIFICIAL NEURAL NETWORKS BASED GEARS MATERIAL SELECTION HYBRID INTELLIGENT SYSTEM

    Institute of Scientific and Technical Information of China (English)

    X.C. Li; W.X. Zhu; G. Chen; D.S. Mei; J. Zhang; K.M. Chen

    2003-01-01

An artificial neural networks (ANNs) based gear material selection hybrid intelligent system is established by analyzing the individual advantages and weaknesses of expert systems (ES) and ANNs, and their applications in material selection. The system mainly consists of two parts: ES and ANNs. Trained with many data samples, the back propagation (BP) ANN acquires the knowledge of gear material selection and is able to perform inference according to user input. The system thus realizes the complementarity of ANNs and ES. Using this system, engineers without materials selection experience can conveniently deal with gear material selection.

  10. Spectroelectrochemical Sensing Based on Multimode Selectivity simultaneously Achievable in a Single Device. 11. Design and Evaluation of a Small Portable Sensor for the Determination of Ferrocyanide in Hanford Waste Samples

    International Nuclear Information System (INIS)

    Stegemiller, Michael L.; Heineman, William R.; Seliskar, Carl J.; Ridgway, Thomas H.; Bryan, Samuel A.; Hubler, Timothy L.; Sell, Richard L.

    2003-01-01


  11. Highly selective ionic liquid-based microextraction method for sensitive trace cobalt determination in environmental and biological samples

    International Nuclear Information System (INIS)

    Berton, Paula; Wuilloud, Rodolfo G.

    2010-01-01

A simple and rapid dispersive liquid-liquid microextraction procedure based on an ionic liquid (IL-DLLME) was developed for the selective determination of cobalt (Co) with electrothermal atomic absorption spectrometry (ETAAS) detection. Cobalt was initially complexed with the 1-nitroso-2-naphthol (1N2N) reagent at pH 4.0. The IL-DLLME procedure was then performed by using a few microliters of the room temperature ionic liquid (RTIL) 1-hexyl-3-methylimidazolium hexafluorophosphate [C6mim][PF6] as extractant, while methanol was the dispersant solvent. After the microextraction procedure, the Co-enriched RTIL phase was solubilized in methanol and directly injected into the graphite furnace. The effect of several variables on Co-1N2N complex formation, extraction with the dispersed RTIL phase, and analyte detection with ETAAS was carefully studied in this work. An enrichment factor of 120 was obtained with only 6 mL of sample solution under optimal experimental conditions. The resultant limit of detection (LOD) was 3.8 ng L^-1, while the relative standard deviation (RSD) was 3.4% (at the 1 μg L^-1 Co level and n = 10), calculated from the peak height of the absorbance signals. The accuracy of the proposed methodology was tested by analysis of a certified reference material. The method was successfully applied to the determination of Co in environmental and biological samples.

  12. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  13. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klauw, B.; Koning, R.H.

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  14. An improved selective sampling method

    International Nuclear Information System (INIS)

    Miyahara, Hiroshi; Iida, Nobuyuki; Watanabe, Tamaki

    1986-01-01

The coincidence methods which are currently used for the accurate activity standardisation of radionuclides require dead time and resolving time corrections, which tend to become increasingly uncertain as count rates exceed about 10 K. To reduce the dependence on such corrections, Muller, in 1981, proposed the selective sampling method, using a fast multichannel analyser (50 ns ch^-1) for measuring the count rates. It is, in many ways, more convenient and possibly potentially more reliable to replace the MCA with scalers, and a circuit is described employing five scalers, two of them serving to measure the background correction. Results of comparisons using our new method and the coincidence method for measuring the activity of 60Co sources yielded agreement within statistical uncertainties. (author)

  15. Obscured AGN at z ~ 1 from the zCOSMOS-Bright Survey. I. Selection and optical properties of a [Ne v]-selected sample

    Science.gov (United States)

    Mignoli, M.; Vignali, C.; Gilli, R.; Comastri, A.; Zamorani, G.; Bolzonella, M.; Bongiorno, A.; Lamareille, F.; Nair, P.; Pozzetti, L.; Lilly, S. J.; Carollo, C. M.; Contini, T.; Kneib, J.-P.; Le Fèvre, O.; Mainieri, V.; Renzini, A.; Scodeggio, M.; Bardelli, S.; Caputi, K.; Cucciati, O.; de la Torre, S.; de Ravel, L.; Franzetti, P.; Garilli, B.; Iovino, A.; Kampczyk, P.; Knobel, C.; Kovač, K.; Le Borgne, J.-F.; Le Brun, V.; Maier, C.; Pellò, R.; Peng, Y.; Perez Montero, E.; Presotto, V.; Silverman, J. D.; Tanaka, M.; Tasca, L.; Tresse, L.; Vergani, D.; Zucca, E.; Bordoloi, R.; Cappi, A.; Cimatti, A.; Koekemoer, A. M.; McCracken, H. J.; Moresco, M.; Welikala, N.

    2013-08-01

Aims: The application of multi-wavelength selection techniques is essential for obtaining a complete and unbiased census of active galactic nuclei (AGN). We present here a method for selecting z ~ 1 obscured AGN from optical spectroscopic surveys. Methods: A sample of 94 narrow-line AGN with 0.65 < z < 1.20 was selected on the basis of the detection of the [Ne v] λ3426 line. Taking advantage of the large amount of data available in the COSMOS field, the properties of the [Ne v]-selected type 2 AGN were investigated, focusing on their host galaxies, X-ray emission, and optical line-flux ratios. Finally, a previously developed diagnostic, based on the X-ray-to-[Ne v] luminosity ratio, was exploited to search for the more heavily obscured AGN. Results: We found that [Ne v]-selected narrow-line AGN have Seyfert 2-like optical spectra, although their emission line ratios are diluted by a star-forming component. The ACS morphologies and the stellar component in the optical spectra indicate a preference for our type 2 AGN to be hosted in early-type spirals with stellar masses greater than 10^9.5-10^10 M⊙, on average higher than those of the galaxy parent sample. The fraction of galaxies hosting [Ne v]-selected obscured AGN increases with stellar mass, reaching a maximum of about 3% at ≈2 × 10^11 M⊙. A comparison with other selection techniques at z ~ 1, namely line-ratio diagnostics and X-ray detections, shows that the detection of the [Ne v] λ3426 line is an effective method for selecting AGN in the optical band, in particular the most heavily obscured ones, but cannot provide a complete census of type 2 AGN by itself. Finally, the high fraction of [Ne v]-selected type 2 AGN not detected in medium-deep (≈100-200 ks) Chandra observations (67%) is suggestive of the inclusion of Compton-thick (i.e., with N_H > 10^24 cm^-2) sources in our sample. The presence of a population of heavily obscured AGN is corroborated by the X-ray-to-[Ne v] ratio; we estimated, by means of an X-ray stacking technique and simulations, that the Compton-thick fraction in our

  16. Microbiological sampling plan based on risk classification to verify supplier selection and production of served meals in food service operation.

    Science.gov (United States)

    Lahou, Evy; Jacxsens, Liesbeth; Van Landeghem, Filip; Uyttendaele, Mieke

    2014-08-01

Food service operations are confronted with a diverse range of raw materials and served meals. The implementation of a microbial sampling plan in the framework of verification of suppliers and of their own production process (functionality of the prerequisite and HACCP programs) demands selection of food products and sampling frequencies. However, these are often selected without a well-described, scientifically underpinned sampling plan. Therefore, an approach for setting up a focused sampling plan, enabled by a microbial risk categorization of food products, for both incoming raw materials and meals served to consumers, is presented. The sampling plan was implemented as a case study during a one-year period in an institutional food service operation to test the feasibility of the chosen approach. This resulted in 123 samples of raw materials and 87 samples of meal servings (focused on food products categorized as high risk), which were analyzed for spoilage bacteria, hygiene indicators and foodborne pathogens. Although sampling plans are intrinsically limited in assessing the quality and safety of sampled foods, the plan was shown to be useful in revealing major non-compliances and opportunities to improve the food safety management system in place. Points of attention deduced from the case study were the control of Listeria monocytogenes in raw meat spread and raw fish, as well as the overall microbial quality of served sandwiches and salads. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Mineral Composition of Selected Serbian Propolis Samples

    Directory of Open Access Journals (Sweden)

    Tosic Snezana

    2017-06-01

    Full Text Available The aim of this work was to determine the content of 22 macro- and microelements in ten raw Serbian propolis samples which differ in geographical and botanical origin as well as in polluted agent contents by atomic emission spectrometry with inductively coupled plasma (ICP-OES. The macroelements were more common and present Ca content was the highest while Na content the lowest. Among the studied essential trace elements Fe was the most common element. The levels of toxic elements (Pb, Cd, As and Hg were also analyzed, since they were possible environmental contaminants that could be transferred into propolis products for human consumption. As and Hg were not detected in any of the analyzed samples but a high level of Pb (2.0-9.7 mg/kg was detected and only selected portions of raw propolis could be used to produce natural medicines and dietary supplements for humans. Obtained results were statistically analyzed, and the examined samples showed a wide range of element content.

  18. A Highly Sensitive and Selective Method for the Determination of an Iodate in Table-salt Samples Using Malachite Green-based Spectrophotometry.

    Science.gov (United States)

    Konkayan, Mongkol; Limchoowong, Nunticha; Sricharoen, Phitchan; Chanthai, Saksit

    2016-01-01

A simple, rapid, and sensitive malachite green-based spectrophotometric method for the selective trace determination of iodate has been developed and is presented for the first time. The reaction specifically involves the liberation of iodine in the presence of an excess of iodide under acidic conditions, followed by an instantaneous reaction between the liberated iodine and the malachite green dye. The optimum conditions were a buffer solution of pH 5.2 in the presence of 40 mg L^-1 potassium iodide and 1.5 × 10^-5 M malachite green, with a 5-min incubation time. The iodate contents of some table-salt samples were in the range of 26 to 45 mg kg^-1, while those of drinking water, tap water, canal water, and seawater samples were not detectable (below the detection limit of 96 ng mL^-1), with satisfactory recoveries between 93 and 108%. The results agreed with those obtained using ICP-OES for comparison.

  19. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    Directory of Open Access Journals (Sweden)

    D. Ramyachitra

    2015-09-01

Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  20. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    Science.gov (United States)

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.
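For orientation, a plain binary PSO for gene selection looks as follows. This is a simplified stand-in for the interval-valued IVPSO variant described in the abstract, with cross-validated KNN accuracy as the fitness:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def bpso_gene_selection(X, y, n_particles=20, n_iter=30, seed=0):
    """Binary PSO over 0/1 gene masks; fitness = 3-fold KNN accuracy
    on the selected genes (a simplification of IVPSO)."""
    rng = np.random.default_rng(seed)
    n_genes = X.shape[1]
    pos = (rng.random((n_particles, n_genes)) < 0.1).astype(float)  # sparse masks
    vel = np.zeros_like(pos)

    def fitness(mask):
        sel = mask.astype(bool)
        if not sel.any():
            return 0.0
        knn = KNeighborsClassifier(n_neighbors=3)
        return cross_val_score(knn, X[:, sel], y, cv=3).mean()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # Sigmoid of the velocity gives the probability of selecting each gene.
        pos = (rng.random(vel.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return np.flatnonzero(gbest)        # indices of selected genes
```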

  1. The lack of selection bias in a snowball sampled case-control study on drug abuse.

    Science.gov (United States)

    Lopes, C S; Rodrigues, L C; Sichieri, R

    1996-12-01

Friend controls in matched case-control studies can be a potential source of bias, based on the assumption that friends are more likely to share exposure factors. This study evaluates the role of selection bias in a case-control study that used the snowball sampling method, based on friendship, for the selection of cases and controls. The cases selected for the study were drug abusers located in the community. Exposure was defined by the presence of at least one psychiatric diagnosis. Psychiatric and drug abuse/dependence diagnoses were made according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R) criteria. Cases and controls were matched on sex, age and friendship. The measurement of selection bias was made through the comparison of the proportion of exposed controls selected by exposed cases (p1) with the proportion of exposed controls selected by unexposed cases (p2). If p1 = p2, then selection bias should not occur. The observed distribution of the 185 matched pairs having at least one psychiatric disorder showed a p1 value of 0.52 and a p2 value of 0.51, indicating no selection bias in this study. Our findings support the idea that the use of friend controls can produce a valid basis for a case-control study.

  2. Solution-based targeted genomic enrichment for precious DNA samples

    Directory of Open Access Journals (Sweden)

    Shearer Aiden

    2012-05-01

Abstract. Background: Solution-based targeted genomic enrichment (TGE) protocols permit selective sequencing of genomic regions of interest on a massively parallel scale. These protocols could be improved by: (1) modifying or eliminating time-consuming steps; (2) increasing yield to reduce input DNA and excessive PCR cycling; and (3) enhancing reproducibility. Results: We developed a solution-based TGE method for downstream Illumina sequencing in a non-automated workflow, adding standard Illumina barcode indexes during the post-hybridization amplification to allow for sample pooling prior to sequencing. The method utilizes Agilent SureSelect baits, primers and hybridization reagents for the capture, off-the-shelf reagents for the library preparation steps, and adaptor oligonucleotides for Illumina paired-end sequencing purchased directly from an oligonucleotide manufacturing company. Conclusions: This solution-based TGE method for Illumina sequencing is optimized for small- or medium-sized laboratories and addresses the weaknesses of standard protocols by reducing the amount of input DNA required, increasing capture yield, optimizing efficiency, and improving reproducibility.

  3. 40 CFR 761.247 - Sample site selection for pipe segment removal.

    Science.gov (United States)

    2010-07-01

    ... end of the pipe segment. (3) If the pipe segment is cut with a saw or other mechanical device, take..., take samples from a total of seven segments. (A) Sample the first and last segments removed. (B) Select... total length for purposes of disposal, take samples of each segment that is 1/2 mile distant from the...

  4. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
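Two of the pieces named above are compact enough to sketch: min-max vector selection and the AAKR prediction itself. The Gaussian kernel and bandwidth below are standard choices, assumed rather than taken from the paper:

```python
import numpy as np

def min_max_select(X):
    """Min-max vector selection: keep every training vector that holds the
    minimum or maximum observed value of at least one signal, so the
    reduced memory matrix still spans the operating range."""
    keep = set()
    for j in range(X.shape[1]):
        keep.add(int(np.argmin(X[:, j])))
        keep.add(int(np.argmax(X[:, j])))
    return X[sorted(keep)]

def aakr_predict(memory, query, bandwidth=1.0):
    """Auto-associative kernel regression: the corrected estimate of a query
    vector is a Gaussian-kernel-weighted average of the memory vectors."""
    d2 = np.sum((memory - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return w @ memory / w.sum()
```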

  5. Computational fragment-based screening using RosettaLigand: the SAMPL3 challenge

    Science.gov (United States)

    Kumar, Ashutosh; Zhang, Kam Y. J.

    2012-05-01

The SAMPL3 fragment-based virtual screening challenge provides a valuable opportunity for researchers to test their programs, methods and screening protocols in a blind testing environment. We participated in the SAMPL3 challenge and evaluated our virtual fragment screening protocol, which involves RosettaLigand as the core component, by screening a 500-fragment Maybridge library against bovine pancreatic trypsin. Our study reaffirmed that the real test for any virtual screening approach is in a blind testing environment. The analyses presented in this paper also showed that virtual screening performance can be improved if a set of known active compounds is available and parameters and methods that yield better enrichment are selected. Our study also highlighted that, to achieve accurate orientation and conformation of ligands within a binding site, selecting an appropriate method to calculate partial charges is important. Another finding is that using multiple receptor ensembles in docking does not always yield better enrichment than individual receptors. On the basis of our results and retrospective analyses from the SAMPL3 fragment screening challenge, we anticipate that the chances of success in a fragment screening process could be increased significantly with careful selection of receptor structures, protein flexibility, sufficient conformational sampling within the binding pocket, and accurate assignment of ligand and protein partial charges.

  6. Selective ionic liquid ferrofluid based dispersive-solid phase extraction for simultaneous preconcentration/separation of lead and cadmium in milk and biological samples.

    Science.gov (United States)

    Fasih Ramandi, Negin; Shemirani, Farzaneh

    2015-01-01

    For the first time, a selective ionic liquid ferrofluid has been used in dispersive solid phase extraction (IL-FF-D-SPE) for simultaneous preconcentration and separation of lead and cadmium in milk and biological samples combined with flame atomic absorption spectrometry. To improve the selectivity of the ionic liquid ferrofluid, the surface of TiO2 nanoparticles with a magnetic core as sorbent was modified by loading 1-(2-pyridylazo)-2-naphtol. Due to the rapid injection of an appropriate amount of ionic liquid ferrofluid into the aqueous sample by a syringe, extraction can be achieved within a few seconds. In addition, based on the attraction of the ionic liquid ferrofluid to a magnet, no centrifugation step is needed for phase separation. The experimental parameters of IL-FF-D-SPE were optimized using a Box-Behnken design (BBD) after a Plackett-Burman screening design. Under the optimum conditions, the relative standard deviations of 2.2% and 2.4% were obtained for lead and cadmium, respectively (n=7). The limit of detections were 1.21 µg L(-1) for Pb(II) and 0.21 µg L(-1) for Cd(II). The preconcentration factors were 250 for lead and 200 for cadmium and the maximum adsorption capacities of the sorbent were 11.18 and 9.34 mg g(-1) for lead and cadmium, respectively. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Fine mapping quantitative trait loci under selective phenotyping strategies based on linkage and linkage disequilibrium criteria

    DEFF Research Database (Denmark)

    Ansari-Mahyari, S; Berg, P; Lund, M S

    2009-01-01

    Linkage analysis-based sampling criteria (LAC) and linkage disequilibrium-based sampling criteria (LDC) for selecting individuals to phenotype are compared to random phenotyping in a quantitative trait loci (QTL) verification experiment using stochastic simulation. Several strategies based on LAC and LDC for selecting the most informative 30%, 40% or 50% of individuals... for phenotyping to extract maximum power and precision in a QTL fine mapping experiment were developed and assessed. Linkage analysis for the mapping was performed for individuals sampled on LAC within families, and combined linkage disequilibrium and linkage analysis was performed for individuals sampled across... the whole population based on LDC. The results showed that selecting individuals with haplotypes similar to the paternal haplotypes (minimum recombination criterion) using LAC gave at least the same power to detect a QTL as random phenotyping, but decreased the accuracy of the QTL position. However...

  8. Proposal for selecting an ore sample from mining shaft under Kvanefjeld

    International Nuclear Information System (INIS)

    Lund Clausen, F.

    1979-02-01

    Uranium ore recovered from the tunnel under Kvanefjeld (Greenland) will be processed in a pilot plant. Selection of a fully representative ore sample for both the whole area and single local sites is discussed. A FORTRAN program for ore distribution is presented, in order to enable correct sampling. (EG)

  9. Automated sample plan selection for OPC modeling

    Science.gov (United States)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models and also to maintain or improve the quality of the data collected with regard to how well that data represents the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not be the best representation of the product. Forming the pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of quality equivalent to the traditional plan of record (POR) set, but in less time.

  10. Accounting for animal movement in estimation of resource selection functions: sampling and data analysis.

    Science.gov (United States)

    Forester, James D; Im, Hae Kyung; Rathouz, Paul J

    2009-12-01

    Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. This approach to
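
    As a rough illustration of the fitting strategy described above, the sketch below (Python; every covariate, distribution and sample size is invented) contrasts used steps with sampled control steps and includes step length as the distance covariate the authors recommend. A real SSF analysis would use conditional logistic regression stratified by step; the unconditional fit here is a simplification.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n_steps, n_controls = 300, 10

      # Resource covariate: used steps tend to fall on higher resource values
      res_used = rng.beta(4, 2, size=n_steps)
      res_ctrl = rng.beta(2, 2, size=n_steps * n_controls)

      # Step length: the movement kernel makes short steps more available
      len_used = rng.exponential(1.0, size=n_steps)
      len_ctrl = rng.exponential(2.0, size=n_steps * n_controls)

      X = np.column_stack([np.r_[res_used, res_ctrl], np.r_[len_used, len_ctrl]])
      y = np.r_[np.ones(n_steps), np.zeros(n_steps * n_controls)]  # 1 = used

      fit = LogisticRegression().fit(X, y)
      print("coefficients (resource, step length):", fit.coef_.ravel())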

  11. Active sites in the alkylation of toluene with methanol : a study by selective acid-base poisoning

    NARCIS (Netherlands)

    Borgna, A.; Sepulveda, J.; Magni, S.I.; Apesteguia, C.R.

    2004-01-01

    Selective acid–base poisoning of the alkylation of toluene with methanol was studied over alkali and alkaline-earth exchanged Y zeolites. Surface acid–base properties of the samples were determined by infrared spectroscopy using carbon dioxide and pyridine as probe molecules. Selective poisoning

  12. Evaluation of pump pulsation in respirable size-selective sampling: part II. Changes in sampling efficiency.

    Science.gov (United States)

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin

    2014-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the
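
    The abstract states that each sampling efficiency curve was fitted with a three-parameter sigmoid, without giving the functional form. The sketch below (Python/SciPy) assumes one common choice for penetration/efficiency curves, fits it to invented efficiencies, and compares against a hypothetical reference curve in the spirit of the bias maps:

      import numpy as np
      from scipy.optimize import curve_fit

      def sigmoid(d, a, d50, b):
          # Assumed form: asymptote a, 50% cut-point d50, steepness b
          return a / (1.0 + (d / d50) ** b)

      diam = np.arange(1.0, 10.0)          # 1-9 um test diameters
      eff = np.array([0.97, 0.95, 0.88, 0.72, 0.50, 0.30, 0.15, 0.07, 0.03])

      params, _ = curve_fit(sigmoid, diam, eff, p0=[1.0, 5.0, 4.0])
      ref = sigmoid(diam, 1.0, 4.8, 4.2)   # hypothetical reference cyclone
      bias = (sigmoid(diam, *params) - ref) / ref
      print(np.round(100 * bias, 1))       # percent bias per diameter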

  13. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
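
    A minimal sketch of the weighted binary matrix sampling step (Python; the authors provide Matlab code, and the data, sub-model count and update details here are assumptions): each variable carries an inclusion weight, sub-models are drawn as Bernoulli rows of a binary matrix, and the weights are re-estimated from the frequency of each variable in the best-performing sub-models, shrinking the variable space step by step.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      n, p = 80, 20
      X = rng.normal(size=(n, p))
      y = X[:, :5] @ np.array([3.0, -2.0, 1.5, 1.0, -1.0]) + 0.1 * rng.normal(size=n)

      w = np.full(p, 0.5)                      # initial inclusion weights
      for step in range(15):
          B = rng.random((50, p)) < w          # weighted binary sampling matrix
          scores = []
          for row in B:
              if not row.any():
                  scores.append(-np.inf)
                  continue
              s = cross_val_score(LinearRegression(), X[:, row], y,
                                  scoring="neg_root_mean_squared_error", cv=5)
              scores.append(s.mean())          # higher (less negative) is better
          best = B[np.argsort(scores)[-10:]]   # keep the 10 best sub-models
          w = best.mean(axis=0)                # new weights = inclusion frequencies

      print("selected variables:", np.flatnonzero(w > 0.5))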

  14. Selection Component Analysis of Natural Polymorphisms using Population Samples Including Mother-Offspring Combinations, II

    DEFF Research Database (Denmark)

    Jarmer, Hanne Østergaard; Christiansen, Freddy Bugge

    1981-01-01

    Population samples including mother-offspring combinations provide information on the selection components: zygotic selection, sexual selection, gametic selection and fecundity selection, on the mating pattern, and on the deviation from linkage equilibrium among the loci studied. The theory...

  15. Privacy problems in the small sample selection

    Directory of Open Access Journals (Sweden)

    Loredana Cerbara

    2013-05-01

    Full Text Available The side of social research that uses small samples for the production of micro data today faces operating difficulties due to the privacy law. The privacy code is an important and necessary law because it guarantees Italian citizens' rights, as already happens in other countries of the world. However, it does not seem appropriate to limit once more the data-production possibilities of the national centres of research. Those possibilities are moreover already compromised by insufficient funds, a problem that is becoming more and more frequent in the research field. It would therefore be necessary to include in the law the possibility of using telephone lists to select samples useful for activities directly of interest and importance to the citizen, such as the collection of data carried out on the basis of opinion polls by the centres of research of the Italian CNR and some universities.

  16. Data Quality Objectives For Selecting Waste Samples To Test The Fluid Bed Steam Reformer Test

    International Nuclear Information System (INIS)

    Banning, D.L.

    2010-01-01

    This document describes the data quality objectives to select archived samples located at the 222-S Laboratory for Fluid Bed Steam Reformer testing. The type, quantity and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluid bed steam reformer (FBSR). A determination of the adequacy of the FBSR process to treat Hanford tank waste is required. The initial step in determining the adequacy of the FBSR process is to select archived waste samples from the 222-S Laboratory that will be used to test the FBSR process. Analyses of the selected samples will be required to confirm the samples meet the testing criteria.

  17. Chemometric classification of casework arson samples based on gasoline content.

    Science.gov (United States)

    Sinkov, Nikolai A; Sandercock, P Mark L; Harynuk, James J

    2014-02-01

    Detection and identification of ignitable liquids (ILs) in arson debris is a critical part of arson investigations. The challenge of this task is due to the complex and unpredictable chemical nature of arson debris, which also contains pyrolysis products from the fire. ILs, most commonly gasoline, are complex chemical mixtures containing hundreds of compounds that will be consumed or otherwise weathered by the fire to varying extents depending on factors such as temperature, air flow, the surface on which IL was placed, etc. While methods such as ASTM E-1618 are effective, data interpretation can be a costly bottleneck in the analytical process for some laboratories. In this study, we address this issue through the application of chemometric tools. Prior to the application of chemometric tools such as PLS-DA and SIMCA, issues of chromatographic alignment and variable selection need to be addressed. Here we use an alignment strategy based on a ladder consisting of perdeuterated n-alkanes. Variable selection and model optimization was automated using a hybrid backward elimination (BE) and forward selection (FS) approach guided by the cluster resolution (CR) metric. In this work, we demonstrate the automated construction, optimization, and application of chemometric tools to casework arson data. The resulting PLS-DA and SIMCA classification models, trained with 165 training set samples, have provided classification of 55 validation set samples based on gasoline content with 100% specificity and sensitivity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
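
    For readers unfamiliar with the classification step, the sketch below (Python/scikit-learn, on synthetic two-class profiles, not the casework data) shows the usual way PLS-DA is built from a PLS regression on a 0/1 class label; the alignment and CR-guided variable selection stages of the paper are not reproduced.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(3)
      X_pos = rng.normal(1.0, 0.2, size=(40, 30))   # gasoline-positive profiles
      X_neg = rng.normal(0.0, 0.2, size=(40, 30))   # gasoline-negative profiles
      X = np.vstack([X_pos, X_neg])
      y = np.r_[np.ones(40), np.zeros(40)]          # 1 = gasoline present

      pls = PLSRegression(n_components=2).fit(X, y)
      y_hat = (pls.predict(X).ravel() > 0.5).astype(int)  # threshold at 0.5

      print("sensitivity:", (y_hat[y == 1] == 1).mean())
      print("specificity:", (y_hat[y == 0] == 0).mean())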

  18. Theory of sampling and its application in tissue based diagnosis

    Directory of Open Access Journals (Sweden)

    Kayser Gian

    2009-02-01

    Full Text Available Abstract Background A general theory of sampling and its application in tissue based diagnosis is presented. Sampling is defined as extraction of information from certain limited spaces and its transformation into a statement or measure that is valid for the entire (reference) space. The procedure should be reproducible in time and space, i.e. give the same results when applied under similar circumstances. Sampling includes two different aspects, the procedure of sample selection and the efficiency of its performance. The practical performance of sample selection focuses on the search for localization of specific compartments within the basic space, and the search for presence of specific compartments. Methods When a sampling procedure is applied in diagnostic processes, two different procedures can be distinguished: (I) the evaluation of the diagnostic significance of a certain object, which is the probability that the object can be grouped into a certain diagnosis, and (II) the probability to detect these basic units. Sampling can be performed without or with external knowledge, such as the size of searched objects, neighbourhood conditions, spatial distribution of objects, etc. If the sample size is much larger than the object size, the application of a translation invariant transformation results in Krige's formula, which is widely used in the search for ores. Usually, sampling is performed in a series of area (space) selections of identical size. The size can be defined in relation to the reference space or according to interspatial relationship. The first method is called random sampling, the second stratified sampling. Results Random sampling does not require knowledge about the reference space, and is used to estimate the number and size of objects. Estimated features include area (volume) fraction, numerical, boundary and surface densities. Stratified sampling requires the knowledge of objects (and their features) and evaluates spatial features in relation to

  19. Portable, universal, and visual ion sensing platform based on the light emitting diode-based self-referencing-ion selective field-effect transistor.

    Science.gov (United States)

    Zhang, Xiaowei; Han, Yanchao; Li, Jing; Zhang, Libing; Jia, Xiaofang; Wang, Erkang

    2014-02-04

    In this work, a novel and universal ion sensing platform was presented which enables the visual detection of various ions with high sensitivity and selectivity. Coaxial potential signals (millivolt-scale) of the sample from the self-referencing (SR) ion selective chip can be transferred into the AD620-based amplifier with an output of volt-scale potentials. The amplified voltage is high enough to drive a light emitting diode (LED), which can be used as an amplifier and indicator to report the sample information. With this double amplification device (light emitting diode-based self-referencing ion selective field-effect transistor, LED-SR-ISFET), a tiny change in the sample concentration can be observed as a distinguishable variation in LED brightness by visual inspection. This LED-based luminescent platform provided a facile, low-cost, and rapid sensing strategy without the need for additional expensive chemiluminescence reagents and instruments. Moreover, the SR mode also endows this device with excellent stability and reliability. With this innovative design, sensitive determination of K(+), H(+), and Cl(-) by the naked eye was achieved. It should also be noted that this sensing strategy can easily be extended to other ions (or molecules) by simply integrating the corresponding ion (or molecule) selective electrode.

  20. Sample selection via angular distance in the space of the arguments of an artificial neural network

    Science.gov (United States)

    Fernández Jaramillo, J. M.; Mayerle, R.

    2018-05-01

    In the construction of an artificial neural network (ANN), a proper data splitting of the available samples plays a major role in the training process. This selection of subsets for training, testing and validation affects the generalization ability of the neural network. The number of samples also has an impact on the time required for the design of the ANN and the training. This paper introduces an efficient and simple method for reducing the set of samples used for training a neural network. The method reduces the time required to calculate the network coefficients, while keeping the diversity and avoiding overtraining of the ANN due to the presence of similar samples. The proposed method is based on the calculation of the angle between two vectors, each one representing one input of the neural network. When the angle formed between samples is smaller than a defined threshold, only one input is accepted for the training. The accepted inputs are scattered throughout the sample space. Tidal records are used to demonstrate the proposed method. The results of a cross-validation show that with few inputs the quality of the outputs is poor and depends on the selection of the first sample, but as the number of inputs increases the accuracy improves and the differences among scenarios with different starting samples are substantially reduced. A comparison with the K-means clustering algorithm shows that for this application the proposed method produces a more accurate network with a smaller number of samples.
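
    A minimal sketch of the acceptance rule (Python; the cosine-based angle computation and the convention that the first sample is always accepted are assumptions consistent with, but not spelled out in, the abstract):

      import numpy as np

      def select_by_angle(samples, threshold_deg=20.0):
          # Accept a sample only if the angle it forms with every previously
          # accepted sample exceeds the threshold.
          cos_thr = np.cos(np.radians(threshold_deg))
          kept_idx, kept_units = [], []
          for i, x in enumerate(samples):
              u = x / np.linalg.norm(x)
              # cos(angle) >= cos_thr means the angle is below the threshold
              if all(np.dot(u, a) < cos_thr for a in kept_units):
                  kept_idx.append(i)
                  kept_units.append(u)
          return kept_idx

      rng = np.random.default_rng(4)
      X = rng.normal(size=(1000, 5))   # e.g. vectors of lagged tidal levels
      idx = select_by_angle(X)
      print(len(idx), "of", len(X), "samples kept for training")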

  1. New sorbent materials for selective extraction of cocaine and benzoylecgonine from human urine samples.

    Science.gov (United States)

    Bujak, Renata; Gadzała-Kopciuch, Renata; Nowaczyk, Alicja; Raczak-Gutknecht, Joanna; Kordalewska, Marta; Struck-Lewicka, Wiktoria; Waszczuk-Jankowska, Małgorzata; Tomczak, Ewa; Kaliszan, Michał; Buszewski, Bogusław; Markuszewski, Michał J

    2016-02-20

    An increase in cocaine consumption has been observed in Europe during the last decade. Benzoylecgonine, as a main urinary metabolite of cocaine in human, is so far the most reliable marker of cocaine consumption. Determination of cocaine and its metabolite in complex biological samples as urine or blood, requires efficient and selective sample pretreatment. In this preliminary study, the newly synthesized sorbent materials were proposed for selective extraction of cocaine and benzoylecgonine from urine samples. Application of these sorbent media allowed to determine cocaine and benzoylecgonine in urine samples at the concentration level of 100ng/ml with good recovery values as 81.7%±6.6 and 73.8%±4.2, respectively. The newly synthesized materials provided efficient, inexpensive and selective extraction of both cocaine and benzoylecgonine from urine samples, which can consequently lead to an increase of the sensitivity of the current available screening diagnostic tests. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Observed Characteristics and Teacher Quality: Impacts of Sample Selection on a Value Added Model

    Science.gov (United States)

    Winters, Marcus A.; Dixon, Bruce L.; Greene, Jay P.

    2012-01-01

    We measure the impact of observed teacher characteristics on student math and reading proficiency using a rich dataset from Florida. We expand upon prior work by accounting directly for nonrandom attrition of teachers from the classroom in a sample selection framework. We find evidence that sample selection is present in the estimation of the…

  3. The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes

    Science.gov (United States)

    Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark

    2000-01-01

    Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log10(L[OII]/W) ≳ 35 [or radio luminosity log10(L151/(W/Hz/sr)) ≳ 26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle θ(trans) ≈ 53°. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in θ(trans) and/or a gradual increase in the fraction of lightly-reddened lines of sight with decreasing quasar luminosity; and (ii) the emergence of a distinct second population of low luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.

  4. Methodology Series Module 5: Sampling Strategies

    OpenAIRE

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice among the population that is accessible and available. Some of the non-probabilit...

  5. Progressive sample processing of band selection for hyperspectral imagery

    Science.gov (United States)

    Liu, Keng-Hao; Chien, Hung-Chang; Chen, Shih-Yu

    2017-10-01

    Band selection (BS) is one of the most important topics in hyperspectral image (HSI) processing. The objective of BS is to find a set of representative bands that can represent the whole image with lower inter-band redundancy. Many types of BS algorithms have been proposed in the past. However, most of them can only be carried out in an off-line manner, i.e., implemented on pre-collected data. Such off-line methods are sometimes useless for time-critical applications, particularly disaster prevention and target detection. To tackle this issue, a new concept, called progressive sample processing (PSP), was proposed recently. PSP is an "on-line" framework in which a given type of algorithm can process the currently collected data during the data transmission under the band-interleaved-by-sample/pixel (BIS/BIP) protocol. This paper proposes an online BS method that integrates a sparse-based BS into the PSP framework, called PSP-BS. In PSP-BS, the BS is carried out by updating the BS result recursively, pixel by pixel, in the same way that a Kalman filter updates data information in a recursive fashion. The sparse regression is solved by the orthogonal matching pursuit (OMP) algorithm, and the recursive equations of PSP-BS are derived using matrix decomposition. The experiments conducted on a real hyperspectral image show that PSP-BS can progressively output the BS status with very low computing time. The convergence of BS results during the transmission can be quickly achieved by using a rearranged pixel transmission sequence. This significant advantage allows BS to be implemented in a real time manner as the HSI data is transmitted pixel by pixel.
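
    The recursive PSP-BS equations are not given in the abstract, but the sparse-regression core can be imitated offline (Python/scikit-learn; the synthetic cube, the choice of the band-mean image as the regression target, and k are all assumptions): pick the k bands whose columns best reconstruct a summary image via orthogonal matching pursuit, and re-run the selection as more pixels arrive to mimic the progressive behaviour, where the paper instead updates recursively.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(5)
      n_pixels, n_bands, k = 500, 60, 6
      cube = rng.normal(size=(n_pixels, n_bands)).cumsum(axis=1)  # smooth fake spectra

      def omp_band_selection(pixels, k):
          # Bands whose columns best reconstruct the mean-over-bands image
          target = pixels.mean(axis=1)
          omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(pixels, target)
          return np.flatnonzero(omp.coef_)

      for n_seen in (50, 200, 500):    # pixels arriving under a BIP-like stream
          print(n_seen, "pixels:", omp_band_selection(cube[:n_seen], k))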

  6. Sampling point selection for energy estimation in the quasicontinuum method

    NARCIS (Netherlands)

    Beex, L.A.A.; Peerlings, R.H.J.; Geers, M.G.D.

    2010-01-01

    The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of

  7. Sample Entropy-Based Approach to Evaluate the Stability of Double-Wire Pulsed MIG Welding

    Directory of Open Access Journals (Sweden)

    Ping Yao

    2014-01-01

    Full Text Available Based on sample entropy, this paper presents a quantitative method to evaluate the current stability in double-wire pulsed MIG welding. Firstly, the sample entropy of current signals with different stability but the same parameters is calculated. The results show that the more stable the current, the smaller the value and the standard deviation of the sample entropy. Secondly, four parameters, which are pulse width, peak current, base current, and frequency, are selected for a four-level three-factor orthogonal experiment. The calculation and analysis of the desired signals indicate that sample entropy values are affected by the welding current parameters. Then, a quantitative method based on sample entropy is proposed. The experimental results show that the method can effectively quantify the welding current stability.
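
    A compact implementation of sample entropy (Python; a common SampEn formulation with Chebyshev distance and the usual convention r = 0.2·std; the two signals below are synthetic stand-ins for stable and unstable current records):

      import numpy as np

      def sample_entropy(x, m=2, r_factor=0.2):
          # SampEn(m, r, N): -log of the conditional probability that
          # sequences matching for m points also match for m + 1 points.
          x = np.asarray(x, dtype=float)
          r = r_factor * x.std()

          def matches(mm):
              templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
              count = 0
              for i in range(len(templ) - 1):
                  # Chebyshev distance from template i to all later templates
                  d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
                  count += int(np.sum(d <= r))
              return count

          B, A = matches(m), matches(m + 1)
          return -np.log(A / B) if A > 0 and B > 0 else np.inf

      rng = np.random.default_rng(6)
      t = np.linspace(0.0, 1.0, 2000)
      stable = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)
      unstable = np.sin(2 * np.pi * 50 * t) + 0.50 * rng.normal(size=t.size)
      print(sample_entropy(stable), "<", sample_entropy(unstable))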

  8. Thermal properties of selected cheeses samples

    Directory of Open Access Journals (Sweden)

    Monika BOŽIKOVÁ

    2016-02-01

    Full Text Available The thermophysical parameters of selected cheeses (processed cheese and half-hard cheese) are presented in the article. Cheese is a generic term for a diverse group of milk-based food products, produced throughout the world in a wide range of flavors, textures, and forms. During processing, cheese undergoes thermal and mechanical manipulation, so its thermal properties are among the most important. Knowledge of the thermal parameters of cheeses could be used in the process of quality evaluation. Based on these facts, the thermal properties of selected cheeses made by Slovak producers were measured. The theoretical part of the article contains a description of cheese and of the plane source method, which was used for the detection of thermal parameters. Thermophysical parameters such as thermal conductivity, thermal diffusivity and volume specific heat were measured during temperature stabilisation. The results are presented as relations of the thermophysical parameters to temperature in the range from 13.5°C to 24°C. Each point of a graphical relation was obtained as the arithmetic average of the values measured at the same temperature. The obtained results were statistically processed. The presented graphical relations were chosen according to the results of the statistical evaluation and the coefficients of determination for each relation. The results for the thermal parameters are in good agreement with values measured by other authors for similar types of cheeses.

  9. A large sample of Kohonen-selected SDSS quasars with weak emission lines: selection effects and statistical properties

    Science.gov (United States)

    Meusinger, H.; Balafkan, N.

    2014-08-01

    Aims: A tiny fraction of the quasar population shows remarkably weak emission lines. Several hypotheses have been developed, but the weak line quasar (WLQ) phenomenon still remains puzzling. The aim of this study was to create a sizeable sample of WLQs and WLQ-like objects and to evaluate various properties of this sample. Methods: We performed a search for WLQs in the spectroscopic data from the Sloan Digital Sky Survey Data Release 7 based on Kohonen self-organising maps for nearly 10^5 quasar spectra. The final sample consists of 365 quasars in the redshift range z = 0.6 - 4.2 (mean z = 1.50 ± 0.45) and includes in particular a subsample of 46 WLQs with equivalent widths W(Mg II) ... Particular attention was paid to selection effects. Results: The WLQs have, on average, significantly higher luminosities, Eddington ratios, and accretion rates. About half of the excess comes from a selection bias, but an intrinsic excess remains, probably caused primarily by higher accretion rates. The spectral energy distribution shows a bluer continuum at rest-frame wavelengths ≳ 1500 Å. The variability in the optical and UV is relatively low, even taking the variability-luminosity anti-correlation into account. The percentage of radio-detected quasars and of core-dominant radio sources is significantly higher than for the control sample, whereas the mean radio-loudness is lower. Conclusions: The properties of our WLQ sample can be consistently understood assuming that it consists of a mix of quasars at the beginning of a stage of increased accretion activity and of beamed radio-quiet quasars. The higher luminosities and Eddington ratios in combination with a bluer spectral energy distribution can be explained by hotter continua, i.e. higher accretion rates. If quasar activity consists of subphases with different accretion rates, a change towards a higher rate is probably accompanied by an only slow development of the broad line region. The composite WLQ spectrum can be reasonably matched by the

  10. Application of In-Segment Multiple Sampling in Object-Based Classification

    Directory of Open Access Journals (Sweden)

    Nataša Đurić

    2014-12-01

    Full Text Available When object-based analysis is applied to very high-resolution imagery, pixels within the segments reveal large spectral inhomogeneity; their distribution can be considered complex rather than normal. When normality is violated, the classification methods that rely on the assumption of normally distributed data are not as successful or accurate. It is hard to detect normality violations in small samples. The segmentation process produces segments that vary highly in size; samples can be very big or very small. This paper investigates whether the complexity within the segment can be addressed using multiple random sampling of segment pixels and multiple calculations of similarity measures. In order to analyze the effect sampling has on classification results, statistics and probability value equations of the non-parametric two-sample Kolmogorov-Smirnov test and the parametric Student's t-test are selected as similarity measures in the classification process. The performance of both classifiers was assessed on a WorldView-2 image for four land cover classes (roads, buildings, grass and trees) and compared to two commonly used object-based classifiers—k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). Both proposed classifiers showed a slight improvement in the overall classification accuracies and produced more accurate classification maps when compared to the ground truth image.
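
    The in-segment multiple sampling idea can be sketched as follows (Python/SciPy; sample sizes, draw counts and the aggregation by mean p-value are assumptions, and the per-band handling of the paper is not reproduced): draw several small random subsets of a segment's pixels and test each against a class reference sample.

      import numpy as np
      from scipy.stats import ks_2samp, ttest_ind

      rng = np.random.default_rng(7)
      reference = rng.normal(0.30, 0.05, size=400)    # class reference pixels
      segment = rng.normal(0.32, 0.08, size=2500)     # one segment's pixels

      def similarity(segment, reference, n_draws=25, size=50):
          p_ks, p_t = [], []
          for _ in range(n_draws):
              sub = rng.choice(segment, size=size, replace=False)
              p_ks.append(ks_2samp(sub, reference).pvalue)       # non-parametric
              p_t.append(ttest_ind(sub, reference, equal_var=False).pvalue)
          return np.mean(p_ks), np.mean(p_t)

      print("mean p-values (KS, t):", similarity(segment, reference))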

  11. Recurrent pain is associated with decreased selective attention in a population-based sample.

    Science.gov (United States)

    Gijsen, C P; Dijkstra, J B; van Boxtel, M P J

    2011-01-01

    Studies which have examined the impact of pain on cognitive functioning in the general population are scarce. In the present study we assessed the predictive value of recurrent pain for cognitive functioning in a population-based study (N=1400). Furthermore, we investigated the effect of pain on cognitive functioning in individuals with specific pain complaints (i.e. back pain, gastric pain, muscle pain and headache). Cognitive functioning was assessed using the Stroop Color-Word Interference test (Stroop interference), the Letter-Digit-Substitution test (LDST) and the Visual Verbal Learning Task (VVLT). Pain was measured with the COOP/WONCA pain scale (Dartmouth Primary Care Cooperative Information Project/World Organization of National Colleges, Academies, and Academic Associations of General Practice/Family Physicians). We controlled for the effects of age, sex, level of education and depressive symptoms. It was demonstrated that pain had a negative impact on performance on the Stroop interference test but not on the VVLT or the LDST. This indicates that subjects who reported extreme pain had more problems with selective attention and were more easily distracted. Effects were in general larger in the specific pain groups when compared to the associations found in the total group. Implications of these findings are discussed. The experience of recurrent pain has a negative influence on selective attention in a healthy population. Copyright © 2010 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  12. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using

  13. Response rates and selection problems, with emphasis on mental health variables and DNA sampling, in large population-based, cross-sectional and longitudinal studies of adolescents in Norway.

    Science.gov (United States)

    Bjertness, Espen; Sagatun, Ase; Green, Kristian; Lien, Lars; Søgaard, Anne Johanne; Selmer, Randi

    2010-10-12

    Selection bias is a threat to the internal validity of epidemiological studies. In light of a growing number of studies which aim to provide DNA, as well as a considerable number of invitees who declined to participate, we discuss response rates, predictors of loss to follow-up and failure to provide DNA, and the presence of possible selection bias, based on five samples of adolescents. We included nearly 7,000 adolescents from two longitudinal studies of 18/19 year olds with two corresponding cross-sectional baseline studies at age 15/16 (10th graders), and one cross-sectional study of 13th graders (18/19 years old). DNA was sampled from the cheek mucosa of 18/19 year olds. Predictors of loss to follow-up and failure to provide DNA were studied by Poisson regression. Selection bias in the follow-up at age 18/19 was estimated through investigation of prevalence ratios (PRs) between selected exposures (physical activity, smoking) and outcome variables (general health, mental distress, externalizing problems) measured at baseline. Of the 5,750 who participated at age 15/16, we lost 42% at follow-up at age 18/19. The percentage of participants who gave their consent to DNA provision was as high as the percentage that consented to a linkage of data with other health registers and surveys, approximately 90%. Significant predictors of loss to follow-up and failure to provide DNA samples in the present genetic epidemiological study were: male gender; non-western ethnicity; postal survey compared with school-based; low educational plans; low education and income of father; low perceived family economy; unmarried parents; poor self-reported health; externalized symptoms and smoking, with some differences in subgroups of ethnicity and gender. The association measures (PRs) were quite similar among participants and all invitees, with some minor discrepancies in subgroups of non-western boys and girls. Loss to follow-up had marginal impact on the estimated prevalence ratios

  14. Random selection of items. Selection of n1 samples among N items composing a stratum

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1987-02-01

    STR-224 provides generalized procedures to determine required sample sizes, for instance in the course of a Physical Inventory Verification at Bulk Handling Facilities. The present report describes procedures to generate random numbers and select groups of items to be verified in a given stratum through each of the measurement methods involved in the verification. (author). 3 refs
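
    A minimal sketch of the basic operation the report describes, drawing n1 of the N items of a stratum at random and without replacement (Python; item labels and the seed are illustrative):

      import random

      def select_items(N, n1, seed=None):
          # Items are labelled 1..N as in an inventory listing
          rng = random.Random(seed)
          return sorted(rng.sample(range(1, N + 1), n1))

      print(select_items(250, 12, seed=42))   # verify 12 items out of 250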

  15. Towards the harmonization between National Forest Inventory and Forest Condition Monitoring. Consistency of plot allocation and effect of tree selection methods on sample statistics in Italy.

    Science.gov (United States)

    Gasparini, Patrizia; Di Cosmo, Lucio; Cenni, Enrico; Pompei, Enrico; Ferretti, Marco

    2013-07-01

    In the frame of a process aiming at harmonizing the National Forest Inventory (NFI) and ICP Forests Level I Forest Condition Monitoring (FCM) in Italy, we investigated (a) the long-term consistency between FCM sample points (a subsample of the first NFI, 1985, NFI_1) and recent forest area estimates (after the second NFI, 2005, NFI_2) and (b) the effect of tree selection method (tree-based or plot-based) on sample composition and defoliation statistics. The two investigations were carried out on 261 and 252 FCM sites, respectively. Results show that some individual forest categories (larch and stone pine, Norway spruce, other coniferous, beech, temperate oaks and cork oak forests) are over-represented and others (hornbeam and hophornbeam, other deciduous broadleaved and holm oak forests) are under-represented in the FCM sample. This is probably due to a change in forest cover, which has increased by 1,559,200 ha from 1985 to 2005. In the case of a shift from a tree-based to a plot-based selection method, 3,130 (46.7%) of the original 6,703 sample trees would be abandoned, and 1,473 new trees would be selected. The balance between exclusion of former sample trees and inclusion of new ones would be particularly unfavourable for conifers (with only 16.4% of excluded trees replaced by new ones) and less so for deciduous broadleaves (with 63.5% of excluded trees replaced). The total number of tree species surveyed will not be impacted, while the number of trees per species will, and the resulting (plot-based) sample composition will have a much larger frequency of deciduous broadleaved trees. The newly selected trees have, in general, smaller diameter at breast height (DBH) and defoliation scores. Given the larger rate of turnover, the deciduous broadleaved part of the sample will be more impacted. Our results suggest both a revision of the FCM network to account for forest area change and a plot-based approach to permit statistical inference and avoid bias in the tree sample

  16. Selective determination of four arsenic species in rice and water samples by modified graphite electrode-based electrolytic hydride generation coupled with atomic fluorescence spectrometry.

    Science.gov (United States)

    Yang, Xin-An; Lu, Xiao-Ping; Liu, Lin; Chi, Miao-Bin; Hu, Hui-Hui; Zhang, Wang-Bing

    2016-10-01

    This work describes a novel non-chromatographic approach for the accurate and selective determination of As species by modified graphite electrode-based electrolytic hydride generation (EHG) for sample introduction, coupled with atomic fluorescence spectrometry (AFS) detection. Two kinds of sulfydryl-containing modifiers, l-cysteine (Cys) and glutathione (GSH), are used to modify the cathode. The EHG performance of As is changed greatly at the modified cathode, which has never been reported. Arsenite [As(III)] on the GSH-modified graphite electrode (GSH/GE)-based EHG can be selectively and quantitatively converted to AsH3 at an applied current of 0.4 A. As(III) and arsenate [As(V)] on the Cys-modified graphite electrode (Cys/GE) EHG can be selectively and efficiently converted to arsine at an applied current of 0.6 A, whereas monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA) form either no volatile hydrides or only less volatile ones under this condition. By changing the analytical conditions, we have also achieved the analysis of total As (tAs) and DMA. Under the optimal conditions, the detection limits (3s) of As(III), iAs and tAs in aqueous solutions are 0.25 μg L(-1), 0.22 μg L(-1) and 0.10 μg L(-1), respectively. The accuracy of the method is verified through the analysis of standard reference materials (SRM 1568a). Copyright © 2016 Elsevier B.V. All rights reserved.

  17. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment (2010-07-01): under § 761.79(b)(3), § 761.308 prescribes sample selection by random number generation on any two-dimensional square grid. For each area created in accordance with paragraph (a) of this section, select two random numbers: one each for...
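
    Only a fragment of the procedure survives above, but its core step, one random number per axis to pick a cell of the two-dimensional square grid, can be sketched as (Python; the grid dimensions are illustrative):

      import random

      def grid_sample_point(n_cells_x, n_cells_y, seed=None):
          # Two random numbers: one each for the two grid axes
          rng = random.Random(seed)
          return rng.randint(1, n_cells_x), rng.randint(1, n_cells_y)

      print(grid_sample_point(20, 20, seed=7))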

  18. Frequency-Selective Signal Sensing with Sub-Nyquist Uniform Sampling Scheme

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2015-01-01

    In this paper the authors discuss a problem of acquisition and reconstruction of a signal polluted by adjacent-channel interference. The authors propose a method to find a sub-Nyquist uniform sampling pattern which allows for correct reconstruction of selected frequencies. The method is inspired... by the Restricted Isometry Property, which is known from the field of compressed sensing. Then, compressed sensing is used to successfully reconstruct a wanted signal even if some of the uniform samples were randomly lost, e.g. due to ADC saturation. An experiment which tests the proposed method in practice...

  19. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2013-01-01

    Full Text Available Process modeling by means of Gaussian-based algorithms often suffers from redundant information which usually increases the estimation computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on the determination of the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent of the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.
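
    A minimal sketch of a covariance-driven selection step in this family of filters (Python; the article's criterion also weighs estimation convergence, which is omitted here, and the matrices are invented): among candidate measurements, pick the one whose Kalman update most reduces the trace of the state covariance.

      import numpy as np

      def select_measurement(P, candidates):
          best, best_trace = None, np.inf
          for idx, (H, R) in enumerate(candidates):
              S = H @ P @ H.T + R                    # innovation covariance
              K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
              P_upd = (np.eye(P.shape[0]) - K @ H) @ P
              if np.trace(P_upd) < best_trace:
                  best, best_trace = idx, np.trace(P_upd)
          return best, best_trace

      P = np.diag([4.0, 1.0])                        # current state uncertainty
      candidates = [(np.array([[1.0, 0.0]]), np.array([[0.5]])),   # measures x1
                    (np.array([[0.0, 1.0]]), np.array([[0.5]]))]   # measures x2
      print(select_measurement(P, candidates))       # picks index 0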

  20. Silicon based ultrafast optical waveform sampling

    DEFF Research Database (Denmark)

    Ji, Hua; Galili, Michael; Pu, Minhao

    2010-01-01

    A 300 nm × 450 nm × 5 mm silicon nanowire is designed and fabricated for a four wave mixing based non-linear optical gate. Based on this silicon nanowire, an ultra-fast optical sampling system is successfully demonstrated using a free-running fiber laser with a carbon nanotube-based mode-locker as the sampling source. A clear eye-diagram of a 320 Gbit/s data signal is obtained. The temporal resolution of the sampling system is estimated to be 360 fs.

  1. Assessment of selected contaminants in streambed- and suspended-sediment samples collected in Bexar County, Texas, 2007-09

    Science.gov (United States)

    Wilson, Jennifer T.

    2011-01-01

    Elevated concentrations of sediment-associated contaminants are typically associated with urban areas such as San Antonio, Texas, in Bexar County, the seventh most populous city in the United States. This report describes an assessment of selected sediment-associated contaminants in samples collected in Bexar County from sites on the following streams: Medio Creek, Medina River, Elm Creek, Martinez Creek, Chupaderas Creek, Leon Creek, Salado Creek, and San Antonio River. During 2007-09, the U.S. Geological Survey periodically collected surficial streambed-sediment samples during base flow and suspended-sediment (large-volume suspended-sediment) samples from selected streams during stormwater runoff. All sediment samples were analyzed for major and trace elements and for organic compounds including halogenated organic compounds and polycyclic aromatic hydrocarbons (PAHs). Selected contaminants in streambed and suspended sediments in watersheds of the eight major streams in Bexar County were assessed by using a variety of methods—observations of occurrence and distribution, comparison to sediment-quality guidelines and data from previous studies, statistical analyses, and source indicators. Trace element concentrations were low compared to the consensus-based sediment-quality guidelines threshold effect concentration (TEC) and probable effect concentration (PEC). Trace element concentrations were greater than the TEC in 28 percent of the samples and greater than the PEC in 1.5 percent of the samples. Chromium concentrations exceeded sediment-quality guidelines more frequently than concentrations of any other constituents analyzed in this study (greater than the TEC in 69 percent of samples and greater than the PEC in 8 percent of samples). Mean trace element concentrations generally are lower in Bexar County samples compared to concentrations in samples collected during previous studies in the Austin and Fort Worth, Texas, areas, but considering the relatively

  2. Wavelength Selection Method Based on Differential Evolution for Precise Quantitative Analysis Using Terahertz Time-Domain Spectroscopy.

    Science.gov (United States)

    Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong

    2017-12-01

    Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial component. The raw spectrum consists of signals from the sample and scattering and other random disturbances that can critically influence the quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
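
    A rough sketch of DE-driven wavelength selection (Python/SciPy; the synthetic spectra, the 0.5 mask threshold, the size penalty and the linear calibration model are all assumptions, and the paper's THz preprocessing is not reproduced): DE searches over continuous weights that are thresholded into a wavelength mask and scored by cross-validated calibration error.

      import numpy as np
      from scipy.optimize import differential_evolution
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(8)
      n_samples, n_wl = 60, 25
      c = rng.uniform(0.0, 1.0, n_samples)           # hypothetical concentrations
      X = rng.normal(scale=0.3, size=(n_samples, n_wl))
      for j, s in [(3, 2.0), (7, 1.5), (12, 1.0)]:   # only 3 wavelengths carry signal
          X[:, j] += s * c

      def objective(weights):
          mask = weights > 0.5                       # continuous vector -> mask
          if not mask.any():
              return 1e6
          s = cross_val_score(LinearRegression(), X[:, mask], c,
                              scoring="neg_root_mean_squared_error", cv=5)
          return -s.mean() + 0.01 * mask.sum()       # RMSE + penalty on size

      res = differential_evolution(objective, bounds=[(0, 1)] * n_wl,
                                   maxiter=20, popsize=12, seed=8, polish=False)
      print("selected wavelengths:", np.flatnonzero(res.x > 0.5))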

  3. Composition of Trace Metals in Dust Samples Collected from Selected High Schools in Pretoria, South Africa

    Directory of Open Access Journals (Sweden)

    J. O. Olowoyo

    2016-01-01

    Full Text Available Potential health risks associated with trace metal pollution have necessitated the importance of monitoring their levels in the environment. The present study investigated the concentrations and compositions of trace metals in dust samples collected from classrooms and playgrounds at selected high schools in Pretoria. Schools were selected from Pretoria based on factors such as proximity to high-traffic roads, industrial areas, and residential areas. Thirty-two dust samples were collected from inside and outside the classrooms, where learners often stay during the recess period. The dust samples were analysed for trace metal concentrations using Inductively Coupled Plasma-Mass Spectrometry (ICP-MS). The composition of the elements showed that the concentrations of Zn were higher than those of all other elements except at one of the schools. There were significant differences in the concentrations of trace metals among the schools (p < 0.05). Regular cleaning, proximity to a busy road, and well-maintained gardens seem to have positive effects on the concentrations of trace metals recorded in the classroom dust. The results further revealed a positive correlation for elements such as Pb, Cu, Zn, Mn, and Sb, indicating that the dust might have a common source.

  4. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  5. Highly selective solid phase extraction and preconcentration of Azathioprine with nano-sized imprinted polymer based on multivariate optimization and its trace determination in biological and pharmaceutical samples

    Energy Technology Data Exchange (ETDEWEB)

    Davarani, Saied Saeed Hosseiny, E-mail: ss-hosseiny@cc.sbu.ac.ir [Faculty of Chemistry, Shahid Beheshti University, G. C., P.O. Box 19839-4716, Tehran (Iran, Islamic Republic of); Rezayati zad, Zeinab [Faculty of Chemistry, Shahid Beheshti University, G. C., P.O. Box 19839-4716, Tehran (Iran, Islamic Republic of); Taheri, Ali Reza; Rahmatian, Nasrin [Islamic Azad University, Ilam Branch, Ilam (Iran, Islamic Republic of)

    2017-02-01

    In this research, for the first time, selective separation and determination of Azathioprine is demonstrated using a molecularly imprinted polymer as the solid-phase extraction adsorbent, measured by spectrophotometry at λmax = 286 nm. The selective molecularly imprinted polymer was produced using Azathioprine and methacrylic acid as the template molecule and monomer, respectively. A molecularly imprinted solid-phase extraction procedure was performed in column for the analyte from pharmaceutical and serum samples. The synthesized polymers were characterized by infrared spectroscopy (IR) and field emission scanning electron microscopy (FESEM). In order to investigate the effect of independent variables on the extraction efficiency, the response surface methodology (RSM) based on a Box–Behnken design (BBD) was employed. The analytical parameters such as precision, accuracy and linear working range were also determined under optimal experimental conditions, and the proposed method was applied to the analysis of Azathioprine. The linear dynamic range and limit of detection were 0.01–2.5 and 0.008 mg L(-1), respectively. The recoveries for the analyte were higher than 95% and relative standard deviation values were found to be in the range of 0.83–4.15%. This method was successfully applied for the determination of Azathioprine in biological and pharmaceutical samples. - Graphical abstract: A new nano-sized imprinted polymer was synthesized and applied as the sorbent in SPE for selective recognition, preconcentration, and determination of Azathioprine, with the response surface methodology based on a Box–Behnken design, and was successfully investigated for the clean-up of human blood serum and pharmaceutical samples. - Highlights: • The nano-sized imprinted polymer has been synthesized by a precipitation polymerization technique. • A molecularly imprinted solid-phase extraction procedure was performed for determination of Azathioprine. • The Azathioprine

  6. Highly selective solid phase extraction and preconcentration of Azathioprine with nano-sized imprinted polymer based on multivariate optimization and its trace determination in biological and pharmaceutical samples

    International Nuclear Information System (INIS)

    Davarani, Saied Saeed Hosseiny; Rezayati zad, Zeinab; Taheri, Ali Reza; Rahmatian, Nasrin

    2017-01-01

    In this research, for the first time, selective separation and determination of Azathioprine is demonstrated using a molecularly imprinted polymer as the solid-phase extraction adsorbent, measured by spectrophotometry at λmax = 286 nm. The selective molecularly imprinted polymer was produced using Azathioprine and methacrylic acid as the template molecule and monomer, respectively. A molecularly imprinted solid-phase extraction procedure was performed in column for the analyte from pharmaceutical and serum samples. The synthesized polymers were characterized by infrared spectroscopy (IR) and field emission scanning electron microscopy (FESEM). In order to investigate the effect of independent variables on the extraction efficiency, the response surface methodology (RSM) based on a Box–Behnken design (BBD) was employed. The analytical parameters such as precision, accuracy and linear working range were also determined under optimal experimental conditions, and the proposed method was applied to the analysis of Azathioprine. The linear dynamic range and limit of detection were 0.01–2.5 and 0.008 mg L(-1), respectively. The recoveries for the analyte were higher than 95% and relative standard deviation values were found to be in the range of 0.83–4.15%. This method was successfully applied for the determination of Azathioprine in biological and pharmaceutical samples. - Graphical abstract: A new nano-sized imprinted polymer was synthesized and applied as the sorbent in SPE for selective recognition, preconcentration, and determination of Azathioprine, with the response surface methodology based on a Box–Behnken design, and was successfully investigated for the clean-up of human blood serum and pharmaceutical samples. - Highlights: • The nano-sized imprinted polymer has been synthesized by a precipitation polymerization technique. • A molecularly imprinted solid-phase extraction procedure was performed for determination of Azathioprine. • The Azathioprine-molecular imprinting

  7. The Complete Local Volume Groups Sample - I. Sample selection and X-ray properties of the high-richness subsample

    Science.gov (United States)

    O'Sullivan, Ewan; Ponman, Trevor J.; Kolokythas, Konstantinos; Raychaudhury, Somak; Babul, Arif; Vrtilek, Jan M.; David, Laurence P.; Giacintucci, Simona; Gitti, Myriam; Haines, Chris P.

    2017-12-01

    We present the Complete Local-Volume Groups Sample (CLoGS), a statistically complete optically selected sample of 53 groups within 80 Mpc. Our goal is to combine X-ray, radio and optical data to investigate the relationship between member galaxies, their active nuclei and the hot intra-group medium (IGM). We describe sample selection, define a 26-group high-richness subsample of groups containing at least four optically bright (log(LB/LB⊙) ≥ 10.2) galaxies, and report the results of XMM-Newton and Chandra observations of these systems. We find that 14 of the 26 groups are X-ray bright, possessing a group-scale IGM extending at least 65 kpc and with luminosity > 10^41 erg s^-1, while a further three groups host smaller galaxy-scale gas haloes. The X-ray bright groups have masses in the range M500 ≃ 0.5-5 × 10^13 M⊙, based on system temperatures of 0.4-1.4 keV, and X-ray luminosities in the range 2-200 × 10^41 erg s^-1. We find that ∼53-65 per cent of the X-ray bright groups have cool cores, a somewhat lower fraction than found by previous archival surveys. Approximately 30 per cent of the X-ray bright groups show evidence of recent dynamical interactions (mergers or sloshing), and ∼35 per cent of their dominant early-type galaxies host active galactic nuclei with radio jets. We find no groups with unusually high central entropies, as predicted by some simulations, and confirm that CLoGS is in principle capable of detecting such systems. We identify three previously unrecognized groups, and find that they are either faint (LX,R500 < 10^42 erg s^-1) with no concentrated cool core, or highly disturbed. This leads us to suggest that ∼20 per cent of X-ray bright groups in the local universe may still be unidentified.

  8. Adult health study reference papers. Selection of the sample. Characteristics of the sample

    Energy Technology Data Exchange (ETDEWEB)

    Beebe, G W; Fujisawa, Hideo; Yamasaki, Mitsuru

    1960-12-14

    The characteristics and selection of the clinical sample have been described in some detail to provide information on the comparability of the exposure groups with respect to factors excluded from the matching criteria and to provide basic descriptive information potentially relevant to individual studies that may be done within the framework of the Adult Health Study. The characteristics under review here are age, sex, many different aspects of residence, marital status, occupation and industry, details of location and shielding ATB, acute radiation signs and symptoms, and prior ABCC medical or pathology examinations. 5 references, 57 tables.

  9. Comparative studies of praseodymium(III) selective sensors based on newly synthesized Schiff's bases

    International Nuclear Information System (INIS)

    Gupta, Vinod K.; Goyal, Rajendra N.; Pal, Manoj K.; Sharma, Ram A.

    2009-01-01

    Praseodymium ion selective polyvinyl chloride (PVC) membrane sensors, based on two new Schiff's bases, 1,3-diphenylpropane-1,3-diylidenebis(azan-1-ylidene)diphenol (M1) and N,N'-bis(pyridoxylideneiminato)ethylene (M2), have been developed and studied. The sensor with a membrane composition of PVC:o-NPOE:ionophore (M1):NaTPB (w/w; mg) of 150:300:8:5 showed the best performance in comparison to the M2-based membranes. The sensor based on M1 exhibits a working concentration range of 1.0 × 10⁻⁸ to 1.0 × 10⁻² M with a detection limit of 5.0 × 10⁻⁹ M and a Nernstian slope of 20.0 ± 0.3 mV decade⁻¹ of activity. It exhibited a quick response time of <8 s, and its potential responses were pH independent across the range of 3.5-8.5. The influence of the membrane composition and of possible interfering ions on the response properties of the electrode has also been investigated. The sensor has been found to work satisfactorily in partially non-aqueous media up to 15% (v/v) content of methanol, ethanol or acetonitrile and could be used for a period of 3 months. The selectivity coefficients determined by the fixed interference method (FIM) indicate high selectivity for praseodymium(III) ions over a wide variety of other cations. To assess its analytical applicability, the prepared sensor was successfully applied to the determination of praseodymium(III) in spiked water samples.
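    The reported Nernstian slope is simply the slope of a straight-line fit of EMF against log10(activity) over the working range. A minimal sketch, with made-up calibration points standing in for real measurements:

```python
import numpy as np

# Hypothetical calibration data for an ion-selective electrode:
# activities in mol/L and corresponding EMF readings in mV.
activity = np.array([1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2])
emf_mv = np.array([-60.2, -40.5, -19.8, 0.3, 20.1, 39.7, 60.0])

# A Nernstian response is linear in log10(activity); for a trivalent
# cation such as Pr(III), theory gives 59.16/3 ~ 19.7 mV per decade at
# 25 degrees C, close to the 20.0 +/- 0.3 mV/decade quoted above.
slope, intercept = np.polyfit(np.log10(activity), emf_mv, 1)
print(f"slope = {slope:.1f} mV/decade")
```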

  10. GMDH-Based Semi-Supervised Feature Selection for Electricity Load Classification Forecasting

    Directory of Open Access Journals (Sweden)

    Lintao Yang

    2018-01-01

    With the development of smart power grids, communication network technology and sensor technology, there has been an exponential growth in complex electricity load data. Irregular electricity load fluctuations caused by weather and holiday factors disrupt the daily operation of power companies. To deal with these challenges, this paper investigates a day-ahead electricity peak load interval forecasting problem. It transforms the conventional continuous forecasting problem into a novel interval forecasting problem, and then further converts the interval forecasting problem into a classification forecasting problem. In addition, an indicator system influencing the electricity load is established from three dimensions, namely the load series, calendar data, and weather data. A semi-supervised feature selection algorithm is proposed to address the electricity load classification forecasting issue based on the group method of data handling (GMDH) technology. The proposed algorithm consists of three main stages: (1) training the basic classifier; (2) selectively marking the most suitable samples from the unclassified label data and adding them to an initial training set; and (3) training the classification models on the final training set and classifying the test samples. An empirical analysis of electricity load datasets from four Chinese cities is conducted. Results show that the proposed model can address the electricity load classification forecasting problem more efficiently and effectively than the FW-Semi FS (forward semi-supervised feature selection) and GMDH-U (GMDH-based semi-supervised feature selection for customer classification) models.

  11. Evaluation of gene importance in microarray data based upon probability of selection

    Directory of Open Access Journals (Sweden)

    Fu Li M

    2005-03-01

    Abstract Background Microarray devices permit a genome-scale evaluation of gene function. This technology has catalyzed biomedical research and development in recent years. As many important diseases can be traced down to the gene level, a long-standing research problem is to identify specific gene expression patterns linking to metabolic characteristics that contribute to disease development and progression. The microarray approach offers an expedited solution to this problem. However, recognizing disease-related gene expression patterns embedded in the microarray data remains a challenging issue. In selecting a small set of biologically significant genes for classifier design, the high data dimensionality inherent in this problem creates a substantial amount of uncertainty. Results Here we present a model for probability analysis of selected genes in order to determine their importance. Our contribution is that we show how to derive the P value of each selected gene in multiple gene selection trials based on different combinations of data samples, and how to conduct a reliability analysis accordingly. The importance of a gene is indicated by its associated P value, in that a smaller value implies higher information content from an information-theoretic standpoint. On microarray data concerning the subtype classification of small round blue cell tumors, we demonstrate that the method is capable of finding the smallest set of genes (19 genes) with optimal classification performance, compared with results reported in the literature. Conclusion In classifier design based on microarray data, the probability value derived from gene selection based on multiple combinations of data samples provides an effective mechanism for reducing the tendency to fit local data particularities.
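    The heart of the method is converting a gene's selection frequency across many sample combinations into a P value. The sketch below assumes bootstrap resampling and a t-statistic ranking as a stand-in selector; the paper's actual selector and resampling scheme may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def selection_counts(X, y, k=20, trials=200):
    """Count how often each gene ranks in the top k by |t| across
    bootstrap resamplings of the samples (any selector could be used)."""
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(trials):
        idx = rng.choice(n, size=n, replace=True)
        t, _ = stats.ttest_ind(X[idx][y[idx] == 0], X[idx][y[idx] == 1])
        counts[np.argsort(-np.abs(t))[:k]] += 1
    return counts

# Toy data with a single informative gene in column 0.
X = rng.normal(size=(60, 500))
y = rng.integers(0, 2, size=60)
X[:, 0] += 2.0 * y

counts = selection_counts(X, y)
# Under the null, an uninformative gene lands in the top k with
# probability k/p, so an upper-tail binomial P value gauges importance.
p_values = stats.binom.sf(counts - 1, 200, 20 / 500)
print(p_values[0], p_values[1])   # informative gene vs. a null gene
```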

  12. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with a different level of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling, where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
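    A radically simplified, single-group version of this inference can be written as a beta-binomial model: with all n sampled items acceptable and a Beta prior on the per-item defect rate, one can simulate how likely it is that a large fraction of the unsampled items is also acceptable. The group structure and judgmental sampling of the actual model are omitted here, and all numbers are illustrative.

```python
import numpy as np

def prob_high_acceptance(n=50, N=1000, a=1.0, b=1.0, frac=0.99, draws=100_000):
    """P(at least `frac` of the N - n unsampled items are acceptable),
    given n sampled items that were all acceptable and a Beta(a, b)
    prior on the per-item defect rate (posterior: Beta(a, b + n))."""
    rng = np.random.default_rng(1)
    theta = rng.beta(a, b + n, size=draws)   # posterior defect-rate draws
    defects = rng.binomial(N - n, theta)     # defects among unsampled items
    return np.mean(defects <= (1 - frac) * (N - n))

print(prob_high_acceptance())
```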

  13. A selective iodide ion sensor electrode based on functionalized ZnO nanotubes.

    Science.gov (United States)

    Ibupoto, Zafar Hussain; Khun, Kimleang; Willander, Magnus

    2013-02-04

    In this research work, ZnO nanotubes were fabricated on a gold-coated glass substrate through chemical etching by the aqueous chemical growth method. For the first time a nanostructure-based iodide ion selective electrode was developed. The ZnO nanotubes were functionalized with the miconazole ion exchanger and the electromotive force (EMF) was measured by the potentiometric method. The iodide ion sensor exhibited a linear response over a wide range of concentrations (1 × 10⁻⁶ to 1 × 10⁻¹ M) and an excellent sensitivity of -62 ± 1 mV/decade. The detection limit of the proposed sensor was found to be 5 × 10⁻⁷ M. The effects of pH, temperature, additive, plasticizer and stabilizer on the potential response of the iodide ion selective electrode were also studied. The proposed iodide ion sensor demonstrated a fast response time of less than 5 s and high selectivity against common organic and inorganic anions. All the obtained results revealed that the iodide ion sensor based on functionalized ZnO nanotubes may be used for the detection of iodide ions in environmental water samples, pharmaceutical products and other real samples.

  15. A ROC-based feature selection method for computer-aided detection and diagnosis

    Science.gov (United States)

    Wang, Songyuan; Zhang, Guopeng; Liao, Qimei; Zhang, Junying; Jiao, Chun; Lu, Hongbing

    2014-03-01

    Image-based computer-aided detection and diagnosis (CAD) has been a very active research topic aiming to assist physicians in detecting lesions and distinguishing benign from malignant ones. However, the datasets fed into a classifier usually suffer from a small number of samples, as well as significantly fewer samples available in one class (having a disease) than the other, resulting in suboptimal classifier performance. Identifying the most characterizing features of the observed data for lesion detection is critical to improving the sensitivity and minimizing the false positives of a CAD system. In this study, we propose a novel feature selection method, mR-FAST, that combines the minimal-redundancy-maximal-relevance (mRMR) framework with the selection metric FAST (feature assessment by sliding thresholds), based on the area under a ROC curve (AUC) generated on optimal simple linear discriminants. With three feature datasets extracted from CAD systems for colon polyps and bladder cancer, we show that the space of candidate features selected by mR-FAST is more characterizing for lesion detection, with higher AUC, enabling a compact subset of superior features to be found at low cost.
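    A hedged sketch of the combination: rank features by single-feature AUC (the FAST ingredient) and greedily penalize redundancy (the mRMR ingredient). Mean absolute correlation stands in below for mRMR's mutual-information redundancy term, so this illustrates the idea rather than reproducing mR-FAST exactly.

```python
import numpy as np

def auc_score(x, y):
    """AUC of a single feature used as a score (Mann-Whitney form)."""
    ranks = np.empty(len(x))
    ranks[np.argsort(x)] = np.arange(1, len(x) + 1)
    n1 = int(y.sum())
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

def mr_fast_like(X, y, k=10):
    """Greedy selection: maximal AUC relevance, minimal redundancy."""
    rel = np.array([auc_score(X[:, j], y) for j in range(X.shape[1])])
    rel = np.maximum(rel, 1 - rel)        # direction-free discriminability
    C = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(rel))]
    while len(selected) < k:
        score = rel - C[:, selected].mean(axis=1)
        score[selected] = -np.inf         # never reselect a feature
        selected.append(int(np.argmax(score)))
    return selected
```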

  16. Determination of specific activity of americium and plutonium in selected environmental samples

    International Nuclear Information System (INIS)

    Trebunova, T.

    1999-01-01

    The aim of this work was the development of a method for the determination of americium and plutonium in environmental samples. The developed method was evaluated on soil samples and afterwards applied to selected fish samples (smoked mackerel, herring and fillet of Alaska hake). The method for the separation of americium is based on liquid separation with Aliquat-336, precipitation with oxalic acid and the use of the chromatographic material TRU-Spec™. The radiochemical yields ranged from 13.0% to 80.9% for plutonium-236 and from 10.5% to 100% for americium-241. The determined specific activities of plutonium-239,240 ranged from (2.3 ± 1.4) mBq/kg to (82 ± 29) mBq/kg, and those of plutonium-238 from (14.2 ± 3.7) mBq/kg to (708 ± 86) mBq/kg. The specific activities of americium-241 ranged from (1.4 ± 0.9) mBq/kg to (3360 ± 210) mBq/kg. The fish from the Baltic Sea and the North Sea showed higher specific activities than freshwater fish from Slovakia. Therefore, the monitoring of alpha radionuclides in foods imported from territories affected by nuclear testing is recommended

  17. Enhancement of the spectral selectivity of complex samples by measuring them in a frozen state at low temperatures in order to improve accuracy for quantitative analysis. Part II. Determination of viscosity for lube base oils using Raman spectroscopy.

    Science.gov (United States)

    Kim, Mooeung; Chung, Hoeil

    2013-03-07

    The use of selectivity-enhanced Raman spectra of lube base oil (LBO) samples, achieved by spectral collection under frozen conditions at low temperatures, was effective for improving the accuracy of the determination of the kinematic viscosity at 40 °C (KV@40). Collection of Raman spectra from samples cooled to around -160 °C provided the most accurate measurement of KV@40. The components of the LBO samples were mainly long-chain hydrocarbons whose molecular structures deform when frozen, and the different structural deformabilities of the components enhanced the spectral selectivity among the samples. To study the structural variation of the components as the sample temperature changes from cryogenic to ambient conditions, n-heptadecane and pristane (2,6,10,14-tetramethylpentadecane) were selected as representative components of the LBO samples, and their temperature-induced spectral features as well as the corresponding spectral loadings were investigated. A two-dimensional (2D) correlation analysis was also employed to explain the origin of the improved accuracy. The asynchronous 2D correlation pattern was simplest at the optimal temperature, indicating the occurrence of distinct and selective spectral variations, which enabled the variation of KV@40 among the LBO samples to be assessed more accurately.

  18. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    Science.gov (United States)

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

    Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnosis and classification. However, these data are usually redundant and noisy, and only a subset of them presents distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, the Fisher-Markov selector is first used to choose a fixed number of gene data. Second, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation abilities. Discrete biogeography based optimization, which we call DBBO, is then constructed by integrating the discrete migration and mutation models. Finally, DBBO is used for feature selection, with three classifiers evaluated under 10-fold cross-validation. To show the effectiveness and efficiency of the algorithm, it is tested on four breast cancer benchmark datasets. In comparison with the genetic algorithm, particle swarm optimization, the differential evolution algorithm and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature when considering the quality of the solutions obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
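    A minimal sketch of the two discrete operators on bit-vector habitats (feature subsets): bits migrate from habitats with high emigration rates (the fit ones) into habitats with high immigration rates (the poor ones), then mutation flips bits. The rate assignments and parameters are illustrative rather than the paper's exact design; `fitness` would be, e.g., cross-validated classifier accuracy on the encoded subset.

```python
import numpy as np

rng = np.random.default_rng(3)

def dbbo(fitness, n_features, pop=20, gens=50, pmut=0.02):
    """Minimal discrete BBO for feature selection on 0/1 habitats."""
    H = rng.integers(0, 2, size=(pop, n_features))
    for _ in range(gens):
        fit = np.array([fitness(h) for h in H])
        rank = np.argsort(np.argsort(-fit))   # 0 = fittest habitat
        lam = (rank + 1) / pop                # poor habitats immigrate more
        mu = 1.0 - lam                        # fit habitats emigrate more
        for i in range(pop):
            take = rng.random(n_features) < lam[i]     # bits to immigrate
            src = rng.choice(pop, size=take.sum(), p=mu / mu.sum())
            H[i, take] = H[src, take]                  # discrete migration
            flip = rng.random(n_features) < pmut
            H[i, flip] ^= 1                            # discrete mutation
    fit = np.array([fitness(h) for h in H])
    return np.flatnonzero(H[np.argmax(fit)])           # selected feature ids
```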

  19. A novel knot selection method for the error-bounded B-spline curve fitting of sampling points in the measuring process

    International Nuclear Information System (INIS)

    Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng

    2017-01-01

    The B-spline curve has been widely used in the reconstruction of measurement data. Error-bounded reconstruction of sampling points can be achieved by knot addition method (KAM) based B-spline curve fitting. In KAM, the selection pattern of the initial knot vector determines the number of knots ultimately required. This paper provides a novel initial knot selection method to condense the knot vector required for error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features, namely the chord length (arc length) and bending degree (curvature), contained in the discrete sampling points. Firstly, the sampling points are fitted by an approximate B-spline curve Gs with an intensively uniform knot vector, which substitutes for the description of the features of the sampling points. The feature integral of Gs is built as a monotonically increasing function in analytic form. Then, the initial knots are selected at constant increments of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and a measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of knots ultimately required.
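    The initial-knot step can be sketched directly on discrete sampling points: accumulate a feature measure that combines chord length with a bending term, then place knots at constant increments of its integral. The turning-angle curvature proxy and the equal weighting below are assumptions made for illustration.

```python
import numpy as np

def initial_knots(points, n_knots):
    """Indices of initial knots at equal increments of a chord-length
    plus turning-angle feature integral over the sampling points."""
    P = np.asarray(points, dtype=float)
    v = np.diff(P, axis=0)                          # chord vectors
    d = np.linalg.norm(v, axis=1)                   # chord lengths
    cosang = np.einsum('ij,ij->i', v[:-1], v[1:]) / (d[:-1] * d[1:])
    angle = np.arccos(np.clip(cosang, -1.0, 1.0))   # bending at interior pts
    feature = d.copy()
    feature[1:] += angle                            # weight bends more heavily
    F = np.concatenate([[0.0], np.cumsum(feature)])  # feature integral
    return np.searchsorted(F, np.linspace(0.0, F[-1], n_knots))

# Knots crowd into the high-curvature part of a parabola:
t = np.linspace(-1, 1, 200)
print(initial_knots(np.column_stack([t, t**2]), 8))
```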

  20. Environment-based selection effects of Planck clusters

    Energy Technology Data Exchange (ETDEWEB)

    Kosyra, R.; Gruen, D.; Seitz, S.; Mana, A.; Rozo, E.; Rykoff, E.; Sanchez, A.; Bender, R.

    2015-07-24

    We investigate whether the large-scale structure environment of galaxy clusters imprints a selection bias on Sunyaev–Zel'dovich (SZ) catalogues. Such a selection effect might be caused by line-of-sight (LoS) structures that add to the SZ signal or contain point sources that disturb the signal extraction in the SZ survey. We use the Planck PSZ1 union catalogue in the Sloan Digital Sky Survey (SDSS) region as our sample of SZ-selected clusters. We calculate the angular two-point correlation function (2pcf) for physically correlated, foreground and background structure in the RedMaPPer SDSS DR8 catalogue with respect to each cluster. We compare our results with an optically selected comparison cluster sample and with theoretical predictions. In contrast to the hypothesis of no environment-based selection, we find a mean 2pcf for background structures of -0.049 on scales of ≲40 arcmin, significantly non-zero at ∼4σ, which means that Planck clusters are more likely to be detected in regions of low background density. We hypothesize that this effect arises either from background estimation in the SZ survey or from radio sources in the background. We estimate the defect in SZ signal caused by this effect to be negligibly small, of the order of ∼10⁻⁴ of the signal of a typical Planck detection. Analogously, there are no implications for X-ray mass measurements. However, the environmental dependence has important consequences for weak lensing follow-up of Planck galaxy clusters: we predict that projection effects account for half of the mass contained within a 15 arcmin radius of Planck galaxy clusters. We did not detect a background underdensity of CMASS LRGs, which also leaves a spatially varying redshift dependence of the Planck SZ selection function as a possible cause for our findings.
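    For intuition, the angular 2pcf can be estimated from a data catalogue and a random catalogue by binned pair counts. The flat-sky DD/RR - 1 estimator below is far cruder than what such studies actually use (e.g. Landy-Szalay on the curved sky), and all coordinates are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

def w_theta(data, rand, bins):
    """w(theta) ~ (Nr/Nd)^2 * DD/RR - 1 on a small flat-sky patch."""
    def pair_counts(A, B):
        # All cross-pair separations (degrees), histogrammed into bins.
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
        return np.histogram(d.ravel(), bins=bins)[0].astype(float)
    dd = pair_counts(data, data)   # self-pairs fall below the first bin
    rr = pair_counts(rand, rand)
    return (len(rand) / len(data)) ** 2 * dd / np.maximum(rr, 1.0) - 1.0

data = rng.uniform(0, 1, size=(300, 2))    # positions in degrees
rand = rng.uniform(0, 1, size=(1500, 2))
print(w_theta(data, rand, bins=np.linspace(0.01, 0.5, 8)))
```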

  1. Cold Spray Deposition of Freestanding Inconel Samples and Comparative Analysis with Selective Laser Melting

    Science.gov (United States)

    Bagherifard, Sara; Roscioli, Gianluca; Zuccoli, Maria Vittoria; Hadi, Mehdi; D'Elia, Gaetano; Demir, Ali Gökhan; Previtali, Barbara; Kondás, Ján; Guagliano, Mario

    2017-10-01

    Cold spray offers the possibility of obtaining almost zero-porosity buildups with no theoretical limit to the thickness. Moreover, cold spray can eliminate particle melting, evaporation, crystallization, grain growth, unwanted oxidation, undesirable phases and thermally induced tensile residual stresses. Such characteristics boost its potential as an additive manufacturing technique. Indeed, deposition via cold spray has recently been finding its way toward the fabrication of freeform components, since it can address the common challenges of powder-bed additive manufacturing techniques, including major size constraints, deposition rate limitations and high process temperatures. Herein, we prepared nickel-based superalloy Inconel 718 samples with the cold spray technique and compared them with similar samples fabricated by selective laser melting. The samples fabricated using both methods were characterized in terms of mechanical strength, microstructural and porosity characteristics, Vickers microhardness and residual stress distribution. Different heat treatment cycles were applied to the cold-sprayed samples in order to enhance their mechanical characteristics. The obtained data confirm that the cold spray technique can be used as a complementary additive manufacturing method for the fabrication of high-quality freestanding components where a higher deposition rate, larger final size and lower fabrication temperatures are desired.

  2. Population genetics inference for longitudinally-sampled mutants under strong selection.

    Science.gov (United States)

    Lacerda, Miguel; Seoighe, Cathal

    2014-11-01

    Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
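    For reference, the exact discrete model that both the diffusion and the authors' method approximate can be written down directly; its cost is what motivates approximations, since the transition matrix is (N+1) × (N+1) and must be powered between sampling times. A sketch with a standard deterministic-selection step (details of the authors' parameterization may differ):

```python
import numpy as np
from scipy import stats

def wf_transition_matrix(N, s):
    """Discrete Wright-Fisher transition matrix with selection s,
    valid for any selection strength: binomial sampling around the
    post-selection frequency p' = p(1 + s) / (1 + s p)."""
    i = np.arange(N + 1)
    p_sel = (i / N) * (1 + s) / (1 + s * i / N)
    return stats.binom.pmf(i[None, :], N, p_sel[:, None])

def loglik(counts, gens_between, N, s):
    """Log-likelihood of longitudinally observed mutant counts."""
    T = np.linalg.matrix_power(wf_transition_matrix(N, s), gens_between)
    return sum(np.log(T[a, b]) for a, b in zip(counts[:-1], counts[1:]))

print(loglik([10, 25, 60, 90], gens_between=5, N=100, s=0.1))
```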

  3. Response rates and selection problems, with emphasis on mental health variables and DNA sampling, in large population-based, cross-sectional and longitudinal studies of adolescents in Norway

    Directory of Open Access Journals (Sweden)

    Lien Lars

    2010-10-01

    Abstract Background Selection bias is a threat to the internal validity of epidemiological studies. In light of a growing number of studies which aim to provide DNA, as well as a considerable number of invitees who declined to participate, we discuss response rates, predictors of loss to follow-up and of failure to provide DNA, and the presence of possible selection bias, based on five samples of adolescents. Methods We included nearly 7,000 adolescents from two longitudinal studies of 18/19-year-olds with two corresponding cross-sectional baseline studies at age 15/16 (10th graders), and one cross-sectional study of 13th graders (18/19 years old). DNA was sampled from the cheek mucosa of the 18/19-year-olds. Predictors of loss to follow-up and of failure to provide DNA were studied by Poisson regression. Selection bias in the follow-up at age 18/19 was estimated through investigation of prevalence ratios (PRs) between selected exposures (physical activity, smoking) and outcome variables (general health, mental distress, externalizing problems) measured at baseline. Results Out of 5,750 who participated at age 15/16, we lost 42% at follow-up at age 18/19. The percentage of participants who gave their consent to DNA provision was as high as the percentage that consented to a linkage of data with other health registers and surveys, approximately 90%. Significant predictors of loss to follow-up and of failure to provide DNA samples in the present genetic epidemiological study were: male gender; non-western ethnicity; postal survey compared with school-based; low educational plans; low education and income of father; low perceived family economy; unmarried parents; poor self-reported health; externalized symptoms and smoking, with some differences in subgroups of ethnicity and gender. The association measures (PRs) were quite similar among participants and all invitees, with some minor discrepancies in subgroups of non-western boys and girls. Conclusions Lost to

  4. Gender Wage Gap : A Semi-Parametric Approach With Sample Selection Correction

    NARCIS (Netherlands)

    Picchio, M.; Mussida, C.

    2010-01-01

    Sizeable gender differences in employment rates are observed in many countries. Sample selection into the workforce might therefore be a relevant issue when estimating gender wage gaps. This paper proposes a new semi-parametric estimator of densities in the presence of covariates which incorporates

  5. Evaluation of Stress Loaded Steel Samples Using Selected Electromagnetic Methods

    International Nuclear Information System (INIS)

    Chady, T.

    2004-01-01

    In this paper, the magnetic leakage flux and eddy current methods were used to evaluate changes in material properties caused by stress. Seven samples made of ferromagnetic material with different levels of applied stress were prepared. First, the leakage magnetic fields were measured by scanning the surface of the specimens with a GMR gradiometer. Next, the same samples were evaluated using an eddy current sensor. A comparison between the results obtained from both methods was carried out. Finally, selected parameters of the measured signal were calculated and utilized to evaluate the level of the applied stress. A strong correspondence between the amount of applied stress and the maximum amplitude of the derivative was confirmed

  6. Magnetically separable polymer (Mag-MIP) for selective analysis of biotin in food samples.

    Science.gov (United States)

    Uzuriaga-Sánchez, Rosario Josefina; Khan, Sabir; Wong, Ademar; Picasso, Gino; Pividori, Maria Isabel; Sotomayor, Maria Del Pilar Taboada

    2016-01-01

    This work presents an efficient method for the preparation of magnetic nanoparticles modified with molecularly imprinted polymers (Mag-MIP) through a core-shell method for the determination of biotin in milk samples. The functional monomer, acrylic acid, was selected by molecular modeling; EGDMA was used as the cross-linking monomer and AIBN as the radical initiator. The Mag-MIP and Mag-NIP were characterized by FTIR, magnetic hysteresis, XRD, SEM and N₂-sorption measurements. The capacity of the Mag-MIP for biotin adsorption, its kinetics and its selectivity were studied in detail. The adsorption data were well described by the Freundlich isotherm model, with an adsorption equilibrium constant (KF) of 1.46 mL g⁻¹. The selectivity experiments revealed that the prepared Mag-MIP had higher selectivity toward biotin compared to other molecules with different chemical structures. The material was successfully applied to the determination of biotin in diverse milk samples, using HPLC for quantification of the analyte and obtaining a mean recovery of 87.4%. Copyright © 2015 Elsevier Ltd. All rights reserved.
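    The Freundlich fit reduces to a straight line in log-log space: log qe = log KF + (1/n) log Ce. A sketch with invented equilibrium data (the actual isotherm points are not given in the abstract):

```python
import numpy as np

# Hypothetical equilibrium data: solution concentration Ce and
# adsorbed amount qe; units chosen so KF comes out in mL/g.
Ce = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0])
qe = np.array([0.31, 0.47, 0.77, 1.10, 1.52, 2.16])

# Freundlich: qe = KF * Ce**(1/n)  =>  linear after taking logs.
slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
KF, n = 10**intercept, 1.0 / slope
print(f"KF = {KF:.2f}, n = {n:.2f}")
```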

  7. Selection of Sampling Pumps Used for Groundwater Monitoring at the Hanford Site

    Energy Technology Data Exchange (ETDEWEB)

    Schalla, Ronald; Webber, William D.; Smith, Ronald M.

    2001-11-05

    The variable frequency drive centrifugal submersible pump, the Redi-Flo2 made by Grundfos, was selected for universal application in Hanford Site groundwater monitoring. Specifications for the selected pump and five other pumps were evaluated against current and future Hanford groundwater monitoring performance requirements, and the Redi-Flo2 was selected as the most versatile and applicable across the range of monitoring conditions. The Redi-Flo2 pump distinguished itself from the other pumps considered because of its wide range of output flow rates and its comparatively moderate maintenance and low capital costs. The Redi-Flo2 pump is able to purge a well at a high flow rate and then supply water for sampling at a low flow rate. Groundwater sampling using a low-volume purging technique (e.g., low flow, minimal purge, no purge, or micropurge) is planned in the future, eliminating the need for the pump to supply a high output flow rate. Under those conditions, the Well Wizard bladder pump, manufactured by QED Environmental Systems, Inc., may be the preferred pump because of its lower capital cost.

  8. Salicylimine-Based Colorimetric and Fluorescent Chemosensor for Selective Detection of Cyanide in Aqueous Buffer

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Jin Young; Hwang, In Hong; Kim, Hyun; Song, Eun Joo; Kim, Kyung Beom; Kim, Cheal [Seoul National Univ., Seoul (Korea, Republic of)

    2013-07-15

    A simple colorimetric and fluorescent anion sensor 1 based on salicylimine showed high selectivity and sensitivity for the detection of cyanide in aqueous solution. The receptor 1 showed high selectivity toward CN⁻ ions in a 1:1 stoichiometric manner, which induces a fast color change from colorless to orange and a dramatic enhancement in fluorescence intensity selectively for cyanide over other anions. This selectivity results from the nucleophilic addition of CN⁻ to the carbon atom of an electron-deficient imine group. The sensitivity of the fluorescence-based assay (0.06 μM) is below the 1.9 μM suggested by the World Health Organization (WHO) as the maximum allowable cyanide concentration in drinking water, making it a practical system for the monitoring of CN⁻ concentrations in aqueous samples.

  9. Selective solid-phase extraction of Ni(II) by an ion-imprinted polymer from water samples

    International Nuclear Information System (INIS)

    Saraji, Mohammad; Yousefi, Hamideh

    2009-01-01

    A new ion-imprinted polymer (IIP) material was synthesized by copolymerization of 4-vinylpyridine as the monomer, ethylene glycol dimethacrylate as the crosslinking agent and 2,2'-azobisisobutyronitrile as the initiator in the presence of the Ni-dithizone complex. The IIP was used as the sorbent in a solid-phase extraction column. The effects of sampling volume, elution conditions, sample pH and sample flow rate on the extraction of Ni ions from water samples were studied. The maximum adsorption capacity and the relative selectivity coefficients of the imprinted polymer for Ni(II)/Co(II), Ni(II)/Cu(II) and Ni(II)/Cd(II) were calculated. Compared with non-imprinted polymer particles, the IIP had higher selectivity for Ni(II). The relative selectivity factor (αr) values for Ni(II)/Co(II), Ni(II)/Cu(II) and Ni(II)/Cd(II) were 21.6, 54.3, and 22.7, respectively, all greater than 1. The relative standard deviation of five replicate determinations of Ni(II) was 3.4%. The detection limit for 150 mL of sample was 1.6 μg L⁻¹ using flame atomic absorption spectrometry. The developed method was successfully applied to the determination of trace nickel in water samples with satisfactory results.

  10. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    Science.gov (United States)

    Zeng, X.; Ye, M.; Wu, J.; Wang, D.

    2017-12-01

    The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, each attached with a weight that represents its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed the model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space gradually from the low-likelihood area to the high-likelihood area, an evolution carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it could be feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
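    A bare-bones nested sampling loop shows the structure: the worst live point is repeatedly replaced by a new prior draw above its likelihood, while the evidence accumulates over shrinking prior volumes. The brute-force rejection step below is precisely what M-H or DREAMzs would replace in a serious implementation; the final live-point contribution is ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(5)

def nested_sampling(loglike, prior_draw, n_live=100, n_iter=500):
    """Crude nested-sampling estimate of the log marginal likelihood."""
    live = [prior_draw() for _ in range(n_live)]
    ll = np.array([loglike(p) for p in live])
    logZ = -np.inf
    for i in range(n_iter):
        worst = int(np.argmin(ll))
        log_width = -i / n_live - np.log(n_live)   # shrinking volume slab
        logZ = np.logaddexp(logZ, ll[worst] + log_width)
        while True:                                # sample above the floor
            cand = prior_draw()
            if loglike(cand) > ll[worst]:
                break
        live[worst], ll[worst] = cand, loglike(cand)
    return logZ

# Toy check: N(0,1) likelihood, uniform prior on [-5, 5] => Z = 0.1,
# so the result should come out near log(0.1) = -2.30.
loglike = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
print(nested_sampling(loglike, lambda: rng.uniform(-5.0, 5.0)))
```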

  11. Sol-gel based sensor for selective formaldehyde determination

    Energy Technology Data Exchange (ETDEWEB)

    Bunkoed, Opas [Trace Analysis and Biosensor Research Center, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Department of Chemistry and Center for Innovation in Chemistry, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Davis, Frank [Cranfield Health, Cranfield University, Bedford MK43 0AL (United Kingdom); Kanatharana, Proespichaya, E-mail: proespichaya.K@psu.ac.th [Trace Analysis and Biosensor Research Center, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Department of Chemistry and Center for Innovation in Chemistry, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Thavarungkul, Panote [Trace Analysis and Biosensor Research Center, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Department of Physics, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Higson, Seamus P.J., E-mail: s.p.j.higson@cranfield.ac.uk [Cranfield Health, Cranfield University, Bedford MK43 0AL (United Kingdom)

    2010-02-05

    We report the development of transparent sol-gels with entrapped sensitive and selective reagents for the detection of formaldehyde. The sampling method is based on the adsorption of formaldehyde from the air and reaction with β-diketones (for example acetylacetone) in a sol-gel matrix to produce a yellow product, lutidine, which was detected directly. The proposed method does not require preparation of samples prior to analysis and allows both screening by visual detection and quantitative measurement by simple spectrophotometry. The detection limit of 0.03 ppmv formaldehyde is reported, which is lower than the maximum exposure concentrations recommended by both the World Health Organisation (WHO) and the Occupational Safety and Health Administration (OSHA). This sampling method was found to give good reproducibility, the relative standard deviations at 0.2 and 1 ppmv being 6.3% and 4.6%, respectively. Other carbonyl compounds, i.e. acetaldehyde, benzaldehyde, acetone and butanone, do not interfere with this analytical approach. Results are provided for the determination of formaldehyde in indoor air.

  13. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Science.gov (United States)

    2010-07-01

    40 CFR 761.306 (Protection of Environment) — Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...

  14. Principal Stratification in sample selection problems with non normal error terms

    DEFF Research Database (Denmark)

    Rocci, Roberto; Mellace, Giovanni

    The aim of the paper is to relax distributional assumptions on the error terms, often imposed in parametric sample selection models to estimate causal effects, when plausible exclusion restrictions are not available. Within the principal stratification framework, we approximate the true distribution... an application to the Job Corps training program.

  15. Autonomous site selection and instrument positioning for sample acquisition

    Science.gov (United States)

    Shaw, A.; Barnes, D.; Pugh, S.

    The European Space Agency Aurora Exploration Program aims to establish a European long-term programme for the exploration of space, culminating in a human mission to space in the 2030 timeframe. Two flagship missions, namely Mars Sample Return and ExoMars, have been proposed as recognised steps along the way. The ExoMars rover is the first of these flagship missions and includes a rover carrying the Pasteur payload, a mobile exobiology instrumentation package, and the Beagle 2 arm. The primary objective is the search for evidence of past or present life on Mars, but the payload will also study the evolution of the planet and the atmosphere, look for evidence of seismological activity and survey the environment in preparation for future missions. The operation of rovers in unknown environments is complicated and requires large resources, not only on the planet but also in ground-based operations. Currently, this can be very labour-intensive and costly if large teams of scientists and engineers are required to assess mission progress, plan mission scenarios, and construct a sequence of events or goals for uplink. Furthermore, the constraints on communication imposed by the time delay over such large distances, and the line of sight required, make autonomy paramount to mission success, affording the ability to operate during communications outages and to be opportunistic with respect to scientific discovery. As part of this drive to reduce mission costs and increase autonomy, the Space Robotics group at the University of Wales, Aberystwyth is researching methods of autonomous site selection and instrument positioning, directly applicable to the ExoMars mission. The site selection technique used builds on the geometric reasoning algorithms used previously for localisation and navigation [Shaw 03]. It is proposed that a digital elevation model (DEM) of the local surface, generated during traverse and without interaction from ground-based operators, can be

  16. Selective parathyroid venous sampling in primary hyperparathyroidism: A systematic review and meta-analysis.

    Science.gov (United States)

    Ibraheem, Kareem; Toraih, Eman A; Haddad, Antoine B; Farag, Mahmoud; Randolph, Gregory W; Kandil, Emad

    2018-05-14

    Minimally invasive parathyroidectomy requires accurate preoperative localization techniques. There is considerable controversy about the effectiveness of selective parathyroid venous sampling (sPVS) in primary hyperparathyroidism (PHPT) patients. The aim of this meta-analysis is to examine the diagnostic accuracy of sPVS as a preoperative localization modality in PHPT. Studies evaluating the diagnostic accuracy of sPVS for PHPT were electronically searched in the PubMed, EMBASE, Web of Science, and Cochrane Controlled Trials Register databases. Two independent authors reviewed the studies, and the revised Quality Assessment of Diagnostic Accuracy Studies tool was used for the quality assessment. Study heterogeneity and pooled estimates were calculated. Two hundred and two unique studies were identified; of those, 12 were included in the meta-analysis. The pooled sensitivity, specificity, and positive likelihood ratio (PLR) of sPVS were 74%, 41%, and 1.55, respectively. The area under the receiver operating characteristic curve was 0.684, indicating an average discriminatory ability for sPVS. On comparison between sPVS and noninvasive imaging modalities, sensitivity, PLR, and positive posttest probability were significantly higher for sPVS. Interestingly, super-selective venous sampling had the highest sensitivity, accuracy, and positive posttest probability of all parathyroid venous sampling techniques. This is the first meta-analysis to examine the accuracy of sPVS in PHPT. sPVS had higher pooled sensitivity than noninvasive modalities in revision parathyroid surgery. However, the invasiveness of this technique does not favor its routine use for preoperative localization. Super-selective venous sampling was the most accurate among all parathyroid venous sampling techniques. Laryngoscope, 2018. © 2018 The American Laryngological, Rhinological and Otological Society, Inc.

  17. Generating samples for association studies based on HapMap data

    Directory of Open Access Journals (Sweden)

    Chen Yixuan

    2008-01-01

    Abstract Background With the completion of the HapMap project, a variety of computational algorithms and tools have been proposed for haplotype inference, tag SNP selection and genome-wide association studies. Simulated data are commonly used in evaluating these newly developed approaches. In addition to simulations based on population models, empirical data generated by perturbing real data have also been used because they may inherit specific properties from the real data. However, there is no tool that is publicly available to generate large-scale simulated variation data by taking into account knowledge from the HapMap project. Results A computer program (gs) was developed to quickly generate a large number of samples based on real data that are useful for a variety of purposes, including evaluating methods for haplotype inference, tag SNP selection and association studies. Two approaches have been implemented to generate dense SNP haplotype/genotype data that share similar local linkage disequilibrium (LD) patterns as those in human populations. The first approach takes haplotype pairs from samples as inputs, and the second approach takes patterns of haplotype block structures as inputs. Both quantitative and qualitative traits have been incorporated in the program. Phenotypes are generated based on a disease model, or based on the effect of a quantitative trait nucleotide, both of which can be specified by users. In addition to single-locus disease models, two-locus disease models have also been implemented that can incorporate any degree of epistasis. Users are allowed to specify all nine parameters in a 3 × 3 penetrance table. For several commonly used two-locus disease models, the program can automatically calculate penetrances based on the population prevalence and marginal effects of a disease that users can conveniently specify. Conclusion The program gs can effectively generate large scale genetic and phenotypic variation data that can be
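    Generating case/control status from a user-specified 3 × 3 penetrance table is the simplest piece of such a simulator. A sketch with an invented epistatic table and allele frequencies (not one of gs's built-in models):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical P(disease | genotype): rows = copies of the risk allele
# at locus A (0/1/2), columns = copies at locus B.
penetrance = np.array([[0.01, 0.01, 0.01],
                       [0.01, 0.05, 0.10],
                       [0.01, 0.10, 0.50]])

def simulate_phenotypes(freq_a=0.3, freq_b=0.2, n=10_000):
    ga = rng.binomial(2, freq_a, n)             # allele counts, locus A
    gb = rng.binomial(2, freq_b, n)             # allele counts, locus B
    return rng.random(n) < penetrance[ga, gb]   # case (True) / control

print("prevalence:", simulate_phenotypes().mean())
```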

  18. Hyperspectral band selection based on consistency-measure of neighborhood rough set theory

    International Nuclear Information System (INIS)

    Liu, Yao; Xie, Hong; Wang, Liguo; Tan, Kezhu; Chen, Yuehua; Xu, Zhen

    2016-01-01

    Band selection is a well-known approach for reducing dimensionality in hyperspectral imaging. In this paper, a band selection method based on the consistency-measure of neighborhood rough set theory (CMNRS) is proposed to select informative bands from hyperspectral images. A decision-making information system was established from the reflection spectra of soybeans' hyperspectral data between the 400 nm and 1000 nm wavelengths. The neighborhood consistency-measure, which reflects not only the size of the decision positive region but also the sample distribution in the boundary region, was used as the evaluation function of band significance. The optimal band subset was selected by a forward greedy search algorithm. A post-pruning strategy was employed to overcome the over-fitting problem and find the minimum subset. To assess the effectiveness of the proposed band selection technique, two classification models (extreme learning machine (ELM) and random forests (RF)) were built. The experimental results showed that the proposed algorithm can effectively select key bands and obtain satisfactory classification accuracy.
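    The forward greedy search itself is generic; only the evaluation function, for this paper the neighborhood consistency-measure, is specific. A sketch with a toy stand-in measure:

```python
import numpy as np

def forward_greedy(n_bands, evaluate, eps=1e-4):
    """Repeatedly add the band whose inclusion most increases
    evaluate(subset); stop when the best gain falls below eps.
    A post-pruning pass over `selected` could follow, as in the paper."""
    selected, best = [], 0.0
    remaining = set(range(n_bands))
    while remaining:
        gain, band = max((evaluate(selected + [b]) - best, b)
                         for b in remaining)
        if gain < eps:
            break
        selected.append(band)
        remaining.remove(band)
        best += gain
    return selected

# Toy significance measure with diminishing returns per band:
utility = np.array([0.5, 0.3, 0.1, 0.05, 0.01])
print(forward_greedy(5, lambda S: 1 - np.prod(1 - utility[S])))
```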

  19. EEG feature selection method based on decision tree.

    Science.gov (United States)

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain-computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on decision trees (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting the optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier, the support vector machine (SVM), was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on decision trees to BCI Competition II dataset Ia, and the experiment showed encouraging results.

  20. A large sample of Kohonen selected E+A (post-starburst) galaxies from the Sloan Digital Sky Survey

    Science.gov (United States)

    Meusinger, H.; Brünecke, J.; Schalldach, P.; in der Au, A.

    2017-01-01

    Context. The galaxy population in the contemporary Universe is characterised by a clear bimodality: blue galaxies with significant ongoing star formation and red galaxies with only a little. The migration between the blue and the red cloud of galaxies is an issue of active research. Post-starburst (PSB) galaxies are thought to be observed in the short-lived transition phase. Aims: We aim to create a large sample of local PSB galaxies from the Sloan Digital Sky Survey (SDSS) to study their characteristic properties, particularly morphological features indicative of gravitational distortions and indications for active galactic nuclei (AGNs). Another aim is to present a tool set for an efficient search in a large database of SDSS spectra based on Kohonen self-organising maps (SOMs). Methods: We computed a huge Kohonen SOM for ∼10⁶ spectra from SDSS data release 7. The SOM is made fully available, in combination with an interactive user interface, for the astronomical community. We selected a large sample of PSB galaxies taking advantage of the clustering behaviour of the SOM. The morphologies of both PSB galaxies and randomly selected galaxies from a comparison sample in SDSS Stripe 82 (S82) were inspected on deep co-added SDSS images to search for indications of gravitational distortions. We used the Portsmouth galaxy property computations to study the evolutionary stage of the PSB galaxies and archival multi-wavelength data to search for hidden AGNs. Results: We compiled a catalogue of 2665 PSB galaxies with redshifts z 3 Å and z cloud, in agreement with the idea that PSB galaxies represent the transitioning phase between actively and passively evolving galaxies. The relative frequency of distorted PSB galaxies is at least 57% for EW(Hδ) > 5 Å, significantly higher than in the comparison sample. The search for AGNs based on conventional selection criteria in the radio and MIR results in a low AGN fraction of ∼2-3%. We confirm an MIR excess in the mean SED of

  1. SELECTION OF PHYSALIS POPULATIONS FOR HYBRIDIZATIONS, BASED ON FRUIT TRAITS

    Directory of Open Access Journals (Sweden)

    NICOLE TREVISANI

    2016-01-01

    ABSTRACT The objective of this study was to characterize the genetic variability in physalis populations and select promising parents based on fruit traits. The experimental design consisted of randomized blocks with six populations; five plants per treatment were sampled. The evaluated traits were fruit weight, capsule weight, 1000-seed weight and fruit diameter. The data were subjected to multivariate analysis of variance with error specification between and within plots (p < 0.05). Mahalanobis' distance was used as the measure of genetic dissimilarity. Significant differences between physalis populations were detected for the assessed traits. The ratio of the among- to within-plot error indicated no need for sampling within the experimental unit. Dissimilarity was greatest between Lages and Vacaria. The most discriminating traits were capsule weight, fruit weight and fruit diameter. The multivariate contrasts indicated differences between the population from Vacaria and those from Caçador, Lages and Peru, which were selected for hybridizations.

  2. Tyrosinase-Based Biosensors for Selective Dopamine Detection

    Directory of Open Access Journals (Sweden)

    Monica Florescu

    2017-06-01

    A novel tyrosinase-based biosensor was developed for the detection of dopamine (DA). For increased selectivity, gold electrodes were first modified with a cobalt(II)-porphyrin (CoP) film with electrocatalytic activity, acting both as an electrochemical mediator and as an enzyme support, upon which the enzyme tyrosinase (Tyr) was cross-linked. Differential pulse voltammetry was used for electrochemical detection, and the reduction current of dopamine-quinone was measured as a function of dopamine concentration. Our experiments demonstrated that the presence of CoP improves the selectivity of the electrode towards dopamine in the presence of ascorbic acid (AA), with a linear concentration dependence in the range of 2–30 µM. By optimizing the conditioning parameters, a separation of 130 mV between the peak potentials of AA and DA was obtained, allowing the selective detection of DA. The biosensor had a sensitivity of 1.22 ± 0.02 µA·cm⁻²·µM⁻¹ and a detection limit of 0.43 µM. Biosensor performance was tested in the presence of dopamine medication, with satisfactory results in terms of recovery (96%) and relative standard deviation values below 5%. These results confirmed the applicability of the biosensor in real samples such as human urine and blood serum.

  3. 40 CFR Appendix A to Subpart G of... - Sampling Plans for Selective Enforcement Auditing of Marine Engines

    Science.gov (United States)

    2010-07-01

    Appendix A to Subpart G of Part 91 (Protection of Environment; Marine Spark-Ignition Engines; Selective Enforcement Auditing Regulations)—Sampling Plans for Selective Enforcement Auditing of Marine Engines. Table 1—Sampling...

  4. Efficient Multi-Label Feature Selection Using Entropy-Based Label Selection

    Directory of Open Access Journals (Sweden)

    Jaesung Lee

    2016-11-01

    Multi-label feature selection is designed to select a subset of features according to their importance to multiple labels. This task can be achieved by ranking the dependencies of features and selecting the features with the highest rankings. In a multi-label feature selection problem, the algorithm may be faced with a dataset containing a large number of labels. Because the computational cost of multi-label feature selection increases with the number of labels, the algorithm may suffer degraded performance when processing very large datasets. In this study, we propose an efficient multi-label feature selection method based on an information-theoretic label selection strategy. By identifying a subset of labels that significantly influence the importance of features, the proposed method efficiently outputs a feature subset. Experimental results demonstrate that the proposed method can identify a feature subset much faster than conventional multi-label feature selection methods for large multi-label datasets.
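    One plausible reading of the strategy, sketched with assumed binary labels and discrete features: keep only the highest-entropy (most informative) labels, then rank features by their total mutual information with that label subset. This illustrates the idea rather than reproducing the paper's exact algorithm.

```python
import numpy as np

def mutual_info(x, y):
    """MI (nats) between two discrete vectors via a contingency table."""
    xv, xi = np.unique(x, return_inverse=True)
    yv, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xv), len(yv)))
    np.add.at(joint, (xi, yi), 1.0)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px * py)[nz])).sum())

def select_features(X, Y, n_labels, n_feats):
    """X: (n, d) discrete features; Y: (n, L) binary label matrix."""
    p = Y.mean(axis=0).clip(1e-9, 1 - 1e-9)
    H = -(p * np.log(p) + (1 - p) * np.log(1 - p))  # per-label entropy
    top = np.argsort(-H)[:n_labels]                 # influential labels
    score = [sum(mutual_info(X[:, j], Y[:, l]) for l in top)
             for j in range(X.shape[1])]
    return np.argsort(score)[::-1][:n_feats]
```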

  5. 40 CFR Appendix A to Subpart F of... - Sampling Plans for Selective Enforcement Auditing of Nonroad Engines

    Science.gov (United States)

    2010-07-01

    Appendix A to Subpart F of Part 89 (Protection of Environment; Nonroad Compression-Ignition Engines; Selective Enforcement Auditing)—Sampling Plans for Selective Enforcement Auditing of Nonroad Engines. Table 1—Sampling...

  6. Multiwavelength diagnostics of accretion in an X-ray selected sample of CTTSs

    Science.gov (United States)

    Curran, R. L.; Argiroffi, C.; Sacco, G. G.; Orlando, S.; Peres, G.; Reale, F.; Maggio, A.

    2011-02-01

    Context. High resolution X-ray spectroscopy has revealed soft X-rays from high density plasma in classical T Tauri stars (CTTSs), probably arising from the accretion shock region. However, the mass accretion rates derived from the X-ray observations are consistently lower than those derived from UV/optical/NIR studies. Aims: We aim to test the hypothesis that the high density soft X-ray emission originates from accretion by analysing, in a homogeneous manner, optical accretion indicators for an X-ray selected sample of CTTSs. Methods: We analyse optical spectra of the X-ray selected sample of CTTSs and calculate the accretion rates based on measuring the Hα, Hβ, Hγ, He ii 4686 Å, He i 5016 Å, He i 5876 Å, O i 6300 Å, and He i 6678 Å equivalent widths. In addition, we also calculate the accretion rates based on the full width at 10% maximum of the Hα line. The different optical tracers of accretion are compared and discussed. The derived accretion rates are then compared to the accretion rates derived from the X-ray spectroscopy. Results: We find that, for each CTTS in our sample, the different optical tracers predict mass-accretion rates that agree within the errors, albeit with a spread of ≈1 order of magnitude. Typically, mass-accretion rates derived from Hα and He i 5876 Å are larger than those derived from Hβ, Hγ, and O i. In addition, the Hα full width at 10%, whilst a good indicator of accretion, may not accurately measure the mass-accretion rate. When the optical mass-accretion rates are compared to the X-ray derived mass-accretion rates, we find that: a) the latter are always lower (but by varying amounts); b) the latter range within a factor of ≈2 around 2 × 10⁻¹⁰ M⊙ yr⁻¹, despite the former spanning a range of ≈3 orders of magnitude. We suggest that the systematic underestimate of the X-ray derived mass-accretion rates could depend on the density distribution inside the accretion streams, where the densest part of the stream is

  7. EXPLORING THE DIVERSITY OF GROUPS AT 0.1 < z < 0.8 WITH X-RAY AND OPTICALLY SELECTED SAMPLES

    Energy Technology Data Exchange (ETDEWEB)

    Connelly, J. L. [Max-Planck-Institut fuer Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Wilman, David J.; Finoguenov, Alexis; Saglia, Roberto [Max Planck Institute for Extraterrestrial Physics, P.O. Box 1312, Giessenbachstr., D-85741 Garching (Germany); Hou, Annie; Parker, Laura C.; Henderson, Robert D. E. [Department of Physics and Astronomy, McMaster University, Hamilton ON L8S4M1 (Canada); Mulchaey, John S. [Observatories of the Carnegie Institution, 813 Santa Barbara Street, Pasadena, CA 91101 (United States); McGee, Sean L.; Balogh, Michael L. [Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada); Bower, Richard G. [Department of Physics, University of Durham, Durham DH1 3LE (United Kingdom)

    2012-09-10

    We present the global group properties of two samples of galaxy groups containing 39 high-quality X-ray-selected systems and 38 optically (spectroscopically) selected systems in coincident spatial regions at 0.12 < z < 0.79. The total mass range of the combined sample is ~10¹² to 5 × 10¹⁴ M⊙. Only nine optical systems are associable with X-ray systems. We discuss the confusion inherent in the matching of both galaxies to extended X-ray emission and of X-ray emission to already identified optical systems. Extensive spectroscopy has been obtained and the resultant redshift catalog and group membership are provided here. X-ray, dynamical, and total stellar masses of the groups are also derived and presented. We explore the effects of utilizing different centers and applying three different kinds of radial cut to our systems: a constant cut of 1 Mpc and two r₂₀₀ cuts, one based on the velocity dispersion of the system and the other on the X-ray emission. We find that an X-ray-based r₂₀₀ results in less scatter in scaling relations and less dynamical complexity as evidenced by results of the Anderson-Darling and Dressler-Shectman tests, indicating that this radius tends to isolate the virialized part of the system. The constant and velocity-dispersion-based cuts can overestimate membership and can work to inflate velocity dispersion and dynamical and stellar mass. We find that the L_X-σ and M_stellar-L_X scaling relations for X-ray and optically selected systems are not dissimilar. The mean fraction of mass found in stars, excluding intracluster light, for our systems is ~0.014 with a logarithmic standard deviation of 0.398 dex. We also define and investigate a sample of groups which are X-ray underluminous given the total group stellar mass. For these systems the fraction of stellar mass contributed by the most massive galaxy is typically lower than that found for the total population of

  8. Enhanced Sampling and Analysis, Selection of Technology for Testing

    Energy Technology Data Exchange (ETDEWEB)

    Svoboda, John; Meikrantz, David

    2010-02-01

    The focus of this study includes the investigation of sampling technologies used in industry and their potential application to nuclear fuel processing. The goal is to identify innovative sampling methods using state-of-the-art techniques that could evolve into the next generation sampling and analysis system for metallic elements. This report details the progress made in the first half of FY 2010 and includes a further consideration of the research focus and goals for this year. Our sampling options and focus for the next generation sampling method are presented along with the criteria used for choosing our path forward. We have decided to pursue the option of evaluating the feasibility of microcapillary-based chips to remotely collect, transfer, track, and supply microliters of sample solutions to analytical equipment in support of aqueous processes for used nuclear fuel cycles. Microchip vendors have been screened and a choice made for the development of a suitable microchip design, followed by production of samples for evaluation by ANL, LANL, and INL on an independent basis.

  9. Gaussian process based intelligent sampling for measuring nano-structure surfaces

    Science.gov (United States)

    Sun, L. J.; Ren, M. J.; Yin, Y. H.

    2016-09-01

    Nanotechnology is the science and engineering of manipulating matter at the nanoscale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to relatively small areas of hundreds of micrometers with very low efficiency. Therefore, intelligent sampling strategies are required to improve the scanning efficiency for measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method makes use of Gaussian process based Bayesian regression as a mathematical foundation to represent the surface geometry, and the posterior estimate of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then adaptively selected by determining the position that is most likely to lie outside the required tolerance zone among the candidates, and is inserted to update the model iteratively. Simulations on both the nominal surface and the manufactured surface have been conducted on nano-structure surfaces to verify the validity of the proposed method. The results imply that the proposed method significantly improves the measurement efficiency in measuring large-area structured surfaces.
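
    The adaptive point-selection step described above can be illustrated with a short sketch: fit a Gaussian process to the points measured so far, then sample next where the posterior is most likely to violate the tolerance zone. The kernel, the tolerance rule, and all names below are illustrative assumptions, not the paper's exact scheme.

```python
# Fit a GP to the measured points, then pick the candidate position most
# likely to fall outside the tolerance band around the nominal surface.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_sample(X_meas, z_meas, X_cand, z_nominal, tol=0.05):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  normalize_y=True).fit(X_meas, z_meas)
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)           # guard against zero std
    # Probability that the true height falls outside [nominal-tol, nominal+tol]
    p_out = (norm.cdf(z_nominal - tol, mu, sigma)
             + 1.0 - norm.cdf(z_nominal + tol, mu, sigma))
    return X_cand[np.argmax(p_out)]            # measure here next
```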

  10. Decomposing the Gender Wage Gap in the Netherlands with Sample Selection Adjustments

    NARCIS (Netherlands)

    Albrecht, James; Vuuren, van Aico; Vroman, Susan

    2004-01-01

    In this paper, we use quantile regression decomposition methods to analyze the gender gap between men and women who work full time in the Netherlands. Because the fraction of women working full time in the Netherlands is quite low, sample selection is a serious issue. In addition to shedding light

  11. Systematic sampling for suspended sediment

    Science.gov (United States)

    Robert B. Thomas

    1991-01-01

    Abstract - Because of high costs or complex logistics, scientific populations cannot be measured entirely and must be sampled. Accepted scientific practice holds that sample selection be based on statistical principles to assure objectivity when estimating totals and variances. Probability sampling--obtaining samples with known probabilities--is the only method that...
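
    Systematic sampling, one of the probability designs the abstract refers to, draws every k-th unit after a random start, giving each unit a known inclusion probability of n/N. A minimal sketch:

```python
# Systematic sampling with a random start; each unit's inclusion
# probability is n/N, making it a probability-sampling design.
import numpy as np

def systematic_sample(population, n, seed=0):
    rng = np.random.default_rng(seed)
    N = len(population)
    step = N / n
    idx = (rng.uniform(0, step) + step * np.arange(n)).astype(int)
    return [population[i] for i in idx]
```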

  12. Semiparametric efficient and robust estimation of an unknown symmetric population under arbitrary sample selection bias

    KAUST Repository

    Ma, Yanyuan

    2013-09-01

    We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.
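
    For contrast with the paper's semiparametric approach, the simplest textbook correction when the selection mechanism is known is inverse-probability weighting. The sketch below shows only that baseline idea and is explicitly not the authors' estimator:

```python
# Inverse-probability-weighted estimate of a population center from a
# selection-biased sample. p_select maps a value to its (assumed known)
# probability of entering the sample. Baseline IPW only, not the paper's
# semiparametric estimator.
import numpy as np

def ipw_center(x, p_select):
    x = np.asarray(x)
    w = 1.0 / p_select(x)              # reweight to undo selection bias
    return np.sum(w * x) / np.sum(w)
```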

  13. EFFICIENT SELECTION AND CLASSIFICATION OF INFRARED EXCESS EMISSION STARS BASED ON AKARI AND 2MASS DATA

    Energy Technology Data Exchange (ETDEWEB)

    Huang Yafang; Li Jinzeng [National Astronomical Observatories, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing 100012 (China); Rector, Travis A. [University of Alaska, 3211 Providence Drive, Anchorage, AK 99508 (United States); Mallamaci, Carlos C., E-mail: ljz@nao.cas.cn [Observatorio Astronomico Felix Aguilar, Universidad Nacional de San Juan (Argentina)

    2013-05-15

    The selection of young stellar objects (YSOs) based on excess emission in the infrared is easily contaminated by post-main-sequence stars and various types of emission line stars with similar properties. We define in this paper stringent criteria for an efficient selection and classification of stellar sources with infrared excess emission based on combined Two Micron All Sky Survey (2MASS) and AKARI colors. First of all, bright dwarfs and giants with known spectral types were selected from the Hipparcos Catalogue and cross-identified with the 2MASS and AKARI Point Source Catalogues to produce the main-sequence and the post-main-sequence tracks, which appear as expected as tight tracks with very small dispersion. However, several of the main-sequence stars indicate excess emission in the color space. Further investigations based on the SIMBAD data help to clarify their nature as classical Be stars, which are found to be located in a well isolated region on each of the color-color (C-C) diagrams. Several kinds of contaminants were then removed based on their distribution in the C-C diagrams. A test sample of Herbig Ae/Be stars and classical T Tauri stars were cross-identified with the 2MASS and AKARI catalogs to define the loci of YSOs with different masses on the C-C diagrams. Well classified Class I and Class II sources were taken as a second test sample to discriminate between various types of YSOs at possibly different evolutionary stages. This helped to define the loci of different types of YSOs and a set of criteria for selecting YSOs based on their colors in the near- and mid-infrared. Candidate YSOs toward IC 1396 indicating excess emission in the near-infrared were employed to verify the validity of the new source selection criteria defined based on C-C diagrams compiled with the 2MASS and AKARI data. Optical spectroscopy and spectral energy distributions of the IC 1396 sample yield a clear identification of the YSOs and further confirm the criteria defined

  14. EFFICIENT SELECTION AND CLASSIFICATION OF INFRARED EXCESS EMISSION STARS BASED ON AKARI AND 2MASS DATA

    International Nuclear Information System (INIS)

    Huang Yafang; Li Jinzeng; Rector, Travis A.; Mallamaci, Carlos C.

    2013-01-01

    The selection of young stellar objects (YSOs) based on excess emission in the infrared is easily contaminated by post-main-sequence stars and various types of emission line stars with similar properties. We define in this paper stringent criteria for an efficient selection and classification of stellar sources with infrared excess emission based on combined Two Micron All Sky Survey (2MASS) and AKARI colors. First of all, bright dwarfs and giants with known spectral types were selected from the Hipparcos Catalogue and cross-identified with the 2MASS and AKARI Point Source Catalogues to produce the main-sequence and the post-main-sequence tracks, which appear as expected as tight tracks with very small dispersion. However, several of the main-sequence stars indicate excess emission in the color space. Further investigations based on the SIMBAD data help to clarify their nature as classical Be stars, which are found to be located in a well isolated region on each of the color-color (C-C) diagrams. Several kinds of contaminants were then removed based on their distribution in the C-C diagrams. A test sample of Herbig Ae/Be stars and classical T Tauri stars were cross-identified with the 2MASS and AKARI catalogs to define the loci of YSOs with different masses on the C-C diagrams. Well classified Class I and Class II sources were taken as a second test sample to discriminate between various types of YSOs at possibly different evolutionary stages. This helped to define the loci of different types of YSOs and a set of criteria for selecting YSOs based on their colors in the near- and mid-infrared. Candidate YSOs toward IC 1396 indicating excess emission in the near-infrared were employed to verify the validity of the new source selection criteria defined based on C-C diagrams compiled with the 2MASS and AKARI data. Optical spectroscopy and spectral energy distributions of the IC 1396 sample yield a clear identification of the YSOs and further confirm the criteria defined
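
    Color-color selection criteria of the kind defined in this record reduce, in code, to simple cuts on combined catalog magnitudes. The numeric thresholds below are placeholders; the paper derives its selection loci empirically from test samples of known YSOs.

```python
# Illustrative color-color cuts for flagging infrared-excess candidates
# from merged 2MASS/AKARI photometry.
import numpy as np

def select_candidates(J, K, S9W, jk_cut=0.6, ks9w_cut=1.0):
    J, K, S9W = map(np.asarray, (J, K, S9W))
    red_jk = (J - K) > jk_cut        # redder than the main-sequence track
    excess = (K - S9W) > ks9w_cut    # mid-infrared excess over photosphere
    return red_jk & excess           # boolean mask of candidate YSOs
```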

  15. Content-based image retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Broek, E.L. van den; Vuurpijl, L.G.; Kisters, P. M. F.; Schmid, J.C.M. von; Moens, M.F.; Busser, R. de; Hiemstra, D.; Kraaij, W.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged to be non-intuitive and difficult to use. Our interface copes with these problems of usability. It is based on 11

  16. Content-Based Image Retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Moens, Marie-Francine; van den Broek, Egon; Vuurpijl, L.G.; de Brusser, Rik; Kisters, P.M.F.; Hiemstra, Djoerd; Kraaij, Wessel; von Schmid, J.C.M.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged to be non-intuitive and difficult to use. Our interface copes with these problems of usability. It is based on 11

  17. Eleven-Year Retrospective Report of Super-Selective Venous Sampling for the Evaluation of Recurrent or Persistent Hyperparathyroidism in 32 Patients.

    Science.gov (United States)

    Habibollahi, Peiman; Shin, Benjamin; Shamchi, Sara P; Wachtel, Heather; Fraker, Douglas L; Trerotola, Scott O

    2018-01-01

    Parathyroid venous sampling (PAVS) is usually reserved for patients with persistent or recurrent hyperparathyroidism after parathyroidectomy whose noninvasive imaging studies are inconclusive. A retrospective study was performed to evaluate the diagnostic efficacy of super-selective PAVS (SSVS) in patients needing revision neck surgery with inconclusive imaging. Patients undergoing PAVS between 2005 and 2016 due to persistent or recurrent hyperparathyroidism following surgery were reviewed. PAVS was performed in all patients using the super-selective technique. Single-value measurements within central neck veins, performed as part of super-selective PAVS, were used to simulate selective venous sampling (SVS) and allow comparison to the data that might be obtained in a non-super-selective approach. 32 patients (mean age 51 ± 15 years; 8 men and 24 women) met the inclusion and exclusion criteria. The sensitivity and positive predictive value (PPV) of SSVS for localizing the source of elevated PTH to a limited area in the neck or chest were 96% and 84%, respectively. Simulated SVS, on the other hand, had a sensitivity of 28% and a PPV of 89% based on the predefined gold standard. SSVS thus had a significantly higher sensitivity than simulated SVS for localizing the source of hyperparathyroidism in patients undergoing revision surgery in whom noninvasive imaging studies are inconclusive. SSVS data also had markedly higher sensitivity for localizing disease in these patients compared to simulated SVS.

  18. Application of the Sampling Selection Technique in Approaching Financial Audit

    Directory of Open Access Journals (Sweden)

    Victor Munteanu

    2018-03-01

    In his professional approach, the financial auditor has a wide range of working techniques at his disposal, including selection techniques. They are applied depending on the nature of the information available to the financial auditor, the manner in which it is presented - paper or electronic format - and, last but not least, the time available. Several techniques are applied, successively or in parallel, to increase the confidence in the expressed opinion and to provide the audit report with a solid basis of information. Sampling is used in the phase of control or clarification of the identified errors. Its main purpose is to corroborate or measure the degree of risk detected following a pertinent analysis. Since the auditor has neither the time nor the means to thoroughly reconstruct the information, the sampling technique can provide an effective response to the need for substantiation.

  19. A novel one-class SVM based negative data sampling method for reconstructing proteome-wide HTLV-human protein interaction networks.

    Science.gov (United States)

    Mei, Suyu; Zhu, Hao

    2015-01-26

    Protein-protein interaction (PPI) prediction is generally treated as a problem of binary classification, wherein negative data sampling remains an open problem. The commonly used random sampling is prone to yield less representative negative data with considerable false negatives. Meanwhile, rational constraints are seldom exerted on model selection to reduce the risk of false positive predictions in most existing computational methods. In this work, we propose a novel negative data sampling method based on one-class SVM (support vector machine) to predict proteome-wide protein interactions between the HTLV retrovirus and Homo sapiens, wherein the one-class SVM is used to choose reliable and representative negative data, and a two-class SVM is used to yield proteome-wide outcomes as predictive feedback for rational model selection. Computational results suggest that the one-class SVM is better suited to negative data sampling than a two-class PPI predictor, and that the predictive-feedback-constrained model selection helps to yield a rational predictive model that reduces the risk of false positive predictions. Some predictions have been validated by the recent literature. Lastly, gene-ontology-based clustering of the predicted PPI networks is conducted to provide valuable cues for the pathogenesis of the HTLV retrovirus.
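
    The negative-sampling step described above can be sketched directly with scikit-learn: train a one-class SVM on the known positive interactions, then take the unlabeled pairs scored as least similar to the positives as reliable negatives. The feature encoding, kernel, and nu value are placeholder assumptions.

```python
# One-class-SVM negative sampling for PPI prediction.
import numpy as np
from sklearn.svm import OneClassSVM

def sample_negatives(X_pos, X_unlabeled, n_neg):
    oc = OneClassSVM(kernel="rbf", nu=0.1).fit(X_pos)
    scores = oc.decision_function(X_unlabeled)   # low = far from positives
    return X_unlabeled[np.argsort(scores)[:n_neg]]
```

    The returned negatives, pooled with the positives, would then train the two-class predictor mentioned in the abstract.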

  20. Correlations of Sc, rare earths and other elements in selected rock samples from Arrua-i

    Energy Technology Data Exchange (ETDEWEB)

    Facetti, J F; Prats, M [Asuncion Nacional Univ. (Paraguay). Inst. de Ciencias]

    1972-01-01

    The Sc and Eu contents in selected rock samples from the stock of Arrua-i have been determined and correlations established with other elements and with the relative amounts of some rare earths. These correlations suggest metasomatic phenomena in the formation of the rock samples.

  1. Correlations of Sc, rare earths and other elements in selected rock samples from Arrua-i

    International Nuclear Information System (INIS)

    Facetti, J.F.; Prats, M.

    1972-01-01

    The Sc and Eu contents in selected rock samples from the stock of Arrua-i have been determined and correlations established with other elements and with the relative amounts of some rare earths. These correlations suggest metasomatic phenomena in the formation of the rock samples.

  2. The Swift Gamma-Ray Burst Host Galaxy Legacy Survey. I. Sample Selection and Redshift Distribution

    Science.gov (United States)

    Perley, D. A.; Kruhler, T.; Schulze, S.; Postigo, A. De Ugarte; Hjorth, J.; Berger, E.; Cenko, S. B.; Chary, R.; Cucchiara, A.; Ellis, R.; et al.

    2016-01-01

    We introduce the Swift Gamma-Ray Burst Host Galaxy Legacy Survey (SHOALS), a multi-observatory high redshift galaxy survey targeting the largest unbiased sample of long-duration gamma-ray burst (GRB) hosts yet assembled (119 in total). We describe the motivations of the survey and the development of our selection criteria, including an assessment of the impact of various observability metrics on the success rate of afterglow-based redshift measurement. We briefly outline our host galaxy observational program, consisting of deep Spitzer/IRAC imaging of every field supplemented by similarly deep, multicolor optical/near-IR photometry, plus spectroscopy of events without preexisting redshifts. Our optimized selection cuts combined with host galaxy follow-up have so far enabled redshift measurements for 110 targets (92%) and placed upper limits on all but one of the remainder. About 20% of GRBs in the sample are heavily dust obscured, and at most 2% originate from z > 5.5. Using this sample, we estimate the redshift-dependent GRB rate density, showing it to peak at z approx. 2.5 and fall by at least an order of magnitude toward low (z = 0) redshift, while declining more gradually toward high (z approx. 7) redshift. This behavior is consistent with a progenitor whose formation efficiency varies modestly over cosmic history. Our survey will permit the most detailed examination to date of the connection between the GRB host population and general star-forming galaxies, directly measure evolution in the host population over cosmic time and discern its causes, and provide new constraints on the fraction of cosmic star formation occurring in undetectable galaxies at all redshifts.

  3. Trace level and highly selective determination of urea in various real samples based upon voltammetric analysis of diacetylmonoxime-urea reaction product on the carbon nanotube/carbon paste electrode.

    Science.gov (United States)

    Alizadeh, Taher; Ganjali, Mohammad Reza; Rafiei, Faride

    2017-06-29

    In this study an innovative method was introduced for selective and precise determination of urea in various real samples including urine, blood serum, soil, and water. The method was based on the square wave voltammetric determination of an electroactive product generated during the diacetylmonoxime reaction with urea. A carbon paste electrode modified with multi-walled carbon nanotubes (MWCNTs) was found to be an appropriate electrochemical transducer for recording the electrochemical signal. It was found that the chemical reaction conditions influenced the analytical signal directly. The calibration graph of the method was linear in the range of 1 × 10⁻⁷ to 1 × 10⁻² mol L⁻¹. The detection limit was calculated to be 52 nmol L⁻¹. The relative standard error of the method was calculated to be 3.9% (n = 3). The developed procedure was applied for urea determination in various real samples including soil, urine, plasma, and water. Copyright © 2017 Elsevier B.V. All rights reserved.
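
    For orientation, figures of merit like the detection limit quoted above are commonly derived from the calibration line as LOD = 3.3·s_blank/slope. The sketch below uses invented numbers, not the paper's data.

```python
# Calibration line and detection limit as commonly estimated in
# voltammetry (LOD = 3.3 * s_blank / slope). All numbers are invented.
import numpy as np

conc = np.array([1e-7, 1e-6, 1e-5, 1e-4])    # standard concentrations, mol/L
peak = np.array([0.8, 7.9, 81.0, 798.0])     # peak currents (arbitrary units)
slope, intercept = np.polyfit(conc, peak, 1)
s_blank = 1.3e-2                             # std. dev. of blank signal
print(f"LOD = {3.3 * s_blank / slope:.1e} mol/L")
```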

  4. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

    Science.gov (United States)

    Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene

    2015-05-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.
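
    As a minimal concrete instance of "model selection using information criteria" from the abstract, the sketch below compares the two endpoint hypotheses (all means equal vs. all means free) by Gaussian AIC; order-restricted hypotheses need specialized criteria that are not shown here.

```python
# Compare "equal means" vs. "free means" for several groups via AIC.
import numpy as np

def gaussian_aic(residuals, k_params):
    n = residuals.size
    sigma2 = np.mean(residuals**2)                       # MLE of variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k_params - 2 * loglik

def compare(groups):
    y = np.concatenate(groups)
    aic_equal = gaussian_aic(y - y.mean(), k_params=2)   # mu, sigma
    resid_free = np.concatenate([g - g.mean() for g in groups])
    aic_free = gaussian_aic(resid_free, k_params=len(groups) + 1)
    return {"equal means": aic_equal, "free means": aic_free}
```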

  5. Spot the difference. Impact of different selection criteria on observed properties of passive galaxies in zCOSMOS-20k sample

    Science.gov (United States)

    Moresco, M.; Pozzetti, L.; Cimatti, A.; Zamorani, G.; Bolzonella, M.; Lamareille, F.; Mignoli, M.; Zucca, E.; Lilly, S. J.; Carollo, C. M.; Contini, T.; Kneib, J.-P.; Le Fèvre, O.; Mainieri, V.; Renzini, A.; Scodeggio, M.; Bardelli, S.; Bongiorno, A.; Caputi, K.; Cucciati, O.; de la Torre, S.; de Ravel, L.; Franzetti, P.; Garilli, B.; Iovino, A.; Kampczyk, P.; Knobel, C.; Kovač, K.; Le Borgne, J.-F.; Le Brun, V.; Maier, C.; Pelló, R.; Peng, Y.; Perez-Montero, E.; Presotto, V.; Silverman, J. D.; Tanaka, M.; Tasca, L.; Tresse, L.; Vergani, D.; Barnes, L.; Bordoloi, R.; Cappi, A.; Diener, C.; Koekemoer, A. M.; Le Floc'h, E.; López-Sanjuan, C.; McCracken, H. J.; Nair, P.; Oesch, P.; Scarlata, C.; Scoville, N.; Welikala, N.

    2013-10-01

    Aims: We present the analysis of photometric, spectroscopic, and morphological properties for differently selected samples of passive galaxies up to z = 1 extracted from the zCOSMOS-20k spectroscopic survey. This analysis intends to explore the dependence of galaxy properties on the selection criterion adopted, study the degree of contamination due to star-forming outliers, and provide a comparison between different commonly used selection criteria. This work is a first step to fully investigating the selection effects of passive galaxies for future massive surveys such as Euclid. Methods: We extracted from the zCOSMOS-20k catalog six different samples of passive galaxies, based on morphology (3336 "morphological" early-type galaxies), optical colors (4889 "red-sequence" galaxies and 4882 "red UVJ" galaxies), specific star-formation rate (2937 "quiescent" galaxies), a best fit to the observed spectral energy distribution (2603 "red SED" galaxies), and a criterion that combines morphological, spectroscopic, and photometric information (1530 "red & passive early-type galaxies"). For all the samples, we studied optical and infrared colors, morphological properties, specific star-formation rates (SFRs), and the equivalent widths of the residual emission lines; this analysis was performed as a function of redshift and stellar mass to inspect further possible dependencies. Results: We find that each passive galaxy sample displays a certain level of contamination due to blue/star-forming/nonpassive outliers. The morphological sample is the one that presents the highest percentage of contamination, with ~12-65% (depending on the mass range) of galaxies not located in the red sequence, ~25-80% of galaxies with a specific SFR up to ~25 times higher than the adopted definition of passive, and significant emission lines found in the median stacked spectra, at least for log (M/M⊙) contamination in color 10.25, very limited tails in sSFR, a median value ~20% higher than the
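
    As one concrete example of the color-based definitions compared above, a widely used rest-frame UVJ quiescence cut looks like the sketch below; the thresholds follow common literature values and are not necessarily the exact cuts adopted in this paper.

```python
# Rest-frame UVJ quiescence cut with common literature thresholds.
def is_uvj_quiescent(u_v, v_j):
    return (u_v > 1.3) and (v_j < 1.6) and (u_v > 0.88 * v_j + 0.49)
```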

  6. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise-loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal
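
    The connection between an ARMA fit at sampling interval Δt and the modal parameters can be sketched as follows: the discrete AR pole λ maps to the continuous-time pole s = ln(λ)/Δt, whose magnitude and normalized real part give the natural frequency and damping ratio. This is an illustrative sketch; the paper's subject is the optimality of Δt itself.

```python
# Estimate SDOF modal parameters from sampled response data via an
# ARMA(2,1) fit; the discrete AR pole maps to frequency and damping.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def modal_params(y, dt):
    fit = ARIMA(y, order=(2, 0, 1)).fit()
    poles = np.roots(np.r_[1.0, -fit.arparams])      # z^2 - a1 z - a2 = 0
    lam = poles[np.argmax(np.abs(poles.imag))]       # underdamped pair
    s = np.log(lam + 0j) / dt                        # continuous-time pole
    omega_n = abs(s)                                 # natural freq [rad/s]
    zeta = -s.real / abs(s)                          # damping ratio
    return omega_n, zeta
```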

  7. Effect of selective logging on genetic diversity and gene flow in Cariniana legalis sampled from a cacao agroforestry system.

    Science.gov (United States)

    Leal, J B; Santos, R P; Gaiotto, F A

    2014-01-28

    The fragments of the Atlantic Forest of southern Bahia have a long history of intense logging and selective cutting. Some tree species, such as jequitibá rosa (Cariniana legalis), have experienced a reduction in their populations with respect to both area and density. To evaluate the possible effects of selective logging on genetic diversity, gene flow, and spatial genetic structure, 51 C. legalis individuals were sampled, representing the total remaining population from the cacao agroforestry system. A total of 120 alleles were observed at the 11 microsatellite loci analyzed. The average observed heterozygosity (0.486) was less than the expected heterozygosity (0.721), indicating a loss of genetic diversity in this population. A high fixation index (FIS = 0.325) was found, possibly due to a reduction in population size resulting in increased mating among relatives. The maximum (1055 m) and minimum (0.095 m) distances traveled by pollen or seeds were inferred based on paternity tests. Unique parents within the sampled area were identified for 36.84% of seedlings; the progenitors of the remaining seedlings (63.16%) were most likely outside the sampled area. Positive and significant spatial genetic structure was identified in this population in distance classes from 10 to 30 m, with an average coancestry coefficient between pairs of individuals of 0.12. These results suggest that the agroforestry system of cacao cultivation is contributing to maintaining levels of diversity and gene flow in the studied population, thus minimizing the effects of selective logging.
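
    The fixation index quoted above follows the standard definition F_IS = 1 − H_o/H_e, which can be checked directly against the reported heterozygosities:

```python
# Quick check of the reported fixation index from F_IS = 1 - Ho/He.
Ho, He = 0.486, 0.721
print(round(1 - Ho / He, 3))   # 0.326, matching FIS ≈ 0.325 up to rounding
```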

  8. Representativeness-based sampling network design for the State of Alaska

    Science.gov (United States)

    Forrest M. Hoffman; Jitendra Kumar; Richard T. Mills; William W. Hargrove

    2013-01-01

    Resource and logistical constraints limit the frequency and extent of environmental observations, particularly in the Arctic, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent environmental variability at desired scales. A quantitative methodology for stratifying sampling domains, informing site selection,...

  9. Sample similarity analysis of angles of repose based on experimental results for DEM calibration

    Science.gov (United States)

    Tan, Yuan; Günthner, Willibald A.; Kessler, Stephan; Zhang, Lu

    2017-06-01

    As a fundamental material property, the particle-particle friction coefficient is usually calculated from the angle of repose, which can be obtained experimentally. In the present study, the bottomless cylinder test was carried out to investigate this friction coefficient for a biomass material, willow chips. Because of their irregular shape and varying particle size distribution, calculation of the angle becomes less straightforward and less decisive. In previous studies, only one section of the uneven slope is chosen in most cases, although standard methods for defining a representative section are scarce. Hence, we present an efficient and reliable method based on 3D scanning, which is used to digitize the surface of the heap and generate its point cloud. Two tangential lines of any selected section are then calculated through linear least-squares regression (LLSR), such that the left and right angles of repose of a pile can be derived. As the next step, a number of sections are stochastically selected, and the calculations are repeated correspondingly to obtain a sample of angles, which is plotted in Cartesian coordinates as a scatter diagram. Subsequently, different samples are acquired through various selections of sections. By analyzing the similarities and differences of these samples, the reliability of the proposed method is verified. These results provide a realistic criterion for reducing the deviation between experiment and simulation caused by the random selection of a single angle, which will be compared with simulation results in future work.

  10. Sample similarity analysis of angles of repose based on experimental results for DEM calibration

    Directory of Open Access Journals (Sweden)

    Tan Yuan

    2017-01-01

    As a fundamental material property, the particle-particle friction coefficient is usually calculated from the angle of repose, which can be obtained experimentally. In the present study, the bottomless cylinder test was carried out to investigate this friction coefficient for a biomass material, willow chips. Because of their irregular shape and varying particle size distribution, calculation of the angle becomes less straightforward and less decisive. In previous studies, only one section of the uneven slope is chosen in most cases, although standard methods for defining a representative section are scarce. Hence, we present an efficient and reliable method based on 3D scanning, which is used to digitize the surface of the heap and generate its point cloud. Two tangential lines of any selected section are then calculated through linear least-squares regression (LLSR), such that the left and right angles of repose of a pile can be derived. As the next step, a number of sections are stochastically selected, and the calculations are repeated correspondingly to obtain a sample of angles, which is plotted in Cartesian coordinates as a scatter diagram. Subsequently, different samples are acquired through various selections of sections. By analyzing the similarities and differences of these samples, the reliability of the proposed method is verified. These results provide a realistic criterion for reducing the deviation between experiment and simulation caused by the random selection of a single angle, which will be compared with simulation results in future work.
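
    The LLSR step the two records above describe amounts to fitting straight lines to the two flanks of a scanned cross-section and reading the angles from the slopes. A minimal sketch (the apex-exclusion fraction is an assumption):

```python
# Fit tangent lines to the left and right flanks of one section of the
# scanned heap by linear least squares and read off the repose angles.
import numpy as np

def repose_angles(x, z, apex_frac=0.9):
    """x, z: coordinates of a 2-D section through the heap point cloud."""
    apex_x = x[np.argmax(z)]
    flank = z < apex_frac * z.max()                # drop the rounded crest
    left, right = flank & (x < apex_x), flank & (x > apex_x)
    m_left = np.polyfit(x[left], z[left], 1)[0]    # LLSR tangent slopes
    m_right = np.polyfit(x[right], z[right], 1)[0]
    return np.degrees(np.arctan(m_left)), np.degrees(np.arctan(-m_right))
```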

  11. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    Science.gov (United States)

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers to nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits only from the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss the consequences of inappropriate distribution assumptions and the reasons for the different behaviors of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
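
    The common core of the proposed corrections is reweighting the biased sample by inverse inclusion probabilities. The sketch below shows plain inverse-probability resampling only; the paper's stochastic and parametric variants add noise or draw from a fitted model, and the authors' reference implementation is the R package sambia.

```python
# Plain inverse-probability resampling: draw a new training set in which
# each observation appears in proportion to 1/p_inclusion, so the
# resample mimics the unstratified population.
import numpy as np

def ip_resample(X, y, p_incl, n_out, seed=0):
    rng = np.random.default_rng(seed)
    w = 1.0 / np.asarray(p_incl)
    idx = rng.choice(len(y), size=n_out, replace=True, p=w / w.sum())
    return X[idx], y[idx]
```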

  12. Determination of Selected Polycyclic Aromatic Compounds in Particulate Matter Samples with Low Mass Loading: An Approach to Test Method Accuracy

    Directory of Open Access Journals (Sweden)

    Susana García-Alonso

    2017-01-01

    A miniaturized analytical procedure to determine selected polycyclic aromatic compounds (PACs) in low mass loadings (<10 mg) of particulate matter (PM) is evaluated. The proposed method is based on a simple sonication/agitation method using small amounts of solvent for extraction. The use of a reduced sample size of particulate matter often limits the quantification of analytes. This also leads to the need to adapt analytical procedures and evaluate their performance. The trueness and precision of the proposed method were tested using ambient air samples. Analytical results from the proposed method were compared with those of pressurized liquid and microwave extractions. Selected PACs (polycyclic aromatic hydrocarbons (PAHs) and nitro polycyclic aromatic hydrocarbons (NPAHs)) were determined by liquid chromatography with fluorescence detection (HPLC/FD). Taking results from pressurized liquid extractions as reference values, recovery rates of the sonication/agitation method were over 80% for the most abundant PAHs. Recovery rates of selected NPAHs were lower; enhanced rates were obtained when methanol was used as a modifier. Intermediate precision was estimated by comparing data from two mathematical approaches: normalized difference data and pooled relative deviations. Intermediate precision was in the range of 10-20%. The effectiveness of the proposed method was evaluated on PM aerosol samples collected with very low mass loadings (<0.2 mg) during characterization studies of turbofan engine exhausts.
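
    Under assumed definitions (the abstract does not spell them out), the two precision summaries might be computed as follows; treat this as an illustrative reading, not the paper's exact formulas.

```python
# Two precision summaries under assumed definitions: a normalized
# difference for paired results, and a pooled relative standard
# deviation across replicate groups.
import numpy as np

def normalized_difference(a, b):
    return 2.0 * (a - b) / (a + b)

def pooled_rsd(groups):
    """groups: list of arrays of replicate results for one analyte."""
    rel_var = [np.var(g, ddof=1) / np.mean(g) ** 2 for g in groups]
    return np.sqrt(np.mean(rel_var))
```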

  13. Estimating the residential demand function for natural gas in Seoul with correction for sample selection bias

    International Nuclear Information System (INIS)

    Yoo, Seung-Hoon; Lim, Hea-Jin; Kwak, Seung-Jun

    2009-01-01

    Over the last twenty years, the consumption of natural gas in Korea has increased dramatically. This increase has mainly resulted from the rise of consumption in the residential sector. The main objective of the study is to estimate households' demand function for natural gas by applying a sample selection model using data from a survey of households in Seoul. The results show that there exists a selection bias in the sample and that failure to correct for sample selection bias distorts the mean estimate of the demand for natural gas downward by 48.1%. In addition, according to the estimation results, the size of the house, the dummy variable for dwelling in an apartment, the dummy variable for having a bed in an inner room, and the household's income all have positive relationships with the demand for natural gas. On the other hand, the size of the family and the price of gas negatively contribute to the demand for natural gas. (author)
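
    A sample selection model of this kind is typically estimated with a two-step Heckman-style procedure: a probit for whether a household consumes gas, then OLS on the consumers augmented with the inverse Mills ratio. A generic sketch (variable names are illustrative; the paper's exact specification may differ):

```python
# Generic two-step Heckman selection correction.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(Z, selected, X, y):
    """Z: selection covariates; selected: 0/1 indicator; X, y: outcome
    data (rows of y only meaningful where selected == 1)."""
    probit = sm.Probit(selected, sm.add_constant(Z)).fit(disp=0)
    xb = probit.fittedvalues                 # linear index Z'gamma
    imr = norm.pdf(xb) / norm.cdf(xb)        # inverse Mills ratio
    mask = selected == 1
    Xs = sm.add_constant(np.column_stack([X[mask], imr[mask]]))
    return sm.OLS(y[mask], Xs).fit()
```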

  14. PWR steam generator tubing sample library

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    In order to compile the tubing sample library, two approaches were employed: (a) tubing sample replication by either chemical or mechanical means, based on field tube data and metallography reports for tubes already destructively examined; and (b) acquisition of field tubes removed from operating or retired steam generators. In addition, a unique mercury modeling concept is in use to guide the selection of replica samples. A compendium was compiled that summarizes field observations and morphologies of steam generator tube degradation types based on available NDE, destructive examinations, and field reports. This compendium was used in selecting candidate degradation types that were manufactured for inclusion in the tube library

  15. Supplier selection: an MCDA-based approach

    CERN Document Server

    Mukherjee, Krishnendu

    2017-01-01

    The purpose of this book is to present a comprehensive review of the latest research and development trends, at the international level, in modeling and optimization of the supplier selection process for different industrial sectors. It is targeted to serve two audiences: the MBA or PhD student interested in procurement, and the practitioner who wishes to gain a deeper understanding of procurement analysis with multi-criteria-based decision tools, to avoid upstream risks and obtain better supply chain visibility. The book is expected to serve as a ready reference on supplier selection criteria and various multi-criteria-based methods for evaluating suppliers in forward, reverse, and mass-customized supply chains. It encompasses several criteria and methods for supplier selection, presented systematically on the basis of an extensive literature review from 1998 to 2012. It provides several case studies and some useful links which can serve as a starting point for interested researchers. In the appendix several computer code wri

  16. PLS-based and regularization-based methods for the selection of relevant variables in non-targeted metabolomics data

    Directory of Open Access Journals (Sweden)

    Renata Bujak

    2016-07-01

    Non-targeted metabolomics constitutes a part of systems biology and aims to determine many metabolites in complex biological samples. Datasets obtained in non-targeted metabolomics studies are multivariate and high-dimensional due to the sensitivity of mass spectrometry-based detection methods as well as the complexity of biological matrices. Proper selection of the variables which contribute to group classification is a crucial step, especially in metabolomics studies focused on searching for disease biomarker candidates. In the present study, three different statistical approaches were tested using two metabolomics datasets (the RH and PH studies). Orthogonal projections to latent structures-discriminant analysis (OPLS-DA), without and with multiple testing correction, as well as the least absolute shrinkage and selection operator (LASSO) were tested and compared. For the RH study, the OPLS-DA model built without multiple testing correction selected 46 and 218 variables based on VIP criteria using Pareto and UV scaling, respectively. In the case of the PH study, 217 and 320 variables were selected based on VIP criteria using Pareto and UV scaling, respectively. In the RH study, the OPLS-DA model built with multiple testing correction selected 4 and 19 variables as statistically significant under Pareto and UV scaling, respectively. For the PH study, 14 and 18 variables were selected based on VIP criteria under Pareto and UV scaling, respectively. Additionally, the concept and fundamentals of the least absolute shrinkage and selection operator (LASSO), with a bootstrap procedure evaluating the reproducibility of results, are demonstrated. In the RH and PH studies, the LASSO selected 14 and 4 variables, with reproducibility between 99.3% and 100%. However, despite the popularity of the PLS-DA and OPLS-DA methods in metabolomics, it should be highlighted that they control neither type I nor type II error, but only arbitrarily establish a cut-off value for PLS-DA loadings
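
    The LASSO-with-bootstrap procedure described above can be sketched compactly: refit the LASSO on bootstrap resamples and report how often each variable is selected. The parameters below are illustrative choices.

```python
# LASSO variable selection with a bootstrap reproducibility check.
import numpy as np
from sklearn.linear_model import LassoCV

def bootstrap_lasso_frequency(X, y, n_boot=100, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    freq = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.choice(n, size=n, replace=True)
        freq += LassoCV(cv=5).fit(X[idx], y[idx]).coef_ != 0
    return freq / n_boot   # ~1.0 means the variable is selected stably
```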

  17. HICOSMO - cosmology with a complete sample of galaxy clusters - I. Data analysis, sample selection and luminosity-mass scaling relation

    Science.gov (United States)

    Schellenberger, G.; Reiprich, T. H.

    2017-08-01

    The X-ray regime, where the most massive visible component of galaxy clusters, the intracluster medium, is visible, offers directly measured quantities, like the luminosity, and derived quantities, like the total mass, to characterize these objects. The aim of this project is to analyse a complete sample of galaxy clusters in detail and constrain cosmological parameters, like the matter density, Ωm, or the amplitude of initial density fluctuations, σ8. The purely X-ray flux-limited sample (HIFLUGCS) consists of the 64 X-ray brightest galaxy clusters, which are excellent targets to study the systematic effects that can bias results. We analysed in total 196 Chandra observations of the 64 HIFLUGCS clusters, with a total exposure time of 7.7 Ms. Here, we present our data analysis procedure (including an automated substructure detection and an energy band optimization for surface brightness profile analysis) that gives individually determined, robust total mass estimates. These masses are tested against dynamical and Planck Sunyaev-Zeldovich (SZ) derived masses of the same clusters, where good overall agreement is found with the dynamical masses. The Planck SZ masses seem to show a mass-dependent bias relative to our hydrostatic masses; possible biases in this mass-mass comparison are discussed, including the Planck selection function. Furthermore, we show the results for the (0.1-2.4) keV luminosity versus mass scaling relation. The overall slope of the sample (1.34) is in agreement with expectations and values from the literature. Splitting the sample into galaxy groups and clusters reveals, even after a selection bias correction, that galaxy groups exhibit a significantly steeper slope (1.88) compared to clusters (1.06).
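
    The scaling relation itself is a power law fit in log space, L_X ∝ M^α, with α ≈ 1.34 for the full sample quoted above. A bare-bones sketch (ordinary least squares only; the paper's fit additionally treats measurement errors and selection bias):

```python
# Bare-bones fit of the luminosity-mass scaling relation in log space.
import numpy as np

def fit_lm_relation(mass, lum):
    logm, logl = np.log10(mass), np.log10(lum)
    alpha, lognorm = np.polyfit(logm, logl, 1)
    scatter = np.std(logl - (alpha * logm + lognorm))   # intrinsic+meas, dex
    return alpha, lognorm, scatter
```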

  18. A Uniformly Selected Sample of Low-mass Black Holes in Seyfert 1 Galaxies. II. The SDSS DR7 Sample

    Science.gov (United States)

    Liu, He-Yang; Yuan, Weimin; Dong, Xiao-Bo; Zhou, Hongyan; Liu, Wen-Juan

    2018-04-01

    A new sample of 204 low-mass black holes (LMBHs) in active galactic nuclei (AGNs) is presented with black hole masses in the range of (1–20) × 10⁵ M⊙. The AGNs are selected through a systematic search among galaxies in the Seventh Data Release (DR7) of the Sloan Digital Sky Survey (SDSS), and careful analyses of their optical spectra and precise measurement of spectral parameters. Combining them with our previous sample selected from SDSS DR4 makes it the largest LMBH sample so far, totaling over 500 objects. Some of the statistical properties of the combined LMBH AGN sample are briefly discussed in the context of exploring the low-mass end of the AGN population. Their X-ray luminosities follow the extension of the previously known correlation with the [O III] luminosity. The effective optical-to-X-ray spectral indices α_OX, albeit with a large scatter, are broadly consistent with the extension of the relation with the near-UV luminosity L_2500 Å. Interestingly, a correlation of α_OX with black hole mass is also found, with α_OX being statistically flatter (stronger X-ray relative to optical) for lower black hole masses. Only 26 objects, mostly radio loud, were detected in radio at 20 cm in the FIRST survey, giving a radio-loud fraction of 4%. The host galaxies of LMBHs have stellar masses in the range of 10^8.8–10^12.4 M⊙ and optical colors typical of Sbc spirals. They are dominated by young stellar populations that seem to have undergone continuous star formation history.
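
    For reference, the optical-to-X-ray spectral index used in such studies is conventionally defined from the monochromatic luminosities at 2 keV and 2500 Å (a standard literature definition, quoted here for context rather than from this abstract):

```latex
\alpha_{\mathrm{OX}}
  = \frac{\log\!\left(L_{2\,\mathrm{keV}} / L_{2500\,\mathrm{\AA}}\right)}
         {\log\!\left(\nu_{2\,\mathrm{keV}} / \nu_{2500\,\mathrm{\AA}}\right)}
  = 0.3838 \,\log\!\left(\frac{L_{2\,\mathrm{keV}}}{L_{2500\,\mathrm{\AA}}}\right)
```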

  19. Obscured AGN at z ~ 1 from the zCOSMOS-Bright Survey: I. Selection and optical properties of a [Ne v]-selected sample

    NARCIS (Netherlands)

    Mignoli, M.; Vignali, C.; Gilli, R.; Comastri, A.; Zamorani, G.; Bolzonella, M.; Bongiorno, A.; Lamareille, F.; Nair, P.; Pozzetti, L.; Lilly, S. J.; Carollo, C. M.; Contini, T.; Kneib, J. -P.; Le Fevre, O.; Mainieri, V.; Renzini, A.; Scodeggio, M.; Bardelli, S.; Caputi, K.; Cucciati, O.; de la Torre, S.; de Ravel, L.; Franzetti, P.; Garilli, B.; Iovino, A.; Kampczyk, P.; Knobel, C.; Kovac, K.; Le Borgne, J. -F.; Le Brun, V.; Maier, C.; Pello, R.; Peng, Y.; Montero, E. Perez; Presotto, V.; Silverman, J. D.; Tanaka, M.; Tasca, L.; Tresse, L.; Vergani, D.; Zucca, E.; Bordoloi, R.; Cappi, A.; Cimatti, A.; Koekemoer, A. M.; McCracken, H. J.; Moresco, M.; Welikala, N.

    Aims. The application of multi-wavelength selection techniques is essential for obtaining a complete and unbiased census of active galactic nuclei (AGN). We present here a method for selecting z ~ 1 obscured AGN from optical spectroscopic surveys. Methods. A sample of 94 narrow-line AGN

  20. Fiber array based hyperspectral Raman imaging for chemical selective analysis of malaria-infected red blood cells

    Energy Technology Data Exchange (ETDEWEB)

    Brückner, Michael [Leibniz Institute of Photonic Technology, 07745 Jena (Germany); Becker, Katja [Justus Liebig University Giessen, Biochemistry and Molecular Biology, 35392 Giessen (Germany); Popp, Jürgen [Leibniz Institute of Photonic Technology, 07745 Jena (Germany); Friedrich Schiller University Jena, Institute for Physical Chemistry, 07745 Jena (Germany); Friedrich Schiller University Jena, Abbe Centre of Photonics, 07745 Jena (Germany); Frosch, Torsten, E-mail: torsten.frosch@uni-jena.de [Leibniz Institute of Photonic Technology, 07745 Jena (Germany); Friedrich Schiller University Jena, Institute for Physical Chemistry, 07745 Jena (Germany); Friedrich Schiller University Jena, Abbe Centre of Photonics, 07745 Jena (Germany)

    2015-09-24

    A new setup for Raman spectroscopic wide-field imaging is presented. It combines the advantages of a fiber array based spectral translator with a tailor-made laser illumination system for high-quality Raman chemical imaging of sensitive biological samples. The Gaussian-like intensity distribution of the illuminating laser beam is shaped by a square-core optical multimode fiber to a top-hat profile with very homogeneous intensity distribution to fulfill the conditions of Koehler. The 30 m long optical fiber and an additional vibrator efficiently destroy the polarization and coherence of the illuminating light. This homogeneous, incoherent illumination is an essential prerequisite for stable quantitative imaging of complex biological samples. The fiber array translates the two-dimensional lateral information of the Raman stray light into separated spectral channels with very high contrast. The Raman image can be correlated with a corresponding white light microscopic image of the sample. The new setup enables simultaneous quantification of all Raman spectra across the whole spatial area with very good spectral resolution and thus outperforms other Raman imaging approaches based on scanning and tunable filters. The unique capabilities of the setup for fast, gentle, sensitive, and selective chemical imaging of biological samples were applied for automated hemozoin analysis. A special algorithm was developed to generate Raman images based on the hemozoin distribution in red blood cells without any influence from other Raman scattering. The new imaging setup in combination with the robust algorithm provides a novel, elegant way for chemical selective analysis of the malaria pigment hemozoin in early ring stages of Plasmodium falciparum infected erythrocytes. - Highlights: • Raman hyperspectral imaging allows for chemical selective analysis of biological samples with spatial heterogeneity. • A homogeneous, incoherent illumination is essential for reliable

  1. Fiber array based hyperspectral Raman imaging for chemical selective analysis of malaria-infected red blood cells

    International Nuclear Information System (INIS)

    Brückner, Michael; Becker, Katja; Popp, Jürgen; Frosch, Torsten

    2015-01-01

    A new setup for Raman spectroscopic wide-field imaging is presented. It combines the advantages of a fiber array based spectral translator with a tailor-made laser illumination system for high-quality Raman chemical imaging of sensitive biological samples. The Gaussian-like intensity distribution of the illuminating laser beam is shaped by a square-core optical multimode fiber to a top-hat profile with very homogeneous intensity distribution to fulfill the conditions of Koehler. The 30 m long optical fiber and an additional vibrator efficiently destroy the polarization and coherence of the illuminating light. This homogeneous, incoherent illumination is an essential prerequisite for stable quantitative imaging of complex biological samples. The fiber array translates the two-dimensional lateral information of the Raman stray light into separated spectral channels with very high contrast. The Raman image can be correlated with a corresponding white light microscopic image of the sample. The new setup enables simultaneous quantification of all Raman spectra across the whole spatial area with very good spectral resolution and thus outperforms other Raman imaging approaches based on scanning and tunable filters. The unique capabilities of the setup for fast, gentle, sensitive, and selective chemical imaging of biological samples were applied for automated hemozoin analysis. A special algorithm was developed to generate Raman images based on the hemozoin distribution in red blood cells without any influence from other Raman scattering. The new imaging setup in combination with the robust algorithm provides a novel, elegant way for chemical selective analysis of the malaria pigment hemozoin in early ring stages of Plasmodium falciparum infected erythrocytes. - Highlights: • Raman hyperspectral imaging allows for chemical selective analysis of biological samples with spatial heterogeneity. • A homogeneous, incoherent illumination is essential for reliable

  2. Bacterial clonal diagnostics as a tool for evidence-based empiric antibiotic selection.

    Science.gov (United States)

    Tchesnokova, Veronika; Avagyan, Hovhannes; Rechkina, Elena; Chan, Diana; Muradova, Mariya; Haile, Helen Ghirmai; Radey, Matthew; Weissman, Scott; Riddell, Kim; Scholes, Delia; Johnson, James R; Sokurenko, Evgeni V

    2017-01-01

    Despite the known clonal distribution of antibiotic resistance in many bacteria, empiric (pre-culture) antibiotic selection still relies heavily on species-level cumulative antibiograms, resulting in overuse of broad-spectrum agents and excessive antibiotic/pathogen mismatch. Urinary tract infections (UTIs), which account for a large share of antibiotic use, are caused predominantly by Escherichia coli, a highly clonal pathogen. In an observational clinical cohort study of urgent care patients with suspected UTI, we assessed the potential for E. coli clonal-level antibiograms to improve empiric antibiotic selection. A novel PCR-based clonotyping assay was applied to fresh urine samples to rapidly detect E. coli and the urine strain's clonotype. Based on a database of clonotype-specific antibiograms, the acceptability of various antibiotics for empiric therapy was inferred using 20%, 10%, and 30% allowed-resistance thresholds. The test's performance characteristics and possible effects on prescribing were assessed. The rapid test identified E. coli clonotypes directly in patients' urine within 25-35 minutes, with high specificity and sensitivity compared to culture. Antibiotic selection based on a clonotype-specific antibiogram could reduce the relative likelihood of antibiotic/pathogen mismatch by ≥ 60%. Compared to observed prescribing patterns, clonal diagnostics-guided antibiotic selection could safely double the use of trimethoprim/sulfamethoxazole and minimize fluoroquinolone use. In summary, a rapid clonotyping test showed promise for improving empiric antibiotic prescribing for E. coli UTI, including reversing the preferential use of fluoroquinolones over trimethoprim/sulfamethoxazole. The clonal diagnostics approach merges epidemiologic surveillance, antimicrobial stewardship, and molecular diagnostics to bring evidence-based medicine directly to the point of care.
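
    Operationally, the threshold rule described above reduces to filtering a clonotype's resistance table. The sketch below uses invented prevalence numbers and the study's 20% base-case threshold:

```python
# Threshold-based empiric drug selection from clonotype-level
# antibiograms. All prevalence values are invented for illustration.
CLONOTYPE_RESISTANCE = {   # % resistant isolates, per clonotype
    "ST131-H30": {"ciprofloxacin": 85, "trimethoprim-sulfa": 35,
                  "nitrofurantoin": 5},
    "ST95":      {"ciprofloxacin": 8, "trimethoprim-sulfa": 15,
                  "nitrofurantoin": 2},
}

def acceptable_drugs(clonotype, threshold=20):
    return [drug for drug, pct in CLONOTYPE_RESISTANCE[clonotype].items()
            if pct < threshold]

print(acceptable_drugs("ST95"))   # all three drugs pass the 20% threshold
```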

  3. The particle analysis based on FT-TIMS technique for swipe sample under the frame of nuclear safeguard

    International Nuclear Information System (INIS)

    Yang Tianli; Liu Xuemei; Liu Zhao; Tang Lei; Long Kaiming

    2008-06-01

    Under the framework of nuclear safeguards, particle analysis of swipe samples is an advanced means of detecting undeclared uranium enrichment facilities and undeclared uranium enrichment activities. A technique for particle analysis of swipe samples based on fission track-thermal ionization mass spectrometry (FT-TIMS) has been established. The reliability of, and the experimental background for, selecting uranium-bearing particles from swipe samples by the FT method have been verified. In addition, the utilization coefficient of particles on the surface of the swipe sample has also been tested. This work provides technical support for applications in the area of nuclear verification. (authors)

  4. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  5. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    Directory of Open Access Journals (Sweden)

    Cuicui Zhang

    2014-12-01

    Full Text Available Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trait and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image-capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem, arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  6. New sample preparation method based on task-specific ionic liquids for extraction and determination of copper in urine and wastewater.

    Science.gov (United States)

    Trtić-Petrović, Tatjana; Dimitrijević, Aleksandra; Zdolšek, Nikola; Đorđević, Jelena; Tot, Aleksandar; Vraneš, Milan; Gadžurić, Slobodan

    2018-01-01

    In this study, four hydrophilic ionic liquids (ILs) containing a 1-alkyl-3-methylimidazolium cation and either salicylate or chloride anions were synthesized and studied as new task-specific ionic liquids (TSILs) suitable for aqueous biphasic system (ABS) formation and selective one-step extraction of copper(II). TSILs are designed so that the anion is responsible for forming the complex with metal(II) and preventing hydrolysis of metal cations at strongly alkaline pH, whereas the cation is responsible for selective extraction of metal(II)-salicylate complexes. It was found that 1-butyl-3-methylimidazolium salicylate could be used for selective extraction of Cu(II) in the presence of Zn(II), Cd(II), and Pb(II) in very alkaline solution without metal hydroxide formation. It was assumed that formation of metal(II)-salicylate complexes prevents the hydrolysis of the metal ions in alkaline solutions. The determined stability constants for Cu(II)-salicylate complexes, where the salicylate was derived from different ionic liquids, indicated that there was no significant influence of the cation of the ionic liquid on the stability of the complexes. The ABS based on 1-butyl-3-methylimidazolium salicylate was applied as the sample preparation step prior to voltammetric determination of Cu(II). The effects of the volumes of aqueous sample and IL and of the extraction time were investigated, and optimum extraction conditions were determined. The obtained detection limit was 8 ng dm⁻³. The optimized method was applied for the determination of Cu(II) in tap water, wastewater, and urine. The study indicated that the ABS based on 1-butyl-3-methylimidazolium salicylate ionic liquid could be successfully applied as the sample preparation method for the determination of Cu(II) in various environmental samples. Graphical abstract: Aqueous biphasic system based on task-specific ionic liquid as a sample pretreatment for selective determination of Cu(II) in biological and

  7. A novel highly sensitive and selective optical sensor based on a symmetric tetradentate Schiff-base embedded in PVC polymeric film for determination of Zn{sup 2+} ion in real samples

    Energy Technology Data Exchange (ETDEWEB)

    Abdel Aziz, Ayman A., E-mail: aymanaziz31@gmail.com [Chemistry Department, Faculty of Science, Ain Shams University, 11566 Cairo (Egypt); Chemistry Department, Faculty of Science, University of Tabuk, 71421, Tabuk (Saudi Arabia)

    2013-11-15

    A novel Zn{sup 2+} ion PVC membrane sensor based on a novel Schiff base, N,N′-bis(salicylaldehyde)-2,3-diaminonaphthalene (SDN), for the determination of Zn{sup 2+} ion is described. The chemosensor was synthesized under microwave irradiation via condensation of 2,3-diaminonaphthalene and salicylaldehyde. Photoluminescence characteristics of the novel Schiff base ligand were investigated in different solvents, including dichloromethane (DCM), tetrahydrofuran (THF) and ethanol (EtOH). SDN was found to have higher emission intensity and Stokes shift value (Δλ{sub ST}) in EtOH solution. The sensor exhibited a specific fluorescence turn-on response to Zn{sup 2+}. The response of the sensor is based on the fluorescence enhancement of SDN (LH{sub 2}) by Zn{sup 2+} ion as a result of the formation of the rigid L-Zn complex structure. The experimental results also show that the response behavior of SDN to Zn{sup 2+} is pH independent in the range of pH 6.0–8.0. At pH 7.0, the proposed sensor displays a calibration response for Zn{sup 2+} over a wide concentration range of 1.0×10{sup −9}–2.0×10{sup −3} mol L{sup −1} with a limit of detection (LOD) of 8.1×10{sup −10} mol L{sup −1} (0.0529659 μg L{sup −1}). The sensor shows excellent selectivity toward Zn{sup 2+} with respect to common coexisting cations. The proposed fluorescence optode was successfully applied to detect Zn{sup 2+} in human hair samples, different brands of powdered milk and some pharmaceuticals. -- Highlights: • A novel Zn(II) chemosensor has been developed. • Wide linear concentration range of 1.0×10{sup −9}–2.0×10{sup −3} mol L{sup −1}. • Application for determination of Zn(II) in real samples.

  8. Minimal gene selection for classification and diagnosis prediction based on gene expression profile

    Directory of Open Access Journals (Sweden)

    Alireza Mehridehnavi

    2013-01-01

    Conclusion: We have shown that using the two most significant genes, selected by their S/N ratios, together with suitable training samples can classify DLBCL patients with rather good results. With the aid of the mentioned methods, we could compensate for the limited number of patients, improve classification accuracy, and reduce computational complexity and hence running time.
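
    The S/N-ratio gene ranking is not spelled out in this record; a common reading is the Golub-style signal-to-noise statistic, sketched below under that assumption with synthetic data.

    ```python
    import numpy as np

    # Hedged sketch: Golub-style signal-to-noise (S/N) ranking, i.e. the
    # between-class mean difference over the summed class standard deviations.
    # The exact definition used by the authors may differ.

    def sn_ratio(X, y):
        """X: samples x genes expression matrix; y: binary class labels."""
        a, b = X[y == 0], X[y == 1]
        return np.abs(a.mean(0) - b.mean(0)) / (a.std(0) + b.std(0) + 1e-12)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 100))
    y = np.array([0] * 15 + [1] * 15)
    X[y == 1, 7] += 2.0                     # plant one informative gene
    top2 = np.argsort(sn_ratio(X, y))[::-1][:2]
    print("two most significant genes:", top2)
    ```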

  9. Using rule-based machine learning for candidate disease gene prioritization and sample classification of cancer gene expression data.

    Science.gov (United States)

    Glaab, Enrico; Bacardit, Jaume; Garibaldi, Jonathan M; Krasnogor, Natalio

    2012-01-01

    Microarray data analysis has been shown to provide an effective tool for studying cancer and genetic diseases. Although classical machine learning techniques have successfully been applied to find informative genes and to predict class labels for new samples, common restrictions of microarray analysis such as small sample sizes, a large attribute space and high noise levels still limit its scientific and clinical applications. Increasing the interpretability of prediction models while retaining a high accuracy would help to exploit the information content in microarray data more effectively. For this purpose, we evaluate our rule-based evolutionary machine learning systems, BioHEL and GAssist, on three public microarray cancer datasets, obtaining simple rule-based models for sample classification. A comparison with other benchmark microarray sample classifiers based on three diverse feature selection algorithms suggests that these evolutionary learning techniques can compete with state-of-the-art methods like support vector machines. The obtained models reach accuracies above 90% in two-level external cross-validation, with the added value of facilitating interpretation by using only combinations of simple if-then-else rules. As a further benefit, a literature mining analysis reveals that prioritizations of informative genes extracted from BioHEL's classification rule sets can outperform gene rankings obtained from a conventional ensemble feature selection in terms of the pointwise mutual information between relevant disease terms and the standardized names of top-ranked genes.

  10. Improved selection criteria for H II regions, based on IRAS sources

    Science.gov (United States)

    Yan, Qing-Zeng; Xu, Ye; Walsh, A. J.; Macquart, J. P.; MacLeod, G. C.; Zhang, Bo; Hancock, P. J.; Chen, Xi; Tang, Zheng-Hong

    2018-05-01

    We present new criteria for selecting H II regions from the Infrared Astronomical Satellite (IRAS) Point Source Catalogue (PSC), based on an H II region catalogue derived manually from the all-sky Wide-field Infrared Survey Explorer (WISE). The criteria are used to augment the number of H II region candidates in the Milky Way. The criteria are defined by the linear decision boundary of two samples: IRAS point sources associated with known H II regions, which serve as the H II region sample, and IRAS point sources at high Galactic latitudes, which serve as the non-H II region sample. A machine learning classifier, specifically a support vector machine, is used to determine the decision boundary. We investigate all combinations of four IRAS bands and suggest that the optimal criterion is log(F_{60}/F_{12}) ≥ −0.19 × log(F_{100}/F_{25}) + 1.52, with detections at 60 and 100 μm. This selects 3041 H II region candidates from the IRAS PSC. We find that IRAS H II region candidates show evidence of evolution on the two-colour diagram. Merging the WISE H II catalogue with IRAS H II region candidates, we estimate a lower limit of approximately 10 200 for the number of H II regions in the Milky Way.
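
    The stated colour cut is simple enough to apply directly; a small sketch follows, with illustrative flux values (F12, F25, F60, F100, in Jy) that are not taken from the catalogue.

    ```python
    import numpy as np

    # Hedged sketch: apply the quoted selection criterion to hypothetical
    # IRAS point-source fluxes. The flux values below are invented.

    def is_hii_candidate(F12, F25, F60, F100):
        """Requires detections at 60 and 100 um plus the colour cut
        log10(F60/F12) >= -0.19 * log10(F100/F25) + 1.52."""
        return np.log10(F60 / F12) >= -0.19 * np.log10(F100 / F25) + 1.52

    print(is_hii_candidate(F12=0.5, F25=2.0, F60=60.0, F100=120.0))  # True
    ```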

  11. The effect of morphometric atlas selection on multi-atlas-based automatic brachial plexus segmentation

    International Nuclear Information System (INIS)

    Van de Velde, Joris; Wouters, Johan; Vercauteren, Tom; De Gersem, Werner; Achten, Eric; De Neve, Wilfried; Van Hoof, Tom

    2015-01-01

    The present study aimed to measure the effect of a morphometric atlas selection strategy on the accuracy of multi-atlas-based BP autosegmentation using the commercially available software package ADMIRE® and to determine the optimal number of selected atlases to use. Autosegmentation accuracy was measured by comparing all generated automatic BP segmentations with anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were included in the study. One atlas was selected as a patient in ADMIRE®, and multi-atlas-based BP autosegmentation was first performed with a group of morphometrically preselected atlases. In this group, the atlases were selected on the basis of similarity in the shoulder protraction position with the patient. The number of selected atlases used started at two and increased up to eight. Subsequently, a group of randomly chosen, non-selected atlases was taken. In this second group, every possible combination of 2 to 8 random atlases was used for multi-atlas-based BP autosegmentation. For both groups, the average Dice similarity coefficient (DSC), Jaccard index (JI) and inclusion index (INI) were calculated, measuring the similarity of the generated automatic BP segmentations and the gold standard segmentation. Similarity indices of both groups were compared using an independent-sample t-test, and the optimal number of selected atlases was investigated using an equivalence trial. For each number of atlases, the average similarity indices of the morphometrically selected atlas group were significantly higher than those of the random group (p < 0.05). In this study, the highest similarity indices were achieved using multi-atlas autosegmentation with 6 selected atlases (average DSC = 0.598; average JI = 0.434; average INI = 0.733). Morphometric atlas selection on the basis of the protraction position of the patient significantly improves multi-atlas-based BP autosegmentation accuracy.
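
    The DSC and JI scores reported above are standard overlap measures; a minimal sketch of their computation on toy binary masks follows (the masks are invented for illustration).

    ```python
    import numpy as np

    # Hedged sketch: Dice similarity coefficient and Jaccard index on
    # binary segmentation masks. The toy masks below are illustrative.

    def dice(a, b):
        return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def jaccard(a, b):
        return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

    auto = np.zeros((10, 10), bool); auto[2:7, 2:7] = True   # automatic mask
    gold = np.zeros((10, 10), bool); gold[3:8, 3:8] = True   # gold standard
    print(f"DSC={dice(auto, gold):.3f}, JI={jaccard(auto, gold):.3f}")
    ```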

  12. Feasibility of self-sampled dried blood spot and saliva samples sent by mail in a population-based study.

    Science.gov (United States)

    Sakhi, Amrit Kaur; Bastani, Nasser Ezzatkhah; Ellingjord-Dale, Merete; Gundersen, Thomas Erik; Blomhoff, Rune; Ursin, Giske

    2015-04-11

    In large epidemiological studies it is often challenging to obtain biological samples. Self-sampling by study participants using dried blood spots (DBS) technique has been suggested to overcome this challenge. DBS is a type of biosampling where blood samples are obtained by a finger-prick lancet, blotted and dried on filter paper. However, the feasibility and efficacy of collecting DBS samples from study participants in large-scale epidemiological studies is not known. The aim of the present study was to test the feasibility and response rate of collecting self-sampled DBS and saliva samples in a population-based study of women above 50 years of age. We determined response proportions, number of phone calls to the study center with questions about sampling, and quality of the DBS. We recruited women through a study conducted within the Norwegian Breast Cancer Screening Program. Invitations, instructions and materials were sent to 4,597 women. The data collection took place over a 3 month period in the spring of 2009. Response proportions for the collection of DBS and saliva samples were 71.0% (3,263) and 70.9% (3,258), respectively. We received 312 phone calls (7% of the 4,597 women) with questions regarding sampling. Of the 3,263 individuals that returned DBS cards, 3,038 (93.1%) had been packaged and shipped according to instructions. A total of 3,032 DBS samples were sufficient for at least one biomarker analysis (i.e. 92.9% of DBS samples received by the laboratory). 2,418 (74.1%) of the DBS cards received by the laboratory were filled with blood according to the instructions (i.e. 10 completely filled spots with up to 7 punches per spot for up to 70 separate analyses). To assess the quality of the samples, we selected and measured two biomarkers (carotenoids and vitamin D). The biomarker levels were consistent with previous reports. Collecting self-sampled DBS and saliva samples through the postal services provides a low cost, effective and feasible

  13. Evaluation of pump pulsation in respirable size-selective sampling: Part III. Investigation of European standard methods.

    Science.gov (United States)

    Soo, Jhy-Charm; Lee, Eun Gyung; Lee, Larry A; Kashon, Michael L; Harper, Martin

    2014-10-01

    Lee et al. (Evaluation of pump pulsation in respirable size-selective sampling: part I. Pulsation measurements. Ann Occup Hyg 2014a;58:60-73) introduced an approach to measure pump pulsation (PP) using a real-world sampling train, while the European Standards (EN) (EN 1232-1997 and EN 12919-1999) suggest measuring PP using a resistor in place of the sampler. The goal of this study is to characterize PP according to both EN methods and to determine the relationship of PP between the published method (Lee et al., 2014a) and the EN methods. Additional test parameters were investigated to determine whether the test conditions suggested by the EN methods were appropriate for measuring pulsations. Experiments were conducted using a factorial combination of personal sampling pumps (six medium- and two high-volumetric-flow-rate pumps), back pressures (six medium- and seven high-flow-rate pumps), resistors (two types), tubing lengths between pump and resistor (60 and 90 cm), and flow rates (2 and 2.5 l min⁻¹ for the medium- and 4.4, 10, and 11.2 l min⁻¹ for the high-flow-rate pumps). The selection of sampling pumps and the ranges of back pressure were based on measurements obtained in the previous study (Lee et al., 2014a). Among the six medium-flow-rate pumps, only the Gilian5000 and the Apex IS conformed to the 10% criterion specified in EN 1232-1997. Although the AirChek XR5000 exceeded the 10% limit, its average PP (10.9%) was close to the criterion. One high-flow-rate pump, the Legacy (PP = 8.1%), conformed to the 10% criterion in EN 12919-1999, while the Elite12 did not (PP = 18.3%). Conducting supplemental tests with additional test parameters beyond those used in the two subject EN standards did not strengthen the characterization of PPs. For the selected test conditions, a linear regression model [PP_EN = 0.014 + 0.375 × PP_NIOSH (adjusted R² = 0.871)] was developed to determine the PP relationship between the published method (Lee et al., 2014a) and the EN methods.
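
    The reported regression is a one-line conversion between the two pulsation measures; a hedged sketch follows. Whether PP enters the fit as a fraction or a percentage is not stated in this record, so percentage units are assumed for illustration.

    ```python
    # Hedged sketch of the reported linear relationship between pulsation per
    # the EN resistor set-up (PP_EN) and per the real-world sampling train
    # (PP_NIOSH). Percentage units are an assumption, not stated in the record.

    def pp_en_from_niosh(pp_niosh_percent):
        """PP_EN = 0.014 + 0.375 * PP_NIOSH (adjusted R^2 = 0.871)."""
        return 0.014 + 0.375 * pp_niosh_percent

    print(f"PP_EN ~= {pp_en_from_niosh(10.9):.2f}%")  # e.g. a 10.9% NIOSH reading
    ```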

  14. Optimal processing pathway selection for microalgae-based biorefinery under uncertainty

    DEFF Research Database (Denmark)

    Rizwan, Muhammad; Zaman, Muhammad; Lee, Jay H.

    2015-01-01

    We propose a systematic framework for the selection of optimal processing pathways for a microalgae-based biorefinery under techno-economic uncertainty. The proposed framework promotes robust decision making by taking into account the uncertainties that arise due to inconsistencies among...... and shortage in the available technical information. A stochastic mixed integer nonlinear programming (sMINLP) problem is formulated for determining the optimal biorefinery configurations based on a superstructure model where parameter uncertainties are modeled and included as sampled scenarios. The solution...... the accounting of uncertainty are compared with respect to different objectives. (C) 2015 Elsevier Ltd. All rights reserved....

  15. Development of ion imprinted polymers for the selective extraction of lanthanides from environmental samples

    International Nuclear Information System (INIS)

    Moussa, Manel

    2016-01-01

    The analysis of lanthanide ions present at trace level in complex environmental matrices often requires a purification and preconcentration step. Solid-phase extraction (SPE) is the most widely used sample preparation technique. To improve the selectivity of this step, ion-imprinted polymers (IIPs) can be used as SPE solid supports. The aim of this work was the development of IIPs for the selective extraction of lanthanide ions from environmental samples. In the first part, IIPs were prepared according to the trapping approach using 5,7-dichloroquinoline-8-ol as a non-vinylated ligand. For the first time, the loss of the trapped ligand during the template ion removal and sedimentation steps was demonstrated by HPLC-UV. Moreover, this loss was not repeatable, which led to a lack of repeatability of the SPE profiles. It was then demonstrated that the trapping approach is not appropriate for IIP synthesis. In the second part, IIPs were synthesized by chemical immobilization of methacrylic acid as a vinylated monomer. The repeatability of the synthesis and of the SPE protocol was confirmed. A good selectivity of the IIPs for all the lanthanide ions was obtained. The IIPs were successfully used to selectively extract lanthanide ions from tap and river water. Finally, IIPs were synthesized by chemical immobilization of methacrylic acid and 4-vinylpyridine as functional monomers and either a light (Nd 3+ ) or a heavy (Er 3+ ) lanthanide ion as template. Both kinds of IIPs led to a similar selectivity for all lanthanide ions. Nevertheless, this selectivity can be modified by changing the nature and the pH of the washing solution used in the SPE protocol. (author)

  16. The diagnostic value of CT scan and selective venous sampling in Cushing's syndrome

    International Nuclear Information System (INIS)

    Negoro, Makoto; Kuwayama, Akio; Yamamoto, Naoto; Nakane, Toshichi; Yokoe, Toshio; Kageyama, Naoki; Ichihara, Kaoru; Ishiguchi, Tsuneo; Sakuma, Sadayuki

    1986-01-01

    We studied 24 patients with Cushing's syndrome in order to find the best way to confirm pituitary adenoma preoperatively. First, the sellar content was studied by means of a high-resolution CT scan in each patient. Second, by selective catheterization of the bilateral internal jugular veins and the inferior petrosal sinuses, venous samples (C) were obtained for ACTH assay. Simultaneously, peripheral blood sampling (P) was performed at the antecubital vein for the same purpose, and the C/P ratio was carefully calculated for each patient. A C/P ratio exceeding 2 was highly suggestive of the presence of pituitary adenoma. Even with an advanced high-resolution CT scan at a slice thickness of 2 mm, pituitary adenomas were detected in only 32% of the patients studied. The results of imaging diagnosis in Cushing's disease were thus discouraging. As for the chemical diagnosis, the results were as follows. At the early stage of this study, catheterization was terminated at the jugular veins in nine patients. Among these, in five patients the presence of pituitary adenoma was predicted correctly at the preoperative stage. Later, by means of inferior petrosal sinus sampling, pituitary microadenomas were detected in ten of twelve patients. Selective venous sampling for ACTH in the inferior petrosal sinus or jugular vein proved to be useful for the differential diagnosis of Cushing's syndrome when other diagnostic measures such as CT scans were inconclusive. (author)

  17. Selecting a risk-based tool to aid in decision making

    Energy Technology Data Exchange (ETDEWEB)

    Bendure, A.O.

    1995-03-01

    Selecting a risk-based tool to aid in decision making is as much of a challenge as properly using the tool once it has been selected. Failure to consider customer and stakeholder requirements and the technical bases and differences among risk-based decision-making tools will produce confounding and/or politically unacceptable results when the tool is used. Selecting a risk-based decision-making tool must therefore be undertaken with the same, if not greater, rigor than the use of the tool once it is selected. This paper presents a process for selecting a risk-based tool appropriate to a set of prioritization or resource allocation tasks, discusses the results of applying the process to four risk-based decision-making tools, and identifies the "musts" for successful selection and implementation of a risk-based tool to aid in decision making.

  18. Molecularly imprinted membrane extraction combined with high-performance liquid chromatography for selective analysis of cloxacillin from shrimp samples.

    Science.gov (United States)

    Du, Wei; Sun, Min; Guo, Pengqi; Chang, Chun; Fu, Qiang

    2018-09-01

    Nowadays, the abuse of antibiotics in aquaculture has generated considerable problems for food safety. Therefore, it is imperative to develop a simple and selective method for monitoring illegal use of antibiotics in aquatic products. In this study, a method combining molecularly imprinted membrane (MIM) extraction and liquid chromatography was developed for the selective analysis of cloxacillin in shrimp samples. The MIMs were synthesized by UV photopolymerization and characterized by scanning electron microscopy, Fourier transform infrared spectroscopy, thermogravimetric analysis and a swelling test. The results showed that the MIMs exhibited excellent permselectivity, high adsorption capacity and a fast adsorption rate for cloxacillin. Finally, the method was used to determine cloxacillin in shrimp samples, with good accuracy and acceptable relative standard deviation values for precision. The proposed method is a promising alternative for the selective analysis of cloxacillin in shrimp samples, owing to its easy operation and excellent selectivity. Copyright © 2018. Published by Elsevier Ltd.

  19. Optimization of Decision-Making for Spatial Sampling in the North China Plain, Based on Remote-Sensing a Priori Knowledge

    Science.gov (United States)

    Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.

    2012-07-01

    In this paper, MODIS remote sensing data, featuring low cost, high timeliness and moderate/low spatial resolution, for the North China Plain (NCP) study region were first used to carry out mixed-pixel spectral decomposition to extract a useful regionalized indicator parameter (RIP) (i.e., the fraction/percentage of winter wheat planting area in each pixel, serving as the regionalized indicator variable (RIV) of spatial sampling) from the initially selected indicators. Then, the RIV values were spatially analyzed, and the spatial structure characteristics (i.e., spatial correlation and variation) of the NCP were obtained, which were further processed to derive scale-fitting, valid a priori knowledge of spatial sampling. Subsequently, founded upon the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge, spatial sampling models and design schemes and their optimization and optimal selection were developed, providing a scientific basis for improving and optimizing the existing spatial sampling schemes of large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, the optimal local spatial prediction and the gridded extrapolation results were able to implement an adaptive reporting pattern of spatial sampling in accordance with report-covering units, in order to satisfy the actual needs of sampling surveys.

  20. Classification and quantitation of milk powder by near-infrared spectroscopy and mutual information-based variable selection and partial least squares

    Science.gov (United States)

    Chen, Hui; Tan, Chao; Lin, Zan; Wu, Tong

    2018-01-01

    Milk is among the most popular nutrient sources worldwide and is of great interest due to its beneficial medicinal properties. The feasibility of classifying milk powder samples with respect to their brands and of determining protein concentration is investigated by NIR spectroscopy along with chemometrics. Two datasets were prepared for the experiments. One contains 179 samples of four brands for classification, and the other contains 30 samples for quantitative analysis. Principal component analysis (PCA) was used for exploratory analysis. Based on an effective model-independent variable selection method, i.e., minimal-redundancy maximal-relevance (MRMR), only 18 variables were selected to construct a partial least-squares discriminant analysis (PLS-DA) model. On the test set, the PLS-DA model based on the selected variable set was compared with the full-spectrum PLS-DA model, both of which achieved 100% accuracy. In quantitative analysis, the partial least-squares regression (PLSR) model constructed from the selected subset of 260 variables significantly outperforms the full-spectrum model. It seems that the combination of NIR spectroscopy, MRMR and PLS-DA or PLSR is a powerful tool for classifying different brands of milk and determining the protein content.
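
    As a rough illustration of the MRMR-then-PLS-DA pipeline, the sketch below greedily selects variables by relevance minus redundancy and fits a PLS-DA model. Approximating redundancy by absolute Pearson correlation (rather than mutual information between variables) is an assumption of this sketch, and the data are synthetic.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.cross_decomposition import PLSRegression

    # Hedged sketch: MRMR-style greedy variable selection followed by PLS-DA
    # (PLS regression on class labels with a 0.5 decision threshold).

    def mrmr(X, y, k):
        relevance = mutual_info_classif(X, y, random_state=0)
        corr = np.abs(np.corrcoef(X, rowvar=False))   # redundancy proxy
        selected = [int(np.argmax(relevance))]
        while len(selected) < k:
            candidates = [j for j in range(X.shape[1]) if j not in selected]
            scores = [relevance[j] - corr[j, selected].mean() for j in candidates]
            selected.append(candidates[int(np.argmax(scores))])
        return selected

    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 50))
    y = rng.integers(0, 2, 120)
    X[:, 3] += y                                      # plant a relevant variable
    cols = mrmr(X, y, k=5)
    plsda = PLSRegression(n_components=2).fit(X[:, cols], y)
    pred = (plsda.predict(X[:, cols]).ravel() > 0.5).astype(int)
    print("selected:", cols, "train accuracy:", (pred == y).mean())
    ```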

  1. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    2000-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision of similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference between the results in this document and the earlier version is that the dose conversion factors (DCFs) for converting μCi/g or μCi/L to Sv/L (sieverts per liter) have changed. There are now two DCFs, one based on ICRP-68 and one based on ICRP-71 (Brevick 2000)

  2. Re-Emergence of Under-Selected Stimuli, after the Extinction of Over-Selected Stimuli in an Automated Match to Samples Procedure

    Science.gov (United States)

    Broomfield, Laura; McHugh, Louise; Reed, Phil

    2008-01-01

    Stimulus over-selectivity occurs when one of potentially many aspects of the environment comes to control behaviour. In two experiments, adults with no developmental disabilities were trained and tested in an automated match-to-samples (MTS) paradigm. In Experiment 1, participants completed two conditions, in one of which the over-selected…

  3. Methodology Series Module 5: Sampling Strategies.

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice, using the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
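
    A toy illustration of the distinction drawn here, using an invented patient population (the clinic labels and sizes are arbitrary):

    ```python
    import random

    # Hedged sketch: probability sampling (simple random and stratified)
    # versus non-probability (convenience) sampling on a toy population.

    population = [{"id": i, "clinic": "A" if i % 3 else "B"} for i in range(900)]

    simple_random = random.sample(population, 90)        # probability sampling

    by_stratum = {}
    for p in population:                                 # stratified random
        by_stratum.setdefault(p["clinic"], []).append(p)
    stratified = [p for stratum in by_stratum.values()
                  for p in random.sample(stratum, 45)]

    convenience = population[:90]   # non-probability: first patients available

    print(len(simple_random), len(stratified), len(convenience))
    ```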

  4. Methodology series module 5: Sampling strategies

    Directory of Open Access Journals (Sweden)

    Maninder Singh Setia

    2016-01-01

    Full Text Available Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice, using the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  5. Structure-based prediction of subtype selectivity of histamine H3 receptor selective antagonists in clinical trials.

    Science.gov (United States)

    Kim, Soo-Kyung; Fristrup, Peter; Abrol, Ravinder; Goddard, William A

    2011-12-27

    Histamine receptors (HRs) are excellent drug targets for the treatment of diseases such as schizophrenia, psychosis, depression, migraine, allergies, asthma, ulcers, and hypertension. Among them, human H(3) histamine receptor (hH(3)HR) antagonists have been proposed for specific therapeutic applications, including treatment of Alzheimer's disease, attention deficit hyperactivity disorder (ADHD), epilepsy, and obesity. However, many of these drug candidates cause undesired side effects through cross-reactivity with other histamine receptor subtypes. In order to develop improved selectivity and activity for such treatments, it would be useful to have the three-dimensional structures of all four HRs. We report here the predicted structures of the four HR subtypes (H(1), H(2), H(3), and H(4)) using the GEnSeMBLE (GPCR ensemble of structures in membrane bilayer environment) Monte Carlo protocol, sampling ∼35 million combinations of helix packings to predict the 10 most stable packings for each of the four subtypes. Then we used these 10 best protein structures with the DarwinDock Monte Carlo protocol to sample ∼50 000 × 10(20) poses to predict the optimum ligand-protein structures for various agonists and antagonists. We find that E206(5.46) contributes most in binding H(3)-selective agonists (5, 6, 7), in agreement with experimental mutation studies. We also find that the conserved E5.46/S5.43 in both hH(3)HR and hH(4)HR are involved in H(3)/H(4) subtype selectivity. In addition, we find that M378(6.55) in hH(3)HR provides additional hydrophobic interactions different from hH(4)HR (the corresponding amino acid being T323(6.55) in hH(4)HR) to provide additional subtype bias. From these studies, we developed a pharmacophore model based on our predictions for known hH(3)HR-selective antagonists in clinical study [ABT-239 1, GSK-189,254 2, PF-3654746 3, and BF2.649 (tiprolisant) 4] that suggests critical selectivity-directing elements are: the basic proton

  6. Distance based control system for machine vision-based selective spraying

    NARCIS (Netherlands)

    Steward, B.L.; Tian, L.F.; Tang, L.

    2002-01-01

    For effective operation of a selective sprayer with real-time local weed sensing, herbicides must be delivered accurately to weed targets in the field. With a machine vision-based selective spraying system, acquiring sequential images and switching nozzles on and off at the correct locations are

  7. The Effect of Using a Proposed Teaching Strategy Based on the Selective Thinking on Students' Acquisition Concepts in Mathematics

    Science.gov (United States)

    Qudah, Ahmad Hassan

    2016-01-01

    This study aimed to identify the effect of using a proposed teaching strategy based on selective thinking on the acquisition of mathematical concepts by classroom teacher students at Al al-Bayt University. The sample of the study consisted of (74) students, equally distributed into a control group and an experimental group. The selective thinking…

  8. Prototype selection based on FCM and its application in discrimination between nuclear explosion and earthquake

    International Nuclear Information System (INIS)

    Han Shaoqing; Li Xihai; Song Zibiao; Liu Daizhi

    2007-01-01

    Synergetic pattern recognition is a new approach to pattern recognition with many excellent features, such as noise resistance and deformity resistance. But when it is used for discrimination between nuclear explosions and earthquakes with existing methods of prototype selection, the results are not satisfactory. A new method of prototype selection based on FCM is proposed in this paper. First, each group of training samples is clustered into c groups using FCM; then c barycenters or centers are chosen as prototypes. Experimental results show that, compared with existing methods of prototype selection, this new method is effective and increases the recognition ratio greatly. (authors)
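
    A minimal sketch of the prototype-selection idea follows, assuming a textbook fuzzy c-means (FCM) with fuzzifier m = 2 and synthetic two-dimensional features; this is not the authors' implementation.

    ```python
    import numpy as np

    # Hedged sketch: run FCM on each class's training samples and keep the
    # c cluster centers as that class's prototypes.

    def fcm_centers(X, c, m=2.0, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c)); U /= U.sum(1, keepdims=True)
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(0)[:, None]          # weighted means
            d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
            p = 2 / (m - 1)                                  # membership update
            U = 1.0 / (d ** p * (1.0 / d ** p).sum(1, keepdims=True))
        return centers

    rng = np.random.default_rng(42)
    explosions = rng.normal(0, 1, (200, 2))    # invented feature vectors
    earthquakes = rng.normal(3, 1, (200, 2))
    prototypes = {label: fcm_centers(X, c=3)
                  for label, X in [("explosion", explosions),
                                   ("earthquake", earthquakes)]}
    print({k: v.round(2) for k, v in prototypes.items()})
    ```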

  9. Bacterial clonal diagnostics as a tool for evidence-based empiric antibiotic selection.

    Directory of Open Access Journals (Sweden)

    Veronika Tchesnokova

    Full Text Available Despite the known clonal distribution of antibiotic resistance in many bacteria, empiric (pre-culture) antibiotic selection still relies heavily on species-level cumulative antibiograms, resulting in overuse of broad-spectrum agents and excessive antibiotic/pathogen mismatch. Urinary tract infections (UTIs), which account for a large share of antibiotic use, are caused predominantly by Escherichia coli, a highly clonal pathogen. In an observational clinical cohort study of urgent care patients with suspected UTI, we assessed the potential for E. coli clonal-level antibiograms to improve empiric antibiotic selection. A novel PCR-based clonotyping assay was applied to fresh urine samples to rapidly detect E. coli and the urine strain's clonotype. Based on a database of clonotype-specific antibiograms, the acceptability of various antibiotics for empiric therapy was inferred using 20%, 10%, and 30% allowed resistance thresholds. The test's performance characteristics and possible effects on prescribing were assessed. The rapid test identified E. coli clonotypes directly in patients' urine within 25-35 minutes, with high specificity and sensitivity compared to culture. Antibiotic selection based on a clonotype-specific antibiogram could reduce the relative likelihood of antibiotic/pathogen mismatch by ≥ 60%. Compared to observed prescribing patterns, clonal diagnostics-guided antibiotic selection could safely double the use of trimethoprim/sulfamethoxazole and minimize fluoroquinolone use. In summary, a rapid clonotyping test showed promise for improving empiric antibiotic prescribing for E. coli UTI, including reversing preferential use of fluoroquinolones over trimethoprim/sulfamethoxazole. The clonal diagnostics approach merges epidemiologic surveillance, antimicrobial stewardship, and molecular diagnostics to bring evidence-based medicine directly to the point of care.
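
    The threshold rule lends itself to a compact sketch; the clonotypes and resistance percentages below are invented for illustration and are not the study's antibiogram data.

    ```python
    # Hedged sketch: an antibiotic is acceptable for empiric therapy when the
    # detected clonotype's historical resistance rate stays within the allowed
    # threshold. All values are illustrative stand-ins.

    CLONOTYPE_ANTIBIOGRAM = {              # % resistant, by clonotype
        "ST131-H30": {"ciprofloxacin": 85, "TMP/SMX": 35, "nitrofurantoin": 5},
        "ST95":      {"ciprofloxacin": 4,  "TMP/SMX": 12, "nitrofurantoin": 2},
    }

    def acceptable(clonotype, threshold=20):
        abx = CLONOTYPE_ANTIBIOGRAM[clonotype]
        return [drug for drug, pct_resistant in abx.items()
                if pct_resistant <= threshold]

    for ct in CLONOTYPE_ANTIBIOGRAM:
        print(ct, "->", acceptable(ct, threshold=20))
    ```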

  10. Selection of representative calibration sample sets for near-infrared reflectance spectroscopy to predict nitrogen concentration in grasses

    DEFF Research Database (Denmark)

    Shetty, Nisha; Rinnan, Åsmund; Gislum, René

    2012-01-01

    ) algorithm were used and compared. Both the Puchwein and CADEX methods provide a calibration set equally distributed in space, and both methods require minimal prior knowledge. The samples were also selected randomly using complete random, cultivar random (year fixed), year random (cultivar fixed......) and interaction (cultivar × year fixed) random procedures to see the influence of different factors on sample selection. Puchwein's method performed best with the lowest RMSEP, followed by CADEX, interaction random, year random, cultivar random and complete random. Out of 118 samples of the complete calibration set...... effectively enhance the cost-effectiveness of NIR spectral analysis by reducing the number of analyzed samples in the calibration set by more than 80%, which substantially reduces the effort of laboratory analyses with no significant loss in prediction accuracy....
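
    CADEX (often called Kennard-Stone) admits a short sketch: start from the two most distant samples, then repeatedly add the sample farthest from its nearest already-selected neighbour, giving a space-filling calibration set. The implementation below is a generic selector under that assumption, run on random stand-in data rather than actual NIR spectra.

    ```python
    import numpy as np

    # Hedged sketch of CADEX/Kennard-Stone calibration-set selection.

    def kennard_stone(X, k):
        d = np.linalg.norm(X[:, None] - X[None], axis=2)   # pairwise distances
        selected = list(np.unravel_index(d.argmax(), d.shape))
        while len(selected) < k:
            remaining = [i for i in range(len(X)) if i not in selected]
            nearest = d[np.ix_(remaining, selected)].min(axis=1)
            selected.append(remaining[int(nearest.argmax())])
        return selected

    X = np.random.default_rng(7).normal(size=(118, 5))  # stand-in spectra scores
    print(kennard_stone(X, k=20))
    ```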

  11. Selective isolation of gonyautoxins 1,4 from the dinoflagellate Alexandrium minutum based on molecularly imprinted solid-phase extraction.

    Science.gov (United States)

    Lian, Ziru; Wang, Jiangtao

    2017-09-15

    Gonyautoxins 1,4 (GTX1,4) from Alexandrium minutum samples were isolated selectively and recognized specifically by an innovative and effective extraction procedure based on molecular imprinting technology. Novel molecularly imprinted polymer microspheres (MIPMs) were prepared by a double-templated imprinting strategy using caffeine and pentoxifylline as dummy templates. The synthesized polymers displayed good affinity to GTX1,4 and were applied as sorbents. Further, an off-line molecularly imprinted solid-phase extraction (MISPE) protocol was optimized, and an effective approach based on the MISPE coupled with HPLC-FLD was developed for selective isolation of GTX1,4 from the cultured A. minutum samples. The separation method showed good extraction efficiency (73.2-81.5%) for GTX1,4, and efficient removal of interfering matrices was also achieved after the MISPE process for the microalgal samples. The outcome demonstrated the superiority and great potential of the MISPE procedure for direct separation of GTX1,4 from marine microalgal extracts. Copyright © 2017. Published by Elsevier Ltd.

  12. Methodology Series Module 5: Sampling Strategies

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘Sampling Method’. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice, using the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of these results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438

  13. A redshift survey of IRAS galaxies. I. Sample selection

    International Nuclear Information System (INIS)

    Strauss, M.A.; Davis, M.; Yahil, A.; Huchra, J.P.

    1990-01-01

    A complete all-sky sample of objects, flux-limited at 60 microns, has been extracted from the IRAS database. The sample consists of 5014 objects, of which 2649 are galaxies and 13 are not yet identified. In order to study large-scale structure with this sample, it must be free of systematic biases. Corrections are applied for a major systematic effect in the flux densities listed in the IRAS Point Source Catalog: sources resolved by the IRAS beam have systematically underestimated flux densities. In addition, accurate flux densities are obtained for sources flagged as variable, or of moderate flux quality at 60 microns. The IRAS detectors suffered radiation-induced responsivity enhancement (hysteresis) due to crossings of the satellite scans across the Galactic plane; this effect is measured and shown to be negligible. 53 refs

  14. Sampling dynamics: an alternative to payoff-monotone selection dynamics

    DEFF Research Database (Denmark)

    Berkemer, Rainer

    payoff-monotone nor payoff-positive, which has interesting consequences. This can be demonstrated by application to the travelers dilemma, a deliberately constructed social dilemma. The game has just one symmetric Nash equilibrium, which is Pareto inefficient. Especially when the travelers have many...... of the standard game theory result. Both analytical tools and agent-based simulation are used to investigate the dynamic stability of sampling equilibria in a generalized travelers dilemma. Two parameters are of interest: the number of strategy options (m) available to each traveler and an experience parameter (k), which indicates the number of samples an agent would evaluate before fixing his decision. The special case (k=1) can be treated analytically. The stationary points of the dynamics must be sampling equilibria, and one can calculate that for m>3 there will be an interior solution in addition

  15. A novel method of selective removal of human DNA improves PCR sensitivity for detection of Salmonella Typhi in blood samples.

    Science.gov (United States)

    Zhou, Liqing; Pollard, Andrew J

    2012-07-27

    Enteric fever is a major public health problem, causing an estimated 21 million new cases and 216,000 or more deaths every year. Current diagnosis of the disease is inadequate. Blood culture only identifies 45 to 70% of the cases and is time-consuming. Serological tests have very low sensitivity and specificity. Clinical samples obtained for diagnosis of enteric fever in the field are generally blood, so that even PCR-based methods, widely used for detection of other infectious diseases, are not a straightforward option in typhoid diagnosis. We developed a novel method to enrich target bacterial DNA by selective removal of human DNA from blood samples, enhancing the sensitivity of PCR tests. This method offers the possibility of improving PCR assays directly using clinical specimens for diagnosis of this globally important infectious disease. Blood samples were mixed with ox bile for selective lysis of human blood cells, and the released human DNA was then digested with the addition of bile-resistant micrococcal nuclease. The intact Salmonella Typhi bacteria were collected from the specimen by centrifugation and the DNA extracted with a QIAamp DNA mini kit. The presence of Salmonella Typhi bacteria in blood samples was detected by PCR with the fliC-d gene of Salmonella Typhi as the target. Micrococcal nuclease retained activity against human blood DNA in the presence of up to 9% ox bile. Background human DNA was dramatically removed from blood samples through the use of ox bile lysis and micrococcal nuclease for removal of mammalian DNA. Consequently, target Salmonella Typhi DNA was enriched in DNA preparations, and the PCR sensitivity for detection of Salmonella Typhi in spiked blood samples was enhanced 1,000-fold. Use of a combination of selective ox-bile blood cell lysis and removal of human DNA with micrococcal nuclease significantly improves PCR sensitivity and offers a better option for improved typhoid PCR assays directly using clinical specimens in diagnosis of

  16. Groundwater sampling in uranium reconnaissance

    International Nuclear Information System (INIS)

    Butz, T.R.

    1977-03-01

    The groundwater sampling program is based on the premise that ground water geochemistry reflects the chemical composition of, and geochemical processes active in the strata from which the sample is obtained. Pilot surveys have shown that wells are the best source of groundwater, although springs are sampled on occasion. The procedures followed in selecting a sampling site, the sampling itself, and the field measurements, as well as the site records made, are described

  17. Antibiotic content of selective culture media for isolation of Capnocytophaga species from oral polymicrobial samples.

    Science.gov (United States)

    Ehrmann, E; Jolivet-Gougeon, A; Bonnaure-Mallet, M; Fosse, T

    2013-10-01

    In the oral microbiome, because of the abundance of commensal competitive flora, selective media with antibiotics are necessary for the recovery of fastidious Capnocytophaga species. The performance of six culture media (blood agar, chocolate blood agar, VCAT medium, CAPE medium, bacitracin chocolate blood agar and VK medium) was compared with literature data concerning five other media (FAA, LB, TSBV, CapR and TBBP media). To understand variable growth on selective media, the MICs of each antimicrobial agent contained in these different media (colistin, kanamycin, trimethoprim, trimethoprim-sulfamethoxazole, vancomycin, aztreonam and bacitracin) were determined for all Capnocytophaga species. Overall, VCAT medium (Columbia, 10% cooked horse blood, polyvitaminic supplement, 3·75 mg l⁻¹ of colistin, 1·5 mg l⁻¹ of trimethoprim, 1 mg l⁻¹ of vancomycin and 0·5 mg l⁻¹ of amphotericin B, Oxoid, France) was the most efficient selective medium with regard to the detection of Capnocytophaga species from oral samples (P < 0·05). In pure culture, a simple blood agar allowed the growth of all Capnocytophaga species. Nonetheless, in oral samples, because of the abundance of commensal competitive flora, selective media with antibiotics are necessary for the recovery of Capnocytophaga species. The demonstrated superiority of VCAT medium makes its use essential for the optimal detection of this bacterial genus. This work showed that extreme caution should be exercised when reporting the isolation of Capnocytophaga species from oral polymicrobial samples, because the culture medium is a determining factor. © 2013 The Society for Applied Microbiology.

  18. A novel approach to assessing environmental disturbance based on habitat selection by zebra fish as a model organism.

    Science.gov (United States)

    Araújo, Cristiano V M; Griffith, Daniel M; Vera-Vera, Victoria; Jentzsch, Paul Vargas; Cervera, Laura; Nieto-Ariza, Beatriz; Salvatierra, David; Erazo, Santiago; Jaramillo, Rusbel; Ramos, Luis A; Moreira-Santos, Matilde; Ribeiro, Rui

    2018-04-01

    Aquatic ecotoxicity assays used to assess ecological risk assume that organisms living in a contaminated habitat are forcibly exposed to the contamination. This assumption neglects the ability of organisms to detect and avoid contamination by moving towards less disturbed habitats, as long as connectivity exists. In fluvial systems, many environmental parameters vary spatially and thus condition organisms' habitat selection. We assessed the preference of zebra fish (Danio rerio) when exposed to water samples from two western Ecuadorian rivers with apparently distinct disturbance levels: Pescadillo River (highly disturbed) and Oro River (moderately disturbed). Using a non-forced exposure system in which water samples from each river were arranged according to their spatial sequence in the field and connected to allow individuals to move freely among samples, we assayed habitat selection by D. rerio to assess environmental disturbance in the two rivers. Fish exposed to Pescadillo River samples preferred downstream samples near the confluence zone with the Oro River. Fish exposed to Oro River samples preferred upstream waters. When exposed to samples from both rivers simultaneously, fish exhibited the same pattern of habitat selection by preferring the Oro River samples. Given that the rivers are connected, preference for the Oro River enabled us to predict a depression in fish populations in the Pescadillo River. Although these findings indicate higher disturbance levels in the Pescadillo River, none of the physical-chemical variables measured was significantly correlated with the preference pattern towards the Oro River. Non-linear spatial patterns of habitat preference suggest that other environmental parameters like urban or agricultural contaminants play an important role in the model organism's habitat selection in these rivers. The non-forced exposure system represents a habitat selection-based approach that can serve as a valuable tool to unravel the factors

  19. Selecting Feature Subsets Based on SVM-RFE and the Overlapping Ratio with Applications in Bioinformatics.

    Science.gov (United States)

    Lin, Xiaohui; Li, Chao; Zhang, Yanhui; Su, Benzhe; Fan, Meng; Wei, Hai

    2017-12-26

    Feature selection is an important topic in bioinformatics. Defining informative features from complex high-dimensional biological data is critical in disease study, drug development, etc. Support vector machine-recursive feature elimination (SVM-RFE) is an efficient feature selection technique that has shown its power in many applications. It ranks the features according to the recursive feature deletion sequence based on SVM. In this study, we propose a method, SVM-RFE-OA, which combines the classification accuracy rate and the average overlapping ratio of the samples to determine the number of features to be selected from the feature rank of SVM-RFE. Meanwhile, to measure the feature weights more accurately, we propose a modified SVM-RFE-OA (M-SVM-RFE-OA) algorithm that temporarily screens out the samples lying in a heavy overlapping area in each iteration. The experiments on the eight public biological datasets show that the discriminative ability of the feature subset could be measured more accurately by combining the classification accuracy rate with the average overlapping degree of the samples compared with using the classification accuracy rate alone, and shielding the samples in the overlapping area made the calculation of the feature weights more stable and accurate. The methods proposed in this study can also be used with other RFE techniques to define potential biomarkers from big biological data.
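
    A hedged sketch of the overall idea follows: rank features by SVM-RFE, then choose the number of features by accuracy minus an overlap penalty. The specific overlap measure used below (mean between-class overlap of per-feature value ranges) is a stand-in for the paper's average overlapping ratio, and the data are synthetic.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC, SVC

    # Hedged sketch: SVM-RFE ranking plus an accuracy-minus-overlap criterion
    # for picking the number of features.

    X, y = make_classification(n_samples=200, n_features=40, n_informative=5,
                               random_state=0)
    rank = RFE(LinearSVC(dual=False), n_features_to_select=1).fit(X, y).ranking_
    order = np.argsort(rank)                     # best-ranked features first

    def overlap(X, y):
        a, b = X[y == 0], X[y == 1]
        lo = np.maximum(a.min(0), b.min(0))      # overlap of class value ranges
        hi = np.minimum(a.max(0), b.max(0))
        span = X.max(0) - X.min(0)
        return float(np.mean(np.clip(hi - lo, 0, None) / span))

    scores = []
    for k in range(2, 21):
        cols = order[:k]
        acc = cross_val_score(SVC(), X[:, cols], y, cv=5).mean()
        scores.append((acc - overlap(X[:, cols], y), k))
    print("chosen number of features:", max(scores)[1])
    ```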

  20. Selecting Feature Subsets Based on SVM-RFE and the Overlapping Ratio with Applications in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Xiaohui Lin

    2017-12-01

    Full Text Available Feature selection is an important topic in bioinformatics. Defining informative features from complex high-dimensional biological data is critical in disease study, drug development, etc. Support vector machine-recursive feature elimination (SVM-RFE) is an efficient feature selection technique that has shown its power in many applications. It ranks the features according to the recursive feature deletion sequence based on SVM. In this study, we propose a method, SVM-RFE-OA, which combines the classification accuracy rate and the average overlapping ratio of the samples to determine the number of features to be selected from the feature rank of SVM-RFE. Meanwhile, to measure the feature weights more accurately, we propose a modified SVM-RFE-OA (M-SVM-RFE-OA) algorithm that temporarily screens out the samples lying in a heavy overlapping area in each iteration. The experiments on the eight public biological datasets show that the discriminative ability of the feature subset could be measured more accurately by combining the classification accuracy rate with the average overlapping degree of the samples compared with using the classification accuracy rate alone, and shielding the samples in the overlapping area made the calculation of the feature weights more stable and accurate. The methods proposed in this study can also be used with other RFE techniques to define potential biomarkers from big biological data.

  1. Sales Tax Compliance and Audit Selection

    OpenAIRE

    Murray, Matthew N.

    1995-01-01

    Uses sample selection estimation techniques to identify systematic audit selection rules and determinants of sales tax underreporting. Though based on data from only one state (Tennessee), outcomes are useful in developing and evaluating audit selection results.

  2. Calibration model maintenance in melamine resin production: Integrating drift detection, smart sample selection and model adaptation.

    Science.gov (United States)

    Nikzad-Langerodi, Ramin; Lughofer, Edwin; Cernuda, Carlos; Reischer, Thomas; Kantner, Wolfgang; Pawliczek, Marcin; Brandstetter, Markus

    2018-07-12

    The physico-chemical properties of Melamine Formaldehyde (MF) based thermosets are largely influenced by the degree of polymerization (DP) in the underlying resin. On-line supervision of the turbidity point by means of vibrational spectroscopy has recently emerged as a promising technique to monitor the DP of MF resins. However, spectroscopic determination of the DP relies on chemometric models, which are usually sensitive to drifts caused by instrumental and/or sample-associated changes occurring over time. In order to detect the time point when drifts start causing prediction bias, we here explore a universal drift detector based on a faded version of the Page-Hinkley (PH) statistic, which we test in three data streams from an industrial MF resin production process. We employ committee disagreement (CD), computed as the variance of model predictions from an ensemble of partial least squares (PLS) models, as a measure for sample-wise prediction uncertainty and use the PH statistic to detect changes in this quantity. We further explore supervised and unsupervised strategies for (semi-)automatic model adaptation upon detection of a drift. For the former, manual reference measurements are requested whenever statistical thresholds on Hotelling's T² and/or Q-Residuals are violated. Models are subsequently re-calibrated using weighted partial least squares in order to increase the influence of newer samples, which increases the flexibility when adapting to new (drifted) states. Unsupervised model adaptation is carried out exploiting the dual antecedent-consequent structure of a recently developed fuzzy systems variant of PLS termed FLEXFIS-PLS. In particular, antecedent parts are updated while maintaining the internal structure of the local linear predictors (i.e. the consequents). We found improved drift detection capability of the CD compared to Hotelling's T² and Q-Residuals when used in combination with the proposed PH test. Furthermore, we found that active
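
    The drift-detection core admits a brief sketch; the classical (non-faded) Page-Hinkley form is used below as a simplification of the paper's faded variant, run on an invented stream of committee-disagreement values.

    ```python
    # Hedged sketch: classical Page-Hinkley test for an upward change in a
    # stream of committee-disagreement values. delta and threshold are
    # illustrative tuning parameters.

    def page_hinkley(stream, delta=0.005, threshold=0.05):
        mean, cum, cum_min = 0.0, 0.0, 0.0
        for t, x in enumerate(stream, 1):
            mean += (x - mean) / t           # running mean of the statistic
            cum += x - mean - delta          # cumulative positive deviation
            cum_min = min(cum_min, cum)
            if cum - cum_min > threshold:    # upward change detected
                return t
        return None

    calm = [0.01] * 200
    drifted = [0.01 + 0.002 * i for i in range(100)]  # uncertainty rising
    print("drift flagged at sample:", page_hinkley(calm + drifted))
    ```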

  3. Personnel Selection Based on Fuzzy Methods

    Directory of Open Access Journals (Sweden)

    Lourdes Cañós

    2011-03-01

    Full Text Available The decisions of managers regarding the selection of staff strongly determine the success of the company. A correct choice of employees is a source of competitive advantage. We propose a fuzzy method for staff selection, based on competence management and the comparison with the valuation that the company considers the best in each competence (ideal candidate). Our method is based on the Hamming distance and a Matching Level Index. The algorithms, implemented in the software StaffDesigner, allow us to rank the candidates, even when the competences of the ideal candidate have been evaluated only in part. Our approach is applied in a numerical example.
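
    A minimal sketch of the distance-based ranking idea, assuming competence valuations in [0, 1]; the profiles below are invented for illustration and do not reproduce the StaffDesigner algorithms or the Matching Level Index.

```python
# Sketch: ranking candidates by Hamming distance to an ideal competence
# profile; competences valued in [0, 1], unevaluated ideal competences (None)
# are skipped, mirroring the partial-evaluation case. Profiles are invented.

def hamming_distance(candidate, ideal):
    """Mean absolute difference over the competences defined for the ideal."""
    keys = [k for k, v in ideal.items() if v is not None]
    return sum(abs(candidate.get(k, 0.0) - ideal[k]) for k in keys) / len(keys)

ideal = {"leadership": 0.9, "teamwork": 0.8, "analytics": None}  # partial
candidates = {
    "A": {"leadership": 0.7, "teamwork": 0.9, "analytics": 0.5},
    "B": {"leadership": 0.4, "teamwork": 0.6, "analytics": 0.9},
}
for name, profile in sorted(candidates.items(),
                            key=lambda kv: hamming_distance(kv[1], ideal)):
    print(name, round(hamming_distance(profile, ideal), 3))  # best first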

  4. NetProt: Complex-based Feature Selection.

    Science.gov (United States)

    Goh, Wilson Wen Bin; Wong, Limsoon

    2017-08-04

    Protein complex-based feature selection (PCBFS) provides unparalleled reproducibility with high phenotypic relevance on proteomics data. Currently, there are five PCBFS paradigms, but not all representative methods have been implemented or made readily available. To allow general users to take advantage of these methods, we developed the R-package NetProt, which provides implementations of representative feature-selection methods. NetProt also provides methods for generating simulated differential data and generating pseudocomplexes for complex-based performance benchmarking. The NetProt open source R package is available for download from https://github.com/gohwils/NetProt/releases/ , and online documentation is available at http://rpubs.com/gohwils/204259 .

  5. Do Culture-based Segments Predict Selection of Market Strategy?

    Directory of Open Access Journals (Sweden)

    Veronika Jadczaková

    2015-01-01

    Full Text Available Academics and practitioners have already acknowledged the importance of unobservable segmentation bases (such as psychographics), yet they still focus on how well these bases are capable of describing relevant segments (the identifiability criterion) rather than on how precisely these segments can predict (the predictability criterion). Therefore, this paper intends to add to the debate on this topic by exploring whether culture-based segments account for the selection of market strategy. To do so, a set of market strategy variables over a sample of 251 manufacturing firms was first regressed on a set of 19 cultural variables using canonical correlation analysis. Having found a significant relationship in the first canonical function, it was further examined by means of correspondence analysis which cultural segments – if any – are linked to which market strategies. However, as the correspondence analysis failed to find a significant relationship, it may be concluded that business culture might relate to the adoption of market strategy, but not to the cultural groupings presented in the paper.

  6. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    Science.gov (United States)

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ES^, ES = ES^L and ES = ES^U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study ES^ estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
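
    The post hoc sample-size interval described above can be reproduced in outline as follows, assuming statsmodels' one-sample t-test power solver; the effect-size numbers are placeholders, not the study's estimates.

```python
# Sketch: point and interval estimates for a post hoc sample size from an
# effect-size estimate and its 95% CI (one-sample t-test, 80% power,
# alpha = 0.05). The ES values are placeholders, not the study's numbers.
import math
from statsmodels.stats.power import TTestPower

def n_for_effect_size(es, power=0.80, alpha=0.05):
    n = TTestPower().solve_power(effect_size=es, power=power, alpha=alpha)
    return math.ceil(n)

es_hat, es_lo, es_hi = 0.63, 0.18, 1.08   # ES^ and (ES^L, ES^U), illustrative
print("n at ES^:", n_for_effect_size(es_hat))
print("95% CI on n:", (n_for_effect_size(es_hi),    # larger ES -> smaller n
                       n_for_effect_size(es_lo)))   # smaller ES -> larger n
```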

  7. Rapid screening of selective serotonin re-uptake inhibitors in urine samples using solid-phase microextraction gas chromatography-mass spectrometry.

    Science.gov (United States)

    Salgado-Petinal, Carmen; Lamas, J Pablo; Garcia-Jares, Carmen; Llompart, Maria; Cela, Rafael

    2005-07-01

    In this paper a solid-phase microextraction-gas chromatography-mass spectrometry (SPME-GC-MS) method is proposed for the rapid analysis of some frequently prescribed selective serotonin re-uptake inhibitors (SSRIs) - venlafaxine, fluvoxamine, mirtazapine, fluoxetine, citalopram, and sertraline - in urine samples. The SPME-based method enables simultaneous determination of the target SSRIs after simple in-situ derivatization of some of the target compounds. Calibration curves in water and in urine were validated and statistically compared. This revealed the absence of matrix effects and, in consequence, the possibility of quantifying SSRIs in urine samples by external water calibration. Intra-day and inter-day precision was satisfactory for all the target compounds (relative standard deviation, RSD), low detection limits were achieved, and target compounds present in real samples were detected and tentatively identified.

  8. Selective extraction of dimethoate from cucumber samples by use of molecularly imprinted microspheres

    Directory of Open Access Journals (Sweden)

    Jiao-Jiao Du

    2015-06-01

    Full Text Available Molecularly imprinted polymers for dimethoate recognition were synthesized by the precipitation polymerization technique using methyl methacrylate (MMA) as the functional monomer and ethylene glycol dimethacrylate (EGDMA) as the cross-linker. The morphology, adsorption and recognition properties were investigated by scanning electron microscopy (SEM), a static adsorption test, and a competitive adsorption test. To obtain the best selectivity and binding performance, the synthesis and adsorption conditions of the MIPs were optimized through single-factor experiments. Under the optimized conditions, the resultant polymers exhibited uniform size, satisfactory binding capacity and significant selectivity. Furthermore, the imprinted polymers were successfully applied as specific solid-phase extractants combined with high performance liquid chromatography (HPLC) for the determination of dimethoate residues in cucumber samples. The average recoveries of three spiked samples ranged from 78.5% to 87.9% with relative standard deviations (RSDs) less than 4.4%, and the limit of detection (LOD) obtained for dimethoate was as low as 2.3 μg/mL. Keywords: Molecularly imprinted polymer, Precipitation polymerization, Dimethoate, Cucumber, HPLC

  9. Using rule-based machine learning for candidate disease gene prioritization and sample classification of cancer gene expression data.

    Directory of Open Access Journals (Sweden)

    Enrico Glaab

    Full Text Available Microarray data analysis has been shown to provide an effective tool for studying cancer and genetic diseases. Although classical machine learning techniques have successfully been applied to find informative genes and to predict class labels for new samples, common restrictions of microarray analysis such as small sample sizes, a large attribute space and high noise levels still limit its scientific and clinical applications. Increasing the interpretability of prediction models while retaining a high accuracy would help to exploit the information content in microarray data more effectively. For this purpose, we evaluate our rule-based evolutionary machine learning systems, BioHEL and GAssist, on three public microarray cancer datasets, obtaining simple rule-based models for sample classification. A comparison with other benchmark microarray sample classifiers based on three diverse feature selection algorithms suggests that these evolutionary learning techniques can compete with state-of-the-art methods like support vector machines. The obtained models reach accuracies above 90% in two-level external cross-validation, with the added value of facilitating interpretation by using only combinations of simple if-then-else rules. As a further benefit, a literature mining analysis reveals that prioritizations of informative genes extracted from BioHEL's classification rule sets can outperform gene rankings obtained from a conventional ensemble feature selection in terms of the pointwise mutual information between relevant disease terms and the standardized names of top-ranked genes.

  10. A Reagentless Amperometric Formaldehyde-Selective Chemosensor Based on Platinized Gold Electrodes.

    Science.gov (United States)

    Demkiv, Olha; Smutok, Oleh; Gonchar, Mykhailo; Nisnevitch, Marina

    2017-05-06

    Fabrication and characterization of a new amperometric chemosensor for accurate formaldehyde analysis based on platinized gold electrodes is described. The platinization was performed electrochemically on the surface of 4 mm gold planar electrodes by both electrolysis and cyclic voltammetry. The produced electrodes were characterized using scanning electron microscopy and X-ray spectral analysis. Using a low working potential (0.0 V vs. Ag/AgCl) enabled an essential increase in the chemosensor's selectivity for the target analyte. The sensitivity of the best chemosensor prototype to formaldehyde is uniquely high (28,180 A·M⁻¹·m⁻²) with a detection limit of 0.05 mM. The chemosensor remained stable over a one-year storage period. The formaldehyde-selective chemosensor was tested on samples of commercial preparations. A high correlation was demonstrated between the results obtained by the proposed chemosensor and by chemical and enzymatic methods (R = 0.998). The developed formaldehyde-selective amperometric chemosensor is very promising for use in industry and research, as well as for environmental control.

  11. X-Ray Temperatures, Luminosities, and Masses from XMM-Newton Follow-up of the First Shear-selected Galaxy Cluster Sample

    Energy Technology Data Exchange (ETDEWEB)

    Deshpande, Amruta J.; Hughes, John P. [Department of Physics and Astronomy, Rutgers the State University of New Jersey, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States); Wittman, David, E-mail: amrejd@physics.rutgers.edu, E-mail: jph@physics.rutgers.edu, E-mail: dwittman@physics.ucdavis.edu [Department of Physics, University of California, Davis, One Shields Avenue, Davis, CA 95616 (United States)

    2017-04-20

    We continue the study of the first sample of shear-selected clusters from the initial 8.6 square degrees of the Deep Lens Survey (DLS); a sample with well-defined selection criteria corresponding to the highest ranked shear peaks in the survey area. We aim to characterize the weak lensing selection by examining the sample's X-ray properties. There are multiple X-ray clusters associated with nearly all the shear peaks: 14 X-ray clusters corresponding to seven DLS shear peaks. An additional three X-ray clusters cannot be definitively associated with shear peaks, mainly due to large positional offsets between the X-ray centroid and the shear peak. Here we report on the XMM-Newton properties of the 17 X-ray clusters. The X-ray clusters display a wide range of luminosities and temperatures; the L_X − T_X relation we determine for the shear-associated X-ray clusters is consistent with X-ray cluster samples selected without regard to dynamical state, while it is inconsistent with self-similarity. For a subset of the sample, we measure X-ray masses using temperature as a proxy, and compare to weak lensing masses determined by the DLS team. The resulting mass comparison is consistent with equality. The X-ray and weak lensing masses show considerable intrinsic scatter (∼48%), which is consistent with X-ray selected samples when their X-ray and weak lensing masses are independently determined.

  12. A regression-based differential expression detection algorithm for microarray studies with ultra-low sample size.

    Directory of Open Access Journals (Sweden)

    Daniel Vasiliu

    Full Text Available Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED). Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.

  13. Phytochemical analysis and biological evaluation of selected African propolis samples from Cameroon and Congo

    NARCIS (Netherlands)

    Papachroni, D.; Graikou, K.; Kosalec, I.; Damianakos, H.; Ingram, V.J.; Chinou, I.

    2015-01-01

    The objective of this study was the chemical analysis of four selected samples of African propolis (Congo and Cameroon) and their biological evaluation. Twenty-one secondary metabolites belonging to four different chemical groups were isolated from the 70% ethanolic extracts of propolis and their

  14. Climate Change and Agricultural Productivity in Sub-Saharan Africa: A Spatial Sample Selection Model

    NARCIS (Netherlands)

    Ward, P.S.; Florax, R.J.G.M.; Flores-Lagunes, A.

    2014-01-01

    Using spatially explicit data, we estimate a cereal yield response function using a recently developed estimator for spatial error models when endogenous sample selection is of concern. Our results suggest that yields across Sub-Saharan Africa will decline with projected climatic changes, and that

  15. Feasibility of self-sampled dried blood spot and saliva samples sent by mail in a population-based study

    International Nuclear Information System (INIS)

    Sakhi, Amrit Kaur; Bastani, Nasser Ezzatkhah; Ellingjord-Dale, Merete; Gundersen, Thomas Erik; Blomhoff, Rune; Ursin, Giske

    2015-01-01

    In large epidemiological studies it is often challenging to obtain biological samples. Self-sampling by study participants using dried blood spots (DBS) technique has been suggested to overcome this challenge. DBS is a type of biosampling where blood samples are obtained by a finger-prick lancet, blotted and dried on filter paper. However, the feasibility and efficacy of collecting DBS samples from study participants in large-scale epidemiological studies is not known. The aim of the present study was to test the feasibility and response rate of collecting self-sampled DBS and saliva samples in a population-based study of women above 50 years of age. We determined response proportions, number of phone calls to the study center with questions about sampling, and quality of the DBS. We recruited women through a study conducted within the Norwegian Breast Cancer Screening Program. Invitations, instructions and materials were sent to 4,597 women. The data collection took place over a 3 month period in the spring of 2009. Response proportions for the collection of DBS and saliva samples were 71.0% (3,263) and 70.9% (3,258), respectively. We received 312 phone calls (7% of the 4,597 women) with questions regarding sampling. Of the 3,263 individuals that returned DBS cards, 3,038 (93.1%) had been packaged and shipped according to instructions. A total of 3,032 DBS samples were sufficient for at least one biomarker analysis (i.e. 92.9% of DBS samples received by the laboratory). 2,418 (74.1%) of the DBS cards received by the laboratory were filled with blood according to the instructions (i.e. 10 completely filled spots with up to 7 punches per spot for up to 70 separate analyses). To assess the quality of the samples, we selected and measured two biomarkers (carotenoids and vitamin D). The biomarker levels were consistent with previous reports. Collecting self-sampled DBS and saliva samples through the postal services provides a low cost, effective and feasible

  16. Memory and selective attention in multiple sclerosis: cross-sectional computer-based assessment in a large outpatient sample.

    Science.gov (United States)

    Adler, Georg; Lembach, Yvonne

    2015-08-01

    Cognitive impairments may have a severe impact on everyday functioning and quality of life of patients with multiple sclerosis (MS). However, there are some methodological problems in their assessment, and only a few studies allow a representative estimate of the prevalence and severity of cognitive impairments in MS patients. We applied a computer-based method, the memory and attention test (MAT), in 531 outpatients with MS, who were assessed at nine neurological practices or specialized outpatient clinics. The findings were compared with those obtained in an age-, sex- and education-matched control group of 84 healthy subjects. Episodic short-term memory was substantially decreased in the MS patients. About 20% of them scored more than two standard deviations below the mean of the control group. The episodic short-term memory score was negatively correlated with the EDSS score. Minor but still significant impairments in the MS patients were found for verbal short-term memory, episodic working memory and selective attention. The computer-based MAT was found to be useful for routine assessment of cognition in MS outpatients.

  17. Hybrid Feature Selection Approach Based on GRASP for Cancer Microarray Data

    Directory of Open Access Journals (Sweden)

    Arpita Nagpal

    2017-01-01

    Full Text Available Microarray data usually contain a large number of genes, but a small number of samples. Feature subset selection for microarray data aims at reducing the number of genes so that useful information can be extracted from the samples. Reducing the dimension of data sets further helps in improving the computational efficiency of the learning model. In this paper, we propose a modified algorithm based on tabu search as the local search procedure for a Greedy Randomized Adaptive Search Procedure (GRASP) on high-dimensional microarray data sets. The proposed tabu-based Greedy Randomized Adaptive Search Procedure algorithm is named TGRASP. In TGRASP, a new parameter named tabu tenure has been introduced, and the existing parameters, NumIter and size, have been modified. We observed that different parameter settings affect the quality of the optimum. The second proposed algorithm, known as FFGRASP (Firefly Greedy Randomized Adaptive Search Procedure), uses a firefly optimization algorithm in the local search optimization phase of GRASP. The firefly algorithm is one of the powerful algorithms for optimization of multimodal applications. Experimental results show that the proposed TGRASP and FFGRASP algorithms are much better than an existing algorithm with respect to three performance parameters: accuracy, run time, and the number of selected features. We have also compared both approaches with a unified metric (Extended Adjusted Ratio of Ratios), which shows that TGRASP outperforms the existing approach on six out of nine cancer microarray datasets and FFGRASP performs better on seven out of nine datasets.

  18. A two-stage cluster sampling method using gridded population data, a GIS, and Google Earth™ imagery in a population-based mortality survey in Iraq

    Directory of Open Access Journals (Sweden)

    Galway LP

    2012-04-01

    Full Text Available Abstract Background: Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing the challenge of estimating mortality using retrospective population-based surveys. Results: We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method is implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Conclusion: Sampling is a challenge in retrospective population-based mortality studies and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, is adaptable and should be considered and tested in other conflict settings.
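
    A sketch of the first sampling stage, assuming probability-proportional-to-size selection over gridded population cells; cell identifiers and population counts are invented, and the second (household) stage is not shown.

```python
# Sketch: stage-one cluster selection with probability proportional to the
# gridded population estimate of each cell, as a GIS workflow would produce.
# Cell IDs and counts are invented; stage two (household selection on
# imagery-based sampling grids) is not shown.
import random

def pps_sample(cells, k, seed=42):
    """Draw k clusters with probability proportional to population size."""
    rng = random.Random(seed)
    ids, pops = zip(*sorted(cells.items()))
    return rng.choices(ids, weights=pops, k=k)   # PPS, with replacement

grid_cells = {"cell_001": 1200, "cell_002": 350, "cell_003": 4800,
              "cell_004": 90,   "cell_005": 2600}
print(pps_sample(grid_cells, k=3))
```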

  19. Assessment of fracture risk: value of random population-based samples--the Geelong Osteoporosis Study.

    Science.gov (United States)

    Henry, M J; Pasco, J A; Seeman, E; Nicholson, G C; Sanders, K M; Kotowicz, M A

    2001-01-01

    Fracture risk is determined by bone mineral density (BMD). The T-score, a measure of fracture risk, is the position of an individual's BMD in relation to a reference range. The aim of this study was to determine the magnitude of change in the T-score when different sampling techniques were used to produce the reference range. Reference ranges were derived from three samples, drawn from the same region: (1) an age-stratified population-based random sample, (2) unselected volunteers, and (3) a selected healthy subset of the population-based sample with no diseases or drugs known to affect bone. T-scores were calculated using the three reference ranges for a cohort of women who had sustained a fracture and as a group had a low mean BMD (ages 35-72 yr; n = 484). For most comparisons, the T-scores for the fracture cohort were more negative using the population reference range. The difference in T-scores reached 1.0 SD. The proportion of the fracture cohort classified as having osteoporosis at the spine was 26, 14, and 23% when the population, volunteer, and healthy reference ranges were applied, respectively. The use of inappropriate reference ranges results in substantial changes to T-scores and may lead to inappropriate management.
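
    The dependence of T-scores on the chosen reference range can be illustrated as follows; the reference means and standard deviations are invented solely to show the calculation, not the Geelong values.

```python
# Sketch: the same BMD yields different T-scores under different reference
# ranges. Reference means and SDs below are invented to show the arithmetic.
def t_score(bmd, ref_mean, ref_sd):
    """Individual BMD expressed in SD units relative to a reference range."""
    return (bmd - ref_mean) / ref_sd

bmd = 0.82  # g/cm^2, hypothetical patient
references = {                    # (young-adult mean, SD) per sampling scheme
    "population-based": (1.00, 0.11),
    "volunteers":       (0.96, 0.12),
    "healthy subset":   (0.99, 0.11),
}
for name, (mean, sd) in references.items():
    print(f"{name:17s} T-score = {t_score(bmd, mean, sd):+.2f}")
```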

  20. Grouped fuzzy SVM with EM-based partition of sample space for clustered microcalcification detection.

    Science.gov (United States)

    Wang, Huiya; Feng, Jun; Wang, Hongyu

    2017-07-20

    Detection of clustered microcalcification (MC) from mammograms plays an essential role in computer-aided diagnosis for early-stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm. Then a series of fuzzy SVMs are integrated for classification with each group of samples from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions were selected from 239 mammograms, and the measurements of Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR*(1 − FPR) are 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into serial simple two-class classifications. Experimental results from synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
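
    A rough sketch of the grouping idea, assuming a Gaussian mixture fitted by EM to partition the sample space and a standard SVC standing in for the paper's fuzzy SVM; the data are synthetic.

```python
# Sketch: EM-based partition of the sample space (Gaussian mixture) with one
# SVM per group; a plain SVC stands in for the fuzzy SVM, data are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
groups = gmm.predict(X)

experts = {}
for g in range(3):
    mask = groups == g
    if len(np.unique(y[mask])) == 2:          # a group needs both classes
        experts[g] = SVC().fit(X[mask], y[mask])

def predict(x):
    """Route a sample to the SVM of its most likely EM group."""
    g = int(gmm.predict(x.reshape(1, -1))[0])
    clf = experts.get(g) or next(iter(experts.values()))
    return int(clf.predict(x.reshape(1, -1))[0])

print("predicted:", predict(X[0]), "true:", y[0])
```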

  1. Detection of Salmonella spp. in veterinary samples by combining selective enrichment and real-time PCR.

    Science.gov (United States)

    Goodman, Laura B; McDonough, Patrick L; Anderson, Renee R; Franklin-Guild, Rebecca J; Ryan, James R; Perkins, Gillian A; Thachil, Anil J; Glaser, Amy L; Thompson, Belinda S

    2017-11-01

    Rapid screening for enteric bacterial pathogens in clinical environments is essential for biosecurity. Salmonella found in veterinary hospitals, particularly Salmonella enterica serovar Dublin, can pose unique challenges for culture and testing because of its poor growth. Multiple Salmonella serovars including Dublin are emerging threats to public health given increasing prevalence and antimicrobial resistance. We adapted an automated food testing method to veterinary samples and evaluated the performance of the method in a variety of matrices including environmental samples (n = 81), tissues (n = 52), feces (n = 148), and feed (n = 29). A commercial kit was chosen as the basis for this approach in view of extensive performance characterizations published by multiple independent organizations. A workflow was established for efficiently and accurately testing veterinary matrices and environmental samples by use of real-time PCR after selective enrichment in Rappaport-Vassiliadis soya (RVS) medium. Using this method, the detection limit for S. Dublin improved by 100-fold over subculture on selective agars (eosin-methylene blue, brilliant green, and xylose-lysine-deoxycholate). Overall, the procedure was effective in detecting Salmonella spp. and provided next-day results.

  2. [A comparison of convenience sampling and purposive sampling].

    Science.gov (United States)

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation not by statistical power analysis.

  3. APTIMA assay on SurePath liquid-based cervical samples compared to endocervical swab samples facilitated by a real time database

    Directory of Open Access Journals (Sweden)

    Khader Samer

    2010-01-01

    Full Text Available Background: Liquid-based cytology (LBC) cervical samples are increasingly being used to test for pathogens, including HPV, Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (GC), using nucleic acid amplification tests. Several reports have shown the accuracy of such testing on ThinPrep (TP) LBC samples. Fewer studies have evaluated SurePath (SP) LBC samples, which utilize a different specimen preservative. This study was undertaken to assess the performance of the Aptima Combo 2 Assay (AC2) for CT and GC on SP versus endocervical swab samples in our laboratory. Materials and Methods: The live pathology database of Montefiore Medical Center was searched for patients with AC2 endocervical swab specimens and SP Paps taken the same day. SP samples from CT- and/or GC-positive endocervical swab patients and randomly selected negative patients were studied. In each case, 1.5 ml of the residual SP vial sample, which was in SP preservative and stored at room temperature, was transferred within seven days of collection to APTIMA specimen transfer tubes without any sample or patient identifiers. Blind testing with the AC2 assay was performed on the Tigris DTS System (Gen-Probe, San Diego, CA). Finalized SP results were compared with the previously reported endocervical swab results for the entire group and separately for patients 25 years and younger and patients over 25 years. Results: SP specimens from 300 patients were tested. This included 181 swab CT-positive, 12 swab GC-positive, 7 CT- and GC-positive, and 100 randomly selected swab CT- and GC-negative patients. Using the endocervical swab results as the patient's infection status, AC2 assay of the SP samples showed: CT sensitivity 89.3%, CT specificity 100.0%; GC sensitivity and specificity 100.0%. CT sensitivity for patients 25 years or younger was 93.1%, versus 80.7% for patients over 25 years, a statistically significant difference (P = 0.02). Conclusions: Our results show that AC2 assay of 1.5 ml SP

  4. Selection of rhizosphere local microbial as bioactive inoculant based on irradiated compost

    International Nuclear Information System (INIS)

    Dadang Sudrajat; Nana Mulyana; Arief Adhari

    2014-01-01

    One of the main components of an irradiated-compost-based carrier for bio-organic fertilizer is a set of potential microbial isolates with roles in nutrient supply and growth-hormone production. This research was conducted to obtain microbial isolates from the plant root zone (rhizosphere), and to isolate and select potential isolates capable of nitrogen (N2) fixation, production of the growth hormone indole acetic acid (IAA), and phosphate solubilization. Selected potential isolates were used as bioactive microbial inoculants formulated in an irradiated-compost base. Forty-eight (48) rhizosphere samples were collected from different areas of West and Central Java. One hundred sixteen (116) isolates were characterized for their morphological, cultural, staining and biochemical characteristics, and isolates were selected for further screening of PGPR traits. Parameters assessed were indole acetic acid (IAA) content (colorimetric methods), dinitrogen fixation (gas chromatography), qualitative phosphate solubility (on Pikovskaya medium) and quantitative dissolved phosphate (spectrophotometry). The ability of selected isolates to promote the growth of corn plants was evaluated in pots. The isolates will be used as a consortium inoculant based on irradiated compost. The selection yielded eight (8) bacterial isolates identified as Bacillus circulans (3 isolates), Bacillus stearothermophilus (1 isolate), Azotobacter sp. (3 isolates) and Pseudomonas diminuta (1 isolate). The highest phosphate release (91.21 mg/l) was by isolate BD2 (Bacillus circulans), with a halo zone of 1.32 cm on Pikovskaya agar medium. The Pseudomonas diminuta isolate (KACI) produced the highest level of IAA (74.34 μg/ml). The highest nitrogen (N2) fixation activity was shown by the Azotobacter sp. isolate KDB2, at a rate of 235.05 nmol/hour. The viability test showed that all selected isolates in the irradiated-compost carrier slightly decreased after 3 months of

  5. Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay

    International Nuclear Information System (INIS)

    Wang Na; Wu Zhi-Hai; Peng Li

    2014-01-01

    In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, the algebra graph theory, the matrix method, the stability theory of linear systems, and some other techniques are employed to derive the necessary and sufficient conditions guaranteeing heterogeneous multi-agent systems to asymptotically achieve the stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results. (interdisciplinary physics and related areas of science and technology)

  6. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
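
    Two of the sampling techniques discussed above can be sketched as follows: a cutpoint lookup table for generalized sampling of a discrete distribution and inverse-CDF sampling from the exponential distribution; the probabilities and mean free path are illustrative values.

```python
# Sketch: (1) the cutpoint method for generalized discrete sampling -- a
# lookup table over the CDF replaces sequential search -- and (2) inverse-CDF
# sampling from the exponential distribution, as used for photon path
# lengths. Probabilities and the mean free path below are illustrative.
import bisect, math, random

def build_cutpoints(probs, m=None):
    """Precompute the CDF and a table of starting indices for fast lookup."""
    m = m or len(probs)
    cdf, total = [], 0.0
    for p in probs:
        total += p
        cdf.append(total)
    table = [bisect.bisect_left(cdf, j / m) for j in range(m)]
    return cdf, table

def cutpoint_sample(cdf, table, rng):
    u = rng.random()
    i = table[min(int(u * len(table)), len(table) - 1)]   # jump close
    while i < len(cdf) - 1 and cdf[i] < u:                # short forward scan
        i += 1
    return i

def exponential_sample(mean_free_path, rng):
    """Distance to the next interaction via inverse-CDF sampling."""
    return -mean_free_path * math.log(1.0 - rng.random())

rng = random.Random(1)
cdf, table = build_cutpoints([0.1, 0.5, 0.2, 0.15, 0.05])
counts = [0] * 5
for _ in range(10000):
    counts[cutpoint_sample(cdf, table, rng)] += 1
print(counts, round(exponential_sample(2.0, rng), 3))
```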

  7. Cermet based solar selective absorbers : further selectivity improvement and developing new fabrication technique

    OpenAIRE

    Nejati, Mohammadreza

    2008-01-01

    Spectral selectivity of cermet based selective absorbers were increased by inducing surface roughness on the surface of the cermet layer using a roughening technique (deposition on hot substrates) or by micro-structuring the metallic substrates before deposition of the absorber coating using laser and imprint structuring techniques. Cu-Al2O3 cermet absorbers with very rough surfaces and excellent selectivity were obtained by employing a roughness template layer under the infrared reflective l...

  8. A Heckman Selection- t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. Then, this allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
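
    For context, the classic Heckman two-step estimator for the normal (SLN) baseline that the selection-t model generalizes can be sketched as below, assuming statsmodels and simulated data with correlated errors; this is not the SLt estimator itself.

```python
# Sketch: Heckman's two-step (SLN) estimator on simulated data with
# correlated selection and outcome errors. All coefficients below are
# simulation choices, not estimates from the paper's application.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                       # selection covariate
x = rng.normal(size=n)                       # outcome covariate
e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
selected = (0.5 + z + e[:, 0]) > 0           # outcome observed only if True
y = 1.0 + 2.0 * x + e[:, 1]

# Step 1: probit selection equation -> inverse Mills ratio for each case
probit = sm.Probit(selected.astype(float), sm.add_constant(z)).fit(disp=0)
index = probit.fittedvalues                  # linear predictor
mills = norm.pdf(index) / norm.cdf(index)

# Step 2: OLS on selected cases with the Mills ratio as a correction term
X = sm.add_constant(np.column_stack([x, mills])[selected])
ols = sm.OLS(y[selected], X).fit()
print(np.round(ols.params, 3))   # approx (1.0, 2.0, rho*sigma)
```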

  9. Performance-Based Technology Selection Filter description report

    International Nuclear Information System (INIS)

    O'Brien, M.C.; Morrison, J.L.; Morneau, R.A.; Rudin, M.J.; Richardson, J.G.

    1992-05-01

    A formal methodology has been developed for identifying technology gaps and assessing innovative or postulated technologies for inclusion in proposed Buried Waste Integrated Demonstration (BWID) remediation systems. Called the Performance-Based Technology Selection Filter, the methodology provides a formalized selection process where technologies and systems are rated and assessments made based on performance measures, and regulatory and technical requirements. The results are auditable, and can be validated with field data. This analysis methodology will be applied to the remedial action of transuranic contaminated waste pits and trenches buried at the Idaho National Engineering Laboratory (INEL)

  10. Discriminative Projection Selection Based Face Image Hashing

    Science.gov (United States)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
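
    A rough sketch of Fisher-criterion row selection for a random projection matrix, with a median threshold standing in for the paper's bimodal Gaussian mixture quantizer; enrollment data and dimensions are invented.

```python
# Sketch: select the most discriminative rows of a random projection matrix
# by the Fisher criterion, then binarize projections into a hash. The median
# threshold is a stand-in for the bimodal GMM quantizer; data are synthetic.
import numpy as np

def fisher_score(proj_genuine, proj_others):
    """Between-class separation over within-class spread, one projection."""
    m1, m2 = proj_genuine.mean(), proj_others.mean()
    return (m1 - m2) ** 2 / (proj_genuine.var() + proj_others.var() + 1e-12)

def select_projections(user_imgs, other_imgs, n_total=256, n_keep=64, seed=7):
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(n_total, user_imgs.shape[1]))     # candidate rows
    scores = [fisher_score(user_imgs @ r, other_imgs @ r) for r in R]
    return R[np.argsort(scores)[-n_keep:]]                 # keep best rows

def hash_image(img, R):
    proj = R @ img
    return (proj > np.median(proj)).astype(np.uint8)       # binary hash bits

user = np.random.default_rng(0).normal(1.0, 1.0, size=(20, 1024))
others = np.random.default_rng(1).normal(0.0, 1.0, size=(200, 1024))
R = select_projections(user, others)
print(hash_image(user[0], R)[:16])
```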

  11. Performance-Based Technology Selection Filter description report

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, M.C.; Morrison, J.L.; Morneau, R.A.; Rudin, M.J.; Richardson, J.G.

    1992-05-01

    A formal methodology has been developed for identifying technology gaps and assessing innovative or postulated technologies for inclusion in proposed Buried Waste Integrated Demonstration (BWID) remediation systems. Called the Performance-Based Technology Selection Filter, the methodology provides a formalized selection process where technologies and systems are rated and assessments made based on performance measures, and regulatory and technical requirements. The results are auditable, and can be validated with field data. This analysis methodology will be applied to the remedial action of transuranic contaminated waste pits and trenches buried at the Idaho National Engineering Laboratory (INEL).

  12. Multi-scale textural feature extraction and particle swarm optimization based model selection for false positive reduction in mammography.

    Science.gov (United States)

    Zyout, Imad; Czajkowska, Joanna; Grzegorzek, Marcin

    2015-12-01

    The high number of false positives and the resulting number of avoidable breast biopsies are the major problems faced by current mammography Computer Aided Detection (CAD) systems. False positive reduction is a requirement not only for mass but also for calcification CAD systems currently deployed for clinical use. This paper tackles two problems related to reducing the number of false positives in the detection of all lesions and masses, respectively. Firstly, textural patterns of breast tissue have been analyzed using several multi-scale textural descriptors based on wavelets and the gray level co-occurrence matrix. The second problem addressed in this paper is parameter selection and performance optimization. For this, we adopt a model selection procedure based on Particle Swarm Optimization (PSO) for selecting the most discriminative textural features and for strengthening the generalization capacity of the supervised learning stage based on a Support Vector Machine (SVM) classifier. For evaluating the proposed methods, two sets of suspicious mammogram regions have been used. The first one, obtained from the Digital Database for Screening Mammography (DDSM), contains 1494 regions (1000 normal and 494 abnormal samples). The second set of suspicious regions was obtained from the database of the Mammographic Image Analysis Society (mini-MIAS) and contains 315 (207 normal and 108 abnormal) samples. Results from both datasets demonstrate the efficiency of using PSO-based model selection for optimizing both the classifier hyper-parameters and the selected feature subsets. Furthermore, the obtained results indicate the promising performance of the proposed textural features and, more specifically, those based on the co-occurrence matrix of the wavelet image representation technique. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Feature selection for portfolio optimization

    DEFF Research Database (Denmark)

    Bjerring, Thomas Trier; Ross, Omri; Weissensteiner, Alex

    2016-01-01

    Most portfolio selection rules based on the sample mean and covariance matrix perform poorly out-of-sample. Moreover, there is a growing body of evidence that such optimization rules are not able to beat simple rules of thumb, such as 1/N. Parameter uncertainty has been identified as one major....... While most of the diversification benefits are preserved, the parameter estimation problem is alleviated. We conduct out-of-sample back-tests to show that in most cases different well-established portfolio selection rules applied on the reduced asset universe are able to improve alpha relative...
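
    The out-of-sample fragility of plug-in mean-variance weights relative to the 1/N rule can be sketched as follows on synthetic returns; the normalization and the data are illustrative assumptions, not the paper's back-test.

```python
# Sketch: out-of-sample comparison of the 1/N rule against plug-in
# mean-variance weights built from the sample mean and covariance.
# Returns are synthetic; the weight normalization is an illustrative choice.
import numpy as np

rng = np.random.default_rng(3)
returns = rng.normal(0.001, 0.02, size=(500, 25))    # synthetic daily returns
train, test = returns[:250], returns[250:]

n_assets = returns.shape[1]
w_equal = np.full(n_assets, 1.0 / n_assets)          # 1/N benchmark

mu, cov = train.mean(axis=0), np.cov(train, rowvar=False)
w_mv = np.linalg.solve(cov, mu)                      # unconstrained tangency
w_mv /= np.abs(w_mv).sum()                           # scale for comparability

for name, w in [("1/N", w_equal), ("mean-variance", w_mv)]:
    r = test @ w
    print(f"{name:14s} out-of-sample Sharpe: {r.mean() / r.std():+.3f}")
```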

  14. Highly selective thiocyanate optochemical sensor based on manganese(III)-salophen ionophore

    International Nuclear Information System (INIS)

    Abdel-Haleem, Fatehy M.; Rizk, Mahmoud S.

    2017-01-01

    We report on the development of an optochemical sensor based on a Mn(III)-salophen ionophore. The sensor was prepared by embedding the ionophore in plasticized poly(vinyl chloride) impregnated with the chromoionophore ETH7075. The optical response to thiocyanate occurs through thiocyanate extraction into the polymer via formation of a strong complex with the ionophore and simultaneous protonation of the indicator dye, yielding the optical response at 545 nm. The developed optochemical sensor exhibited high selectivity for thiocyanate over other anions, including the most lipophilic species such as salicylate and perchlorate. For instance, the optical selectivity coefficients, log K_(SCN,anion)^opt, were as follows: ClO4^− = −5.8; Sal^− = −4.0; NO3^− < −6. Further, the thiocyanate optical selectivity obtained using the present optochemical sensor was greatly enhanced in comparison with that obtained using an anion-exchanger-based sensor. The optimized optochemical sensor also exhibited a micro-molar detection limit with a 2 min response time at pH 4.5 using acetate buffer. The reversibility of the optimized sensor was poor due to strong ligation of the thiocyanate to the central metal ion (log K = 14.1), which can be overcome by soaking the optode in sodium hydroxide followed by soaking in buffer solution. The developed sensor was utilized successfully for the determination of thiocyanate in human saliva and in spiked saliva samples. - Highlights: • Preparation of different optodes using different compositions • Mechanism depends on co-extraction of thiocyanate and protons into the membrane. • Sensor showed excellent selectivity. • Sensor can be applied for thiocyanate determination in real saliva.

  15. Highly selective thiocyanate optochemical sensor based on manganese(III)-salophen ionophore

    Energy Technology Data Exchange (ETDEWEB)

    Abdel-Haleem, Fatehy M., E-mail: fatehy@sci.cu.edu.eg; Rizk, Mahmoud S.

    2017-06-01

    We report on the development of an optochemical sensor based on a Mn(III)-salophen ionophore. The sensor was prepared by embedding the ionophore in plasticized poly(vinyl chloride) impregnated with the chromoionophore ETH7075. The optical response to thiocyanate occurs through thiocyanate extraction into the polymer via formation of a strong complex with the ionophore and simultaneous protonation of the indicator dye, yielding the optical response at 545 nm. The developed optochemical sensor exhibited high selectivity for thiocyanate over other anions, including the most lipophilic species such as salicylate and perchlorate. For instance, the optical selectivity coefficients, log K_(SCN,anion)^opt, were as follows: ClO4^− = −5.8; Sal^− = −4.0; NO3^− < −6. Further, the thiocyanate optical selectivity obtained using the present optochemical sensor was greatly enhanced in comparison with that obtained using an anion-exchanger-based sensor. The optimized optochemical sensor also exhibited a micro-molar detection limit with a 2 min response time at pH 4.5 using acetate buffer. The reversibility of the optimized sensor was poor due to strong ligation of the thiocyanate to the central metal ion (log K = 14.1), which can be overcome by soaking the optode in sodium hydroxide followed by soaking in buffer solution. The developed sensor was utilized successfully for the determination of thiocyanate in human saliva and in spiked saliva samples. - Highlights: • Preparation of different optodes using different compositions • Mechanism depends on co-extraction of thiocyanate and protons into the membrane. • Sensor showed excellent selectivity. • Sensor can be applied for thiocyanate determination in real saliva.

  16. 40 CFR Appendix Xi to Part 86 - Sampling Plans for Selective Enforcement Auditing of Light-Duty Vehicles

    Science.gov (United States)

    2010-07-01

    40 CFR Part 86, Appendix XI (2010-07-01): Sampling Plans for Selective Enforcement Auditing of Light-Duty Vehicles, at a 40% AQL. Table 1—Sampling Plan Code Letter, indexed by annual sales of...

  17. A novel dansyl-based fluorescent probe for highly selective detection of ferric ions.

    Science.gov (United States)

    Yang, Min; Sun, Mingtai; Zhang, Zhongping; Wang, Suhua

    2013-02-15

    A novel dansyl-based fluorescent probe was synthesized and characterized. It exhibits high selectivity and sensitivity towards Fe(3+) ion. This fluorescent probe is photostable, water soluble and pH insensitive. The limit of detection is found to be 0.62 μM. These properties make it a good fluorescent probe for Fe(3+) ion detection in both chemical and biological systems. Spike recovery test confirms its practical application in tap water samples. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    Science.gov (United States)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-02-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which can ease the pressure generated by the large-scale data. The big data of a faulty roller bearing’s vibration signals is firstly reduced by a down-sample strategy while preserving the fault features by selecting peaks to represent the data segments in time domain. However, a problem arises in that the fault features may be weaker than before, since the noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which makes the fault features unable to be extracted by commonly-used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which can make a signal enhancement and reduce the sample sizes further. Moreover, it is capable of detecting fault features from a small number of samples based on orthogonal matching pursuit approach, which can overcome the shortcomings of the multiple down-sample algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults.
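
    The two building blocks, a peak-preserving down-sample of the raw record and orthogonal matching pursuit recovery of a frequency-sparse signal, can be sketched as follows; the coupling of the two steps in the paper is more involved, and the signal here is synthetic.

```python
# Sketch: (1) peak-preserving down-sample of a vibration record and
# (2) OMP recovery of a frequency-sparse signal from a few random
# measurements. Sizes and the sparse spectrum below are synthetic.
import numpy as np
from scipy.fft import idct

def peak_downsample(x, seg_len):
    """Keep the largest-magnitude sample of each segment."""
    segs = x[: len(x) // seg_len * seg_len].reshape(-1, seg_len)
    return segs[np.arange(len(segs)), np.abs(segs).argmax(axis=1)]

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy k-sparse solution of y = A @ c."""
    residual, support, coef = y.copy(), [], None
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    c = np.zeros(A.shape[1])
    c[support] = coef
    return c

rng = np.random.default_rng(0)
n, m, k = 512, 96, 3
c_true = np.zeros(n)
c_true[[20, 75, 160]] = [1.0, 0.8, 0.5]           # sparse "spectrum"
x = idct(c_true, norm="ortho")                     # time-domain signal
D = idct(np.eye(n), axis=0, norm="ortho")          # DCT synthesis dictionary
A = rng.normal(size=(m, n)) / np.sqrt(m)           # random sensing matrix
c_hat = omp(A @ D, A @ x, k)
print("recovered support:", np.nonzero(c_hat)[0])  # expect [20 75 160]
print("peak-downsampled length:", len(peak_downsample(x, 8)))
```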

  19. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    International Nuclear Information System (INIS)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-01-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which can ease the pressure generated by the large-scale data. The big data of a faulty roller bearing’s vibration signals is firstly reduced by a down-sample strategy while preserving the fault features by selecting peaks to represent the data segments in time domain. However, a problem arises in that the fault features may be weaker than before, since the noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which makes the fault features unable to be extracted by commonly-used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which can make a signal enhancement and reduce the sample sizes further. Moreover, it is capable of detecting fault features from a small number of samples based on orthogonal matching pursuit approach, which can overcome the shortcomings of the multiple down-sample algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults. (paper)

  20. A Story-Based Simulation for Teaching Sampling Distributions

    Science.gov (United States)

    Turner, Stephen; Dabney, Alan R.

    2015-01-01

    Statistical inference relies heavily on the concept of sampling distributions. However, sampling distributions are difficult to teach. We present a series of short animations that are story-based, with associated assessments. We hope that our contribution can be useful as a tool to teach sampling distributions in the introductory statistics…

  1. The generalization ability of online SVM classification based on Markov sampling.

    Science.gov (United States)

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repositories. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of the training samples becomes larger.

  2. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of the optimization. However, earlier approaches have drawbacks: the optimization loop comprises three phases and depends on empirical parameters. We propose a united sampling criterion that simplifies the algorithm and achieves the global optimum of problems with constraints without any empirical parameters. It is able to select points located in the feasible region with high model uncertainty as well as points along the boundary of the constraint at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model, because the sample points are not located within extremely small regions, as in super-EGO. The performance of the proposed method, in terms of the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
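
    A rough sketch of surrogate-based infill with an uncertainty criterion and a boundary criterion, using a Gaussian-process surrogate as a stand-in; the switching rule, threshold, and toy constraint are simplified assumptions rather than the proposed united criterion.

```python
# Sketch: a two-part infill rule over a Gaussian-process surrogate --
# exploration at the most uncertain feasible point, exploitation near the
# constraint boundary -- gated by the surrogate's uncertainty. The toy
# objective, constraint, grid, and threshold are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    return (x - 0.6) ** 2

def constraint(x):
    return x - 0.3                      # feasible where constraint(x) >= 0

X = np.array([[0.0], [0.5], [1.0]])     # initial design points
gp = GaussianProcessRegressor().fit(X, objective(X).ravel())

cand = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
mu, sd = gp.predict(cand, return_std=True)
feasible = constraint(cand.ravel()) >= 0

if sd[feasible].max() > 0.05:           # surrogate still uncertain: explore
    nxt = cand[feasible][np.argmax(sd[feasible])]
else:                                   # refine along the constraint boundary
    near = np.abs(constraint(cand.ravel())) < 0.05
    nxt = cand[near][np.argmin(mu[near])]
print("next sample point:", nxt)
```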

  3. 40 CFR Appendix X to Part 86 - Sampling Plans for Selective Enforcement Auditing of Heavy-Duty Engines and Light-Duty Trucks

    Science.gov (United States)

    2010-07-01

    40 CFR Part 86, Appendix X (2010-07-01): Sampling Plans for Selective Enforcement Auditing of Heavy-Duty Engines and Light-Duty Trucks. Table 1—Sampling...

  4. Selective whole genome amplification for resequencing target microbial species from complex natural samples.

    Science.gov (United States)

    Leichty, Aaron R; Brisson, Dustin

    2014-10-01

    Population genomic analyses have demonstrated power to address major questions in evolutionary and molecular microbiology. Collecting populations of genomes is hindered in many microbial species by the absence of a cost-effective and practical method to collect ample quantities of sufficiently pure genomic DNA for next-generation sequencing. Here we present a simple method to amplify genomes of a target microbial species present in a complex, natural sample. The selective whole genome amplification (SWGA) technique amplifies target genomes using nucleotide sequence motifs that are common in the target microbe genome, but rare in the background genomes, to prime the highly processive phi29 polymerase. SWGA thus selectively amplifies the target genome from samples in which it originally represented a minor fraction of the total DNA. The post-SWGA samples are enriched in target genomic DNA and are ideal for population resequencing. We demonstrate the efficacy of SWGA using both laboratory-prepared mixtures of cultured microbes and a natural host-microbe association. Targeted amplification of Borrelia burgdorferi mixed with Escherichia coli at genome ratios of 1:2000 resulted in >10^5-fold amplification of the target genomes, while SWGA of genomic extracts from Wolbachia pipientis-infected Drosophila melanogaster resulted in up to 70% of high-throughput resequencing reads mapping to the W. pipientis genome. By contrast, 2-9% of sequencing reads were derived from W. pipientis without prior amplification. The SWGA technique results in high sequencing coverage at a fraction of the sequencing effort, thus allowing population genomic studies at affordable costs. Copyright © 2014 by the Genetics Society of America.
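
    The motif-selection logic behind SWGA can be sketched as follows, assuming simple k-mer frequency counting with a pseudo-count for unseen motifs; the sequences, k, and the enrichment ratio are toy values, not the published primer-design pipeline.

```python
# Sketch: SWGA-style motif selection by k-mer frequency -- keep motifs much
# commoner (per base) in the target than in the background. Sequences, k,
# and the enrichment ratio are toy values.
def motif_counts(seq, k):
    counts = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        counts[kmer] = counts.get(kmer, 0) + 1
    return counts

def select_motifs(target, background, k=6, ratio=10.0):
    """Return motifs at least `ratio` times more frequent in the target."""
    t, b = motif_counts(target, k), motif_counts(background, k)
    picked = []
    for kmer, count in t.items():
        f_t = count / len(target)
        f_b = b.get(kmer, 0.5) / len(background)   # pseudo-count if unseen
        if f_t / f_b >= ratio:
            picked.append((kmer, round(f_t / f_b, 1)))
    return sorted(picked, key=lambda p: -p[1])

target = "ATATATGCGCATATATGCGCATATAT" * 40        # toy AT-rich target genome
background = "GGCCGGTTCCAAGGCCGGTTCCAAGG" * 400   # toy GC-rich background
print(select_motifs(target, background)[:5])
```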

  5. Construction Tender Subcontract Selection using Case-based Reasoning

    Directory of Open Access Journals (Sweden)

    Due Luu

    2012-11-01

    Full Text Available Obtaining competitive quotations from suitably qualified subcontractors at tender time can significantly increase the chance of winning a construction project. Amidst an increasingly growing trend to subcontracting in Australia, selecting appropriate subcontractors for a construction project can be a daunting task requiring the analysis of complex and dynamic criteria such as past performance, suitable experience, track record of competitive pricing, financial stability and so on. Subcontractor selection is plagued with uncertainty and vagueness and these conditions are difficult to represent in generalised sets of rules. Decisions pertaining to the selection of subcontractors at tender time are usually based on the intuition and past experience of construction estimators. Case-based reasoning (CBR) may be an appropriate method of addressing the challenges of selecting subcontractors because CBR is able to harness the experiential knowledge of practitioners. This paper reviews the practicality and suitability of a CBR approach for subcontractor tender selection through the development of a prototype CBR procurement advisory system. In this system, subcontractor selection cases are represented by a set of attributes elicited from experienced construction estimators. The results indicate that CBR can enhance the appropriateness of the selection of subcontractors for construction projects.

  6. Concurrent and Longitudinal Associations Among Temperament, Parental Feeding Styles, and Selective Eating in a Preschool Sample.

    Science.gov (United States)

    Kidwell, Katherine M; Kozikowski, Chelsea; Roth, Taylor; Lundahl, Alyssa; Nelson, Timothy D

    2018-06-01

    To examine the associations among negative/reactive temperament, feeding styles, and selective eating in a sample of preschoolers because preschool eating behaviors likely have lasting implications for children's health. A community sample of preschoolers aged 3-5 years (M = 4.49 years, 49.5% female, 75.7% European American) in the Midwest of the United States was recruited to participate in the study (N = 297). Parents completed measures of temperament and feeding styles at two time points 6 months apart. A series of regressions indicated that children who had temperaments high in negative affectivity were significantly more likely to experience instrumental and emotional feeding styles. They were also significantly more likely to be selective eaters. These associations were present when examined both concurrently and after 6 months. This study provides a novel investigation of child temperament and eating behaviors, allowing for a better understanding of how negative affectivity is associated with instrumental feeding, emotional feeding, and selective eating. These results inform interventions to improve child health.

  7. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    1999-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999) and the Final Safety Analysis Report (FSAR) (FDH 1999) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in developing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks.

  8. Correlations Between Life-Detection Techniques and Implications for Sampling Site Selection in Planetary Analog Missions

    Science.gov (United States)

    Gentry, Diana M.; Amador, Elena S.; Cable, Morgan L.; Chaudry, Nosheen; Cullen, Thomas; Jacobsen, Malene B.; Murukesan, Gayathri; Schwieterman, Edward W.; Stevens, Adam H.; Stockton, Amanda; Tan, George; Yin, Chang; Cullen, David C.; Geppert, Wolf

    2017-10-01

    We conducted an analog sampling expedition under simulated mission constraints to areas dominated by basaltic tephra of the Eldfell and Fimmvörðuháls lava fields (Iceland). Sites were selected to be "homogeneous" at a coarse remote sensing resolution (10-100 m) in apparent color, morphology, moisture, and grain size, with best-effort realism in numbers of locations and replicates. Three different biomarker assays (counting of nucleic-acid-stained cells via fluorescent microscopy, a luciferin/luciferase assay for adenosine triphosphate, and quantitative polymerase chain reaction (qPCR) to detect DNA associated with bacteria, archaea, and fungi) were characterized at four nested spatial scales (1 m, 10 m, 100 m, and >1 km) by using five common metrics for sample site representativeness (sample mean variance, group F tests, pairwise t tests, and the distribution-free rank sum H and u tests). Correlations between all assays were characterized with Spearman's rank test. The bioluminescence assay showed the most variance across the sites, followed by qPCR for bacterial and archaeal DNA; these results could not be considered representative at the finest resolution tested (1 m). Cell concentration and fungal DNA also had significant local variation, but they were homogeneous over scales of >1 km. These results show that the selection of life detection assays and the number, distribution, and location of sampling sites in a low biomass environment with limited a priori characterization can yield both contrasting and complementary results, and that their interdependence must be given due consideration to maximize science return in future biomarker sampling expeditions.

  9. Effects of soil water saturation on sampling equilibrium and kinetics of selected polycyclic aromatic hydrocarbons.

    Science.gov (United States)

    Kim, Pil-Gon; Roh, Ji-Yeon; Hong, Yongseok; Kwon, Jung-Hwan

    2017-10-01

    Passive sampling can be applied for measuring the freely dissolved concentration of hydrophobic organic chemicals (HOCs) in soil pore water. When using passive samplers under field conditions, however, there are factors that might affect passive sampling equilibrium and kinetics, such as soil water saturation. To determine the effects of soil water saturation on passive sampling, the equilibrium and kinetics of passive sampling were evaluated by observing changes in the distribution coefficient between sampler and soil (K_sampler/soil) and the uptake rate constant (k_u) at various soil water saturations. Polydimethylsiloxane (PDMS) passive samplers were deployed into artificial soils spiked with seven selected polycyclic aromatic hydrocarbons (PAHs). In dry soil (0% water saturation), both K_sampler/soil and k_u values were much lower than those in wet soils, likely due to the contribution of adsorption of PAHs onto soil mineral surfaces and conformational changes in soil organic matter. For high molecular weight PAHs (chrysene, benzo[a]pyrene, and dibenzo[a,h]anthracene), both K_sampler/soil and k_u values increased with increasing soil water saturation, whereas they decreased with increasing soil water saturation for low molecular weight PAHs (phenanthrene, anthracene, fluoranthene, and pyrene). Changes in the sorption capacity of soil organic matter with soil water content would be the main cause of the changes in passive sampling equilibrium. Henry's law constant could explain the different behaviors in uptake kinetics of the selected PAHs. The results of this study would be helpful when passive samplers are deployed under various soil water saturations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Concentration of ions in selected bottled water samples sold in Malaysia

    Science.gov (United States)

    Aris, Ahmad Zaharin; Kam, Ryan Chuan Yang; Lim, Ai Phing; Praveena, Sarva Mangala

    2013-03-01

    Many consumers around the world, including Malaysians, have turned to bottled water as their main source of drinking water. The aim of this study is to determine the physical and chemical properties of bottled water samples sold in Selangor, Malaysia. A total of 20 bottled water brands consisting of `natural mineral (NM)' and `packaged drinking (PD)' types were randomly collected and analyzed for their physical-chemical characteristics: hydrogen ion concentration (pH), electrical conductivity (EC) and total dissolved solids (TDS), selected major ions: calcium (Ca), potassium (K), magnesium (Mg) and sodium (Na), and minor trace constituents: copper (Cu) and zinc (Zn) to ascertain their suitability for human consumption. The results obtained were compared with guideline values recommended by the World Health Organization (WHO) and the Malaysian Ministry of Health (MMOH), respectively. It was found that all bottled water samples were in accordance with the guidelines set by WHO and MMOH except for one sample (D3), which was below the pH limit of 6.5. Both NM and PD bottled water were dominated by Na > K > Ca > Mg. Low values for EC and TDS in the bottled water samples showed that the water was deficient in essential elements, likely an indication that these were removed by water treatment. Major-ion minerals were present in very low concentrations, which could pose a risk to individuals who consume this water on a regular basis. Generally, the overall quality of the supplied bottled water was in accordance with the standards and guidelines set by WHO and MMOH and safe for consumption.

  11. A quantitative method to detect explosives and selected semivolatiles in soil samples by Fourier transform infrared spectroscopy

    International Nuclear Information System (INIS)

    Clapper-Gowdy, M.; Dermirgian, J.; Robitaille, G.

    1995-01-01

    This paper describes a novel Fourier transform infrared (FTIR) spectroscopic method that can be used to rapidly screen soil samples from potentially hazardous waste sites. Samples are heated in a thermal desorption unit and the resultant vapors are collected and analyzed in a long-path gas cell mounted in an FTIR. Laboratory analysis of a soil sample by FTIR takes approximately 10 minutes. This method has been developed to identify and quantify microgram concentrations of explosives in soil samples and is directly applicable to the detection of selected volatile organics, semivolatile organics, and pesticides.

  12. Feature Selection Based on Mutual Correlation

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Somol, Petr; Ververidis, D.; Kotropoulos, C.

    2006-01-01

    Vol. 19, No. 4225 (2006), pp. 569-577. ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA AV ČR IAA2075302 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords: feature selection Subject RIV: BD - Theory of Information Impact factor: 0.402, year: 2005 http://library.utia.cas.cz/separaty/historie/haindl-feature selection based on mutual correlation.pdf

  13. Information Gain Based Dimensionality Selection for Classifying Text Documents

    Energy Technology Data Exchange (ETDEWEB)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic-algorithm-based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic-algorithm-based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
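
    The core idea, per-dimension mutation probabilities driven by a precomputed information measure, can be sketched briefly. The following Python fragment is an illustrative toy under stated assumptions (mutual information stands in for information gain, truncation selection, no crossover, a placeholder fitness function); it is not the authors' implementation.

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif

      def evolve_masks(X, y, pop_size=30, n_gen=50, base_rate=0.05, seed=0):
          """Evolve boolean feature masks with gain-aware mutation."""
          rng = np.random.default_rng(seed)
          gain = mutual_info_classif(X, y)  # computed once, a priori
          # per-gene mutation probability: high-gain dimensions mutate less
          p_mut = base_rate * (1.0 - gain / (gain.max() + 1e-12))
          pop = rng.random((pop_size, X.shape[1])) < 0.5  # boolean chromosomes

          def fitness(mask):
              # placeholder: reward informative dimensions, penalize mask size
              return gain[mask].sum() - 0.01 * mask.sum()

          for _ in range(n_gen):
              scores = np.array([fitness(ind) for ind in pop])
              parents = pop[np.argsort(scores)[-pop_size // 2:]]  # truncation selection
              children = parents[rng.integers(len(parents), size=pop_size)]
              flips = rng.random(children.shape) < p_mut  # gain-aware mutation
              pop = np.where(flips, ~children, children)
          return pop[np.argmax([fitness(ind) for ind in pop])]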

  14. Evaluating the complexation behavior and regeneration of boron selective glucaminium-based ionic liquids when used as extraction solvents

    International Nuclear Information System (INIS)

    Joshi, Manishkumar D.; Steyer, Daniel J.; Anderson, Jared L.

    2012-01-01

    Highlights: ► Glucaminium-based ILs exhibit high selectivity for boron species using DLLME. ► The concentration of glucaminium-based IL affects the type of boron complex formed. ► Use of 0.1 M HCl allows for regeneration of the IL solvent following extraction. ► The selectivity of the glucaminium-based ILs for boron species in seawater is similar to that in Milli-Q water. - Abstract: Glucaminium-based ionic liquids are a new class of solvents capable of extracting boron species from water with high efficiency. The complexation behavior of these ILs with borate was thoroughly studied using ¹¹B NMR. Two different complexes, namely a monochelate complex and a bischelate complex, were observed. ¹¹B NMR was used extensively to determine the formation constants for the monochelate and bischelate complexes. The IL concentration was observed to have a significant effect on the IL–borate complexes. Using an in situ dispersive liquid–liquid microextraction (in situ DLLME) method, the extraction efficiency for boron species was increased dramatically when lithium bis[(trifluoromethyl)sulfonyl]imide (LiNTf₂) was used as the metathesis salt in an aqueous solution containing 0.1 M sodium chloride. IL regeneration after extraction was achieved using 0.1 M hydrochloric acid. The extraction efficiency for boron species was consistent when the IL was employed after three regeneration cycles. The selectivity of the IL for boron species in synthetic seawater samples was similar to that obtained when performing the same extraction from Milli-Q water samples.

  15. Variable screening and ranking using sampling-based sensitivity measures

    International Nuclear Information System (INIS)

    Wu, Y-T.; Mohanty, Sitakanta

    2006-01-01

    This paper presents a methodology for screening out insignificant random variables and ranking the significant random variables using sensitivity measures, including two cumulative distribution function (CDF)-based and two mean-response-based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from the test-of-hypothesis, to classify significant and insignificant random variables. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology appears to be particularly suitable for problems with large, complex models that have large numbers of random variables but relatively few significant random variables.
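
    To convey the flavor of such CDF-based, sampling-driven screening, the sketch below splits random samples at each input's median and compares the two conditional output distributions with a two-sample Kolmogorov-Smirnov test; the model interface, the median split, and the acceptance limit are assumptions of this sketch rather than the paper's exact measures.

      import numpy as np
      from scipy.stats import ks_2samp

      def screen_variables(model, sampler, n=2000, alpha=0.05, seed=1):
          """model maps an (n, d) sample to n outputs; sampler draws (n, d) inputs."""
          rng = np.random.default_rng(seed)
          X = sampler(n, rng)
          y = model(X)
          significant = []
          for j in range(X.shape[1]):
              lo = y[X[:, j] <= np.median(X[:, j])]
              hi = y[X[:, j] > np.median(X[:, j])]
              stat, pval = ks_2samp(lo, hi)  # distance between conditional CDFs
              if pval < alpha:  # acceptance limit from a test-of-hypothesis
                  significant.append((j, stat))
          # rank the significant variables by the size of the CDF shift
          return sorted(significant, key=lambda t: t[1], reverse=True)

    For example, sampler could be lambda n, rng: rng.normal(size=(n, 10)) and model any vectorized performance function; variables absent from the returned list are screened out as insignificant.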

  16. The Atacama Cosmology Telescope: Physical Properties and Purity of a Galaxy Cluster Sample Selected Via the Sunyaev-Zel'Dovich Effect

    Science.gov (United States)

    Menanteau, Felipe; Gonzalez, Jorge; Juin, Jean-Baptiste; Marriage, Tobias; Reese, Erik D.; Acquaviva, Viviana; Aguirre, Paula; Appel, John Willam; Baker, Andrew J.; Barrientos, L. Felipe; hide

    2010-01-01

    We present optical and X-ray properties for the first confirmed galaxy cluster sample selected by the Sunyaev-Zel'dovich Effect from 148 GHz maps over 455 square degrees of sky made with the Atacama Cosmology Telescope. These maps, coupled with multi-band imaging on 4-meter-class optical telescopes, have yielded a sample of 23 galaxy clusters with redshifts between 0.118 and 1.066. Of these 23 clusters, 10 are newly discovered. The selection of this sample is approximately mass limited and essentially independent of redshift. We provide optical positions, images, redshifts and X-ray fluxes and luminosities for the full sample, and X-ray temperatures of an important subset. The mass limit of the full sample is around 8.0 × 10^14 solar masses, with a number distribution that peaks around a redshift of 0.4. For the 10 highest significance SZE-selected cluster candidates, all of which are optically confirmed, the mass threshold is 1 × 10^15 solar masses and the redshift range is 0.167 to 1.066. Archival observations from Chandra, XMM-Newton, and ROSAT provide X-ray luminosities and temperatures that are broadly consistent with this mass threshold. Our optical follow-up procedure also allowed us to assess the purity of the ACT cluster sample. Eighty (one hundred) percent of the 148 GHz candidates with signal-to-noise ratios greater than 5.1 (5.7) are confirmed as massive clusters. The reported sample represents one of the largest SZE-selected samples of massive clusters over all redshifts within a cosmologically significant survey volume, which will enable cosmological studies as well as future studies on the evolution, morphology, and stellar populations in the most massive clusters in the Universe.

  17. Studying hardness, workability and minimum bending radius in selectively laser-sintered Ti–6Al–4V alloy samples

    Science.gov (United States)

    Galkina, N. V.; Nosova, Y. A.; Balyakin, A. V.

    2018-03-01

    This research is relevant as it seeks to improve the mechanical and service performance of the Ti–6Al–4V titanium alloy obtained by selective laser sintering. For that purpose, sintered samples were annealed at 750 and 850 °C for an hour. Sintered and annealed samples were tested for hardness, workability and microstructure. It was found that incomplete annealing of selectively laser-sintered Ti–6Al–4V samples results in an insignificant reduction in hardness and ductility. Sintered and incompletely annealed samples had a hardness of 32–33 HRC, which is lower than the value for annealed parts specified in standards. Complete annealing at 850 °C reduces the hardness to 25 HRC and the ductility by 15–20%. Incomplete annealing lowers the ductility factor from 0.08 to 0.06; complete annealing lowers that value to 0.025. Complete annealing probably results in the embrittlement of sintered samples, perhaps due to their oxidation and hydrogenation in the air. Optical metallography showed lateral fractures in both sintered and annealed samples, which might be the reason why they had lower hardness and ductility.

  18. Impact of Menu Sequencing on Internet-Based Educational Module Selection

    Science.gov (United States)

    Bensley, Robert; Brusk, John J.; Rivas, Jason; Anderson, Judith V.

    2006-01-01

    Patterns of Internet-based menu item selection can occur for a number of reasons, many of which may not be based on interest in topic. It then becomes important to ensure menu order is devised in a way that ensures the greatest accuracy in matching user need with selection. This study examined the impact of menu rotation on the selection of…

  19. Stability of selected volatile breath constituents in Tedlar, Kynar and Flexfilm sampling bags

    Science.gov (United States)

    Mochalski, Paweł; King, Julian; Unterkofler, Karl; Amann, Anton

    2016-01-01

    The stability of 41 selected breath constituents in three types of polymer sampling bags, Tedlar, Kynar, and Flexfilm, was investigated using solid phase microextraction and gas chromatography mass spectrometry. The tested molecular species belong to different chemical classes (hydrocarbons, ketones, aldehydes, aromatics, sulphurs, esters, terpenes, etc.) and exhibit close-to-breath low ppb levels (3–12 ppb), with the exception of isoprene, acetone and acetonitrile (106 ppb, 760 ppb, and 42 ppb, respectively). Stability tests comprised the background emission of contaminants, recovery from dry samples, recovery from humid samples (RH 80% at 37 °C), influence of the bag's filling degree, and reusability. Findings yield evidence of the superiority of Tedlar bags over the remaining polymers in terms of background emission, species stability (up to 7 days for dry samples), and reusability. Recoveries of the species under study suffered from the presence of high amounts of water (losses up to 10%). However, only heavier volatiles, with molecular masses higher than 90, exhibited more pronounced losses (20–40%). The sample size (the degree of bag filling) was found to be one of the most important factors affecting sample integrity. To sum up, it is recommended to store breath samples in pre-conditioned Tedlar bags for up to 6 hours at the maximum possible filling volume. Among the remaining films, Kynar can be considered an alternative to Tedlar; however, higher losses of compounds should be expected even within the first hours of storage. Due to its high background emission, Flexfilm is not suitable for the sampling and storage of samples for analyses aiming at volatiles at low ppb levels. PMID:23323261

  20. sideSPIM - selective plane illumination based on a conventional inverted microscope.

    Science.gov (United States)

    Hedde, Per Niklas; Malacrida, Leonel; Ahrar, Siavash; Siryaporn, Albert; Gratton, Enrico

    2017-09-01

    Previously described selective plane illumination microscopy techniques typically trade ease of use and sample handling for maximum imaging performance, or vice versa. Also, to reduce cost and complexity while maximizing flexibility, it is highly desirable to implement light sheet microscopy such that it can be added to a standard research microscope instead of setting up a dedicated system. We devised a new approach, termed sideSPIM, that provides uncompromised imaging performance and easy sample handling while, at the same time, offering new applications of plane illumination towards fluidics and high throughput 3D imaging of multiple specimens. Because the system is based on an inverted epifluorescence microscope, all of the previous functionality is maintained and modifications to the existing system are kept to a minimum. At the same time, our implementation is able to take full advantage of the speed of the employed sCMOS camera and piezo stage to record data at rates of up to 5 stacks/s. Additionally, sample handling is compatible with established methods, and switching magnification to change the field of view from single cells to whole organisms does not require labor-intensive adjustments of the system.

  1. Semi-selective medium for Fusarium graminearum detection in seed samples

    Directory of Open Access Journals (Sweden)

    Marivane Segalin

    2010-12-01

    Full Text Available Fungi of the genus Fusarium cause a variety of difficult-to-control diseases in different crops, including winter cereals and maize. Among the species of this genus, Fusarium graminearum deserves attention. The aim of this work was to develop a semi-selective medium to study this fungus. In several experiments, substrates for fungal growth were tested, including fungicides and antibiotics such as iprodione, nystatin and triadimenol, and the antibacterial agents streptomycin and neomycin sulfate. Five seed samples of wheat, barley, oat, black beans and soybeans were tested for F. graminearum detection by using the media Nash and Snyder agar (NSA), Segalin & Reis agar (SRA) and one-quarter dextrose agar (1/4 PDA; potato 50 g, dextrose 5 g and agar 20 g), either unsupplemented or supplemented with various concentrations of the antimicrobial agents cited above. The selected components and concentrations (g L⁻¹) of the proposed medium, Segalin & Reis agar (SRA-FG), were: iprodione 0.05; nystatin 0.025; triadimenol 0.015; neomycin sulfate 0.05; and streptomycin sulfate 0.3, added to ¼ potato sucrose agar. In the isolation from seeds of the cited plant species, the sensitivity of this medium was similar to that of NSA, but with the advantage of maintaining colony morphological aspects similar to those observed on potato-dextrose-agar medium.

  2. Biological sample collector

    Science.gov (United States)

    Murphy, Gloria A [French Camp, CA

    2010-09-07

    A biological sample collector is adapted to collect several biological samples in a plurality of filter wells. A biological sample collector may comprise a manifold plate for mounting a filter plate thereon, the filter plate having a plurality of filter wells therein; a hollow slider for engaging and positioning a tube that slides therethrough; and a slide case within which the hollow slider travels to allow the tube to be aligned with a selected filter well of the plurality of filter wells, wherein when the tube is aligned with the selected filter well, the tube is pushed through the hollow slider and into the selected filter well to sealingly engage the selected filter well and to allow the tube to deposit a biological sample onto a filter in the bottom of the selected filter well. The biological sample collector may be portable.

  3. Heterogeneous Causal Effects and Sample Selection Bias

    DEFF Research Database (Denmark)

    Breen, Richard; Choi, Seongsoo; Holm, Anders

    2015-01-01

    The role of education in the process of socioeconomic attainment is a topic of long-standing interest to sociologists and economists. Recently there has been growing interest not only in estimating the average causal effect of education on outcomes such as earnings, but also in estimating how causal effects might vary over individuals or groups. In this paper we point out one of the under-appreciated hazards of seeking to estimate heterogeneous causal effects: conventional selection bias (that is, selection on baseline differences) can easily be mistaken for heterogeneity of causal effects. This might lead us to find heterogeneous effects when the true effect is homogeneous, or to wrongly estimate not only the magnitude but also the sign of heterogeneous effects. We apply a test for the robustness of heterogeneous causal effects in the face of varying degrees and patterns of selection bias.

  4. 40 CFR Appendix A to Subpart F of... - Sampling Plans for Selective Enforcement Auditing of Small Nonroad Engines

    Science.gov (United States)

    2010-07-01

    ... Enforcement Auditing of Small Nonroad Engines A Appendix A to Subpart F of Part 90 Protection of Environment...-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Selective Enforcement Auditing Pt. 90, Subpt. F, App. A Appendix A to Subpart F of Part 90—Sampling Plans for Selective Enforcement Auditing of Small Nonroad Engines...

  5. Improvements of the Vis-NIRS Model in the Prediction of Soil Organic Matter Content Using Spectral Pretreatments, Sample Selection, and Wavelength Optimization

    Science.gov (United States)

    Lin, Z. D.; Wang, Y. B.; Wang, R. J.; Wang, L. S.; Lu, C. P.; Zhang, Z. Y.; Song, L. T.; Liu, Y.

    2017-07-01

    A total of 130 topsoil samples collected from Guoyang County, Anhui Province, China, were used to establish a Vis-NIR model for the prediction of organic matter content (OMC) in lime concretion black soils. Different spectral pretreatments were applied to minimize the irrelevant and useless information in the spectra and to increase the correlation of the spectra with the measured values. Subsequently, the Kennard-Stone (KS) method and sample set partitioning based on joint x-y distances (SPXY) were used to select the training set. The successive projection algorithm (SPA) and a genetic algorithm (GA) were then applied for wavelength optimization. Finally, the principal component regression (PCR) model was constructed, in which the optimal number of principal components was determined using the leave-one-out cross-validation technique. The results show that the combination of the Savitzky-Golay (SG) filter for smoothing and multiplicative scatter correction (MSC) can eliminate the effect of noise and baseline drift; the SPXY method is preferable to KS in the sample selection; and both the SPA and the GA can significantly reduce the number of wavelength variables and favorably increase the accuracy, especially the GA, which greatly improved the prediction accuracy of soil OMC with Rcc, RMSEP, and RPD up to 0.9316, 0.2142, and 2.3195, respectively.
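
    Of the two sample-selection methods mentioned, Kennard-Stone is the simpler to sketch: it greedily picks the samples that best span the spectral space. The Python fragment below is a minimal sketch of plain KS (SPXY follows the same maximin recursion with distances computed jointly on X and y); it is illustrative, not the paper's code.

      import numpy as np

      def kennard_stone(X, n_select):
          """Greedily pick n_select row indices of X that span the X-space."""
          d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
          chosen = list(np.unravel_index(np.argmax(d), d.shape))  # two most distant samples
          remaining = [i for i in range(len(X)) if i not in chosen]
          while len(chosen) < n_select:
              # each candidate's distance to its nearest already-chosen sample
              nearest = d[np.ix_(remaining, chosen)].min(axis=1)
              pick = remaining[int(np.argmax(nearest))]  # maximin rule
              chosen.append(pick)
              remaining.remove(pick)
          return np.array(chosen)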

  6. Evaluation of a low-cost liquid-based Pap test in rural El Salvador: a split-sample study.

    Science.gov (United States)

    Guo, Jin; Cremer, Miriam; Maza, Mauricio; Alfaro, Karla; Felix, Juan C

    2014-04-01

    We sought to test the diagnostic efficacy of a low-cost, liquid-based cervical cytology test that could be implemented in low-resource settings. A prospective, split-sample Pap study was performed in 595 women attending a cervical cancer screening clinic in rural El Salvador. Collected cervical samples were used to make a conventional Pap (cell sample directly to glass slide), whereas the residual material was used to make the liquid-based sample using the ClearPrep method. Selected residual samples from the liquid-based collection were tested for the presence of high-risk human papillomaviruses. Of 595 patients, 570 received the same diagnosis with the 2 methods (95.8% agreement). There were comparable numbers of unsatisfactory cases; however, ClearPrep significantly increased the detection of low-grade squamous intraepithelial lesions and decreased the diagnoses of atypical squamous cells of undetermined significance. ClearPrep identified an equivalent number of high-grade squamous intraepithelial lesion cases as the conventional Pap. High-risk human papillomavirus was identified in all cases of high-grade squamous intraepithelial lesion, adenocarcinoma in situ, and cancer, as well as in 78% of low-grade squamous intraepithelial lesions, out of the residual fluid of the ClearPrep vials. The low-cost ClearPrep Pap test demonstrated equivalent detection of squamous intraepithelial lesions when compared with the conventional Pap smear and demonstrated the potential for ancillary molecular testing. The test seems a viable option for implementation in low-resource settings.

  7. Automatic Samples Selection Using Histogram of Oriented Gradients (HOG Feature Distance

    Directory of Open Access Journals (Sweden)

    Inzar Salfikar

    2018-01-01

    Full Text Available Finding victims at a disaster site is the primary goal of Search-and-Rescue (SAR) operations. Many technologies for searching for disaster victims through aerial imaging have been created, but most of them have difficulty detecting victims at tsunami disaster sites, where victims and background look similar. This research collects post-tsunami aerial imaging data from the internet to build a dataset and model for detecting tsunami disaster victims. The dataset is built from the feature distances between samples, computed with the Histogram-of-Oriented-Gradients (HOG) method: the samples whose HOG features lie at the longest distance from the existing samples are taken as candidates for the dataset, and the candidates are then classified manually into victim (positive) and non-victim (negative) samples. The dataset of tsunami disaster victims was then analyzed using Leave-One-Out (LOO) cross-validation with the Support-Vector-Machine (SVM) method. The experimental results on two test photos show 61.70% precision, 77.60% accuracy, 74.36% recall and an f-measure of 67.44% in distinguishing victim (positive) and non-victim (negative) samples.
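
    The farthest-distance selection step can be sketched compactly. The fragment below is a hedged Python approximation using scikit-image's HOG implementation: it greedily keeps the patch whose HOG features lie farthest from those already selected, and the selected candidates would then be labeled manually as victim or non-victim. The HOG parameters and the greedy seed are assumptions, not the paper's exact procedure.

      import numpy as np
      from skimage.feature import hog

      def hog_features(patches):
          """patches: iterable of equal-size 2-D grayscale arrays."""
          return np.array([hog(p, pixels_per_cell=(16, 16)) for p in patches])

      def select_diverse(patches, n_select):
          """Greedy farthest-first selection on HOG feature distances."""
          feats = hog_features(patches)
          chosen = [0]  # seed with the first patch
          while len(chosen) < n_select:
              d = np.linalg.norm(feats[:, None, :] - feats[chosen][None, :, :], axis=-1)
              nearest = d.min(axis=1)  # distance to the nearest chosen patch
              nearest[chosen] = -np.inf  # never re-pick a chosen patch
              chosen.append(int(np.argmax(nearest)))  # farthest candidate wins
          return chosen  # indices of candidate samples to label manually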

  8. Selective extraction of emerging contaminants from water samples by dispersive liquid-liquid microextraction using functionalized ionic liquids.

    Science.gov (United States)

    Yao, Cong; Li, Tianhao; Twu, Pamela; Pitner, William R; Anderson, Jared L

    2011-03-25

    Functionalized ionic liquids containing the tris(pentafluoroethyl)trifluorophosphate (FAP) anion were used as extraction solvents in dispersive liquid-liquid microextraction (DLLME) for the extraction of 14 emerging contaminants from water samples. The extraction efficiencies and selectivities were compared to those of an in situ IL DLLME method which uses an in situ metathesis reaction to exchange 1-butyl-3-methylimidazolium chloride (BMIM-Cl) to 1-butyl-3-methylimidazolium bis[(trifluoromethyl)sulfonyl]imide (BMIM-NTf₂). Compounds containing tertiary amine functionality were extracted with high selectivity and sensitivity by the 1-(6-amino-hexyl)-1-methylpyrrolidinium tris(pentafluoroethyl)trifluorophosphate (HNH₂MPL-FAP) IL compared to other FAP-based ILs and the BMIM-NTf₂ IL. On the other hand, polar or acidic compounds without amine groups exhibited higher enrichment factors using the BMIM-NTf₂ IL. The detection limits for the studied analytes varied from 0.1 to 55.1 μg/L using the traditional IL DLLME method with the HNH₂MPL-FAP IL as extraction solvent, and from 0.1 to 55.8 μg/L using the in situ IL DLLME method with BMIM-Cl + LiNTf₂ as extraction solvent. A 93-fold decrease in the detection limit of caffeine was observed when using the HNH₂MPL-FAP IL compared to that obtained using the in situ IL DLLME method. Real water samples including tap water and creek water were analyzed with both IL DLLME methods and yielded recoveries ranging from 91% to 110%. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Efficacy of liquid-based cytology versus conventional smears in FNA samples

    Directory of Open Access Journals (Sweden)

    Kalpalata Tripathy

    2015-01-01

    Conclusion: LBC performed on FNA samples can be a simple and valuable technique. Only in a few selected cases, where a background factor is an essential diagnostic clue, is a combination of both CP and TP necessary.

  10. Improved selectivity towards NO₂ of phthalocyanine-based chemosensors by means of original indigo/nanocarbons hybrid material.

    Science.gov (United States)

    Brunet, J; Pauly, A; Dubois, M; Rodriguez-Mendez, M L; Ndiaye, A L; Varenne, C; Guérin, K

    2014-09-01

    A new and original gas sensor system dedicated to the selective monitoring of nitrogen dioxide in air, in the presence of ozone, has been successfully achieved. Because of their high sensitivity and partial selectivity towards oxidizing pollutants (nitrogen dioxide and ozone), copper phthalocyanine-based chemoresistors are relevant for this task. The selectivity towards nitrogen dioxide results from the implementation of a highly efficient and selective ozone filter upstream of the sensing device. For this purpose, a powdered indigo/nanocarbons hybrid material has been developed and investigated. While the nanocarbonaceous material acts as a highly permeable matrix with a high specific surface area, the immobilized indigo nanoparticles undergo an ozonolysis reaction with ozone, leading to the selective removal of this analyte from the air sample. The filtering yields towards each gas have been experimentally quantified and establish the complete removal of ozone while leaving the concentration of nitrogen dioxide unchanged. Long-term gas exposures reveal the higher durability of the hybrid material compared to nanocarbons and indigo separately. Synthesis, characterization by many complementary techniques, and tests of the hybrid filters are detailed. Results on the sensor system, including CuPc-based chemoresistors and the indigo/carbon nanotube hybrid material as an in-line filter, are illustrated, and sensing performance is discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. The co-feature ratio, a novel method for the measurement of chromatographic and signal selectivity in LC-MS-based metabolomics

    International Nuclear Information System (INIS)

    Elmsjö, Albert; Haglöf, Jakob; Engskog, Mikael K.R.; Nestor, Marika; Arvidsson, Torbjörn; Pettersson, Curt

    2017-01-01

    Evaluation of analytical procedures, especially in regards to measuring chromatographic and signal selectivity, is highly challenging in untargeted metabolomics. The aim of this study was to suggest a new straightforward approach for a systematic examination of chromatographic and signal selectivity in LC-MS-based metabolomics. By calculating the ratio between each feature and its co-eluting features (the co-features), a measurement of the chromatographic selectivity (i.e. extent of co-elution) as well as the signal selectivity (e.g. amount of adduct formation) of each feature could be acquired, the co-feature ratio. This approach was used to examine possible differences in chromatographic and signal selectivity present in samples exposed to three different sample preparation procedures. The capability of the co-feature ratio was evaluated both in a classical targeted setting using isotope labelled standards as well as without standards in an untargeted setting. For the targeted analysis, several metabolites showed a skewed quantitative signal due to poor chromatographic selectivity and/or poor signal selectivity. Moreover, evaluation of the untargeted approach through multivariate analysis of the co-feature ratios demonstrated the possibility to screen for metabolites displaying poor chromatographic and/or signal selectivity characteristics. We conclude that the co-feature ratio can be a useful tool in the development and evaluation of analytical procedures in LC-MS-based metabolomics investigations. Increased selectivity through proper choice of analytical procedures may decrease the false positive and false negative discovery rate and thereby increase the validity of any metabolomic investigation. - Highlights: • The co-feature ratio (CFR) is introduced. • CFR measures chromatographic and signal selectivity of a feature. • CFR can be used for evaluating experimental procedures in metabolomics. • CFR can aid in locating features with poor selectivity.
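
    The ratio itself is straightforward to compute from a feature table. The sketch below is a minimal Python illustration under stated assumptions (one sample, a fixed retention-time window, summed co-eluting intensity as the denominator); the paper's exact definition and window handling may differ.

      import numpy as np

      def co_feature_ratio(rt, intensity, rt_window=0.1):
          """rt, intensity: 1-D arrays over the detected features of one sample."""
          ratios = np.empty(len(rt))
          for i in range(len(rt)):
              co = np.abs(rt - rt[i]) <= rt_window  # co-eluting features
              co[i] = False  # exclude the feature itself
              co_sum = intensity[co].sum()
              # high ratio: the feature dominates its elution window (good
              # chromatographic selectivity); low ratio: heavy co-elution
              ratios[i] = intensity[i] / co_sum if co_sum > 0 else np.inf
          return ratios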

  12. The co-feature ratio, a novel method for the measurement of chromatographic and signal selectivity in LC-MS-based metabolomics

    Energy Technology Data Exchange (ETDEWEB)

    Elmsjö, Albert, E-mail: Albert.Elmsjo@farmkemi.uu.se [Department of Medicinal Chemistry, Division of Analytical Pharmaceutical Chemistry, Uppsala University (Sweden); Haglöf, Jakob; Engskog, Mikael K.R. [Department of Medicinal Chemistry, Division of Analytical Pharmaceutical Chemistry, Uppsala University (Sweden); Nestor, Marika [Department of Immunology, Genetics and Pathology, Uppsala University (Sweden); Arvidsson, Torbjörn [Department of Medicinal Chemistry, Division of Analytical Pharmaceutical Chemistry, Uppsala University (Sweden); Medical Product Agency, Uppsala (Sweden); Pettersson, Curt [Department of Medicinal Chemistry, Division of Analytical Pharmaceutical Chemistry, Uppsala University (Sweden)

    2017-03-01

    Evaluation of analytical procedures, especially in regards to measuring chromatographic and signal selectivity, is highly challenging in untargeted metabolomics. The aim of this study was to suggest a new straightforward approach for a systematic examination of chromatographic and signal selectivity in LC-MS-based metabolomics. By calculating the ratio between each feature and its co-eluting features (the co-features), a measurement of the chromatographic selectivity (i.e. extent of co-elution) as well as the signal selectivity (e.g. amount of adduct formation) of each feature could be acquired, the co-feature ratio. This approach was used to examine possible differences in chromatographic and signal selectivity present in samples exposed to three different sample preparation procedures. The capability of the co-feature ratio was evaluated both in a classical targeted setting using isotope labelled standards as well as without standards in an untargeted setting. For the targeted analysis, several metabolites showed a skewed quantitative signal due to poor chromatographic selectivity and/or poor signal selectivity. Moreover, evaluation of the untargeted approach through multivariate analysis of the co-feature ratios demonstrated the possibility to screen for metabolites displaying poor chromatographic and/or signal selectivity characteristics. We conclude that the co-feature ratio can be a useful tool in the development and evaluation of analytical procedures in LC-MS-based metabolomics investigations. Increased selectivity through proper choice of analytical procedures may decrease the false positive and false negative discovery rate and thereby increase the validity of any metabolomic investigation. - Highlights: • The co-feature ratio (CFR) is introduced. • CFR measures chromatographic and signal selectivity of a feature. • CFR can be used for evaluating experimental procedures in metabolomics. • CFR can aid in locating features with poor selectivity.

  13. MO-FG-CAMPUS-JeP2-01: 4D-MRI with 3D Radial Sampling and Self-Gating-Based K-Space Sorting: Image Quality Improvement by Slab-Selective Excitation

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Z; Pang, J; Tuli, R; Fraass, B; Fan, Z [Cedars Sinai Medical Center, Los Angeles, CA (United States); Yang, W [Cedars-Sinai Medical Center, Los Angeles, CA (United States); Bi, X [Siemens Healthcare, Los Angeles, CA (United States); Hakimian, B [Cedars Sinai Medical Center, Los Angeles CA (United States); Li, D [Cedars Sinai Medical Center, Los Angeles, California (United States)

    2016-06-15

    Purpose: A recent 4D MRI technique based on 3D radial sampling and self-gating-based k-space sorting has shown promising results in characterizing respiratory motion. However, due to continuous acquisition and potentially drastic k-space undersampling, the resultant images could suffer from low blood-to-tissue contrast and streaking artifacts. In this study, 3D radial sampling with slab-selective excitation (SS) was proposed in an attempt to enhance blood-to-tissue contrast by exploiting the in-flow effect and to suppress the excess signal from the peripheral structures, particularly in the superior-inferior direction. The feasibility of improving image quality by using this approach was investigated through a comparison with the previously developed non-selective excitation (NS) approach. Methods: The two excitation approaches, SS and NS, were compared in 5 cancer patients (1 lung, 1 liver, 2 pancreas and 1 esophagus) at 3 Tesla. Image artifact was assessed in all patients on a 4-point scale (0: poor; 3: excellent). The signal-to-noise ratio (SNR) of the blood vessel (aorta) at the center of the field-of-view and its nearby tissue were measured in 3 of the 5 patients (1 liver, 2 pancreas), and the blood-to-tissue contrast-to-noise ratio (CNR) was then determined. Results: Compared with NS, the image quality of SS was visually improved, with overall higher signal in all patients (2.6±0.55 vs. 3.4±0.55). SS showed an approximately 2-fold increase of SNR in the blood (aorta: 16.39±1.95 vs. 32.19±7.93) and a slight increase in the surrounding tissue (liver/pancreas: 16.91±1.82 vs. 22.31±3.03). As a result, the blood-to-tissue CNR was dramatically higher with the SS method (1.20±1.20 vs. 9.87±6.67). Conclusion: The proposed 3D radial sampling with slab-selective excitation allows for reduced image artifact and improved blood SNR and blood-to-tissue CNR. The success of this technique could potentially benefit patients with cancerous tumors that have invaded the surrounding blood vessels where radiation

  14. MO-FG-CAMPUS-JeP2-01: 4D-MRI with 3D Radial Sampling and Self-Gating-Based K-Space Sorting: Image Quality Improvement by Slab-Selective Excitation

    International Nuclear Information System (INIS)

    Deng, Z; Pang, J; Tuli, R; Fraass, B; Fan, Z; Yang, W; Bi, X; Hakimian, B; Li, D

    2016-01-01

    Purpose: A recent 4D MRI technique based on 3D radial sampling and self-gating-based k-space sorting has shown promising results in characterizing respiratory motion. However, due to continuous acquisition and potentially drastic k-space undersampling, the resultant images could suffer from low blood-to-tissue contrast and streaking artifacts. In this study, 3D radial sampling with slab-selective excitation (SS) was proposed in an attempt to enhance blood-to-tissue contrast by exploiting the in-flow effect and to suppress the excess signal from the peripheral structures, particularly in the superior-inferior direction. The feasibility of improving image quality by using this approach was investigated through a comparison with the previously developed non-selective excitation (NS) approach. Methods: The two excitation approaches, SS and NS, were compared in 5 cancer patients (1 lung, 1 liver, 2 pancreas and 1 esophagus) at 3 Tesla. Image artifact was assessed in all patients on a 4-point scale (0: poor; 3: excellent). The signal-to-noise ratio (SNR) of the blood vessel (aorta) at the center of the field-of-view and its nearby tissue were measured in 3 of the 5 patients (1 liver, 2 pancreas), and the blood-to-tissue contrast-to-noise ratio (CNR) was then determined. Results: Compared with NS, the image quality of SS was visually improved, with overall higher signal in all patients (2.6±0.55 vs. 3.4±0.55). SS showed an approximately 2-fold increase of SNR in the blood (aorta: 16.39±1.95 vs. 32.19±7.93) and a slight increase in the surrounding tissue (liver/pancreas: 16.91±1.82 vs. 22.31±3.03). As a result, the blood-to-tissue CNR was dramatically higher with the SS method (1.20±1.20 vs. 9.87±6.67). Conclusion: The proposed 3D radial sampling with slab-selective excitation allows for reduced image artifact and improved blood SNR and blood-to-tissue CNR. The success of this technique could potentially benefit patients with cancerous tumors that have invaded the surrounding blood vessels where radiation

  15. A novel PMT test system based on waveform sampling

    Science.gov (United States)

    Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.

    2018-01-01

    Compared with the traditional test system based on a QDC, a TDC, and a scaler, a test system based on waveform sampling was constructed for signal sampling of the 8" R5912 and the 20" R12860 Hamamatsu PMTs at signal levels ranging from single to multiple photoelectrons. In order to achieve high throughput and to reduce the dead time in data processing, data acquisition software based on LabVIEW was developed that runs with a parallel mechanism. The analysis algorithm is realized in LabVIEW, and the spectra of charge, amplitude, signal width and rise time are analyzed offline. The results from the Charge-to-Digital Converter, Time-to-Digital Converter and waveform sampling are compared in detail.
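
    Although the published analysis runs in LabVIEW, the per-pulse quantities named above (charge, amplitude, signal width, rise time) are easy to state as code. The Python fragment below is a hedged stand-in for such offline analysis; the sampling interval, baseline window, and threshold fractions are assumptions of this sketch.

      import numpy as np

      def analyze_pulse(samples, dt_ns=1.0, baseline_n=50):
          """samples: 1-D ADC waveform with a single positive-going pulse."""
          wf = samples - samples[:baseline_n].mean()  # baseline subtraction
          peak = int(np.argmax(wf))
          amplitude = wf[peak]
          charge = wf.sum() * dt_ns  # integral, proportional to charge
          half = np.flatnonzero(wf >= 0.5 * amplitude)  # FWHM-style signal width
          width_ns = (half[-1] - half[0]) * dt_ns
          lead = wf[:peak + 1]  # leading edge, for the 10%-90% rise time
          t10 = np.flatnonzero(lead >= 0.1 * amplitude)[0]
          t90 = np.flatnonzero(lead >= 0.9 * amplitude)[0]
          return dict(amplitude=amplitude, charge=charge,
                      width_ns=width_ns, rise_ns=(t90 - t10) * dt_ns)

    Histogramming these per-pulse values over many triggers yields the charge, amplitude, width, and rise-time spectra discussed in the record.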

  16. ACTIVE LEARNING TO OVERCOME SAMPLE SELECTION BIAS: APPLICATION TO PHOTOMETRIC VARIABLE STAR CLASSIFICATION

    Energy Technology Data Exchange (ETDEWEB)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Berian James, J. [Astronomy Department, University of California, Berkeley, CA 94720-7450 (United States); Brink, Henrik [Dark Cosmology Centre, Juliane Maries Vej 30, 2100 Copenhagen O (Denmark); Long, James P.; Rice, John, E-mail: jwrichar@stat.berkeley.edu [Statistics Department, University of California, Berkeley, CA 94720-7450 (United States)

    2012-01-10

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.
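
    A minimal pool-based loop conveys the mechanics of AL. The Python sketch below uses uncertainty (margin) sampling as a cheap stand-in for the paper's criterion of querying the objects that would most improve test-set predictions; the classifier choice, query budget, and oracle interface are assumptions of this sketch.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def active_learning_loop(X_train, y_train, X_pool, oracle, n_queries=20):
          """oracle(i) returns a manual label for pool index i."""
          clf = RandomForestClassifier(n_estimators=200, random_state=0)
          pool_idx = np.arange(len(X_pool))
          for _ in range(n_queries):
              clf.fit(X_train, y_train)
              proba = np.sort(clf.predict_proba(X_pool[pool_idx]), axis=1)
              margin = proba[:, -1] - proba[:, -2]  # top-two class margin
              q = pool_idx[np.argmin(margin)]  # most ambiguous object
              X_train = np.vstack([X_train, X_pool[q]])  # add the queried object
              y_train = np.append(y_train, oracle(q))  # manual follow-up label
              pool_idx = pool_idx[pool_idx != q]
          return clf.fit(X_train, y_train)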

  17. ACTIVE LEARNING TO OVERCOME SAMPLE SELECTION BIAS: APPLICATION TO PHOTOMETRIC VARIABLE STAR CLASSIFICATION

    International Nuclear Information System (INIS)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Berian James, J.; Brink, Henrik; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  18. Active Learning to Overcome Sample Selection Bias: Application to Photometric Variable Star Classification

    Science.gov (United States)

    Richards, Joseph W.; Starr, Dan L.; Brink, Henrik; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; James, J. Berian; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  19. Alumina physically loaded by thiosemicarbazide for selective preconcentration of mercury(II) ion from natural water samples

    International Nuclear Information System (INIS)

    Ahmed, Salwa A.

    2008-01-01

    The multifunctional ligand, thiosemicarbazide, was physically loaded on neutral alumina. The produced alumina-modified solid phase (SP) extractor, named alumina-modified thiosemicarbazide (AM-TSC), exhibited high thermal and medium stability. The surface coverage of this new phase was determined by the thermal desorption method to be 0.437 ± 0.1 mmol g⁻¹. The selectivity of the AM-TSC phase towards the uptake of nine different metal ions was checked using a simple, fast and direct batch equilibration technique. AM-TSC was found to have the highest capacity in the selective extraction of Hg(II) from aqueous solutions over the whole pH range used (1.0-7.0), compared to the other eight tested metal ions. The Hg(II) uptake was 1.82 mmol g⁻¹ (distribution coefficient log K_d = 5.658) at pH 1.0 or 2.0 and 1.78, 1.73, 1.48, 1.28 and 1.28 mmol g⁻¹ (log K_d = 4.607, 4.265, 3.634, 3.372 and 3.372) at pH 3.0, 4.0, 5.0, 6.0 and 7.0, respectively. On the other hand, the metal ions Ca(II), Fe(III), Co(II), Ni(II), Cu(II), Zn(II), Cd(II) and Pb(II) showed low uptake values in the range 0.009-0.720 mmol g⁻¹ (log K_d < 3.0) at their optimum pH values. A mechanism was suggested to explain the unique uptake of Hg(II) ions based on their binding as the neutral and chloroanionic species that predominate at pH values ≤3.0 in a medium rich in chloride ions. Application of the new phase for the preconcentration of ultratrace amounts of Hg(II) ions spiked into natural water samples: doubly distilled water (DDW), drinking tap water (DTW) and Nile river water (NRW), was studied using cold vapor atomic absorption spectroscopy (CV-AAS). The high recovery values obtained using AM-TSC (98.5 ± 0.5, 98.0 ± 0.5 and 103.0 ± 1.0 for DDW, DTW and NRW samples, respectively), based on an excellent enrichment factor of 1000, along with a good precision (R.S.D. 0.51-0.97%, n = 3), demonstrate the accuracy and validity of the new modified alumina sorbent for preconcentrating ultratrace amounts of Hg(II) with no

  20. Selective plane illumination microscopy (SPIM) with time-domain fluorescence lifetime imaging microscopy (FLIM) for volumetric measurement of cleared mouse brain samples

    Science.gov (United States)

    Funane, Tsukasa; Hou, Steven S.; Zoltowska, Katarzyna Marta; van Veluw, Susanne J.; Berezovska, Oksana; Kumar, Anand T. N.; Bacskai, Brian J.

    2018-05-01

    We have developed an imaging technique which combines selective plane illumination microscopy with time-domain fluorescence lifetime imaging microscopy (SPIM-FLIM) for three-dimensional volumetric imaging of cleared mouse brains with micro- to mesoscopic resolution. The main features of the microscope include a wavelength-adjustable pulsed near-infrared Ti:sapphire laser, a BiBO frequency-doubling photonic crystal, a liquid chamber, an electrically focus-tunable lens, a cuvette-based sample holder, and an air (dry) objective lens. The performance of the system was evaluated with a lifetime reference dye and micro-bead phantom measurements. Intensity and lifetime maps of three-dimensional human embryonic kidney (HEK) cell culture samples and cleared mouse brain samples expressing green fluorescent protein (GFP) (donor only) and green and red fluorescent protein [positive Förster (fluorescence) resonance energy transfer] were acquired. The results show that the SPIM-FLIM system can be used for sample sizes ranging from single cells to whole mouse organs and can serve as a powerful tool for medical and biological research.

  1. Norm based Threshold Selection for Fault Detectors

    DEFF Research Database (Denmark)

    Rank, Mike Lind; Niemann, Henrik

    1998-01-01

    The design of fault detectors for fault detection and isolation (FDI) in dynamic systems is considered from a norm-based point of view. An analysis of norm-based threshold selection is given based on different formulations of FDI problems. Both the nominal and the uncertain FDI problems are considered. Based on this analysis, a performance index based on norms of the involved transfer functions is given. This performance index also allows us to optimize the structure of the fault detection filter directly.

  2. Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling

    Directory of Open Access Journals (Sweden)

    Saeed Mian Qaisar

    2009-01-01

    Full Text Available Recent advances in mobile systems and sensor networks demand more and more processing resources. In order to maintain system autonomy, energy saving has become one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal focus on improving embedded system design and battery technology, but very few studies aim to exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. This is done by completely rethinking the processing chain, adopting a non-conventional sampling scheme and adaptive-rate filtering. The proposed approach, based on the level-crossing sampling scheme (LCSS), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. The principle is to intelligently exploit the local characteristics of the signal (which are usually never considered) to filter only the relevant signal parts, employing filters of the appropriate order. This idea leads to a drastic gain in computational efficiency, and hence in processing power, compared to classical techniques.
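
    A minimal sketch of the level-crossing idea follows: samples are taken only where the signal crosses a fixed set of amplitude levels, so quiescent stretches produce almost no samples while active stretches are sampled densely. The dense reference grid and linear interpolation of crossing instants are simplifying assumptions (hardware LCSS uses comparators and a timer).

```python
import numpy as np

def level_crossing_sample(t, x, levels):
    """Return (times, values) where x crosses any of the given levels."""
    times, values = [], []
    for lv in levels:
        s = x - lv
        idx = np.where(np.sign(s[:-1]) * np.sign(s[1:]) < 0)[0]  # sign changes
        # linear interpolation of each crossing instant
        tc = t[idx] + (t[idx + 1] - t[idx]) * s[idx] / (s[idx] - s[idx + 1])
        times.append(tc)
        values.append(np.full_like(tc, lv))
    times = np.concatenate(times)
    order = np.argsort(times)
    return times[order], np.concatenate(values)[order]

t = np.linspace(0, 1, 10_000)
x = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)   # decaying, bursty test signal
ts, xs = level_crossing_sample(t, x, np.linspace(-1, 1, 9))
print(len(ts), "samples taken instead of", len(t))
```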

  3. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes less than a second for one run, or by a partial differential equation-based model with runtimes of up to several hours or even days. Classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between the bias of a model and its complexity. In practice, however, the runtime of models is another factor relevant to the model weights. Hence, we believe that it should be included, leading to an overall trade-off between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue that, under time constraints, more expensive models can be sampled much less than faster models (in direct proportion to their runtime). The evidence computed in favor of a more expensive model is therefore statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of the model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
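
    The sketch below illustrates the core idea under simplifying assumptions: BME is estimated by brute-force Monte Carlo as the mean likelihood over prior samples, and a cheap bootstrap of that mean quantifies the statistical error, which is necessarily larger for an expensive model that could afford only a few runs within the time budget. The likelihood values are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_bme(likelihoods, n_boot=2000):
    """Monte Carlo BME estimate and its bootstrap standard error."""
    bme = likelihoods.mean()
    boots = np.array([rng.choice(likelihoods, size=likelihoods.size).mean()
                      for _ in range(n_boot)])
    return bme, boots.std()

# hypothetical likelihood values from prior-predictive runs of two models:
fast_model = rng.lognormal(mean=-2.0, sigma=1.0, size=5000)  # many cheap runs
slow_model = rng.lognormal(mean=-1.8, sigma=1.0, size=50)    # few costly runs

for name, lik in [("fast", fast_model), ("slow", slow_model)]:
    bme, err = bootstrap_bme(lik)
    print(f"{name}: BME = {bme:.4f} +/- {err:.4f}")
```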

  4. Vis-NIR spectrometric determination of Brix and sucrose in sugar production samples using kernel partial least squares with interval selection based on the successive projections algorithm.

    Science.gov (United States)

    de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino

    2018-05-01

    This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.

  5. Control charts for location based on different sampling schemes

    NARCIS (Netherlands)

    Mehmood, R.; Riaz, M.; Does, R.J.M.M.

    2013-01-01

    Control charts are the most important statistical process control tool for monitoring variations in a process. A number of articles are available in the literature for the X̄ control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set

  6. Individual and pen-based oral fluid sampling: A welfare-friendly sampling method for group-housed gestating sows.

    Science.gov (United States)

    Pol, Françoise; Dorenlor, Virginie; Eono, Florent; Eudier, Solveig; Eveno, Eric; Liégard-Vanhecke, Dorine; Rose, Nicolas; Fablet, Christelle

    2017-11-01

    The aims of this study were to assess the feasibility of individual and pen-based oral fluid sampling (OFS) in 35 pig herds with group-housed sows, compare these methods to blood sampling, and assess the factors influencing the success of sampling. Individual samples were collected from at least 30 sows per herd. Pen-based OFS was performed using devices placed in at least three pens for 45 min. Information related to the farm, the sows, and their living conditions was collected. Factors significantly associated with the duration of sampling and the chewing behaviour of sows were identified by logistic regression. Individual OFS took 2 min 42 s on average; the type of floor, swab size, and operator were associated with a sampling time >2 min. Pen-based OFS was obtained from 112 devices (62.2%). The type of floor, parity, pen-level activity, and type of feeding were associated with chewing behaviour. Pen activity was associated with the latency to interact with the device. The type of floor, gestation stage, parity, group size, and latency to interact with the device were associated with a chewing time >10 min. After 15, 30 and 45 min of pen-based OFS, 48%, 60% and 65% of the sows were lying down, respectively. The time spent after the beginning of sampling, genetic type, and time elapsed since the last meal were associated with 50% of the sows lying down at one time point. The mean time to blood sample the sows was 1 min 16 s, and 2 min 52 s if the number of operators required was considered in the sampling time estimation. The genetic type, parity, and type of floor were significantly associated with a sampling time higher than 1 min 30 s. This study shows that individual OFS is easy to perform in group-housed sows by a single operator, even though straw-bedded animals take longer to sample than animals housed on slatted floors, and suggests some guidelines to optimise pen-based OFS success. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Review of progresses on clinical applications of ion selective electrodes for electrolytic ion tests: from conventional ISEs to graphene-based ISEs

    Directory of Open Access Journals (Sweden)

    Rongguo Yan

    2016-10-01

    Full Text Available There exist several positively and negatively charged electrolytes or ions in human blood, urine, and other body fluids. Tests that measure the concentration of these ions in clinics are performed using a more affordable, portable, and disposable potentiometric sensing method with small sample volumes, which requires the use of ion-selective electrodes (ISEs) and reference electrodes. This review presents a descriptive summary of progressive developments and applications of ion-selective electrodes in medical laboratory electrolytic ion tests, from conventional ISEs and solid-contact ISEs to carbon nanotube-based and graphene-based ISEs.

  8. Selection of 3013 Containers for Field Surveillance

    International Nuclear Information System (INIS)

    Larry Peppers; Elizabeth Kelly; James McClard; Gary Friday; Theodore Venetz; Jerry Stakebade

    2007-01-01

    This report revises and combines three earlier reports dealing with the binning, statistical sampling, and sample selection of 3013 containers for field surveillance. It includes changes to the binning specification resulting from completion of the Savannah River Site packaging campaign and new information from the shelf-life program and field surveillance activities. The revised bin assignments result in changes to the random sample specification. These changes are necessary to meet the statistical requirements of the surveillance program. This report will be reviewed regularly and revised as needed. Section 1 of this report summarizes the results of an extensive effort to assign all of the current and projected 3013 containers in the Department of Energy (DOE) inventory to one of three bins (Innocuous, Pressure and Corrosion, or Pressure) based on potential failure mechanisms. Grouping containers into bins provides a framework to make a statistical selection of individual containers from the entire population for destructive and nondestructive field surveillance. The binning process consisted of three main steps. First, the packaged containers were binned using information in the Integrated Surveillance Program database and a decision tree. The second task was to assign those containers that could not be binned using the decision tree to a specific bin using container-by-container engineering review. The final task was to evaluate containers not yet packaged and assign them to bins using process knowledge. The technical basis for the decisions made during the binning process is included in Section 1. A composite decision tree and a summary table show all of the containers projected to be in the DOE inventory at the conclusion of packaging at all sites. Decision trees that provide an overview of the binning process and logic are included for each site. Section 2 of this report describes the approach to the statistical selection of containers for surveillance and

  9. An empirical comparison of isolate-based and sample-based definitions of antimicrobial resistance and their effect on estimates of prevalence.

    Science.gov (United States)

    Humphry, R W; Evans, J; Webster, C; Tongue, S C; Innocent, G T; Gunn, G J

    2018-02-01

    Antimicrobial resistance is primarily a problem in human medicine but there are unquantified links of transmission in both directions between animal and human populations. Quantitative assessment of the costs and benefits of reduced antimicrobial usage in livestock requires robust quantification of transmission of resistance between animals, the environment and the human population. This in turn requires appropriate measurement of resistance. To tackle this we selected two different methods for determining whether a sample is resistant - one based on screening a sample, the other on testing individual isolates. Our overall objective was to explore the differences arising from the choice of measurement. A literature search demonstrated the widespread use of testing of individual isolates. The first aim of this study was to compare, quantitatively, sample-level and isolate-level screening. Cattle or sheep faecal samples (n=41) submitted for routine parasitology were tested for antimicrobial resistance in two ways: (1) "streak" direct culture onto plates containing the antimicrobial of interest; (2) determination of the minimum inhibitory concentration (MIC) of 8-10 isolates per sample, compared to published MIC thresholds. Two antibiotics (ampicillin and nalidixic acid) were tested. With ampicillin, direct culture resulted in more than double the number of resistant samples found by the MIC method based on eight individual isolates. The second aim of this study was to demonstrate the utility of the observed relationship between these two measures of antimicrobial resistance to re-estimate the prevalence of antimicrobial resistance from a previous study, in which we had used "streak" cultures. Bootstrap methods were used to estimate the proportion of samples that would have tested resistant in the historic study, had we used the isolate-based MIC method instead. Our bootstrap results indicate that our estimates of prevalence of antimicrobial resistance would have been
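
    The sketch below shows one way such a bootstrap re-estimation could look; it is an illustration, not the authors' exact procedure, and all counts are hypothetical placeholders. Paired calibration results (streak vs. isolate-based MIC) are resampled to propagate uncertainty into a re-estimated prevalence for a historic streak-based study.

```python
import numpy as np

rng = np.random.default_rng(2)
# paired calibration results per sample: (streak_resistant, mic_resistant);
# counts are hypothetical but mirror "streak finds over double" the positives
pairs = np.array([(1, 1)] * 10 + [(1, 0)] * 12 + [(0, 0)] * 19)

historic_n, historic_streak_positive = 200, 90   # hypothetical earlier study

est = []
for _ in range(5000):
    boot = pairs[rng.integers(0, len(pairs), len(pairs))]
    p_mic_given_streak = boot[boot[:, 0] == 1][:, 1].mean()
    p_mic_given_clear = boot[boot[:, 0] == 0][:, 1].mean()
    est.append((historic_streak_positive * p_mic_given_streak +
                (historic_n - historic_streak_positive) * p_mic_given_clear)
               / historic_n)

# re-estimated prevalence under the isolate-based definition, with a 95% CI
print(np.percentile(est, [2.5, 50, 97.5]))
```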

  10. [Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].

    Science.gov (United States)

    Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong

    2013-03-01

    Model selection for support vector machines (SVMs), involving the selection of kernel and margin parameter values, is usually time-consuming and greatly impacts the training efficiency of the SVM model and the final classification accuracy of SVM hyperspectral remote sensing image classifiers. Firstly, based on combinatorial optimization theory and the cross-validation method, an artificial immune clonal selection algorithm is introduced for the optimal selection of the SVM kernel parameter and margin parameter C (CSSVM), to improve the training efficiency of the SVM model. Then an experiment classifying an AVIRIS image of the Indian Pines site, USA, was performed to test the novel CSSVM against a traditional SVM classifier with the general grid-search cross-validation method (GSSVM). Evaluation indices, including SVM model training time, classification overall accuracy (OA) and Kappa index, were analyzed quantitatively for both CSSVM and GSSVM. It is demonstrated that the OA of CSSVM on the test samples and on the whole image is 85.1% and 81.58%, respectively, with differences from GSSVM within 0.08%; the Kappa indices reach 0.8213 and 0.7728, with differences from GSSVM within 0.001; and the ratio of model training time of CSSVM to GSSVM is between 1/6 and 1/10. Therefore, CSSVM is a fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.
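
    A minimal sketch of clonal-selection hyperparameter search for an RBF-SVM follows, with cross-validated accuracy as the affinity measure. It follows the generic CLONALG recipe (clone the best candidates, hypermutate inversely to affinity, reselect) rather than necessarily the exact variant of the paper, and it uses synthetic data in place of AVIRIS pixels.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)

def affinity(log_gamma, log_C):
    # cross-validated accuracy of an RBF-SVM with these hyperparameters
    svc = SVC(kernel="rbf", gamma=10.0 ** log_gamma, C=10.0 ** log_C)
    return cross_val_score(svc, X, y, cv=5).mean()

pop = rng.uniform([-4.0, -2.0], [1.0, 4.0], size=(8, 2))  # (log gamma, log C)
for gen in range(10):
    fit = np.array([affinity(*p) for p in pop])
    order = np.argsort(fit)[::-1]
    pop = pop[order]
    clones = []
    for rank, parent in enumerate(pop[:4]):        # clone the best candidates
        n_clones = 4 - rank                        # better rank -> more clones
        sigma = 0.5 * (rank + 1) / 4               # better rank -> less mutation
        clones.append(parent + rng.normal(0, sigma, size=(n_clones, 2)))
    pool = np.vstack([pop] + clones)
    pool_fit = np.array([affinity(*p) for p in pool])
    pop = pool[np.argsort(pool_fit)[::-1][:8]]     # select next generation

best = pop[0]
print("gamma = %.4g  C = %.4g" % (10 ** best[0], 10 ** best[1]))
```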

  11. Polymer platforms for selective detection of cocaine in street samples adulterated with levamisole.

    Science.gov (United States)

    Florea, Anca; Cowen, Todd; Piletsky, Sergey; De Wael, Karolien

    2018-08-15

    Accurate drug detection is of utmost importance for fighting against drug abuse. With a high number of cutting agents and adulterants being added to cut or mask drugs in street powders the number of false results is increasing. We demonstrate for the first time the usefulness of employing polymers readily synthesized by electrodeposition to selectively detect cocaine in the presence of the commonly used adulterant levamisole. The polymers were selected by computational modelling to exhibit high binding affinity towards cocaine and deposited directly on the surface of graphene-modified electrodes via electropolymerization. The resulting platforms allowed a distinct electrochemical signal for cocaine, which is otherwise suppressed by levamisole. Square wave voltammetry was used to quantify cocaine alone and in the presence of levamisole. The usefulness of the platforms was demonstrated in the screening of real street samples. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. A novel EMD selecting thresholding method based on multiple iteration for denoising LIDAR signal

    Science.gov (United States)

    Li, Meng; Jiang, Li-hui; Xiong, Xing-long

    2015-06-01

    The empirical mode decomposition (EMD) approach is believed to be potentially useful for processing nonlinear and non-stationary LIDAR signals. To shed further light on its performance, we propose an EMD selective thresholding method based on multiple iterations, which essentially develops EMD interval thresholding (EMD-IT) further by randomly altering the samples of the noisy parts of all corrupted intrinsic mode functions, so that iteration yields a better averaging effect. Simulations on both synthetic signals and real-world LIDAR signals support this method.
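
    For orientation, the sketch below implements plain EMD interval thresholding (EMD-IT), the baseline the proposed method iterates on: within each intrinsic mode function, an interval between consecutive zero crossings is kept only if its extremum exceeds a noise threshold. The IMFs are assumed precomputed (e.g., with the PyEMD package), and estimating the noise scale from the first IMF alone is a simplification; full EMD-IT propagates it across IMFs via an energy model.

```python
import numpy as np

def interval_threshold(imf, thr):
    """Zero out zero-crossing intervals whose extremum is below thr."""
    out = np.zeros_like(imf)
    zc = np.where(np.signbit(imf[:-1]) != np.signbit(imf[1:]))[0] + 1
    edges = np.concatenate(([0], zc, [imf.size]))
    for a, b in zip(edges[:-1], edges[1:]):
        if np.abs(imf[a:b]).max() > thr:         # keep strong intervals whole
            out[a:b] = imf[a:b]
    return out

def denoise(imfs, k=2.0):
    sigma = np.median(np.abs(imfs[0])) / 0.6745  # noise scale from first IMF
    kept = [interval_threshold(m, k * sigma) for m in imfs[:-1]]
    kept.append(imfs[-1])                        # keep the residual/trend
    return np.sum(kept, axis=0)

# toy demo with hand-made "IMFs": noise, an oscillation, and a trend
t = np.linspace(0, 1, 2000)
rng = np.random.default_rng(0)
imfs = np.array([0.1 * rng.normal(size=t.size),
                 np.sin(2 * np.pi * 5 * t),
                 0.5 * t])
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * t
print("mean abs error:", np.abs(denoise(imfs) - clean).mean())
```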

  13. Impact of selective genotyping in the training population on accuracy and bias of genomic selection.

    Science.gov (United States)

    Zhao, Yusheng; Gowda, Manje; Longin, Friedrich H; Würschum, Tobias; Ranc, Nicolas; Reif, Jochen C

    2012-08-01

    Estimating marker effects based on routinely generated phenotypic data of breeding programs is a cost-effective strategy to implement genomic selection. Truncation selection in breeding populations, however, could have a strong impact on the accuracy of predicting genomic breeding values. The main objective of our study was to investigate the influence of phenotypic selection on the accuracy and bias of genomic selection. We used experimental data of 788 testcross progenies from an elite maize breeding program. The testcross progenies were evaluated in unreplicated field trials in ten environments and fingerprinted with 857 SNP markers. The random regression best linear unbiased prediction (RR-BLUP) method was used in combination with fivefold cross-validation based on genotypic sampling. We observed a substantial loss in the accuracy to predict genomic breeding values in unidirectionally selected populations. In contrast, estimating marker effects based on bidirectionally selected populations led to only a marginal decrease in the prediction accuracy of genomic breeding values. We concluded that bidirectional selection is a valuable approach to efficiently implement genomic selection in applied plant breeding programs.
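
    Since RR-BLUP with a common marker-effect variance is equivalent to ridge regression on the marker matrix, the prediction-accuracy evaluation can be sketched compactly as below; the genotypes, phenotypes and shrinkage parameter are synthetic placeholders for the maize testcross data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(4)
n, p = 788, 857
X = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP codes 0/1/2
beta = rng.normal(0, 0.05, p)                        # simulated marker effects
y = X @ beta + rng.normal(0, 1.0, n)                 # phenotype = signal + noise

acc = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=100.0).fit(X[train], y[train])  # alpha sets shrinkage
    acc.append(np.corrcoef(model.predict(X[test]), y[test])[0, 1])

print("prediction accuracy: %.2f +/- %.2f" % (np.mean(acc), np.std(acc)))
```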

  14. A Schiff-base receptor based naphthalimide derivative: Highly selective and colorimetric fluorescent turn-on sensor for Al³⁺

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Lei; Liu, Ya-Tong; Li, Na-Na; Dang, Qian-Xi [Department of Applied Chemistry, College of Science, Northeast Agricultural University, Harbin 150030 (China); Xing, Zhi-Yong, E-mail: zyxing@neau.edu.cn [Department of Applied Chemistry, College of Science, Northeast Agricultural University, Harbin 150030 (China); Li, Jin-Long; Zhang, Yu [College of Heilongjiang Province Key Laboratory of Fine Chemicals, Qiqihar University, Qiqihar 161006 (China)

    2017-06-15

    A new Schiff-base receptor L based on naphthalimide has been investigated as a selective and sensitive chemosensor for Al³⁺ in CH₃OH. Upon addition of Al³⁺, L showed a 39-fold fluorescence enhancement at 508 nm with a colorimetric and fluorometric dual-signaling response, which might be induced by the combination of ICT and CHEF. A 1:1 stoichiometry for the L-Al³⁺ complex was found, with an association constant of 1.62×10⁴ M⁻¹, and the limit of detection for Al³⁺ was determined to be 7.4 nM. In addition, the potential utility of L for sensing Al³⁺ was also examined in real water samples.

  15. The influence of sampling unit size and spatial arrangement patterns on neighborhood-based spatial structure analyses of forest stands

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H.; Zhang, G.; Hui, G.; Li, Y.; Hu, Y.; Zhao, Z.

    2016-07-01

    Aim of study: Neighborhood-based stand spatial structure parameters can quantify and characterize forest spatial structure effectively. How these neighborhood-based structure parameters are influenced by the selection of different numbers of nearest-neighbor trees is unclear, and there is some disagreement in the literature regarding the appropriate number of nearest-neighbor trees to sample around reference trees. Understanding how to efficiently characterize forest structure is critical for forest management. Area of study: Multi-species uneven-aged forests of Northern China. Material and methods: We simulated stands with different spatial structural characteristics and systematically compared their structure parameters when two to eight neighboring trees were selected. Main results: Values of the uniform angle index calculated in the same stand differed with the size of the structure unit. When tree species and sizes were completely randomly interspersed, different numbers of neighbors had little influence on the mingling and dominance indices. Changes in the mingling or dominance indices caused by different numbers of neighbors occurred when the tree species or size classes were not randomly interspersed, and their changing characteristics can be detected from the spatial arrangement patterns of tree species and sizes. Research highlights: The number of neighboring trees selected for analyzing stand spatial structure parameters should be fixed. We propose that the four-tree structure unit is the best compromise between sampling accuracy and cost for practical forest management. (Author)
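
    A compact sketch of one such neighborhood-based parameter, the species mingling index, is given below for structure units of two to eight neighbors. Coordinates and species labels are synthetic; for this randomly interspersed stand the index should indeed vary little with the number of neighbors.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
xy = rng.uniform(0, 100, size=(500, 2))      # stem positions in a square plot
species = rng.integers(0, 4, 500)            # four species, randomly mixed

def mean_mingling(xy, species, k):
    """Stand mean of: fraction of k nearest neighbours of another species."""
    tree = cKDTree(xy)
    _, idx = tree.query(xy, k=k + 1)         # neighbour 0 is the tree itself
    neighbours = species[idx[:, 1:]]
    return (neighbours != species[:, None]).mean()

for k in range(2, 9):                        # structure units of 2..8 neighbours
    print(k, round(mean_mingling(xy, species, k), 3))
```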

  16. Sampling-based exploration of folded state of a protein under kinematic and geometric constraints

    KAUST Repository

    Yao, Peggy

    2011-10-04

    Flexibility is critical for a folded protein to bind to other molecules (ligands) and achieve its functions. The conformational selection theory suggests that a folded protein deforms continuously and its ligand selects the most favorable conformations to bind to. Therefore, one of the best options to study protein-ligand binding is to sample conformations broadly distributed over the protein-folded state. This article presents a new sampler, called kino-geometric sampler (KGS). This sampler encodes dominant energy terms implicitly by simple kinematic and geometric constraints. Two key technical contributions of KGS are (1) a robotics-inspired Jacobian-based method to simultaneously deform a large number of interdependent kinematic cycles without any significant break-up of the closure constraints, and (2) a diffusive strategy to generate conformation distributions that diffuse quickly throughout the protein folded state. Experiments on four very different test proteins demonstrate that KGS can efficiently compute distributions containing conformations close to target (e.g., functional) conformations. These targets are not given to KGS, hence are not used to bias the sampling process. In particular, for a lysine-binding protein, KGS was able to sample conformations in both the intermediate and functional states without the ligand, while previous work using molecular dynamics simulation had required the ligand to be taken into account in the potential function. Overall, KGS demonstrates that kino-geometric constraints characterize the folded subset of a protein conformation space and that this subset is small enough to be approximated by a relatively small distribution of conformations. © 2011 Wiley Periodicals, Inc.

  17. Soft X-Ray Observations of a Complete Sample of X-Ray--selected BL Lacertae Objects

    Science.gov (United States)

    Perlman, Eric S.; Stocke, John T.; Wang, Q. Daniel; Morris, Simon L.

    1996-01-01

    We present the results of ROSAT PSPC observations of the X-ray-selected BL Lacertae objects (XBLs) in the complete Einstein Extended Medium Sensitivity Survey (EMSS) sample. None of the objects is resolved in their respective PSPC images, but all are easily detected. All BL Lac objects in this sample are well fitted by single power laws. Their X-ray spectra exhibit a variety of spectral slopes, with best-fit energy power-law spectral indices between α = 0.5 and 2.3. The PSPC spectra of this sample are slightly steeper than those typical of flat radio-spectrum quasars. Because almost all of the individual PSPC spectral indices are equal to or slightly steeper than the overall optical to X-ray spectral indices for these same objects, we infer that BL Lac soft X-ray continua are dominated by steep-spectrum synchrotron radiation from a broad X-ray jet, rather than flat-spectrum inverse Compton radiation linked to the narrower radio/millimeter jet. The softness of the X-ray spectra of these XBLs revives the possibility proposed by Guilbert, Fabian, & McCray (1983) that BL Lac objects are lineless because the circumnuclear gas cannot be heated sufficiently to permit two stable gas phases, the cooler of which would comprise the broad emission-line clouds. Because unified schemes predict that hard self-Compton radiation is beamed only into a small solid angle in BL Lac objects, the steep-spectrum synchrotron tail controls the temperature of the circumnuclear gas at r ≤ 10¹⁸ cm and prevents broad-line cloud formation. We use these new ROSAT data to recalculate the X-ray luminosity function and cosmological evolution of the complete EMSS sample by determining accurate K-corrections for the sample and estimating the effects of variability and the possibility of incompleteness in the sample. Our analysis confirms that XBLs are evolving "negatively," opposite in sense to quasars, with Ve/Va = 0.331 ± 0.060. The statistically significant difference between the values for X

  18. Multispectral iris recognition based on group selection and game theory

    Science.gov (United States)

    Ahmad, Foysal; Roy, Kaushik

    2017-05-01

    A commercially available iris recognition system uses only a narrow band of the near infrared spectrum (700-900 nm) while iris images captured in the wide range of 405 nm to 1550 nm offer potential benefits to enhance recognition performance of an iris biometric system. The novelty of this research is that a group selection algorithm based on coalition game theory is explored to select the best patch subsets. In this algorithm, patches are divided into several groups based on their maximum contribution in different groups. Shapley values are used to evaluate the contribution of patches in different groups. Results show that this group selection based iris recognition
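
    The sketch below estimates Shapley values by Monte Carlo permutation sampling, the standard approximation when exact enumeration over all coalitions is too costly. The characteristic function v() is a hypothetical placeholder; in the paper it would be the recognition performance achieved by a coalition of patches.

```python
import numpy as np

rng = np.random.default_rng(6)
n_patches = 8
quality = rng.uniform(0.1, 1.0, n_patches)       # hidden per-patch usefulness

def v(coalition):
    # toy coalition value: diminishing returns over the summed patch quality
    return 1.0 - np.exp(-quality[list(coalition)].sum())

def shapley_estimate(n_perms=3000):
    phi = np.zeros(n_patches)
    for _ in range(n_perms):
        order = rng.permutation(n_patches)
        before, coalition = 0.0, []
        for p in order:
            coalition.append(p)
            after = v(coalition)
            phi[p] += after - before             # marginal contribution of p
            before = after
    return phi / n_perms

print(np.round(shapley_estimate(), 3))           # per-patch Shapley values
```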

  19. 40 CFR 86.607-84 - Sample selection.

    Science.gov (United States)

    2010-07-01

    ... Auditing of New Light-Duty Vehicles, Light-Duty Trucks, and Heavy-Duty Vehicles § 86.607-84 Sample..., once a manufacturer ships any vehicle from the test sample, it relinquishes the prerogative to conduct...

  20. MITIE: Simultaneous RNA-Seq-based transcript identification and quantification in multiple samples.

    Science.gov (United States)

    Behr, Jonas; Kahles, André; Zhong, Yi; Sreedharan, Vipin T; Drewe, Philipp; Rätsch, Gunnar

    2013-10-15

    High-throughput sequencing of mRNA (RNA-Seq) has led to tremendous improvements in the detection of expressed genes and reconstruction of RNA transcripts. However, the extensive dynamic range of gene expression, technical limitations and biases, as well as the observed complexity of the transcriptional landscape, pose profound computational challenges for transcriptome reconstruction. We present the novel framework MITIE (Mixed Integer Transcript IdEntification) for simultaneous transcript reconstruction and quantification. We define a likelihood function based on the negative binomial distribution, use a regularization approach to select a few transcripts collectively explaining the observed read data and show how to find the optimal solution using Mixed Integer Programming. MITIE can (i) take advantage of known transcripts, (ii) reconstruct and quantify transcripts simultaneously in multiple samples, and (iii) resolve the location of multi-mapping reads. It is designed for genome- and assembly-based transcriptome reconstruction. We present an extensive study based on realistic simulated RNA-Seq data. When compared with state-of-the-art approaches, MITIE proves to be significantly more sensitive and overall more accurate. Moreover, MITIE yields substantial performance gains when used with multiple samples. We applied our system to 38 Drosophila melanogaster modENCODE RNA-Seq libraries and estimated the sensitivity of reconstructing omitted transcript annotations and the specificity with respect to annotated transcripts. Our results corroborate that a well-motivated objective paired with appropriate optimization techniques leads to significant improvements over the state-of-the-art in transcriptome reconstruction. MITIE is implemented in C++ and is available from http://bioweb.me/mitie under the GPL license.

  1. Development of Base Transceiver Station Selection Algorithm for ...

    African Journals Online (AJOL)

    TEMS) equipment was carried out on the existing BTSs, and a linear algorithm optimization program based on the spectral link efficiency of each BTS was developed, the output of this site optimization gives the selected number of base station sites ...

  2. A Simple K-Map Based Variable Selection Scheme in the Direct ...

    African Journals Online (AJOL)

    A multiplexer with (n-1) data select inputs can directly realise a function of n variables. In this paper, a simple K-map based variable selection scheme is proposed such that an n-variable logic function can be synthesised using a multiplexer with (n-q) data input variables and q data select variables. The procedure is based on ...

  3. Analysis of Selected Legacy 85Kr Samples

    Energy Technology Data Exchange (ETDEWEB)

    Jubin, Robert Thomas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bruffey, Stephanie H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-02

    Legacy samples composed of 85Kr encapsulated in solid zeolite 5A material and five small metal tubes containing a mixture of the zeolite combined with a glass matrix resulting from hot isostatic pressing have been preserved. The samples were a result of krypton R&D encapsulation efforts in the late 1970s performed at the Idaho Chemical Processing Plant. These samples were shipped to Oak Ridge National Laboratory (ORNL) in mid-FY 2014. Upon receipt, the outer shipping package was opened, and the inner package was removed and placed in a radiological hood. The individual capsules were double bagged as they were removed from the inner shipping pig and placed into individual glass sample bottles for further analysis. The five capsules were then X-ray imaged. Capsules 1 and 4 appear intact and contain an amorphous mass. Capsule 2 clearly shows the saw marks on the capsule and a quantity of loose pellet- or bead-like material remaining in the capsule. Capsule 3 shows similar bead-like material within the intact capsule. Capsule 5 had been opened at an undetermined time in the past. The end of this capsule appears to have been cut off, and there are additional saw marks on the side of the capsule. X-ray tomography allowed the capsules to be viewed along the three axes. Of most interest was determining whether there was any residual material in the closed end of Capsule 5. The images confirmed the presence of residual material within this capsule. The material appears to be compacted but still retains some of the bead-like morphology. Based on the nondestructive analysis (NDA) results, a proposed path forward was formulated to advance this effort toward the original goals of understanding the effects of extended storage on the waste form and package. Based on the initial NDA and the fact that there are at least two breached samples, it was proposed that exploratory tests be conducted with the breached specimens before opening the three intact

  4. A dansyl based fluorescence chemosensor for Hg2+ and its application in the complicated environment samples

    Science.gov (United States)

    Zhou, Shuai; Zhou, Ze-Quan; Zhao, Xuan-Xuan; Xiao, Yu-Hao; Xi, Gang; Liu, Jin-Ting; Zhao, Bao-Xiang

    2015-09-01

    We have developed a novel fluorescent chemosensor (DAM) based on dansyl and morpholine units for the detection of the mercury ion with excellent selectivity and sensitivity. In the presence of Hg2+ in a mixed solution of HEPES buffer (pH 7.5, 20 mM) and MeCN (2/8, v/v) at room temperature, the fluorescence of DAM was almost completely quenched, from green to colorless, with a fast response time. Moreover, DAM showed excellent anti-interference capability even in the presence of large amounts of interfering ions. It is worth noting that DAM could be used to detect Hg2+ specifically in Yellow River samples, which significantly implies the potential application of DAM to complicated environmental samples.

  5. Passive sampling of selected pesticides in aquatic environment using polar organic chemical integrative samplers.

    Science.gov (United States)

    Thomatou, Alphanna-Akrivi; Zacharias, Ierotheos; Hela, Dimitra; Konstantinou, Ioannis

    2011-08-01

    Polar organic chemical integrative samplers (POCIS) were examined for their sampling efficiency for 12 pesticides and one metabolite commonly detected in surface waters. Laboratory-based calibration experiments of POCIS were conducted, and the determined passive sampling rates were applied to the monitoring of pesticide levels in Lake Amvrakia, Western Greece. Spot sampling was also performed for comparison purposes. Calibration experiments were performed on the basis of static renewal exposure of POCIS under stirred conditions for different time periods of up to 28 days. The analytical procedures were based on the coupling of POCIS and solid-phase extraction by Oasis HLB cartridges with gas chromatography-mass spectrometry. The recovery of the target pesticides from the POCIS was generally >79% with low relative standard deviation (RSD). A monitoring campaign used both passive and spot sampling, with higher concentrations measured by spot sampling in most cases. Passive sampling by POCIS provides a useful tool for the monitoring of pesticides in aquatic systems, since integrative sampling at rates sufficient for analytical quantitation of ambient levels was observed. Calibration data are needed for a greater number of compounds in order to extend its use in environmental monitoring.

  6. Selective detection of Co2+ by fluorescent nano probe: Diagnostic approach for analysis of environmental samples and biological activities

    Science.gov (United States)

    Mahajan, Prasad G.; Dige, Nilam C.; Desai, Netaji K.; Patil, Shivajirao R.; Kondalkar, Vijay V.; Hong, Seong-Karp; Lee, Ki Hwan

    2018-06-01

    Scientists worldwide are developing improved fluorescence-based methods to detect metal ions in aqueous media. A simple, selective and sensitive method is proposed here for the detection of the Co2+ ion using fluorescent organic nanoparticles. We synthesized a fluorescent small molecule, 4,4‧-{benzene-1,4-diylbis-[(Z)methylylidenenitrilo]}dibenzoic acid (BMBA), to explore its suitability as a sensor for the Co2+ ion and its biocompatibility in nanoparticle form. Fluorescent nanoparticles (BMBANPs) were prepared by a simple reprecipitation method. The aggregation-induced enhanced-emission BMBANPs exhibit a narrow particle size of 68 nm and spherical morphology. Selective fluorescence quenching was observed upon addition of Co2+ and was not affected by the presence of other coexisting ions. The photophysical properties (UV absorption, fluorescence emission, and lifetime measurements) support a ligand-metal interaction followed by static quenching of the BMBANP emission. Finally, we developed a simple analytical method for the selective and sensitive determination of the Co2+ ion in environmental samples. Cultures of E. coli, Bacillus sp., and the M. tuberculosis H37Rv strain in the vicinity of BMBANPs indicate good antibacterial and anti-tuberculosis activity, an additional novel application of the prepared nanoparticles.

  7. Entropy-based gene ranking without selection bias for the predictive classification of microarray data

    Directory of Open Access Journals (Sweden)

    Serafini Maria

    2003-11-01

    Full Text Available Abstract Background We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too-small gene subsets (an effect known as selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). Results With E-RFE, we speed up recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Conclusions Without a decrease in classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias and providing additional diagnostic indicators of gene importance.

  8. Pierre Gy's sampling theory and sampling practice heterogeneity, sampling correctness, and statistical process control

    CERN Document Server

    Pitard, Francis F

    1993-01-01

    Pierre Gy's Sampling Theory and Sampling Practice, Second Edition is a concise, step-by-step guide for process variability management and methods. Updated and expanded, this new edition provides a comprehensive study of heterogeneity, covering the basic principles of sampling theory and its various applications. It presents many practical examples to allow readers to select appropriate sampling protocols and assess the validity of sampling protocols from others. The variability of dynamic process streams using variography is discussed to help bridge sampling theory with statistical process control. Many descriptions of good sampling devices, as well as descriptions of poor ones, are featured to educate readers on what to look for when purchasing sampling systems. The book uses its accessible, tutorial style to focus on professional selection and use of methods. The book will be a valuable guide for mineral processing engineers; metallurgists; geologists; miners; chemists; environmental scientists; and practit...

  9. Estimated ventricle size using Evans index: reference values from a population-based sample.

    Science.gov (United States)

    Jaraj, D; Rabiei, K; Marlow, T; Jensen, C; Skoog, I; Wikkelsø, C

    2017-03-01

    Evans index is an estimate of ventricular size used in the diagnosis of idiopathic normal-pressure hydrocephalus (iNPH). Values >0.3 are considered pathological and are required by guidelines for the diagnosis of iNPH. However, there are no previous epidemiological studies on Evans index, and normal values in adults are thus not precisely known. We examined a representative sample to obtain reference values and descriptive data on Evans index. A population-based sample (n = 1235) of men and women aged ≥70 years was examined. The sample comprised people living in private households and residential care, systematically selected from the Swedish population register. Neuropsychiatric examinations, including head computed tomography, were performed between 1986 and 2000. Evans index ranged from 0.11 to 0.46. The mean value in the total sample was 0.28 (SD, 0.04) and 20.6% (n = 255) had values >0.3. Among men aged ≥80 years, the mean value of Evans index was 0.3 (SD, 0.03). Individuals with dementia had a mean value of Evans index of 0.31 (SD, 0.05) and those with radiological signs of iNPH had a mean value of 0.36 (SD, 0.04). A substantial number of subjects had ventricular enlargement according to current criteria. Clinicians and researchers need to be aware of the range of values among older individuals. © 2017 EAN.
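
    For reference, Evans index is commonly defined as the ratio of the maximal width of the frontal horns of the lateral ventricles to the maximal internal diameter of the skull, measured on axial imaging:

```latex
\text{Evans index} = \frac{\text{maximal frontal horn width}}{\text{maximal internal skull diameter}}
```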

  10. sideSPIM – selective plane illumination based on a conventional inverted microscope

    Science.gov (United States)

    Hedde, Per Niklas; Malacrida, Leonel; Ahrar, Siavash; Siryaporn, Albert; Gratton, Enrico

    2017-01-01

    Previously described selective plane illumination microscopy techniques typically trade off ease of use and sample handling against maximum imaging performance, or vice versa. Also, to reduce cost and complexity while maximizing flexibility, it is highly desirable to implement light sheet microscopy such that it can be added to a standard research microscope instead of setting up a dedicated system. We devised a new approach, termed sideSPIM, that provides uncompromised imaging performance and easy sample handling while, at the same time, offering new applications of plane illumination towards fluidics and high-throughput 3D imaging of multiple specimens. Based on an inverted epifluorescence microscope, all of the previous functionality is maintained and modifications to the existing system are kept to a minimum. At the same time, our implementation is able to take full advantage of the speed of the employed sCMOS camera and piezo stage to record data at rates of up to 5 stacks/s. Additionally, sample handling is compatible with established methods, and switching magnification to change the field of view from single cells to whole organisms does not require labor-intensive adjustments of the system. PMID:29026679

  11. SELECTING QUASARS BY THEIR INTRINSIC VARIABILITY

    International Nuclear Information System (INIS)

    Schmidt, Kasper B.; Rix, Hans-Walter; Jester, Sebastian; Hennawi, Joseph F.; Marshall, Philip J.; Dobler, Gregory

    2010-01-01

    We present a new and simple technique for selecting extensive, complete, and pure quasar samples, based on their intrinsic variability. We parameterize the single-band variability by a power-law model for the light-curve structure function, with amplitude A and power-law index γ. We show that quasars can be efficiently separated from other non-variable and variable sources by the location of the individual sources in the A-γ plane. We use ∼60 epochs of imaging data, taken over ∼5 years, from the SDSS stripe 82 (S82) survey, where extensive spectroscopy provides a reference sample of quasars, to demonstrate the power of variability as a quasar classifier in multi-epoch surveys. For UV-excess selected objects, variability performs just as well as the standard SDSS color selection, identifying quasars with a completeness of 90% and a purity of 95%. In the redshift range 2.5 < z < 3, where color selection is known to be problematic, variability can select quasars with a completeness of 90% and a purity of 96%. This is a factor of 5-10 times more pure than existing color selection of quasars in this redshift range. Selecting objects from a broad griz color box without u-band information, variability selection in S82 can afford completeness and purity of 92%, despite a factor of 30 more contaminants than quasars in the color-selected feeder sample. This confirms that the fraction of quasars hidden in the 'stellar locus' of color space is small. To test variability selection in the context of Pan-STARRS 1 (PS1) we created mock PS1 data by down-sampling the S82 data to just six epochs over 3 years. Even with this much sparser time sampling, variability is an encouragingly efficient classifier. For instance, a 92% pure and 44% complete quasar candidate sample is attainable from the above griz-selected catalog. Finally, we show that the presented A-γ technique, besides selecting clean and pure samples of quasars (which are stochastically varying objects), is also
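
    A compact sketch of the A-γ parameterization follows: the single-band structure function is estimated from an irregularly sampled light curve, and a power law SF(Δt) = A (Δt / 1 yr)^γ is fitted in log-log space. The binning choices and the random-walk test light curve are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def structure_function_fit(t, mag, n_bins=12):
    """Fit SF(dt) = A * dt**gamma; returns (A at dt = 1, gamma)."""
    i, j = np.triu_indices(t.size, k=1)
    dt, dm = np.abs(t[i] - t[j]), np.abs(mag[i] - mag[j])
    bins = np.logspace(np.log10(dt.min()), np.log10(dt.max()), n_bins + 1)
    which = np.digitize(dt, bins) - 1
    lag, sf = [], []
    for b in range(n_bins):
        m = which == b
        if m.sum() > 10:                         # skip sparsely populated bins
            lag.append(np.sqrt(bins[b] * bins[b + 1]))
            sf.append(dm[m].mean())
    gamma, logA = np.polyfit(np.log10(lag), np.log10(sf), 1)
    return 10.0 ** logA, gamma

# hypothetical light curve: ~60 epochs over 5 years (t in years)
rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0, 5, 60))
mag = np.cumsum(rng.normal(0, 0.05, 60))         # random-walk-like variability
print(structure_function_fit(t, mag))            # (A, gamma) for this object
```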

  12. RF Sub-sampling Receiver Architecture based on Milieu Adapting Techniques

    DEFF Research Database (Denmark)

    Behjou, Nastaran; Larsen, Torben; Jensen, Ole Kiel

    2012-01-01

    A novel sub-sampling based architecture is proposed which has the ability to reduce the problem of image distortion and to improve the signal-to-noise ratio significantly. The technique is based on sensing the environment and adapting the sampling rate of the receiver to the best possible...

  13. Tripodal chelating ligand-based sensor for selective determination of Zn(II) in biological and environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Kumar Singh, Ashok; Mehtab, Sameena; Singh, Udai P.; Aggarwal, Vaibhave [Indian Institute of Technology-Roorkee, Department of Chemistry, Roorkee (India)

    2007-08-15

    Potassium hydrotris(N-tert-butyl-2-thioimidazolyl)borate [KTt^{t-Bu}] and potassium hydrotris(3-tert-butyl-5-isopropyl-1-pyrazolyl)borate [KTp^{t-Bu,i-Pr}] have been synthesized and evaluated as ionophores for the preparation of a poly(vinyl chloride) (PVC) membrane sensor for Zn(II) ions. The effect of different plasticizers, viz. benzyl acetate (BA), dioctyl phthalate (DOP), dibutyl phthalate (DBP), tributyl phosphate (TBP), and o-nitrophenyl octyl ether (o-NPOE), and of the anion excluders sodium tetraphenylborate (NaTPB), potassium tetrakis(p-chlorophenyl)borate (KTpClPB), and oleic acid (OA) was studied to improve the performance of the membrane sensor. The best performance was obtained from a sensor with a [KTt^{t-Bu}] membrane of composition (mg): [KTt^{t-Bu}] (15), PVC (150), DBP (275), and NaTPB (4). This sensor had a Nernstian response (slope 29.4 ± 0.2 mV per decade of activity) for Zn²⁺ ions over a wide concentration range (1.4 × 10⁻⁷ to 1.0 × 10⁻¹ mol L⁻¹) with a limit of detection of 9.5 × 10⁻⁸ mol L⁻¹. It had a relatively fast response time (12 s) and could be used for 3 months without substantial change of the potential. The membrane sensor had very good selectivity for Zn²⁺ ions over a wide variety of other cations and could be used in a working pH range of 3.5-7.8. The sensor was also found to work satisfactorily in partially non-aqueous media and could be successfully used for estimation of zinc at trace levels in biological and environmental samples. (orig.)

  14. Ionic liquids: solvents and sorbents in sample preparation.

    Science.gov (United States)

    Clark, Kevin D; Emaus, Miranda N; Varona, Marcelino; Bowers, Ashley N; Anderson, Jared L

    2018-01-01

    The applications of ionic liquids (ILs) and IL-derived sorbents are rapidly expanding. By careful selection of the cation and anion components, the physicochemical properties of ILs can be altered to meet the requirements of specific applications. Reports of IL solvents possessing high selectivity for specific analytes are numerous and continue to motivate the development of new IL-based sample preparation methods that are faster, more selective, and environmentally benign compared to conventional organic solvents. The advantages of ILs have also been exploited in solid/polymer formats in which ordinarily nonspecific sorbents are functionalized with IL moieties in order to impart selectivity for an analyte or analyte class. Furthermore, new ILs that incorporate a paramagnetic component into the IL structure, known as magnetic ionic liquids (MILs), have emerged as useful solvents for bioanalytical applications. In this rapidly changing field, this Review focuses on the applications of ILs and IL-based sorbents in sample preparation with a special emphasis on liquid phase extraction techniques using ILs and MILs, IL-based solid-phase extraction, ILs in mass spectrometry, and biological applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Antimicrobial and antibiofilm effects of selected food preservatives against Salmonella spp. isolated from chicken samples.

    Science.gov (United States)

    Er, Buket; Demirhan, Burak; Onurdag, Fatma Kaynak; Ozgacar, Selda Özgen; Oktem, Aysel Bayhan

    2014-03-01

    Salmonella spp. are widespread foodborne pathogens that contaminate eggs and poultry meat. The attachment, colonization, and biofilm formation capacity of Salmonella spp. on food and food-contact surfaces may cause continuous contamination. Biofilm may play a crucial role in the survival of salmonellae under unfavorable environmental conditions, such as in animal slaughterhouses and processing plants. This could serve as a reservoir compromising food safety and human health. The addition of antimicrobial preservatives extends the shelf life of food products, but even when products are supplemented with adequate amounts of preservatives, it is not always possible to inhibit the microorganisms in a biofilm community. In this study, our aims were i) to determine the minimum inhibitory concentrations (MIC) and minimum biofilm inhibitory concentrations (MBIC) of selected preservatives against planktonic and biofilm forms of Salmonella spp. isolated from chicken samples and the Salmonella Typhimurium SL1344 standard strain, ii) to show the differences in the susceptibility patterns of the same strains in their planktonic and biofilm forms to the same preservative agent, and iii) to determine and compare the antimicrobial and antibiofilm effects of selected food preservatives against Salmonella spp. For this purpose, the Salmonella Typhimurium SL1344 standard strain and 4 Salmonella spp. strains isolated from chicken samples were used. The investigation of the antimicrobial and antibiofilm effects of the selected food preservatives against Salmonella spp. was done according to Clinical and Laboratory Standards Institute M100-S18 guidelines and the BioTimer assay, respectively. As preservative agents, pure ciprofloxacin, sodium nitrite, potassium sorbate, sodium benzoate, methyl paraben, and propyl paraben were selected. As a result, it was determined that MBIC values are greater than the MIC values of the preservatives. This result verified the resistance seen in a biofilm community to food

  16. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

    Full Text Available The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on a compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving on the inherent ADC sample rate. Several tests were carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
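
    The sketch below illustrates the reconstruction principle under stated assumptions: a signal sparse in the DCT domain is observed at a few randomly chosen instants of a fine time grid (the role played by the high-resolution time-basis), and the sparse coefficients are recovered by an l1-regularized least-squares fit. The sizes and the Lasso penalty are illustrative choices, not those of the paper.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)
N, M = 1024, 128                                  # grid length, samples taken
coef = np.zeros(N)
coef[[10, 37, 80]] = [1.0, 0.6, 0.8]              # sparse DCT spectrum
x = idct(coef, norm="ortho")                      # time-domain signal

keep = np.sort(rng.choice(N, M, replace=False))   # random sampling instants
Psi = idct(np.eye(N), axis=0, norm="ortho")       # inverse-DCT basis matrix
A = Psi[keep, :]                                  # measurement matrix

lasso = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50_000)
lasso.fit(A, x[keep])                             # sparse coefficient recovery
x_rec = idct(lasso.coef_, norm="ortho")
print("relative error:", np.linalg.norm(x - x_rec) / np.linalg.norm(x))
```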

  17. Computational intelligence-based polymerase chain reaction primer selection based on a novel teaching-learning-based optimisation.

    Science.gov (United States)

    Cheng, Yu-Huei

    2014-12-01

    Specific primers play an important role in polymerase chain reaction (PCR) experiments, and it is therefore essential to find specific primers of outstanding quality. Unfortunately, many PCR constraints must be inspected simultaneously, which makes specific primer selection difficult and time-consuming. This paper introduces a novel computational intelligence-based method, teaching-learning-based optimisation (TLBO), to select specific and feasible primers. Specified PCR product lengths of 150-300 bp and 500-800 bp with three melting temperature formulae (Wallace's formula, Bolton and McCarthy's formula and SantaLucia's formula) were used. The authors calculate the optimal frequency to estimate the quality of primer selection based on a total of 500 runs for 50 random nucleotide sequences of 'Homo species' retrieved from the National Center for Biotechnology Information. The method was then fairly compared with the genetic algorithm (GA) and memetic algorithm (MA) for primer selection in the literature. The results show that the method easily found suitable primers satisfying the set primer constraints and performed better than the GA and the MA. Furthermore, the method was also compared with the common method Primer3 with respect to method type, primer presentation, parameter settings, speed and memory usage. In conclusion, it is an interesting primer selection method and a valuable tool for automatic high-throughput analysis. In the future, the use of the primers in the wet lab needs to be validated carefully to increase the reliability of the method.
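
    A minimal sketch of the two TLBO phases on a continuous toy objective is given below. In actual primer selection each learner would encode candidate primer positions and lengths, and the fitness would score the PCR constraints (melting temperature, GC content, dimers, and so on); that fitness function is omitted here.

```python
import numpy as np

rng = np.random.default_rng(9)
dim, n_learners = 5, 20
f = lambda x: np.sum(x ** 2, axis=-1)            # toy objective to minimise
X = rng.uniform(-5, 5, (n_learners, dim))        # the "class" of learners

for it in range(100):
    fit = f(X)
    teacher = X[np.argmin(fit)]
    # teacher phase: move the class toward the teacher, away from the mean
    Tf = rng.integers(1, 3, (n_learners, 1))     # teaching factor, 1 or 2
    Xt = X + rng.random((n_learners, dim)) * (teacher - Tf * X.mean(axis=0))
    X = np.where((f(Xt) < fit)[:, None], Xt, X)  # greedy acceptance
    # learner phase: learn pairwise from a random classmate
    fit = f(X)
    partner = rng.permutation(n_learners)
    better = fit < fit[partner]
    step = np.where(better[:, None], X - X[partner], X[partner] - X)
    Xl = X + rng.random((n_learners, dim)) * step
    X = np.where((f(Xl) < fit)[:, None], Xl, X)

print("best objective value:", f(X).min())
```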

  18. Discrete Multiwavelet Critical-Sampling Transform-Based OFDM System over Rayleigh Fading Channels

    Directory of Open Access Journals (Sweden)

    Sameer A. Dawood

    2015-01-01

    Full Text Available The discrete multiwavelet critical-sampling transform (DMWCST) has been proposed instead of the fast Fourier transform (FFT) in the realization of orthogonal frequency division multiplexing (OFDM) systems. The proposed structure further reduces the level of interference and improves the bandwidth efficiency through the elimination of the cyclic prefix, thanks to the good orthogonality and time-frequency localization properties of the multiwavelet transform. The proposed system was simulated using MATLAB to allow various parameters of the system to be varied and tested. The performance of DMWCST-based OFDM (DMWCST-OFDM) was compared with that of discrete wavelet transform-based OFDM (DWT-OFDM) and the traditional FFT-based OFDM (FFT-OFDM) over flat-fading and frequency-selective fading channels. The results indicate that the proposed DMWCST-OFDM system achieves significant improvement over the DWT-OFDM and FFT-OFDM systems. DMWCST improves the performance of the OFDM system by 1.5-2.5 dB and 13-15.5 dB compared with the DWT and FFT, respectively. Therefore the proposed system offers a higher data rate in wireless mobile communications.

  19. Advances in paper-based sample pretreatment for point-of-care testing.

    Science.gov (United States)

    Tang, Rui Hua; Yang, Hui; Choi, Jane Ru; Gong, Yan; Feng, Shang Sheng; Pingguan-Murphy, Belinda; Huang, Qing Sheng; Shi, Jun Ling; Mei, Qi Bing; Xu, Feng

    2017-06-01

    In recent years, paper-based point-of-care testing (POCT) has been widely used in medical diagnostics, food safety and environmental monitoring. However, a high-cost, time-consuming and equipment-dependent sample pretreatment technique is generally required for raw sample processing, which is impractical for low-resource and disease-endemic areas. Therefore, there is an escalating demand for a cost-effective, simple and portable pretreatment technique to be coupled with the commonly used paper-based assays (e.g. the lateral flow assay) in POCT. In this review, we focus on the importance of using paper as a platform for sample pretreatment. We first discuss the beneficial use of paper for sample pretreatment, including sample collection and storage, separation, extraction, and concentration. We then highlight the working principle and fabrication of each sample pretreatment device, the existing challenges and the future perspectives for developing paper-based sample pretreatment techniques.

  20. Fabrication of copper-selective PVC membrane electrode based on newly synthesized copper complex of Schiff base as carrier

    Directory of Open Access Journals (Sweden)

    Sulekh Chandra

    2016-09-01

    The newly synthesized copper(II) complex of the Schiff base p-hydroxyacetophenone semicarbazone was explored as a neutral ionophore for the fabrication of a poly(vinyl chloride) (PVC)-based membrane electrode selective to Cu(II) ions. The electrode shows a Nernstian slope of 29.8 ± 0.3 mV/decade with an improved linear range of 1.8 × 10−7 to 1.0 × 10−1 M, a comparatively low detection limit of 5.7 × 10−8 M over the pH range 2.0-8.0, and a fast response within 5 s, and it can be used for at least 16 weeks without any divergence in potential. The selectivity coefficient was calculated using the fixed interference method (FIM). The electrode can also be used in partially non-aqueous media containing up to 25% (v/v) methanol, ethanol or acetone with no significant change in the value of the slope or the working concentration range. It was successfully applied to the direct determination of copper content in water and tea samples with satisfactory results. The electrode has also been used in the potentiometric titration of Cu2+ with EDTA.
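
    The reported slope is consistent with the theoretical Nernstian slope 2.303RT/(zF) for a divalent ion, which is easy to verify:

```python
R, F = 8.314, 96485.0        # J/(mol*K), C/mol
T, z = 298.15, 2             # 25 degrees C, Cu(II)
slope_mV = 2.303 * R * T / (z * F) * 1e3
print(f"{slope_mV:.1f} mV/decade")   # ~29.6, close to the reported 29.8 +/- 0.3
```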

  1. Supplier selection based on multi-criterial AHP method

    Directory of Open Access Journals (Sweden)

    Jana Pócsová

    2010-03-01

    This paper describes a case study of supplier selection based on the multi-criteria Analytic Hierarchy Process (AHP) method. It is demonstrated that using an adequate mathematical method can yield an "unprejudiced" conclusion, even if the alternatives (supplier companies) are very similar in the given selection criteria. The result is the best possible supplier company from the viewpoint of the chosen criteria and the price of the product.
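
    A minimal sketch of the AHP core step: priority weights as the normalized principal eigenvector of a pairwise comparison matrix, with Saaty's consistency check. The matrix values are hypothetical, not the case-study data:

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix on the Saaty 1-9 scale
# (criterion i vs criterion j); reciprocal by construction.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                               # normalized priority weights

# Consistency index and ratio (random index RI = 0.58 for n = 3).
n = len(A)
ci = (eigvals[k].real - n) / (n - 1)
print("weights:", w, "consistency ratio:", ci / 0.58)
```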

  2. Automatic learning-based beam angle selection for thoracic IMRT

    International Nuclear Information System (INIS)

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G.; Jaffray, David A.; Levinshtein, Alex; Hope, Andrew J.; Lindsay, Patricia; Pekar, Vladimir

    2015-01-01

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose-volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner's clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
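
    A schematic of the learning step described above, mapping per-beam anatomical features to a beam score with random forest regression. All data, feature meanings and the top-7 selection below are invented placeholders, not the authors' model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Hypothetical training data: one row per candidate beam angle from approved
# plans; columns are anatomical features (e.g. target-beam distance, overlap
# of the beam's-eye view with lung/cord); target is a per-beam quality score.
X_train = rng.random((500, 6))
y_train = rng.random(500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score every candidate angle for a new patient and keep the best ones.
candidate_features = rng.random((36, 6))          # e.g. one row per 10-degree step
scores = model.predict(candidate_features)
best_angles = 10 * np.argsort(scores)[::-1][:7]   # top 7 beams (hypothetical)
print(best_angles)
```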

  3. A Lightweight Structure Redesign Method Based on Selective Laser Melting

    Directory of Open Access Journals (Sweden)

    Li Tang

    2016-11-01

    The purpose of this paper is to present a new design method for lightweight parts fabricated by selective laser melting (SLM), based on the "Skin-Frame" concept, and to explore the influence of fabrication defects on SLM parts of different sizes. Standard lattice parts were designed according to the Chinese GB/T 1452-2005 standard and manufactured by SLM. These samples were then tested in an MTS Insight 30 compression testing machine to study the trends of the yield process with different structure sizes. A set of standard cylinder samples was also designed according to the Chinese GB/T 228-2010 standard. These samples, made of an iron-nickel alloy (IN718), were also processed by SLM and then tested in a universal material testing machine (INSTRON 1346) to obtain their tensile strength. Furthermore, a lightweight redesign method was investigated, and common parts such as a stopper and a connecting plate were redesigned using this method. The redesigned parts were fabricated, and application tests have already been performed. The compression testing results show that when the minimum structure size is larger than 1.5 mm, the mechanical characteristics are hardly affected by process defects. The cylinder parts fractured at about 1069.6 MPa in the universal material testing machine. The redesigned parts worked well in the application tests, with both the weight and the fabrication time of these parts reduced by more than 20%.

  4. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    Science.gov (United States)

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Recently, image fusion has taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely applied imaging modalities for diagnosing brain vascular disease and for brain radiosurgery. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of the vessel dispersion generated by the injected contrast material. The proposed fusion scheme contains different fusion methods for high- and low-frequency contents, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy. The content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In the proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high-frequency coefficients. For the low-frequency coefficients, a maximum selection rule based on a local energy criterion is applied for better visual perception. The proposed fusion algorithm is evaluated on a comprehensive brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common and basic fusion algorithms.
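
    A simplified sketch of the coefficient-domain fusion rules described above, using a discrete wavelet transform as a stand-in for the wrapping curvelet transform (curvelet libraries are less commonly available) and omitting the fuzzy weighting:

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse(img_a, img_b, wavelet="db2", level=2, win=5):
    """Fuse two registered grayscale images in a transform domain:
    low-frequency band by local-energy maximum selection, high-frequency
    bands by maximum-absolute selection (a crude analogue of the paper's
    curvelet-domain rules)."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    # Low-frequency approximation: pick the coefficient with larger local energy.
    ea = uniform_filter(ca[0] ** 2, win)
    eb = uniform_filter(cb[0] ** 2, win)
    fused = [np.where(ea >= eb, ca[0], cb[0])]
    # High-frequency detail bands: maximum-absolute selection.
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```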

  5. Knowledge based expert system approach to instrumentation selection (INSEL

    Directory of Open Access Journals (Sweden)

    S. Barai

    2004-08-01

    The selection of appropriate instrumentation for any structural measurement of a civil engineering structure is a complex task. Recent developments in Artificial Intelligence (AI) can help in an organized use of the experiential knowledge available on instrumentation for laboratory and in-situ measurement. Usually, the instrumentation decision is based on the experience and judgment of experimentalists. The heuristic knowledge available for different types of measurement is domain dependent, and the information is scattered across varied knowledge sources. Knowledge engineering techniques can help in capturing this experiential knowledge. This paper demonstrates a prototype knowledge-based system for INstrument SELection (INSEL), an assistant in which the experiential knowledge for various structural domains can be captured and utilized for making instrumentation decisions. In particular, this Knowledge Based Expert System (KBES) encodes the heuristics on measurement and demonstrates the instrument selection process with reference to steel bridges. INSEL runs on a microcomputer and uses the INSIGHT 2+ environment.

  6. Diversified models for portfolio selection based on uncertain semivariance

    Science.gov (United States)

    Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini

    2017-02-01

    Since financial markets are complex, future security returns are sometimes represented mainly by experts' estimations, owing to a lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given by experts' subjective estimations and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve them. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. Finally, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.
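
    For readers unfamiliar with the downside-risk measure, here is the classical sample semivariance (the paper instead works with uncertain variables and the 99-method, so this is only the crisp analogue):

```python
import numpy as np

def semivariance(returns, target=None):
    """Downside semivariance: mean squared shortfall below the target
    (the mean by default)."""
    r = np.asarray(returns, dtype=float)
    t = r.mean() if target is None else target
    short = np.minimum(r - t, 0.0)
    return np.mean(short ** 2)

def portfolio_semivariance(weights, scenario_returns):
    # scenario_returns: (n_scenarios, n_assets) sampled security returns
    return semivariance(scenario_returns @ np.asarray(weights))
```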

  7. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

    , sample extraction, and analytical methods to be used in the INL-2 study. For each of the five test events, the specified floor of the INL building will be contaminated with BG using a point-release device located in the room specified in the experimental design. Then quality control (QC), reference material coupon (RMC), judgmental, and probabilistic samples will be collected according to the sampling plan for each test event. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples were selected with a random aspect and in sufficient numbers to provide desired confidence for detecting contamination or clearing uncontaminated (or decontaminated) areas. Following sample collection for a given test event, the INL building will be decontaminated. For possibly contaminated areas, the numbers of probabilistic samples were chosen to provide 95% confidence of detecting contaminated areas of specified sizes. For rooms that may be uncontaminated following a contamination event, or for whole floors after decontamination, the numbers of judgmental and probabilistic samples were chosen using the CJR approach. The numbers of samples were chosen to support making X%/Y% clearance statements with X = 95% or 99% and Y = 96% or 97%. The experimental and sampling design also provides for making X%/Y% clearance statements using only probabilistic samples. For each test event, the numbers of characterization and clearance samples were selected within limits based on operational considerations while still maintaining high confidence for detection and clearance aspects. The sampling design for all five test events contains 2085 samples, with 1142 after contamination and 943 after decontamination. These numbers include QC, RMC, judgmental, and probabilistic samples. The experimental and sampling design specified in this report provides a good statistical foundation for achieving the objectives of the INL-2 study.
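
    The probabilistic sample sizes mentioned above follow standard detection logic: enough random samples that a contaminated area of a given size is hit with the desired confidence. A simplified binomial sketch (not the CJR method used in the report):

```python
import math

def n_for_detection(confidence=0.95, hot_fraction=0.01):
    """Random samples needed so that, with probability `confidence`, at least
    one sample lands in a contaminated area covering `hot_fraction` of the
    surface: n = ln(1-C) / ln(1-p)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - hot_fraction))

print(n_for_detection())   # 299 samples for 95% confidence, 1% contaminated area
```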

  8. Survey of sampling-based methods for uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.

    2006-01-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (ii) generation of samples from uncertain analysis inputs, (iii) propagation of sampled inputs through an analysis, (iv) presentation of uncertainty analysis results, and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top-down coefficient of concordance, and variance decomposition
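
    Steps (ii)-(v) above have compact library support. A toy sketch that generates a Latin hypercube sample, propagates it through a made-up model, and ranks input importance by rank (Spearman) correlation; ranges and model are assumptions:

```python
import numpy as np
from scipy.stats import qmc, spearmanr

rng = np.random.default_rng(0)
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=200)                        # LHS on the unit square

# Map to the uncertain inputs (hypothetical ranges).
x = qmc.scale(u, [0.5, 10.0], [1.5, 50.0])       # x1 in [0.5,1.5], x2 in [10,50]

# Propagate through a toy model with additive noise.
y = x[:, 0] ** 2 + 0.1 * x[:, 1] + rng.normal(0, 0.1, 200)

# Rank-based sensitivity: Spearman correlation of each input with the output.
for i in range(2):
    rho, _ = spearmanr(x[:, i], y)
    print(f"input {i}: rank correlation {rho:.2f}")
```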

  9. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD. (.; .); Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.

  10. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Science.gov (United States)

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate but suffer a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are strongly affected by weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of individual detection, registration, and fusion stages. The paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection-based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics; a method optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method that inserts a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets; the proposed modBMVT can remove thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection-based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated

  11. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Sungho Kim

    2016-07-01

    Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate but suffer a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are strongly affected by weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of individual detection, registration, and fusion stages. The paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection-based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics; a method optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method that inserts a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets; the proposed modBMVT can remove thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection-based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic
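
    A schematic of the final fusion stage common to both records above: AdaBoost over decision stumps acts as a feature selector on concatenated SAR and IR detector features. Data and feature layout are invented placeholders:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# Hypothetical fused feature vectors: columns 0-3 from the SAR detector,
# columns 4-7 from the IR detector, one row per registered candidate target.
X = rng.random((400, 8))
y = rng.integers(0, 2, 400)              # 1 = true target, 0 = clutter

# AdaBoost with depth-1 trees: each boosting round effectively selects the
# single most discriminative feature, which is the feature-selection role it
# plays in decision-level fusion.
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=50)
clf.fit(X, y)
print(clf.feature_importances_)          # weight of each SAR/IR feature
```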

  12. Studies on the matched potential method for determining the selectivity coefficients of ion-selective electrodes based on neutral ionophores: experimental and theoretical verification.

    Science.gov (United States)

    Tohda, K; Dragoe, D; Shibata, M; Umezawa, Y

    2001-06-01

    dependent, the determined selectivity should be used not as a "coefficient" but as a "factor". Contrary to such criticism, it was shown theoretically and experimentally that the values of the MPM selectivity coefficient for ions of equal charge (ZA = ZB) never vary with the primary and interfering ion concentrations in the sample solutions, even when non-Nernstian responses are observed. This paper is the first comprehensive demonstration of an electrostatics-based theory for the MPM and should be of great theoretical and experimental value to fundamental and applied ISE researchers.

  13. The Recent Developments in Sample Preparation for Mass Spectrometry-Based Metabolomics.

    Science.gov (United States)

    Gong, Zhi-Gang; Hu, Jing; Wu, Xi; Xu, Yong-Jiang

    2017-07-04

    Metabolomics is a critical component of systems biology. Although great progress has been achieved in metabolomics, problems remain in sample preparation, data processing and data interpretation. In this review, we explore the roles, challenges and trends in sample preparation for mass spectrometry (MS)-based metabolomics. Newly emerged sample preparation methods are also critically examined, including laser microdissection, in vivo sampling, dried blood spots, microwave-, ultrasound- and enzyme-assisted extraction, as well as microextraction techniques. Finally, we provide some conclusions and perspectives on sample preparation in MS-based metabolomics.

  14. A highly sensitive and selective aptamer-based colorimetric sensor for the rapid detection of PCB 77.

    Science.gov (United States)

    Cheng, Ruojie; Liu, Siyao; Shi, Huijie; Zhao, Guohua

    2018-01-05

    A highly sensitive, specific and simple colorimetric sensor based on an aptamer was established for the detection of polychlorinated biphenyls (PCB 77). The use of unmodified gold nanoparticles as a colorimetric probe for the aptamer sensor enabled the highly sensitive and selective detection of PCB 77. A linear range of 0.5 nM to 900 nM was obtained for the colorimetric assay, with a minimum detection limit of 0.05 nM. In addition, using circular dichroism, UV spectroscopy and visual inspection, we found that the 35-base fragment retained after cutting 5 bases from the 5′ end of the aptamer plays the most significant role in the PCB 77-specific recognition process. We thus found a novel way to truncate nucleotides to optimize the detection of PCB 77, and the selected nucleotides could still achieve high affinity for PCB 77. At the same time, efficient detection of PCB 77 by our colorimetric sensor in complex environmental water samples was realized, which shows a good application prospect.

  15. 44 CFR 321.2 - Selection of the mobilization base.

    Science.gov (United States)

    2010-10-01

    ... base. 321.2 Section 321.2 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY PREPAREDNESS MAINTENANCE OF THE MOBILIZATION BASE (DEPARTMENT OF DEFENSE, DEPARTMENT OF ENERGY, MARITIME ADMINISTRATION) § 321.2 Selection of the mobilization base. (a) The Department...

  16. 40 CFR 91.506 - Engine sample selection.

    Science.gov (United States)

    2010-07-01

    ... paragraph (b)(2) of this section. It defines one-tail, 95 percent confidence intervals. σ=actual test sample... individual engine x=mean of emission test results of the actual sample FEL=Family Emission Limit n=The actual... carry-over engine families: After one engine is tested, the manufacturer will combine the test with the...

  17. Evaluation of sampling strategies to estimate crown biomass

    Directory of Open Access Journals (Sweden)

    Krishna P Poudel

    2015-01-01

    Background: Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass of a tree. Crown biomass estimation is useful for several purposes, including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. The allometric equations used for predicting crown biomass should therefore be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies for estimating crown biomass and assess the effect of sample size on the estimates.

    Methods: Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass.

    Results: Compared to all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced results approximately similar to simple random sampling, but RMSE decreased further when information on branch diameter was used in the design and estimation phases.

    Conclusions: Use of
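
    As a concrete illustration of the probability-proportional-to-size (PPS) idea evaluated above, here is a Hansen-Hurwitz estimate of crown biomass from a PPS-with-replacement branch sample. The diameters and the allometric model are made up:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical branch list for one tree: diameters (cm) as the size measure.
diam = np.array([2.1, 3.4, 1.2, 5.0, 2.8, 4.1, 1.8, 3.0, 2.2])
biomass = 0.08 * diam ** 2.2        # made-up allometric branch biomass (kg)

p = diam ** 2 / np.sum(diam ** 2)   # selection probability ~ diameter squared
idx = rng.choice(len(diam), size=3, replace=True, p=p)   # PPS with replacement

# Hansen-Hurwitz estimator of total crown biomass from the 3 sampled branches.
est_total = np.mean(biomass[idx] / p[idx])
print(est_total, biomass.sum())     # estimate vs. true total
```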

  18. A dansyl based fluorescence chemosensor for Hg(2+) and its application in the complicated environment samples.

    Science.gov (United States)

    Zhou, Shuai; Zhou, Ze-Quan; Zhao, Xuan-Xuan; Xiao, Yu-Hao; Xi, Gang; Liu, Jin-Ting; Zhao, Bao-Xiang

    2015-09-05

    We have developed a novel fluorescent chemosensor (DAM) based on dansyl and morpholine units for the detection of the mercury ion with excellent selectivity and sensitivity. In the presence of Hg(2+) in a mixed solution of HEPES buffer (pH 7.5, 20 mM) and MeCN (2/8, v/v) at room temperature, the fluorescence of DAM was almost completely quenched from green to colorless, with a fast response time. Moreover, DAM showed excellent anti-interference capability even in the presence of large amounts of interfering ions. It is worth noting that DAM could be used to detect Hg(2+) specifically in Yellow River samples, which implies potential applications of DAM in complicated environmental samples.

  19. Performance of local information-based link prediction: a sampling perspective

    Science.gov (United States)

    Zhao, Jichang; Feng, Xu; Dong, Li; Liang, Xiao; Xu, Ke

    2012-08-01

    Link prediction is pervasively employed to uncover missing links in snapshots of real-world networks, which are usually obtained through different kinds of sampling methods. In the previous literature, in order to evaluate prediction performance, the known edges in the sampled snapshot are divided into a training set and a probe set randomly, without considering the underlying sampling approach. However, different sampling methods might lead to different missing links, especially biased ones. For this reason, random partition-based evaluation of performance is no longer convincing if we take the sampling method into account. In this paper, we re-evaluate the performance of local information-based link predictions through a division of the training set and the probe set that is governed by the sampling method. Interestingly, we find that each prediction approach performs unevenly across different sampling methods. Moreover, most of these predictions perform weakly when the sampling method is biased, which indicates that the performance of these methods may have been overestimated in prior works.

  20. Networked Estimation for Event-Based Sampling Systems with Packet Dropouts

    Directory of Open Access Journals (Sweden)

    Young Soo Suh

    2009-04-01

    This paper is concerned with a networked estimation problem in which sensor data are transmitted over a network. In the event-based sampling scheme known as level-crossing or send-on-delta (SOD), sensor data are transmitted to the estimator node if the difference between the current sensor value and the last transmitted one is greater than a given threshold. Event-based sampling has been shown to be more efficient than time-triggered sampling in some situations, especially for improving network bandwidth usage. However, it cannot detect packet dropouts, because data transmission and reception do not use a periodic time-stamp mechanism as found in time-triggered sampling systems. Motivated by this issue, we propose a modified event-based sampling scheme, called modified SOD, in which sensor data are sent when either the change in sensor output exceeds a given threshold or the elapsed time exceeds a given interval. Simulation results show that the proposed modified SOD sampling significantly improves estimation performance when packet dropouts happen.
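
    The modified SOD rule is easy to state in code: transmit on a threshold crossing or on a timeout, whichever comes first. A sketch reconstructed from the abstract (names and parameter values are ours):

```python
import math

def modified_sod(samples, dt, delta, max_interval):
    """Send-on-delta with a timeout: transmit when the value has changed by
    more than `delta` since the last transmission, or when more than
    `max_interval` seconds have elapsed. `samples` are periodic sensor
    readings taken every `dt` seconds; returns (time, value) pairs sent."""
    sent = []
    last_value, last_time = None, None
    for k, x in enumerate(samples):
        t = k * dt
        if (last_value is None or abs(x - last_value) > delta
                or t - last_time >= max_interval):
            sent.append((t, x))
            last_value, last_time = x, t
    return sent

# A slowly drifting signal triggers timeout transmissions, letting the
# estimator distinguish "no change" from a packet dropout.
readings = [math.sin(0.01 * k) for k in range(1000)]
print(len(modified_sod(readings, dt=0.1, delta=0.2, max_interval=5.0)))
```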

  1. Sampling Key Populations for HIV Surveillance: Results From Eight Cross-Sectional Studies Using Respondent-Driven Sampling and Venue-Based Snowball Sampling.

    Science.gov (United States)

    Rao, Amrita; Stahlman, Shauna; Hargreaves, James; Weir, Sharon; Edwards, Jessie; Rice, Brian; Kochelani, Duncan; Mavimbela, Mpumelelo; Baral, Stefan

    2017-10-20

    In using regularly collected or existing surveillance data to characterize engagement in human immunodeficiency virus (HIV) services among marginalized populations, differences in sampling methods may produce different pictures of the target population and may therefore result in different priorities for response. The objective of this study was to use existing data to evaluate the sample distribution of eight studies of female sex workers (FSW) and men who have sex with men (MSM), who were recruited using different sampling approaches in two locations within Sub-Saharan Africa: Manzini, Swaziland and Yaoundé, Cameroon. MSM and FSW participants were recruited using either respondent-driven sampling (RDS) or venue-based snowball sampling. Recruitment took place between 2011 and 2016. Participants at each study site were administered a face-to-face survey to assess sociodemographics, along with the prevalence of self-reported HIV status, frequency of HIV testing, stigma, and other HIV-related characteristics. Crude and RDS-adjusted prevalence estimates were calculated. Crude prevalence estimates from the venue-based snowball samples were compared with the overlap of the RDS-adjusted prevalence estimates, between both FSW and MSM in Cameroon and Swaziland. RDS samples tended to be younger (MSM aged 18-21 years in Swaziland: 47.6% [139/310] in RDS vs 24.3% [42/173] in Snowball, in Cameroon: 47.9% [99/306] in RDS vs 20.1% [52/259] in Snowball; FSW aged 18-21 years in Swaziland 42.5% [82/325] in RDS vs 8.0% [20/249] in Snowball; in Cameroon 15.6% [75/576] in RDS vs 8.1% [25/306] in Snowball). They were less educated (MSM: primary school completed or less in Swaziland 42.6% [109/310] in RDS vs 4.0% [7/173] in Snowball, in Cameroon 46.2% [138/306] in RDS vs 14.3% [37/259] in Snowball; FSW: primary school completed or less in Swaziland 86.6% [281/325] in RDS vs 23.9% [59/247] in Snowball, in Cameroon 87.4% [520/576] in RDS vs 77.5% [238/307] in Snowball) than the snowball

  2. A sampling-based Bayesian model for gas saturation estimationusing seismic AVA and marine CSEM data

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Jinsong; Hoversten, Michael; Vasco, Don; Rubin, Yoram; Hou,Zhangshuan

    2006-04-04

    We develop a sampling-based Bayesian model to jointly invert seismic amplitude versus angles (AVA) and marine controlled-source electromagnetic (CSEM) data for layered reservoir models. The porosity and fluid saturation in each layer of the reservoir, the seismic P- and S-wave velocity and density in the layers below and above the reservoir, and the electrical conductivity of the overburden are considered as random variables. Pre-stack seismic AVA data in a selected time window and real and quadrature components of the recorded electrical field are considered as data. We use Markov chain Monte Carlo (MCMC) sampling methods to obtain a large number of samples from the joint posterior distribution function. Using those samples, we obtain not only estimates of each unknown variable, but also its uncertainty information. The developed method is applied to both synthetic and field data to explore the combined use of seismic AVA and EM data for gas saturation estimation. Results show that the developed method is effective for joint inversion, and the incorporation of CSEM data reduces uncertainty in fluid saturation estimation, when compared to results from inversion of AVA data only.
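
    A minimal sketch of the MCMC machinery referred to above: a random-walk Metropolis sampler over a toy log-posterior. The real model couples AVA and CSEM forward modelling; a quadratic stand-in replaces it here, and all dimensions and step sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def log_posterior(theta):
    """Hypothetical log-posterior for reservoir parameters `theta`
    (e.g. porosity, saturation): a Gaussian toy stand-in for the
    prior-plus-data-misfit of the actual joint inversion."""
    return -0.5 * np.sum((theta - 0.3) ** 2 / 0.05 ** 2)

def metropolis(n_steps, dim=2, step=0.02):
    chain = np.empty((n_steps, dim))
    theta = np.full(dim, 0.5)
    lp = log_posterior(theta)
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal(dim)
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:     # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

samples = metropolis(20000)[5000:]                  # discard burn-in
print(samples.mean(axis=0), samples.std(axis=0))    # estimates + uncertainty
```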

  3. Object width modulates object-based attentional selection.

    Science.gov (United States)

    Nah, Joseph C; Neppi-Modona, Marco; Strother, Lars; Behrmann, Marlene; Shomstein, Sarah

    2018-04-24

    Visual input typically includes a myriad of objects, some of which are selected for further processing. While these objects vary in shape and size, most evidence supporting object-based guidance of attention is drawn from paradigms employing two identical objects. Importantly, object size is a readily perceived stimulus dimension, and whether it modulates the distribution of attention remains an open question. Across four experiments, the size of the objects in the display was manipulated in a modified version of the two-rectangle paradigm. In Experiment 1, two identical parallel rectangles of two sizes (thin or thick) were presented. Experiments 2-4 employed identical trapezoids (each having a thin and thick end), inverted in orientation. In the experiments, one end of an object was cued and participants performed either a T/L discrimination or a simple target-detection task. Combined results show that, in addition to the standard object-based attentional advantage, there was a further attentional benefit for processing information contained in the thick versus thin end of objects. Additionally, eye-tracking measures demonstrated increased saccade precision towards thick object ends, suggesting that Fitts's Law may play a role in object-based attentional shifts. Taken together, these results suggest that object-based attentional selection is modulated by object width.

  4. Woody species diversity in forest plantations in a mountainous region of Beijing, China: effects of sampling scale and species selection.

    Directory of Open Access Journals (Sweden)

    Yuxin Zhang

    The role of forest plantations in biodiversity conservation has gained more attention in recent years. However, most work evaluating the diversity of forest plantations focuses on only one spatial scale; we therefore examined the effects of sampling scale on diversity in forest plantations. We designed a hierarchical sampling strategy to collect data on woody species diversity in planted pine (Pinus tabuliformis Carr.), planted larch (Larix principis-rupprechtii Mayr.), and natural secondary deciduous broadleaf forests in a mountainous region of Beijing, China. Additive diversity partition analysis showed that, compared to the natural forests, the planted pine forests had a different multi-scale woody species diversity partitioning pattern (except for Simpson diversity in the regeneration layer), while the larch plantations did not show multi-scale diversity partitioning patterns obviously different from those of the natural secondary broadleaf forest. Compared to the natural secondary broadleaf forests, the effects of planted pine forests on woody species diversity depend on the sampling scale and the layers selected for analysis. Diversity in the planted larch forest, however, was not significantly different from that in the natural forest for all diversity components at all sampling levels. Our work demonstrates that the species selected for afforestation and the sampling scales selected for data analysis alter conclusions about the levels of diversity supported by plantations. We suggest that a wide range of scales should be considered when evaluating the role of forest plantations in biodiversity conservation.
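
    The additive diversity partition used above decomposes total (gamma) diversity as gamma = mean alpha + beta. A minimal richness-based sketch with made-up plot data:

```python
import numpy as np

def additive_partition(plots):
    """Additive diversity partition, gamma = mean(alpha) + beta, using
    species richness as the diversity measure (plots: list of species sets).
    Returns (alpha, beta, gamma)."""
    alpha = np.mean([len(p) for p in plots])
    gamma = len(set().union(*plots))
    return alpha, gamma - alpha, gamma

plots = [{"oak", "pine"}, {"pine", "larch", "birch"}, {"pine"}]
print(additive_partition(plots))   # (2.0, 2.0, 4)
```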

  5. Generalized Selectivity Description for Polymeric Ion-Selective Electrodes Based on the Phase Boundary Potential Model.

    Science.gov (United States)

    Bakker, Eric

    2010-02-15

    A generalized description of the response behavior of potentiometric polymer membrane ion-selective electrodes is presented on the basis of ion-exchange equilibrium considerations at the sample-membrane interface. This paper consolidates and extends previously reported theoretical advances in a more compact yet more comprehensive form. Specifically, the phase boundary potential model is used to derive the origin of the Nernstian response behavior in a single expression, which is valid for a membrane containing any charge type and complex stoichiometry of ionophore and ion-exchanger. This forms the basis for a generalized expression of the selectivity coefficient, which may be used for the selectivity optimization of ion-selective membranes containing electrically charged and neutral ionophores of any desired stoichiometry. It is shown to reduce to expressions published previously for specialized cases, and may be effectively applied to problems relevant in modern potentiometry. The treatment is extended to mixed ion solutions, offering a comprehensive yet formally compact derivation of the response behavior of ion-selective electrodes to a mixture of ions of any desired charge. It is compared to predictions by the less accurate Nicolsky-Eisenman equation. The influence of ion fluxes or any form of electrochemical excitation is not considered here, but may be readily incorporated if an ion-exchange equilibrium at the interface may be assumed in these cases.
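
    For reference, the Nicolsky-Eisenman description that the generalized treatment is compared against has the standard textbook form

    $$ E = E^{0} + \frac{RT}{z_A F}\,\ln\!\Big(a_A + \sum_{B \neq A} K_{AB}^{\mathrm{pot}}\, a_B^{\,z_A/z_B}\Big), $$

    with activities a, charges z, and potentiometric selectivity coefficients K^pot (this is the general equation, not a formula quoted from the paper).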

  6. Strategy for Ranking the Science Value of the Surface of Asteroid 101955 Bennu for Sample Site Selection for Osiris-REx

    Science.gov (United States)

    Nakamura-Messenger, K.; Connolly, H. C., Jr.; Lauretta, D. S.

    2014-01-01

    OSIRIS-REx is NASA's New Frontiers 3 sample return mission that will return at least 60 g of pristine surface material from near-Earth asteroid 101955 Bennu in September 2023. The scientific value of the sample increases enormously with the amount of knowledge captured about the geological context from which the sample is collected. The OSIRIS-REx spacecraft is highly maneuverable and capable of investigating the surface of Bennu at scales down to the sub-cm. The OSIRIS-REx instruments will characterize the overall surface geology, including spectral properties, microtexture, and geochemistry of the regolith at the sampling site, in exquisite detail for up to 505 days after encountering Bennu in August 2018. The mission requires at the very minimum one acceptable location on the asteroid where a touch-and-go (TAG) sample collection maneuver can be successfully performed. Sample site selection requires that the following maps be produced: Safety, Deliverability, Sampleability, and finally Science Value. If areas on the surface are designated as safe, navigation can fly to them, and they contain ingestible regolith, then the scientific value of one site over another will guide site selection.

  7. The profile of selected samples of Croatian athletes based on the items of sport jealousy scale (SJS

    Directory of Open Access Journals (Sweden)

    Sindik Joško

    2016-01-01

    This article addresses the role of jealousy in sport as a negative emotional reaction, accompanied by thoughts of inadequacy when compared to others. The purpose of this study was to define characteristic profiles of Croatian athletes based on the single items of the Sport Jealousy Scale (SJS-II), labeled by several variables: gender, type of sport, and age group. A purposive sample of 73 athletes competing at Croatian championships in different sports (football, bowling, volleyball and handball) was examined with the Croatian version of the SJS-II. The three clusters obtained are similarly balanced in the number of cases in each cluster. Put simply, the clusters clearly differentiate the most jealous, moderately jealous and slightly/low jealous athletes. The most jealous (first) cluster comprises athletes from team sports, women, and older athletes. Females, bowling athletes, athletes from individual (coactive) sports and the youngest athletes are the least jealous (grouped in the third cluster).

  8. Modular microfluidic system for biological sample preparation

    Science.gov (United States)

    Rose, Klint A.; Mariella, Jr., Raymond P.; Bailey, Christopher G.; Ness, Kevin Dean

    2015-09-29

    A reconfigurable modular microfluidic system for preparation of a biological sample, including a series of reconfigurable modules for automated sample preparation adapted to selectively include a) a microfluidic acoustic focusing filter module, b) a dielectrophoresis bacteria filter module, c) a dielectrophoresis virus filter module, d) an isotachophoresis nucleic acid filter module, e) a lysis module, and f) an isotachophoresis-based nucleic acid filter.

  9. PeptideManager: A Peptide Selection Tool for Targeted Proteomic Studies Involving Mixed Samples from Different Species

    Directory of Open Access Journals (Sweden)

    Kevin eDemeure

    2014-09-01

    The search for clinically useful protein biomarkers using advanced mass spectrometry approaches represents a major focus in cancer research. However, the direct analysis of human samples may be challenging due to limited availability, the absence of appropriate control samples, or the large background variability observed in patient material. As an alternative approach, human tumors orthotopically implanted into a different species (xenografts) are clinically relevant models that have proven their utility in pre-clinical research. Patient-derived xenografts for glioblastoma have been extensively characterized in our laboratory and have been shown to retain the characteristics of the parental tumor at the phenotypic and genetic level. Such models were also found to adequately mimic the behavior and treatment response of human tumors. The reproducibility of such xenograft models, and the possibility to identify their host background and perform tumor-host interaction studies, are major advantages over the direct analysis of human samples. At the proteome level, the analysis of xenograft samples is challenged by the presence of proteins from two different species which, depending on tumor size, type or location, often appear at variable ratios. Any proteomics approach aimed at quantifying proteins within such samples must consider the identification of species-specific peptides in order to avoid biases introduced by the host proteome. Here, we present an in-house methodology and tool developed to select peptides used as surrogates for protein candidates from a defined proteome (e.g., human) in a host proteome background (e.g., mouse, rat) suited for mass spectrometry analysis. The tools presented here are applicable to any species-specific proteome, provided a protein database is available. By linking the information from both proteomes, PeptideManager significantly facilitates and expedites the selection of peptides used as surrogates to analyze
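
    The core selection idea, keeping only peptides that cannot occur in the host proteome, reduces to an in-silico digest plus a set difference. A toy sketch (the two sequences are fabricated, and PeptideManager itself does considerably more, e.g. uniqueness within each proteome and MS-suitability filters):

```python
import re

def tryptic_peptides(protein: str) -> set:
    """In-silico trypsin digest: cleave after K or R except before P;
    keep MS-friendly lengths only."""
    peptides = re.split(r"(?<=[KR])(?!P)", protein)
    return {p for p in peptides if 6 <= len(p) <= 30}

def species_specific(human_proteins, host_proteins):
    """Peptides found in the human proteome but not in the host proteome."""
    human = set().union(*(tryptic_peptides(p) for p in human_proteins))
    host = set().union(*(tryptic_peptides(p) for p in host_proteins))
    return human - host

# Fabricated example sequences differing by one residue:
print(species_specific(["MKTAYIAKQRQISFVKSHFSR"],
                       ["MKTAYIAKQRLISFVKSHFSR"]))   # {'QISFVK'}
```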

  10. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx.

  11. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed

    2013-01-07

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx.
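
    The B-H step-up rule at the heart of both records above is short enough to show directly. Given peak scores already converted to p-values, it returns the peaks to keep (the generic procedure, not the authors' code):

```python
import numpy as np

def benjamini_hochberg_select(p_values, alpha=0.05):
    """B-H step-up procedure: with p-values sorted ascending, find the largest
    k such that p_(k) <= (k/m) * alpha and keep everything up to k.
    Returns the indices of the selected hypotheses (peaks)."""
    p = np.asarray(p_values)
    order = np.argsort(p)
    m = len(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = np.nonzero(p[order] <= thresh)[0]
    if below.size == 0:
        return np.array([], dtype=int)
    return order[: below[-1] + 1]

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg_select(pvals, alpha=0.05))   # [0 1]
```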

  12. Selective Sequential Zero-Base Budgeting Procedures Based on Total Factor Productivity Indicators

    OpenAIRE

    A. Ishikawa; E. F. Sudit

    1981-01-01

    The authors' purpose in this paper is to develop productivity-based sequential budgeting procedures designed to expedite identification of major problem areas in bugetary performance, as well as to reduce the costs associated with comprehensive zero-base analyses. The concept of total factor productivity is reviewed and its relations to ordinary and zero-based budgeting are discussed in detail. An outline for a selective sequential analysis based on monitoring of three key indicators of (a) i...

  13. Oral cancer prognosis based on clinicopathologic and genomic markers using a hybrid of feature selection and machine learning methods

    Science.gov (United States)

    2013-01-01

    Background: Machine learning techniques are becoming useful as an alternative approach to conventional medical diagnosis or prognosis, as they are good at handling noisy and incomplete data, and significant results can be attained despite a small sample size. Traditionally, clinicians make prognostic decisions based on clinicopathologic markers. However, it is not easy for even the most skilful clinician to arrive at an accurate prognosis using these markers alone. Thus, there is a need to use genomic markers to improve the accuracy of prognosis. The main aim of this research is to apply a hybrid of feature selection and machine learning methods to oral cancer prognosis based on the correlation of clinicopathologic and genomic markers.

    Results: In the first stage of this research, five feature selection methods were proposed and tested on the oral cancer prognosis dataset. In the second stage, models built from the features selected by each method were tested on the proposed classifiers. Four types of classifiers were chosen: ANFIS, artificial neural network, support vector machine and logistic regression. k-fold cross-validation was implemented for all classifiers due to the small sample size. The hybrid model of ReliefF-GA-ANFIS with the 3 input features drink, invasion and p63 achieved the best accuracy (accuracy = 93.81%; AUC = 0.90) for oral cancer prognosis.

    Conclusions: The results revealed that prognosis is superior with the presence of both clinicopathologic and genomic markers. The selected features can be investigated further to validate their potential as a significant prognostic signature in oral cancer studies. PMID:23725313
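
    A schematic of the validation setup described above: k-fold cross-validation of a classifier on a deliberately small sample. The data are random placeholders, and logistic regression stands in for the ANFIS-type models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
# Hypothetical stand-in data: 31 patients (small sample, as in the study),
# 3 selected features (e.g. drink, invasion, p63), binary outcome.
X = rng.random((31, 3))
y = rng.integers(0, 2, 31)

# k-fold cross-validation guards against optimistic accuracy estimates
# when the sample size is small.
scores = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="accuracy")
print(scores.mean(), scores.std())
```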

  14. Proteomic profiling of renal allograft rejection in serum using magnetic bead-based sample fractionation and MALDI-TOF MS.

    Science.gov (United States)

    Sui, Weiguo; Huang, Liling; Dai, Yong; Chen, Jiejing; Yan, Qiang; Huang, He

    2010-12-01

    Proteomics is one of the emerging techniques for biomarker discovery. Biomarkers can be used for early noninvasive diagnosis and prognosis of diseases and for evaluating treatment efficacy. In the present study, we used the well-established ClinProt Micro solution, which incorporates a unique magnetic bead-based sample preparation technology together with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS); this platform has proven very successful for discovering disease-related biomarkers owing to its outstanding performance and reproducibility. We collected fasting blood samples from patients with biopsy-confirmed acute renal allograft rejection (n = 12), chronic rejection (n = 12), stable graft function (n = 12), and from healthy volunteers (n = 13) to study serum peptidome patterns. Specimens were purified with magnetic bead-based weak cation exchange chromatography and analyzed with a MALDI-TOF mass spectrometer. The results indicated that 18 differential peptide peaks were selected as potential biomarkers of acute renal allograft rejection, and 6 differential peptide peaks were selected as potential biomarkers of chronic rejection. A Quick Classifier algorithm was used to set up classification models for acute and chronic renal allograft rejection; the models recognize 82.64% of acute rejection and 98.96% of chronic rejection episodes, respectively. We were able to identify serum protein fingerprints in small samples of recipients with renal allograft rejection and to establish models for the diagnosis of renal allograft rejection. This preliminary study demonstrates that proteomics is an emerging tool for early diagnosis of renal allograft rejection and helps us better understand the pathogenesis of the disease process.

  15. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, few studies consider the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models from only a few available samples is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values and getting stuck in local optima that usually accompanies conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.

  16. Risk-Based Sampling: I Don't Want to Weight in Vain.

    Science.gov (United States)

    Powell, Mark R

    2015-12-01

    Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers.
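
    The estimation-error point above is easy to reproduce: minimum-variance weights w proportional to the inverse covariance times the ones vector always beat equal allocation in-sample, but they depend on an estimated covariance. A toy sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical risk data for 4 producers: simulated "returns" whose sample
# covariance stands in for an estimated risk covariance matrix.
returns = rng.normal(0.02, 0.05, (60, 4))
cov = np.cov(returns, rowvar=False)

# Minimum-variance weights: w proportional to inv(cov) @ 1, normalized.
ones = np.ones(4)
w_mv = np.linalg.solve(cov, ones)
w_mv /= w_mv.sum()

w_eq = ones / 4                      # the simple equal-allocation heuristic
var = lambda w: w @ cov @ w
print(var(w_mv), var(w_eq))          # in-sample, MV always looks better; out of
                                     # sample it may not, as the abstract cautions
```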

  17. Sample Curation at a Lunar Outpost

    Science.gov (United States)

    Allen, Carlton C.; Lofgren, Gary E.; Treiman, A. H.; Lindstrom, Marilyn L.

    2007-01-01

    The six Apollo surface missions returned 2,196 individual rock and soil samples, with a total mass of 381.6 kg. Samples were collected based on visual examination by the astronauts and consultation with geologists in the science back room in Houston. The samples were photographed during collection, packaged in uniquely-identified containers, and transported to the Lunar Module. All samples collected on the Moon were returned to Earth. NASA's upcoming return to the Moon will be different. Astronauts will have extended stays at an outpost and will collect more samples than they will return. They will need curation and analysis facilities on the Moon in order to carefully select samples for return to Earth.

  18. Balanced sampling

    NARCIS (Netherlands)

    Brus, D.J.

    2015-01-01

    In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling
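
    A hedged illustration of the balancing idea only (rejection-style balancing, not the cube method commonly used for balanced sampling designs): among many candidate random samples, keep the one whose sample covariate means best match the known population means:

        # Rejection-style balancing on synthetic covariates (illustration only).
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 2))        # covariates known at every location
        pop_means = X.mean(axis=0)

        def balanced_sample(X, n, n_tries=2000):
            best_idx, best_dist = None, np.inf
            for _ in range(n_tries):
                idx = rng.choice(len(X), size=n, replace=False)
                dist = np.linalg.norm(X[idx].mean(axis=0) - pop_means)
                if dist < best_dist:          # keep the best-balanced candidate
                    best_idx, best_dist = idx, dist
            return best_idx

        print(balanced_sample(X, n=20)[:5])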

  19. SADA: Ecological Risk Based Decision Support System for Selective Remediation

    Science.gov (United States)

    Spatial Analysis and Decision Assistance (SADA) is freeware that implements terrestrial ecological risk assessment and yields a selective remediation design using its integral geographical information system, based on ecological and risk assessment inputs. Selective remediation ...

  20. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    Directory of Open Access Journals (Sweden)

    Ryan P Franckowiak

    Full Text Available In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and the different sums of squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
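
    For reference, the three criteria in their standard least-squares forms (general textbook formulas, not tied to the MRM-specific sums of squares discussed above):

        # Standard information-criterion formulas for a least-squares fit.
        import numpy as np

        def information_criteria(rss, n, k):
            """rss: residual sum of squares; n: sample size; k: fitted parameters."""
            aic = n * np.log(rss / n) + 2 * k
            aicc = aic + (2 * k * (k + 1)) / (n - k - 1)   # small-sample correction
            bic = n * np.log(rss / n) + k * np.log(n)
            return aic, aicc, bic

        print(information_criteria(rss=12.3, n=50, k=3))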

  1. A simple and selective method for determination of phthalate biomarkers in vegetable samples by high pressure liquid chromatography-electrospray ionization-tandem mass spectrometry.

    Science.gov (United States)

    Zhou, Xi; Cui, Kunyan; Zeng, Feng; Li, Shoucong; Zeng, Zunxiang

    2016-06-01

    In the present study, solid-phase extraction cartridges including silica reversed-phase Isolute C18, polymeric reversed-phase Oasis HLB and mixed-mode anion-exchange Oasis MAX, and liquid-liquid extractions with ethyl acetate, n-hexane, dichloromethane and their mixtures were compared for clean-up of phthalate monoesters from vegetable samples. The best recoveries and minimised matrix effects were achieved using ethyl acetate/n-hexane liquid-liquid extraction for these target compounds. A simple and selective method, based on sample preparation by ultrasonic extraction and liquid-liquid extraction clean-up, was developed for the determination of phthalate monoesters in vegetable samples by liquid chromatography/electrospray ionisation-tandem mass spectrometry. The method detection limits for phthalate monoesters ranged from 0.013 to 0.120 ng g⁻¹. Good linearity (r² > 0.991) between the MQLs and 1000× the MQLs was achieved. The intra- and inter-day relative standard deviation values were less than 11.8%. The method was successfully used to determine phthalate monoester metabolites in the vegetable samples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Mobile phone based imaging system for selected tele-healthcare applications

    OpenAIRE

    Condominas Guàrdia, Jordi

    2013-01-01

    A mobile phone based telemedicine study is developed to assess the feasibility of phone usage in selected health care applications. The research is divided into three different objectives. The first objective is to compile the technical characteristics of selected mobile phones from a telemedicine perspective. The second objective is to develop techniques to acquire quality images of skin with mobile phones. Finally, a smartphone based telemedicine application will be developed to asses...

  3. On the benefits of location-based relay selection in mobile wireless networks

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter

    2016-01-01

    We consider infrastructure-based mobile networks that are assisted by a single relay transmission where both the downstream destination and relay nodes are mobile. Selecting the optimal transmission path for a destination node requires up-to-date link quality estimates of all relevant links. If the relay selection is based on link quality measurements, the number of links to update grows quadratically with the number of nodes, and measurements need to be updated frequently when nodes are mobile. In this paper, we consider a location-based relay selection scheme where link qualities are estimated from node positions; in the scenario of a node-based location system such as GPS, the location-based approach reduces signaling overhead, which in this case only grows linearly with the number of nodes. This paper studies these two relay selection approaches and investigates how they are affected...
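
    A sketch of the location-based idea under assumed propagation parameters (a log-distance path-loss model with made-up exponent and reference loss; the paper's actual link-quality mapping may differ): estimate each link from node positions and pick the relay that maximizes the weaker hop:

        # Location-based relay selection with an assumed path-loss model.
        import numpy as np

        def path_loss_db(p1, p2, n_exp=3.0, pl0=40.0):   # n_exp, pl0 are assumptions
            d = max(np.linalg.norm(np.asarray(p1) - np.asarray(p2)), 1.0)
            return pl0 + 10 * n_exp * np.log10(d)        # higher loss = worse link

        def select_relay(base, dest, relays, tx_power_db=20.0):
            def quality(a, b):
                return tx_power_db - path_loss_db(a, b)  # crude SNR proxy
            direct = quality(base, dest)
            best = max(relays, key=lambda r: min(quality(base, r), quality(r, dest)))
            two_hop = min(quality(base, best), quality(best, dest))
            return ("relay", best) if two_hop > direct else ("direct", None)

        print(select_relay((0, 0), (120, 0), [(50, 10), (70, -20)]))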

  4. Large-scale Reconstructions and Independent, Unbiased Clustering Based on Morphological Metrics to Classify Neurons in Selective Populations.

    Science.gov (United States)

    Bragg, Elise M; Briggs, Farran

    2017-02-15

    This protocol outlines large-scale reconstructions of neurons combined with the use of independent and unbiased clustering analyses to create a comprehensive survey of the morphological characteristics observed among a selective neuronal population. Combination of these techniques constitutes a novel approach for the collection and analysis of neuroanatomical data. Together, these techniques enable large-scale, and therefore more comprehensive, sampling of selective neuronal populations and establish unbiased quantitative methods for describing morphologically unique neuronal classes within a population. The protocol outlines the use of modified rabies virus to selectively label neurons. G-deleted rabies virus acts like a retrograde tracer following stereotaxic injection into a target brain structure of interest and serves as a vehicle for the delivery and expression of EGFP in neurons. Large numbers of neurons are infected using this technique and express GFP throughout their dendrites, producing "Golgi-like" complete fills of individual neurons. Accordingly, the virus-mediated retrograde tracing method improves upon traditional dye-based retrograde tracing techniques by producing complete intracellular fills. Individual well-isolated neurons spanning all regions of the brain area under study are selected for reconstruction in order to obtain a representative sample of neurons. The protocol outlines procedures to reconstruct cell bodies and complete dendritic arborization patterns of labeled neurons spanning multiple tissue sections. Morphological data, including positions of each neuron within the brain structure, are extracted for further analysis. Standard programming functions were utilized to perform independent cluster analyses and cluster evaluations based on morphological metrics. To verify the utility of these analyses, statistical evaluation of a cluster analysis performed on 160 neurons reconstructed in the thalamic reticular nucleus of the thalamus

  5. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  6. Feature Selection and Predictors of Falls with Foot Force Sensors Using KNN-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Shengyun Liang

    2015-11-01

    Full Text Available The aging process may lead to the degradation of lower extremity function in the elderly population, which can restrict their daily quality of life and gradually increase the fall risk. We aimed to determine whether objective measures of physical function could predict subsequent falls. Ground reaction force (GRF) data, quantified by sample entropy, were collected by foot force sensors. Thirty-eight subjects (23 fallers and 15 non-fallers) participated in functional movement tests, including walking and sit-to-stand (STS). A feature selection algorithm was used to select relevant features to classify the elderly into two groups, at risk and not at risk of falling down, for three KNN-based classifiers: local mean-based k-nearest neighbor (LMKNN), pseudo nearest neighbor (PNN), and local mean pseudo nearest neighbor (LMPNN) classification. We compared classification performances and achieved the best results with LMPNN, with sensitivity, specificity and accuracy all 100%. Moreover, a subset of GRFs was significantly different between the two groups via the Wilcoxon rank sum test, which is compatible with the classification results. This method could potentially be used by non-experts to monitor balance and the risk of falling down in the elderly population.
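
    A compact sketch of the LMKNN rule as generally described in the literature (toy 2-D data, not the study's GRF-derived features): assign the query to the class whose k nearest same-class neighbours have the closest local mean:

        # Local mean-based k-NN (LMKNN) on toy data.
        import numpy as np

        def lmknn_predict(X, y, query, k=3):
            best_cls, best_dist = None, np.inf
            for cls in np.unique(y):
                Xc = X[y == cls]
                d = np.linalg.norm(Xc - query, axis=1)
                local_mean = Xc[np.argsort(d)[:k]].mean(axis=0)  # mean of k nearest in class
                dist = np.linalg.norm(local_mean - query)
                if dist < best_dist:
                    best_cls, best_dist = cls, dist
            return best_cls

        X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5.5]])
        y = np.array([0, 0, 0, 1, 1, 1])
        print(lmknn_predict(X, y, np.array([4.5, 5.0]), k=2))  # -> 1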

  7. Recent results of the investigation of a micro-fluidic sampling chip and sampling system for hot cell aqueous processing streams

    International Nuclear Information System (INIS)

    Tripp, J.; Smith, T.; Law, J.

    2013-01-01

    A Fuel Cycle Research and Development project has investigated an innovative sampling method that could evolve into the next generation sampling and analysis system for metallic elements present in aqueous processing streams. Initially, sampling technologies were evaluated, and micro-fluidic sampling chip technology was selected and tested. A conceptual design for a fully automated microcapillary-based system was completed and a robotic automated sampling system was fabricated. The mechanical and sampling operation of the completed sampling system was investigated. Different sampling volumes have been tested. It appears that the 10 μl volume produced data with much smaller relative standard deviations than the 2 μl volume. In addition, the production of a less expensive, mass-produced sampling chip was investigated to avoid chip reuse, thus increasing sampling reproducibility/accuracy. The micro-fluidic-based robotic sampling system's mechanical elements were tested to ensure analytical reproducibility and the optimum robotic handling of micro-fluidic sampling chips. (authors)

  8. Personnel Selection Method Based on Personnel-Job Matching

    OpenAIRE

    Li Wang; Xilin Hou; Lili Zhang

    2013-01-01

    The existing personnel selection decisions in practice are based on the evaluation of a job seeker's human capital, and it may be difficult to achieve personnel-job matching that satisfies each party. Therefore, this paper puts forward a new personnel selection method based on bilateral matching. Starting from the employment notion of "satisfaction", the satisfaction evaluation indicator system for each party is constructed. The multi-objective optimization model is given according to ...

  9. Selective inferior petrosal sinus sampling without venous outflow diversion in the detection of a pituitary adenoma in Cushing's syndrome

    International Nuclear Information System (INIS)

    Andereggen, Lukas; Schroth, Gerhard; Gralla, Jan; Ozdoba, Christoph; Seiler, Rolf; Mariani, Luigi; Beck, Juergen; Widmer, Hans-Rudolf; Andres, Robert H.; Christ, Emanuel

    2012-01-01

    Conventional MRI may still be an inaccurate method for the non-invasive detection of a microadenoma in adrenocorticotropin (ACTH)-dependent Cushing's syndrome (CS). Bilateral inferior petrosal sinus sampling (BIPSS) with ovine corticotropin-releasing hormone (oCRH) stimulation is an invasive, but accurate, intervention in the diagnostic armamentarium surrounding CS. To date, there has been continuing controversy regarding the value of lateralization data in detecting a microadenoma. Using BIPSS, we evaluated whether a highly selective placement of microcatheters without diversion of venous outflow might improve detection of pituitary microadenomas. We performed BIPSS in 23 patients who met the clinical and biochemical criteria of CS and had equivocal MRI findings. For BIPSS, the femoral veins were catheterized bilaterally with a 6-F catheter and the inferior petrosal sinus bilaterally with a 2.7-F microcatheter. A third catheter was placed in the right femoral vein. Blood samples were collected from each catheter to determine the ACTH blood concentration before and after oCRH stimulation. In 21 patients, a central-to-peripheral ACTH gradient was found and the affected side determined. In 18 of the 20 patients in whom transsphenoidal partial hypophysectomy was performed based on BIPSS findings, a microadenoma was histologically confirmed. BIPSS had a sensitivity of 94% and a specificity of 67% after oCRH stimulation in detecting a microadenoma. Correct localization of the adenoma was achieved in all Cushing's disease patients. BIPSS remains the gold standard in the detection of a microadenoma in CS. Our findings show that the selective placement of microcatheters without venous outflow diversion might further improve localization of the pituitary tumor. (orig.)

  10. On incomplete sampling under birth-death models and connections to the sampling-based coalescent.

    Science.gov (United States)

    Stadler, Tanja

    2009-11-07

    The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling. This shows that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on a hepatitis C virus dataset from Egypt. We show that the transmission time estimates are significantly different: the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.
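
    A toy forward simulation of the process under incomplete sampling (lineage counts only, with binomial sampling of the survivors; the paper itself works with full tree shapes and bifurcation densities):

        # Gillespie-style simulation of a constant-rate birth-death process,
        # followed by sampling each extant lineage with probability rho.
        import numpy as np

        def birth_death_sampling(lam=1.0, mu=0.5, rho=0.6, t_max=5.0, seed=1):
            rng = np.random.default_rng(seed)
            n, t = 1, 0.0
            while n > 0:
                rate = n * (lam + mu)
                t += rng.exponential(1.0 / rate)      # time to next event
                if t >= t_max:
                    break
                n += 1 if rng.random() < lam / (lam + mu) else -1
            return rng.binomial(n, rho)               # incomplete sampling step

        print(birth_death_sampling())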

  11. Determination of terbium in phosphate rock by Tb³⁺-selective fluorimetric optode based on dansyl derivative as a neutral fluorogenic ionophore

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, Morteza, E-mail: smhosseini@khayam.ut.ac.ir [Department of Chemistry, Islamic Azad University, Savadkooh Branch, Savadkooh (Iran, Islamic Republic of); Ganjali, Mohammad Reza [Center of Excellence in Electrochemistry, Faculty of Chemistry, University of Tehran, Tehran (Iran, Islamic Republic of); Endocrinology and Metabolism Research Center, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Veismohammadi, Bahareh [Center of Excellence in Electrochemistry, Faculty of Chemistry, University of Tehran, Tehran (Iran, Islamic Republic of); Faridbod, Farnoush [Endocrinology and Metabolism Research Center, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Abkenar, Shiva Dehghan [Department of Chemistry, Islamic Azad University, Savadkooh Branch, Savadkooh (Iran, Islamic Republic of); Norouzi, Parviz [Endocrinology and Metabolism Research Center, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Center of Excellence in Electrochemistry, Faculty of Chemistry, University of Tehran, Tehran (Iran, Islamic Republic of)

    2010-04-07

    For the first time a highly sensitive and selective fluorimetric optode membrane was prepared for the determination of trace amounts of Tb(III) ions in phosphate rock samples. The Tb(III) sensing system was constructed by incorporating 5-(dimethylamino)-N'-(2-hydroxy-1-naphthoyl) naphthalene-1-sulfonohydrazine (L) as a neutral Tb(III)-selective fluoroionophore in the plasticized PVC membrane containing sodium tetraphenyl borate as a lipophilic anionic additive. The response of the optode is based on the strong fluorescence quenching of L by Tb³⁺ ions. At a pH value of 5.0, the optode displays a wide concentration range of 1.0 × 10⁻⁷ to 1.0 × 10⁻² M, with a relatively fast response time of less than 45 s. In addition to high stability and reproducibility, the sensor shows a unique selectivity towards the Tb³⁺ ion with respect to common cations. The optode was applied successfully to the trace determination of terbium ion in binary mixtures and water samples and to the determination of Tb³⁺ in phosphate rock samples.

  12. Soil sampling intercomparison exercise by selected laboratories of the ALMERA Network

    International Nuclear Information System (INIS)

    2009-01-01

    The IAEA's Seibersdorf Laboratories in Austria have the programmatic responsibility to provide assistance to Member State laboratories in maintaining and improving the reliability of analytical measurement results, both in radionuclide and trace element determinations. This is accomplished through the provision of reference materials of terrestrial origin, validated analytical procedures, training in the implementation of internal quality control, and through the evaluation of measurement performance by the organization of worldwide and regional interlaboratory comparison exercises. The IAEA is mandated to support global radionuclide measurement systems related to accidental or intentional releases of radioactivity in the environment. To fulfil this obligation and ensure a reliable, worldwide, rapid and consistent response, the IAEA coordinates an international network of analytical laboratories for the measurement of environmental radioactivity (ALMERA). The network was established by the IAEA in 1995 and makes available to Member States a world-wide network of analytical laboratories capable of providing reliable and timely analysis of environmental samples in the event of an accidental or intentional release of radioactivity. A primary requirement for the ALMERA members is participation in the IAEA interlaboratory comparison exercises, which are specifically organized for ALMERA on a regular basis. These exercises are designed to monitor and demonstrate the performance and analytical capabilities of the network members, and to identify gaps and problem areas where further development is needed. In this framework, the IAEA organized a soil sampling intercomparison exercise (IAEA/SIE/01) for selected laboratories of the ALMERA network. The main objective of this exercise was to compare soil sampling procedures used by different participating laboratories. The performance evaluation results of the interlaboratory comparison exercises performed in the framework of

  13. Greater fruit selection following an appearance-based compared with a health-based health promotion poster

    Science.gov (United States)

    2016-01-01

    Abstract Background This study investigated the impact of an appearance-based compared with a traditional health-based public health message for healthy eating. Methods A total of 166 British University students (41 males; aged 20.6 ± 1.9 years) were randomized to view either an appearance-based (n = 82) or a health-based (n = 84) fruit promotion poster. Intentions to consume fruit and immediate fruit selection (laboratory observation) were assessed immediately after poster viewing, and subsequent self-report fruit consumption was assessed 3 days later. Results Intentions to consume fruit were not predicted by poster type (largest β = 0.03, P = 0.68) but were associated with fruit-based liking, past consumption, attitudes and social norms (smallest β = 0.16, P = 0.04). Immediate fruit selection was greater following the appearance-based compared with the health-based poster (β = −0.24, P < 0.05). Subsequent fruit consumption was also greater following the appearance-based poster (β = −0.22, P = 0.03), but this effect became non-significant on consideration of participant characteristics (β = −0.15, P = 0.13), and was instead associated with fruit-based liking and past consumption (smallest β = 0.24, P = 0.03). Conclusions These findings demonstrate the clear value of an appearance-based compared with a health-based health promotion poster for increasing fruit selection. A distinction between outcome measures and the value of a behavioural measure is also demonstrated. PMID:28158693

  14. Recent trends in sorption-based sample preparation and liquid chromatography techniques for food analysis.

    Science.gov (United States)

    V Soares Maciel, Edvaldo; de Toffoli, Ana Lúcia; Lanças, Fernando Mauro

    2018-04-20

    The accelerating growth of the world's population has increased food consumption, demanding more rigorous control of residues and contaminants in food products marketed for human consumption. In view of the complexity of most food matrices, including fruits, vegetables, different types of meat, beverages, among others, a sample preparation step is important to provide more reliable results when combined with HPLC separations. An adequate sample preparation step before the chromatographic analysis is mandatory for obtaining higher precision and accuracy and for improving the extraction of the target analytes, one of the priorities in analytical chemistry. The recent discovery of new materials such as ionic liquids, graphene-derived materials, molecularly imprinted polymers, restricted access media, magnetic nanoparticles, and carbonaceous nanomaterials has provided highly sensitive and selective results in an extensive variety of applications. These materials, as well as their several possible combinations, have been demonstrated to be highly appropriate for the extraction of different analytes from complex samples such as food products. The main characteristics and applications of these new materials in food analysis are presented and discussed in this paper. Another topic discussed in this review covers the main advantages and limitations of sample preparation microtechniques, as well as their off-line and on-line combination with HPLC for food analysis. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Gene-Based Multiclass Cancer Diagnosis with Class-Selective Rejections

    Science.gov (United States)

    Jrad, Nisrine; Grall-Maës, Edith; Beauseroy, Pierre

    2009-01-01

    Supervised learning of microarray data has received much attention in recent years. Multiclass cancer diagnosis based on selected gene profiles is used as an adjunct to clinical diagnosis. However, supervised diagnosis may hinder patient care, add expense or confound a result. To avoid such misleading outcomes, a multiclass cancer diagnosis with class-selective rejection is proposed. It rejects some patients from one, some, or all classes in order to ensure a higher reliability while reducing time and expense costs. Moreover, this classifier takes into account asymmetric penalties dependent on each class and on each wrong or partially correct decision. It is based on ν-1-SVM coupled with its regularization path and minimizes a general loss function defined in the class-selective rejection scheme. The state-of-the-art multiclass algorithms can be considered as a particular case of the proposed algorithm where the number of decisions is given by the classes and the loss function is defined by the Bayesian risk. Two experiments are carried out in the Bayesian and the class-selective rejection frameworks. Five datasets with selected genes are used to assess the performance of the proposed method. Results are discussed and accuracies are compared with those computed by the Naive Bayes, Nearest Neighbor, Linear Perceptron, Multilayer Perceptron, and Support Vector Machines classifiers. PMID:19584932
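
    A deliberately simplified illustration of decision-making with class-selective rejection (plain posterior thresholding, not the paper's ν-1-SVM with asymmetric, class-dependent losses): a sample keeps every class whose posterior clears a threshold and may be rejected from all of them:

        # Class-selective rejection via posterior thresholding (simplified stand-in).
        import numpy as np

        def classify_with_rejection(posteriors, threshold=0.5):
            labels = np.where(posteriors >= threshold)[0]  # classes not ruled out
            return labels if labels.size else "rejected from all classes"

        print(classify_with_rejection(np.array([0.05, 0.80, 0.15])))  # -> [1]
        print(classify_with_rejection(np.array([0.34, 0.33, 0.33])))  # -> rejected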

  16. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables and small number of samples as well as the non-linearity of the data. It is difficult to get satisfying results by using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM RFE) is an effective algorithm for gene selection and cancer classification, which are integrated into a consistent framework. In this paper, we propose a new method for selecting the parameters of the aforementioned algorithm implemented with Gaussian-kernel SVMs: a genetic algorithm is used to search for an optimal parameter pair, as a better alternative to the common practice of selecting the apparently best parameters. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative hereditary breast cancer and acute leukaemia datasets. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.
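
    As a minimal stand-in for the parameter search (scikit-learn grid search over the same (C, gamma) pair on a bundled toy dataset, rather than the paper's genetic algorithm), assuming scikit-learn is available:

        # Cross-validated search over (C, gamma) for an RBF-kernel SVM.
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import GridSearchCV
        from sklearn.svm import SVC

        X, y = load_breast_cancer(return_X_y=True)
        grid = GridSearchCV(
            SVC(kernel="rbf"),
            param_grid={"C": [0.1, 1, 10, 100], "gamma": [1e-4, 1e-3, 1e-2]},
            cv=5,
        )
        grid.fit(X, y)
        print(grid.best_params_, round(grid.best_score_, 3))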

  17. Highly sensitive and selective cholesterol biosensor based on direct electron transfer of hemoglobin.

    Science.gov (United States)

    Zhao, Changzhi; Wan, Li; Jiang, Li; Wang, Qin; Jiao, Kui

    2008-12-01

    A cholesterol biosensor based on direct electron transfer of a hemoglobin-encapsulated chitosan-modified glassy carbon electrode has been developed for highly sensitive and selective analysis of serum samples. Modified by films containing hemoglobin and cholesterol oxidase, the electrode was prepared by encapsulation of the enzyme in a chitosan matrix. The hydrogen peroxide produced by the catalytic oxidation of cholesterol by cholesterol oxidase was reduced electrocatalytically by the immobilized hemoglobin and used to obtain a sensitive amperometric response to cholesterol. The linear response to cholesterol concentrations ranged from 1.00 × 10⁻⁵ to 6.00 × 10⁻⁴ mol/L, with a correlation coefficient of 0.9969 and an estimated detection limit of 9.5 μmol/L at a signal-to-noise ratio of 3. The cholesterol biosensor can efficiently exclude interference by the commonly coexisting ascorbic acid, uric acid, dopamine, and epinephrine. The sensitivity, taken as the slope of the calibration curve, was 0.596 A/M. The relative standard deviation was under 4.0% (n = 5) for the determination of real samples. The biosensor is satisfactory for the determination of human serum samples.

  18. Dietary trace element intakes of a selected sample of Canadian elderly women

    International Nuclear Information System (INIS)

    Gibson, R.S.; MacDonald, A.C.; Martinez, O.B.

    1984-01-01

    Energy and selected trace element intakes of a sample of 90 noninstitutionalized Canadian women (mean age 66.2 ± 6.2 years) living in a University community and consuming self-selected diets were assessed by chemical analysis of one-day duplicate diets and via 1-day dietary records collected by the subjects. Mean gross energy intake (determined via bomb calorimetry) was 6.0 ± 2.4 MJ (1435 ± 580 kcal) and mean intakes of Cu and Mn (determined via atomic absorption spectrophotometry) were 1.2 ± 0.6 mg and 3.8 ± 2.1 mg/day, respectively. Instrumental neutron activation analysis was used for Cr - median = 77.4 μg/day; Se - median = 69.6 μg/day; Zn - mean ± SD = 7.7 ± 3.6 mg/day; Ag - median = 26.9 μg/day; Cs - median = 4.8 μg/day; Rb - median = 1.6 mg/day; Sb - median = 1.8 μg/day; Sc - median = 0.3 μg/day. Dietary intakes of Cr, Mn and Se for the majority of the subjects fell within the US safe and adequate range. In contrast, a high proportion of subjects had apparently low intakes of dietary Cu and Zn in relation to current US dietary recommendations

  19. Fatigue behavior of thin-walled grade 2 titanium samples processed by selective laser melting. Application to life prediction of porous titanium implants.

    Science.gov (United States)

    Lipinski, P; Barbas, A; Bonnet, A-S

    2013-12-01

    Because of its biocompatibility and high mechanical properties, commercially pure grade 2 titanium (CPG2Ti) is largely used for the fabrication of patient-specific implants or hard tissue substitutes with complex shapes. To avoid stress-shielding and to help their colonization by bone, prostheses with a controlled porosity are designed. Selective laser melting (SLM) is well adapted to manufacture such geometrically complicated structures, constituted by struts with rough surfaces and relatively small diameters. Few studies have been dedicated to characterizing the fatigue properties of SLM-processed samples and bulk parts, and they followed conventional or standard protocols. The fatigue behavior of standard samples is very different from that of porous raw structures. In this study, SLM-made "as built" (AB) and "heat treated" (HT) tubular samples were tested in fatigue. Wöhler curves were determined in both cases. The obtained endurance limits were σD(AB) = 74.5 MPa and σD(HT) = 65.7 MPa, respectively. The heat treatment worsened the endurance limit by relaxation of the negative residual stresses measured on the external surface of the samples. A modified Goodman diagram was established for the raw specimens. Porous samples, based on the pattern developed by Barbas et al. (2012), were manufactured by SLM. Fatigue tests and finite element simulations performed on these samples enabled the determination of a simple rule of fatigue assessment. The method based on the stress gradient appeared to be the best approach to take into account the notch influence on the fatigue life of CPG2Ti structures with a controlled porosity. A direction-dependent apparent fatigue strength was found. A criterion based on the effective, or global, nominal stress was proposed, taking into account the anisotropy of the porous structures. Thanks to this criterion, the usual calculation methods can be used to design bone substitutes without precise modelling of their internal fine porosity.

  20. Sample-Based Extreme Learning Machine with Missing Data

    Directory of Open Access Journals (Sweden)

    Hang Gao

    2015-01-01

    Full Text Available Extreme learning machine (ELM) has been extensively studied in the machine learning community during the last few decades due to its high efficiency and its unification of classification, regression, and so forth. Though bearing such merits, existing ELM algorithms cannot efficiently handle the issue of missing data, which is relatively common in practical applications. The problem of missing data is commonly handled by imputation (i.e., replacing missing values with substituted values according to available information). However, imputation methods are not always effective. In this paper, we propose a sample-based learning framework to address this issue. Based on this framework, we develop two sample-based ELM algorithms, for classification and regression, respectively. Comprehensive experiments have been conducted on synthetic data sets, UCI benchmark data sets, and a real-world fingerprint image data set. As indicated, without introducing extra computational complexity, the proposed algorithms learn more accurately and stably than other state-of-the-art ones, especially in the case of higher missing ratios.
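
    A minimal sketch of a standard ELM for regression (random hidden layer plus least-squares output weights); the paper's sample-based handling of missing data is not reproduced here:

        # Basic ELM: fixed random hidden layer, analytic output weights.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(200, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

        L = 50                                        # number of hidden neurons
        W = rng.normal(size=(X.shape[1], L))          # random input weights (fixed)
        b = rng.normal(size=L)                        # random biases (fixed)
        H = np.tanh(X @ W + b)                        # hidden-layer output matrix
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares output weights

        y_hat = H @ beta
        print("train MSE:", round(float(np.mean((y - y_hat) ** 2)), 4))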

  1. A Robust Service Selection Method Based on Uncertain QoS

    Directory of Open Access Journals (Sweden)

    Yanping Chen

    2016-01-01

    Full Text Available Nowadays, the number of Web services on the Internet is quickly increasing, and different service providers offer numerous services with similar functions. Quality of Service (QoS) has become an important factor used to select the most appropriate service for users. The most prominent QoS-based service selection models only take deterministic attributes into account, which is an idealized assumption; in the real world, there are a large number of uncertain factors, and in particular, at runtime, QoS may become very poor or unacceptable. In order to solve this problem, a global service selection model based on uncertain QoS is proposed, including the corresponding normalization and aggregation functions, and a robust optimization model is then adopted to transform the model. Experimental results show that the proposed method can effectively select services with high robustness and optimality.

  2. Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

    Full Text Available Empirical likelihood is a very popular method that has been widely used in the fields of artificial intelligence (AI) and data mining, as tablets, mobile applications, and social media dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously when the model is defined by moment restrictions, some of which are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained with all correct moment conditions. Moreover, unlike GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide some data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite sample properties of the estimators. A real-life example is also carried out to illustrate the new methodology.

  3. Process observation in fiber laser-based selective laser melting

    Science.gov (United States)

    Thombansen, Ulrich; Gatej, Alexander; Pereira, Milton

    2015-01-01

    The process observation in selective laser melting (SLM) focuses on observing the interaction point where the powder is processed. To provide process-relevant information, signals have to be acquired that are resolved in both time and space. Especially in high-power SLM, where more than 1 kW of laser power is used, processing speeds of several meters per second are required for a high-quality processing result. Therefore, an implementation of a suitable process observation system has to acquire a large amount of spatially resolved data at low sampling speeds, or it has to restrict the acquisition to a predefined area at a high sampling speed. In any case, it is vitally important to synchronously record the laser beam position and the acquired signal. This is a prerequisite that allows the recorded data to become information. Today, most SLM systems employ f-theta lenses to focus the processing laser beam onto the powder bed. This report describes the drawbacks that result for process observation and suggests a variable retro-focus system which solves these issues. The beam quality of fiber lasers delivers the processing laser beam to the powder bed at relevant focus diameters, which is a key prerequisite for this solution to be viable. The optical train we present here couples the processing laser beam and the process observation coaxially, ensuring consistent alignment of the interaction zone and the observed area. With respect to signal processing, we have developed a solution that synchronously acquires signals from a pyrometer and the position of the laser beam by sampling the data with a field programmable gate array. The relevance of the acquired signals has been validated by the scanning of a sample filament. Experiments with grooved samples show a correlation between different powder thicknesses and the acquired signals at relevant processing parameters. This basic work takes a first step toward self-optimization of the manufacturing process in SLM. It enables the

  4. Honest Importance Sampling with Multiple Markov Chains.

    Science.gov (United States)

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.

  5. A Cancer Gene Selection Algorithm Based on the K-S Test and CFS

    Directory of Open Access Journals (Sweden)

    Qiang Su

    2017-01-01

    Full Text Available Background. To address the challenging problem of selecting distinguished genes from cancer gene expression datasets, this paper presents a gene subset selection algorithm based on the Kolmogorov-Smirnov (K-S) test and correlation-based feature selection (CFS) principles. The algorithm first selects distinguished genes using the K-S test, and then it uses CFS to select genes from those selected by the K-S test. Results. We adopted support vector machines (SVM) as the classification tool and used accuracy as the criterion to evaluate the performance of the classifiers on the selected gene subsets. This approach compared the proposed gene subset selection algorithm with the K-S test, CFS, minimum-redundancy maximum-relevancy (mRMR), and ReliefF algorithms. The average experimental results of the aforementioned gene selection algorithms on 5 gene expression datasets demonstrate that, based on accuracy, the performance of the new K-S and CFS-based algorithm is better than those of the K-S test, CFS, mRMR, and ReliefF algorithms. Conclusions. The experimental results show that the K-S test-CFS gene selection algorithm is a very effective and promising approach compared to the K-S test, CFS, mRMR, and ReliefF algorithms.
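
    A sketch of the first stage only, ranking synthetic genes by the two-sample K-S statistic with SciPy (the CFS refinement and SVM evaluation are omitted):

        # Rank genes by the two-sample Kolmogorov-Smirnov statistic between classes.
        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 100))        # 60 samples x 100 genes (synthetic)
        X[:30, :5] += 1.5                     # genes 0-4 differ between the classes
        y = np.array([0] * 30 + [1] * 30)

        stats = [ks_2samp(X[y == 0, j], X[y == 1, j]).statistic
                 for j in range(X.shape[1])]
        top = np.argsort(stats)[::-1][:10]
        print("top genes by K-S statistic:", top)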

  6. A nanoparticle-based solid-phase extraction procedure followed by spectrofluorimetry to determine carbaryl in different water samples

    Energy Technology Data Exchange (ETDEWEB)

    Tabrizi, Ahad Bavili, E-mail: a.bavili@tbzmed.ac.ir, E-mail: abavilitabrizia@gmail.com [Biotechnology Research Center, Tabriz University of Medical Sciences, Tabriz (Iran, Islamic Republic of); Rashidi, Mohammad Reza [Research Center for Pharmaceutical Nanotechnology, Tabriz University of Medical Sciences, Tabriz, (Iran, Islamic Republic of); Ostadi, Hadi [Department of Chemistry, Payam-e-noor University, Ardabil Branch, Ardabil (Iran, Islamic Republic of)

    2014-04-15

    In this study, a new method based on Fe₃O₄ magnetite nanoparticles (MNPs) has been developed for the extraction, preconcentration and determination of trace amounts of carbaryl in environmental water samples. Fe₃O₄ MNPs were synthesized and modified with the surfactant sodium dodecyl sulfate (SDS), then successfully applied to the extraction of carbaryl and its determination by spectrofluorimetry. The main factors affecting the adsolubilization of carbaryl, such as the amount of SDS, pH value, standing time, desorption solvent and maximal extraction volume, were optimized. Under the selected conditions, carbaryl could be quantitatively extracted. Acceptable recoveries (84.5-91.9%) and relative standard deviations (6.2%) were achieved in analyzing spiked water samples. A concentration factor of 20 was achieved by the extraction of 100 mL of environmental water samples. The limits of detection and quantification were found to be 2.1 and 6.9 μg L⁻¹, respectively. The proposed method was successfully applied to the extraction and determination of carbaryl in environmental water samples. (author)

  7. Multi-Objective Particle Swarm Optimization Approach for Cost-Based Feature Selection in Classification.

    Science.gov (United States)

    Zhang, Yong; Gong, Dun-Wei; Cheng, Jian

    2017-01-01

    Feature selection is an important data-preprocessing technique in classification problems such as bioinformatics and signal processing. Generally, there are some situations where a user is interested in not only maximizing the classification performance but also minimizing the cost that may be associated with features. This kind of problem is called cost-based feature selection. However, most existing feature selection approaches treat this task as a single-objective optimization problem. This paper presents the first study of multi-objective particle swarm optimization (PSO) for cost-based feature selection problems. The task of this paper is to generate a Pareto front of nondominated solutions, that is, feature subsets, to meet different requirements of decision-makers in real-world applications. In order to enhance the search capability of the proposed algorithm, a probability-based encoding technology and an effective hybrid operator, together with the ideas of the crowding distance, the external archive, and the Pareto domination relationship, are applied to PSO. The proposed PSO-based multi-objective feature selection algorithm is compared with several multi-objective feature selection algorithms on five benchmark datasets. Experimental results show that the proposed algorithm can automatically evolve a set of nondominated solutions, and it is a highly competitive feature selection method for solving cost-based feature selection problems.
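
    At the heart of any such method is the Pareto-dominance test between (classification error, feature cost) pairs; a minimal sketch with made-up objective values:

        # Pareto dominance for two minimization objectives: (error, cost).
        def dominates(a, b):
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        solutions = [(0.10, 5.0), (0.12, 3.0), (0.10, 6.0), (0.15, 2.0)]
        front = [s for s in solutions
                 if not any(dominates(o, s) for o in solutions if o is not s)]
        print("nondominated set:", front)   # (0.10, 6.0) is dominated by (0.10, 5.0)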

  8. Metabolic Profiling and Classification of Propolis Samples from Southern Brazil: An NMR-Based Platform Coupled with Machine Learning.

    Science.gov (United States)

    Maraschin, Marcelo; Somensi-Zeggio, Amélia; Oliveira, Simone K; Kuhnen, Shirley; Tomazzoli, Maíra M; Raguzzoni, Josiane C; Zeri, Ana C M; Carreira, Rafael; Correia, Sara; Costa, Christopher; Rocha, Miguel

    2016-01-22

    The chemical composition of propolis is affected by environmental factors and harvest season, making it difficult to standardize its extracts for medicinal usage. By detecting a typical chemical profile associated with propolis from a specific production region or season, certain types of propolis may be used to obtain a specific pharmacological activity. In this study, propolis from three agroecological regions (plain, plateau, and highlands) of southern Brazil, collected over the four seasons of 2010, was investigated through a novel NMR-based metabolomics data analysis workflow. Chemometrics and machine learning algorithms (PLS-DA and RF), including methods to estimate variable importance in classification, were used in this study. The machine learning and feature selection methods permitted construction of models for propolis sample classification with high accuracy (>75%, reaching ∼90% in the best case), discriminating samples better by collection season than by harvest region. PLS-DA and RF allowed the identification of biomarkers for sample discrimination, expanding the set of discriminating features and adding relevant information for the identification of the class-determining metabolites. The NMR-based metabolomics analytical platform, coupled to bioinformatic tools, allowed characterization and classification of Brazilian propolis samples with regard to the metabolite signatures of important compounds, i.e., chemical fingerprints, harvest seasons, and production regions.
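
    As a stand-in for the RF step (synthetic "bucketed spectra" rather than the study's NMR data), a random forest can be fit and its variable importances read off as candidate discriminating features:

        # Random forest classification with variable-importance readout.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 40))            # 120 samples x 40 spectral buckets
        seasons = rng.integers(0, 4, 120)         # 4 collection seasons
        X[:, 7] += seasons                        # bucket 7 carries the season signal

        rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, seasons)
        print("most informative bucket:", int(np.argmax(rf.feature_importances_)))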

  9. A kernel-based multivariate feature selection method for microarray data classification.

    Directory of Open Access Journals (Sweden)

    Shiquan Sun

    Full Text Available High dimensionality and small sample sizes, and their inherent risk of overfitting, pose great challenges for constructing efficient classifiers in microarray data classification. Therefore a feature selection technique should be conducted prior to data classification to enhance prediction performance. In general, filter methods can be considered as principal or auxiliary selection mechanisms because of their simplicity, scalability, and low computational complexity. However, a series of trivial examples show that filter methods result in less accurate performance because they ignore the dependencies of features. Although a few publications have devoted their attention to revealing the relationships among features by multivariate methods, these methods describe the relationships only linearly, and such simple linear combinations restrict the improvement in performance. In this paper, we used a kernel method to discover inherent nonlinear correlations among features as well as between features and the target. Moreover, the number of orthogonal components was determined by kernel Fisher's linear discriminant analysis (FLDA) in a self-adaptive manner rather than by manual parameter settings. In order to reveal the effectiveness of our method we performed several experiments and compared the results between our method and other competitive multivariate-based feature selectors. In our comparison, we used two classifiers (support vector machine and k-nearest neighbor) on two groups of datasets, namely two-class and multi-class datasets. Experimental results demonstrate that the performance of our method is better than others, especially on three hard-to-classify datasets, namely Wang's Breast Cancer, Gordon's Lung Adenocarcinoma and Pomeroy's Medulloblastoma.
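
    A toy kernel-dependence score in the same spirit (a centred-kernel, HSIC-style statistic per feature; not the authors' kernel-FLDA procedure):

        # HSIC-style kernel dependence between one feature and the labels.
        import numpy as np

        def rbf_gram(v, gamma=1.0):
            d = v[:, None] - v[None, :]
            return np.exp(-gamma * d ** 2)

        def hsic_score(x, y):
            n = len(x)
            H = np.eye(n) - np.ones((n, n)) / n          # centring matrix
            K, L = rbf_gram(x), rbf_gram(y.astype(float))
            return np.trace(K @ H @ L @ H) / (n - 1) ** 2

        rng = np.random.default_rng(0)
        y = rng.integers(0, 2, 80)
        informative = y + 0.3 * rng.normal(size=80)      # tied to the labels
        noise = rng.normal(size=80)
        print(hsic_score(informative, y) > hsic_score(noise, y))  # -> True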

  10. Determination of terbium in phosphate rock by Tb³⁺-selective fluorimetric optode based on dansyl derivative as a neutral fluorogenic ionophore.

    Science.gov (United States)

    Hosseini, Morteza; Ganjali, Mohammad Reza; Veismohammadi, Bahareh; Faridbod, Farnoush; Abkenar, Shiva Dehghan; Norouzi, Parviz

    2010-04-07

    For the first time a highly sensitive and selective fluorimetric optode membrane was prepared for the determination of trace amounts of Tb(III) ions in phosphate rock samples. The Tb(III) sensing system was constructed by incorporating 5-(dimethylamino)-N'-(2-hydroxy-1-naphthoyl) naphthalene-1-sulfonohydrazine (L) as a neutral Tb(III)-selective fluoroionophore in the plasticized PVC membrane containing sodium tetraphenyl borate as a lipophilic anionic additive. The response of the optode is based on the strong fluorescence quenching of L by Tb³⁺ ions. At a pH value of 5.0, the optode displays a wide concentration range of 1.0 × 10⁻⁷ to 1.0 × 10⁻² M, with a relatively fast response time of less than 45 s. In addition to high stability and reproducibility, the sensor shows a unique selectivity towards the Tb³⁺ ion with respect to common cations. The optode was applied successfully to the trace determination of terbium ion in binary mixtures and water samples and to the determination of Tb³⁺ in phosphate rock samples. Crown Copyright 2010. Published by Elsevier B.V. All rights reserved.

  11. Large scale sample management and data analysis via MIRACLE

    DEFF Research Database (Denmark)

    Block, Ines; List, Markus; Pedersen, Marlene Lemvig

    Reverse-phase protein arrays (RPPAs) allow sensitive quantification of relative protein abundance in thousands of samples in parallel. In the past years the technology has advanced based on improved methods and protocols concerning sample preparation and printing, antibody selection, optimization of staining conditions and mode of signal analysis. However, the sample management and data analysis still pose challenges because of the high number of samples, sample dilutions, customized array patterns, and various programs necessary for array construction and data processing. We developed a comprehensive and user-friendly web application called MIRACLE (MIcroarray R-based Analysis of Complex Lysate Experiments), which bridges the gap between sample management and array analysis by conveniently keeping track of the sample information from lysate preparation, through array construction and signal...

  12. Characteristic of selected frequency luminescence for samples collected in deserts north to Beijing

    International Nuclear Information System (INIS)

    Li Dongxu; Wei Mingjian; Wang Junping; Pan Baolin; Zhao Shiyuan; Liu Zhaowen

    2009-01-01

    Surface sand samples were collected at eight sites in the Horqin and Otindag deserts located north of Beijing. A BG2003 luminescence spectrograph was used to analyze the emitted photons, and characteristic spectra of the selected frequency luminescence were obtained. High intensities of emitted photons were found under thermal stimulation from 85 °C to 135 °C and from 350 °C to 400 °C, belonging to the traps of 4.13 eV (300 nm), 4.00 eV (310 nm), 3.88 eV (320 nm) and 2.70 eV (460 nm); the emitted photons belonging to the traps of 4.00 eV (310 nm), 3.88 eV (320 nm) and 2.70 eV (460 nm) were stimulated by green laser. The sand samples of the eight sites respond at each wavelength to the increase of a definite radiological dose, which is the characteristic spectrum that provides a radiation dosimetry basis for dating. There are distinct regional characteristics in their characteristic spectra. (authors)

  13. Evaporation rate-based selection of supramolecular chirality.

    Science.gov (United States)

    Hattori, Shingo; Vandendriessche, Stefaan; Koeckelberghs, Guy; Verbiest, Thierry; Ishii, Kazuyuki

    2017-03-09

    We demonstrate the evaporation rate-based selection of supramolecular chirality for the first time. P-type aggregates prepared by fast evaporation and M-type aggregates prepared by slow evaporation are kinetic and thermodynamic products, respectively, under dynamic reaction conditions. These findings provide a novel solution-phase reaction chemistry under dynamic reaction conditions.

  14. Representative process sampling - in practice

    DEFF Research Database (Denmark)

    Esbensen, Kim; Friis-Pedersen, Hans Henrik; Julius, Lars Petersen

    2007-01-01

    Didactic data sets representing a range of real-world processes are used to illustrate "how to do" representative process sampling and process characterisation. The selected process data lead to diverse variogram expressions with different systematics (no range vs. important ranges; trends and/or periodicity; different nugget effects; and process variations ranging from less than one lag to the full variogram lag). Variogram data analysis leads to a fundamental decomposition into 0-D sampling vs. 1-D process variances, based on the three principal variogram parameters: range, sill and nugget effect...
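
    The empirical variogram underlying such an analysis follows the standard definition γ(h) = ½·mean[(x(t+h) − x(t))²]; a minimal sketch on a synthetic process stream:

        # Empirical variogram of a 1-D process series at lags 1..max_lag.
        import numpy as np

        def variogram(series, max_lag):
            series = np.asarray(series, dtype=float)
            return [0.5 * np.mean((series[h:] - series[:-h]) ** 2)
                    for h in range(1, max_lag + 1)]

        rng = np.random.default_rng(0)
        x = np.cumsum(rng.normal(size=500))   # synthetic stream with drift
        print(np.round(variogram(x, 5), 2))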

  15. Data splitting for artificial neural networks using SOM-based stratified sampling.

    Science.gov (United States)

    May, R J; Maier, H R; Dandy, G C

    2010-03-01

    Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
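
    A rough stand-in for the stratified splitter (k-means clusters in place of SOM cells, with Neyman-style allocation n_h proportional to N_h·S_h; allocation rounding is not exact), assuming scikit-learn is available:

        # Cluster-based stratified sampling with Neyman-style allocation.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(5, 3, (100, 2))])

        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
        n_total = 80
        sizes = np.array([(labels == h).sum() for h in range(4)])
        spreads = np.array([X[labels == h].std() for h in range(4)])  # crude S_h
        alloc = np.maximum(1, np.round(
            n_total * sizes * spreads / (sizes * spreads).sum()).astype(int))

        sample_idx = np.concatenate([
            rng.choice(np.where(labels == h)[0],
                       size=min(alloc[h], sizes[h]), replace=False)
            for h in range(4)
        ])
        print("per-stratum allocation:", alloc, "total drawn:", len(sample_idx))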

  16. Verification of Representative Sampling in RI waste

    International Nuclear Information System (INIS)

    Ahn, Hong Joo; Song, Byung Cheul; Sohn, Se Cheul; Song, Kyu Seok; Jee, Kwang Yong; Choi, Kwang Seop

    2009-01-01

    For evaluating the radionuclide inventories of RI wastes, representative sampling is one of the most important parts of the radiochemical assay process. Sampling to characterize RI waste conditions has typically been based on judgment or convenience sampling of individuals or groups. However, it is difficult to obtain a representative sample from among the numerous drums. In addition, RI waste drums may be classified as heterogeneous wastes because they contain cotton, glass, vinyl, gloves, etc. In order to get representative samples, the sample to be analyzed must be collected from every selected drum. Considering the expense and time of analysis, however, the number of samples has to be minimized. In this study, RI waste drums were classified by various conditions: half-life, surface dose, acceptance date, waste form, generator, etc. A sample for radiochemical assay was obtained by mixing samples from each drum. The sample has to be prepared for radiochemical assay, and although the sample should be reasonably uniform, it is rare that a completely homogeneous material is received. Every sample is shredded to pieces of 1-2 cm² and a representative aliquot is taken for the required analysis. For verification of representative sampling, every classified group is tested for evaluation of 'selection of a representative drum in a group' and 'representative sampling in a drum'

  17. Enhanced and selective ammonia sensing of reduced graphene oxide based chemo resistive sensor at room temperature

    Science.gov (United States)

    Kumar, Ramesh; Kaur, Amarjeet

    2016-05-01

    The reduced graphene oxide thin films were fabricated using the spin coating method. The reduced graphene oxide samples were characterised by Raman studies, yielding the corresponding D and G bands at 1360 and 1590 cm⁻¹, respectively. The Fourier transform infrared (FTIR) spectrum contains a peak at 1560 cm⁻¹ corresponding to the sp² hybridisation of the carbon atoms. The reduced graphene oxide based chemoresistive sensor exhibited p-type semiconductor behaviour under ambient conditions and showed good sensitivity to different concentrations of ammonia from 25 ppm to 500 ppm, as well as excellent selectivity at room temperature. The sensor remains selective against several hazardous vapours such as methanol, ethanol, acetone and hydrazine hydrate. The sensor demonstrated a sensitivity of 9.8 at a 25 ppm concentration of ammonia, with a response time of 163 seconds.

  18. Summary of micrographic analysis of selected core samples from Well ER-20-6 #1 in support of matrix diffusion testing

    International Nuclear Information System (INIS)

    1998-01-01

    ER-20-6 #1 was cored to determine fracture and lithologic properties proximal to the BULLION test cavity. Selected samples from ER-20-6 #1 were subjected to matrix and/or fracture diffusion experiments to assess solute movement in this environment. Micrographic analysis of these samples suggests that the similarity in bulk chemical composition results in very similar mineral assemblages forming along natural fractures. These samples are all part of the mafic-poor Calico Hills Formation and exhibit fracture-coating mineral assemblages dominated by mixed illite/smectite clay and illite, with local opaline silica (2,236 and 2,812 feet), and zeolite (at 2,236 feet). Based on this small sample population, the magnitude to which secondary phases have formed on fracture surfaces bears an apparently inverse relationship to the competency of the host lithology, reflected by variations in the degree of fracturing and the development of secondary phases on fracture surfaces. In the flow breccia at 2,851 feet, thinly developed, localized coatings are developed along persistent open fracture apertures in this competent rock type. Fractures in the devitrified lava from 2,812 feet are irregular, and locally blocked by secondary mineral phases. Natural fractures in the zeolitized tuff from 2,236 feet are discontinuous and irregular and typically obstructed with secondary mineral phases. There is also a second set of clean fractures in the 2,236 foot sample which lack secondary mineral phases and are interpreted to have been induced by the BULLION test. Based on these results, it is expected that matrix diffusion will be enhanced in samples where potentially transmissive fractures exhibit the greatest degree of obstruction (2,236>2,812=2,835>2,851). It is unclear what influence the induced fractures observed at 2,236 feet may have on diffusion given the lack of knowledge on their extent. It is assumed that the bulk matrix diffusion characteristics of the

  19. A new electrochemical sensor for highly sensitive and selective detection of nitrite in food samples based on sonochemical synthesized Calcium Ferrite (CaFe2O4) clusters modified screen printed carbon electrode.

    Science.gov (United States)

    Balasubramanian, Paramasivam; Settu, Ramki; Chen, Shen-Ming; Chen, Tse-Wei; Sharmila, Ganapathi

    2018-08-15

    Herein, we report a novel, disposable electrochemical sensor for the detection of nitrite ions in food samples based on sonochemically synthesized orthorhombic CaFe₂O₄ (CFO) clusters modified screen printed electrode. The as-synthesized CFO clusters were characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), thermogravimetric analysis (TGA), X-ray photoelectron spectroscopy (XPS), electrochemical impedance spectroscopy (EIS), cyclic voltammetry (CV) and amperometry (i-t). Under optimal conditions, the CFO modified electrode displayed a rapid current response to nitrite, a linear response range from 0.016 to 1921 µM and a low detection limit of 6.6 nM. The sensor also showed an excellent sensitivity of 3.712 μA μM⁻¹ cm⁻². Furthermore, good reproducibility, long-term stability and excellent selectivity were attained with the proposed sensor. In addition, the practical applicability of the sensor was investigated with meat samples, tap water and drinking water, and showed desirable recovery rates, demonstrating its potential for practical application. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Asymptotic Effectiveness of the Event-Based Sampling According to the Integral Criterion

    Directory of Open Access Journals (Sweden)

    Marek Miskowicz

    2007-01-01

    Full Text Available Rapid progress in intelligent sensing technology creates new interest in the development of analysis and design of non-conventional sampling schemes. The investigation of the event-based sampling according to the integral criterion is presented in this paper. The investigated sampling scheme is an extension of the pure linear send-on-delta/level-crossing algorithm utilized for reporting the state of objects monitored by intelligent sensors. The motivation of using the event-based integral sampling is outlined. The related works in adaptive sampling are summarized. The analytical closed-form formulas for the evaluation of the mean rate of event-based traffic, and the asymptotic integral sampling effectiveness, are derived. The simulation results verifying the analytical formulas are reported. The effectiveness of the integral sampling is compared with the related linear send-on-delta/level-crossing scheme. The calculation of the asymptotic effectiveness for common signals, which model the state evolution of dynamic systems in time, is exemplified.
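
    As a concrete illustration of the criterion discussed above, the sketch below simulates integral send-on-delta sampling: an event fires whenever the time-integral of the deviation from the last reported value exceeds a threshold. The signal and threshold are arbitrary choices, not taken from the paper.

    ```python
    # Sketch: event-based sampling by the integral criterion, assuming a
    # trigger when the integral of |x(t) - x_last| over time exceeds a threshold.
    import numpy as np

    def integral_sampling(t, x, threshold):
        """Return indices where the integral send-on-delta criterion fires."""
        events = [0]          # report the initial state
        accum = 0.0
        for i in range(1, len(t)):
            dt = t[i] - t[i - 1]
            accum += abs(x[i] - x[events[-1]]) * dt   # integral of the deviation
            if accum >= threshold:
                events.append(i)                       # fire an event, reset
                accum = 0.0
        return np.array(events)

    t = np.linspace(0, 10, 2001)
    x = np.sin(t) + 0.1 * np.sin(7 * t)
    ev = integral_sampling(t, x, threshold=0.05)
    print(f"{len(ev)} events out of {len(t)} time steps")
    ```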

  1. Performance analysis of a threshold-based parallel multiple beam selection scheme for WDM-based systems for Gamma-Gamma distributions

    KAUST Repository

    Nam, Sung Sik

    2017-03-02

    In this paper, we statistically analyze the performance of a threshold-based parallel multiple beam selection scheme (TPMBS) for a free-space optical (FSO) based system with wavelength division multiplexing (WDM) in cases where a pointing error has occurred, for practical consideration, over independent identically distributed (i.i.d.) Gamma-Gamma fading conditions. Specifically, we statistically analyze the operating characteristics under the conventional heterodyne detection (HD) scheme for both the adaptive modulation (AM) case and the non-AM case (i.e., coherent/non-coherent binary modulation). Then, based on the statistically derived results, we evaluate the outage probability (CDF) of a selected beam, the average spectral efficiency (ASE), the average number of selected beams (ANSB), and the average bit error rate (BER). Selected results show that we can obtain higher spectral efficiency while simultaneously limiting the potential increase in implementation complexity caused by applying the beam selection scheme, without a considerable performance loss.

  2. A sampling-based approach to probabilistic pursuit evasion

    KAUST Repository

    Mahadevan, Aditya; Amato, Nancy M.

    2012-01-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented
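
    For readers unfamiliar with PRMs, the following minimal sketch builds a roadmap in a toy 2-D world: sample collision-free configurations, connect each to its k nearest neighbours when the straight-line edge is free, and query the graph with a shortest-path search. The obstacle model, parameters, and discretized edge check are illustrative assumptions.

    ```python
    # Sketch: a minimal probabilistic roadmap (PRM) with circular obstacles.
    import numpy as np
    import heapq

    rng = np.random.default_rng(1)
    OBSTACLES = [((0.5, 0.5), 0.2)]          # (center, radius) pairs

    def free(p):
        return all(np.hypot(*(p - np.array(c))) > r for c, r in OBSTACLES)

    def edge_free(p, q, steps=20):           # discretized collision check
        return all(free(p + s * (q - p)) for s in np.linspace(0, 1, steps))

    def build_prm(n=200, k=8):
        nodes = np.array([p for p in rng.random((4 * n, 2)) if free(p)][:n])
        edges = {i: [] for i in range(len(nodes))}
        for i, p in enumerate(nodes):
            order = np.argsort(np.linalg.norm(nodes - p, axis=1))
            for j in order[1:k + 1]:         # k nearest neighbours (skip self)
                if edge_free(p, nodes[j]):
                    d = float(np.linalg.norm(p - nodes[j]))
                    edges[i].append((int(j), d))
                    edges[int(j)].append((i, d))
        return nodes, edges

    def dijkstra(edges, start, goal):
        # Assumes start and goal lie in the same connected component.
        dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == goal:
                break
            for v, w in edges[u]:
                if d + w < dist.get(v, np.inf):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(pq, (d + w, v))
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return path[::-1]

    nodes, edges = build_prm()
    start = int(np.argmin(np.linalg.norm(nodes - [0.05, 0.05], axis=1)))
    goal = int(np.argmin(np.linalg.norm(nodes - [0.95, 0.95], axis=1)))
    print(dijkstra(edges, start, goal))
    ```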

  3. The NuSTAR  Extragalactic Surveys: X-Ray Spectroscopic Analysis of the Bright Hard-band Selected Sample

    Science.gov (United States)

    Zappacosta, L.; Comastri, A.; Civano, F.; Puccetti, S.; Fiore, F.; Aird, J.; Del Moro, A.; Lansbury, G. B.; Lanzuisi, G.; Goulding, A.; Mullaney, J. R.; Stern, D.; Ajello, M.; Alexander, D. M.; Ballantyne, D. R.; Bauer, F. E.; Brandt, W. N.; Chen, C.-T. J.; Farrah, D.; Harrison, F. A.; Gandhi, P.; Lanz, L.; Masini, A.; Marchesi, S.; Ricci, C.; Treister, E.

    2018-02-01

    We discuss the spectral analysis of a sample of 63 active galactic nuclei (AGN) detected above a limiting flux of S(8–24 keV) = 7 × 10⁻¹⁴ erg s⁻¹ cm⁻² in the multi-tiered NuSTAR extragalactic survey program. The sources span a redshift range z = 0–2.1 (median z = 0.58). The spectral analysis is performed over the broad 0.5–24 keV energy range, combining NuSTAR with Chandra and/or XMM-Newton data and employing empirical and physically motivated models. This constitutes the largest sample of AGN selected at >10 keV to be homogeneously spectrally analyzed at these flux levels. We study the distribution of spectral parameters such as photon index, column density (N_H), reflection parameter (R), and 10–40 keV luminosity (L_X). Heavily obscured (log[N_H/cm⁻²] ≥ 23) and Compton-thick (CT; log[N_H/cm⁻²] ≥ 24) AGN constitute ∼25% (15–17 sources) and ∼2–3% (1–2 sources) of the sample, respectively. The observed N_H distribution agrees fairly well with predictions of cosmic X-ray background population-synthesis models (CXBPSM). We estimate the intrinsic fraction of AGN as a function of N_H, accounting for the bias against obscured AGN in a flux-selected sample. The fraction of CT AGN relative to log[N_H/cm⁻²] = 20–24 AGN is poorly constrained, formally in the range 2–56% (90% upper limit of 66%). We derived a fraction (f_abs) of obscured AGN (log[N_H/cm⁻²] = 22–24) as a function of L_X in agreement with CXBPSM and previous z values.

  4. Contingency inferences driven by base rates: Valid by sampling

    Directory of Open Access Journals (Sweden)

    Florian Kutzner

    2011-04-01

    Full Text Available Fiedler et al. (2009) reviewed evidence for the utilization of a contingency inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet, the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, this criterion and convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.
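
    The simulation logic described above can be reproduced in miniature. The sketch below draws 2×2 samples from a population with a known positive contingency and checks how often the PC rule (frequent level of X paired with frequent level of Y) recovers the sign; all parameter values are illustrative, not those of Fiedler et al.

    ```python
    # Sketch: how often does the pseudocontingency (PC) rule recover the
    # sign of a population contingency from a small sample?
    import numpy as np

    rng = np.random.default_rng(42)

    def draw_sample(n, p_x, p_y, phi):
        """Draw n observations of two binary variables with given marginals
        and (approximate) correlation phi; returns 2x2 cell counts."""
        cov = phi * np.sqrt(p_x * (1 - p_x) * p_y * (1 - p_y))
        p11 = p_x * p_y + cov
        p10, p01 = p_x - p11, p_y - p11
        p00 = 1 - p11 - p10 - p01
        return rng.multinomial(n, [p11, p10, p01, p00])

    def pc_sign(cells):
        n11, n10, n01, n00 = cells
        x1, y1 = n11 + n10, n11 + n01
        n = cells.sum()
        # PC: infer a positive sign iff the frequent level of X pairs with
        # the frequent level of Y (both above or both below 50%).
        return 1 if (x1 > n / 2) == (y1 > n / 2) else -1

    trials, hits = 10_000, 0
    for _ in range(trials):
        cells = draw_sample(30, p_x=0.7, p_y=0.7, phi=0.4)
        hits += pc_sign(cells) == 1      # the true contingency is positive
    print(f"PC recovers the sign in {hits / trials:.1%} of samples")
    ```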

  5. Systematic sampling of discrete and continuous populations: sample selection and the choice of estimator

    Science.gov (United States)

    Harry T. Valentine; David L. R. Affleck; Timothy G. Gregoire

    2009-01-01

    Systematic sampling is easy, efficient, and widely used, though it is not generally recognized that a systematic sample may be drawn from the population of interest with or without restrictions on randomization. The restrictions or the lack of them determine which estimators are unbiased, when using the sampling design as the basis for inference. We describe the...

  6. A Heckman Selection-t Model

    KAUST Repository

    Marchenko, Yulia V.; Genton, Marc G.

    2012-01-01

    for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical

  7. Selecting the optimum plot size for a California design-based stream and wetland mapping program.

    Science.gov (United States)

    Lackey, Leila G; Stein, Eric D

    2014-04-01

    Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km²) for three geographic regions of California. Simulation results showed smaller plot sizes (1 and 4 km²) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to selection of 4 km² for the California status and trends program.

  8. Soft magnetic properties of bulk amorphous Co-based samples

    International Nuclear Information System (INIS)

    Fuezer, J.; Bednarcik, J.; Kollar, P.

    2006-01-01

    Ball milling of melt-spun ribbons and subsequent compaction of the resulting powders in the supercooled liquid region were used to prepare disc-shaped bulk amorphous Co-based samples. Several bulk samples were prepared by hot compaction with subsequent heat treatment (500-575 °C). The influence of the consolidation temperature and follow-up heat treatment on the magnetic properties of the bulk samples was investigated. The final heat treatment decreases the coercivity to values between 7.5 and 9 A/m (Authors)

  9. Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms

    KAUST Repository

    Fidel, Adam; Jacobs, Sam Ade; Sharma, Shishir; Amato, Nancy M.; Rauchwerger, Lawrence

    2014-01-01

    Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the subproblems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.

  10. Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms

    KAUST Repository

    Fidel, Adam

    2014-05-01

    Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the subproblems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.

  11. Statistical sampling strategies

    International Nuclear Information System (INIS)

    Andres, T.H.

    1987-01-01

    Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized
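
    As a small illustration of one of the methods named above, here is a minimal Latin hypercube sampler: each of the d dimensions is divided into n equal strata, one point is drawn per stratum, and the strata are permuted independently across dimensions.

    ```python
    # Sketch: Latin hypercube sampling for d parameters on the unit cube.
    import numpy as np

    def latin_hypercube(n, d, rng=np.random.default_rng()):
        """n samples in [0, 1)^d with exactly one sample per stratum per dim."""
        u = (rng.random((n, d)) + np.arange(n)[:, None]) / n  # jitter in strata
        for j in range(d):
            u[:, j] = u[rng.permutation(n), j]   # decouple the dimensions
        return u

    samples = latin_hypercube(10, 3)
    print(samples)
    ```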

  12. Wavelength selection-based nonlinear calibration for transcutaneous blood glucose sensing using Raman spectroscopy

    Science.gov (United States)

    Dingari, Narahara Chari; Barman, Ishan; Kang, Jeon Woong; Kong, Chae-Ryon; Dasari, Ramachandra R.; Feld, Michael S.

    2011-01-01

    While Raman spectroscopy provides a powerful tool for noninvasive and real-time diagnostics of biological samples, its translation to the clinical setting has been impeded by the lack of robustness of spectroscopic calibration models and the size and cumbersome nature of conventional laboratory Raman systems. Linear multivariate calibration models employing full-spectrum analysis are often misled by spurious correlations, such as system drift and covariations among constituents. In addition, such calibration schemes are prone to overfitting, especially in the presence of external interferences that may create nonlinearities in the spectra-concentration relationship. To address both of these issues we incorporate residue-error-plot-based wavelength selection and nonlinear support vector regression (SVR). Wavelength selection is used to eliminate uninformative regions of the spectrum, while SVR is used to model curved effects such as those created by tissue turbidity and temperature fluctuations. Using glucose detection in tissue phantoms as a representative example, we show that even a substantial reduction in the number of wavelengths analyzed using SVR leads to calibration models of prediction accuracy equivalent to linear full-spectrum analysis. Further, with clinical datasets obtained from human subject studies, we also demonstrate the prospective applicability of the selected wavelength subsets without sacrificing prediction accuracy, which has extensive implications for calibration maintenance and transfer. Additionally, such wavelength selection could substantially reduce the collection time of serial Raman acquisition systems. Given the reduced footprint of serial Raman systems in relation to conventional dispersive Raman spectrometers, we anticipate that the incorporation of wavelength selection in such hardware designs will enhance the possibility of miniaturized clinical systems for disease diagnosis in the near future. PMID:21895336
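
    A simplified sketch of the two-stage idea (wavelength selection followed by nonlinear SVR) is given below on synthetic data. The ranking criterion here (absolute correlation with the reference values) is a stand-in for the paper's residue-error-plot selection, and all data and hyperparameters are assumptions.

    ```python
    # Sketch: wavelength (feature) selection followed by nonlinear support
    # vector regression on synthetic "spectra".
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(120, 400))          # 120 samples x 400 wavelengths
    conc = spectra[:, 50] + 0.5 * spectra[:, 200] ** 2 + 0.1 * rng.normal(size=120)

    # Rank wavelengths by |correlation| with the reference concentrations
    # (a simplified stand-in for residue-error-plot selection).
    corr = np.abs([np.corrcoef(spectra[:, j], conc)[0, 1] for j in range(400)])
    keep = np.argsort(corr)[::-1][:30]             # retain the top 30 channels

    model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
    score = cross_val_score(model, spectra[:, keep], conc, cv=5, scoring="r2")
    print(f"cross-validated R^2 with 30 of 400 wavelengths: {score.mean():.2f}")
    ```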

  13. SDSS-IV MaNGA: faint quenched galaxies - I. Sample selection and evidence for environmental quenching

    Science.gov (United States)

    Penny, Samantha J.; Masters, Karen L.; Weijmans, Anne-Marie; Westfall, Kyle B.; Bershady, Matthew A.; Bundy, Kevin; Drory, Niv; Falcón-Barroso, Jesús; Law, David; Nichol, Robert C.; Thomas, Daniel; Bizyaev, Dmitry; Brownstein, Joel R.; Freischlad, Gordon; Gaulme, Patrick; Grabowski, Katie; Kinemuchi, Karen; Malanushenko, Elena; Malanushenko, Viktor; Oravetz, Daniel; Roman-Lopes, Alexandre; Pan, Kaike; Simmons, Audrey; Wake, David A.

    2016-11-01

    Using kinematic maps from the Sloan Digital Sky Survey (SDSS) Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey, we reveal that the majority of low-mass quenched galaxies exhibit coherent rotation in their stellar kinematics. Our sample includes all 39 quenched low-mass galaxies observed in the first year of MaNGA. The galaxies are selected with Mr > -19.1, stellar masses 10⁹ M⊙ 1.9. They lie on the size-magnitude and σ-luminosity relations for previously studied dwarf galaxies. Just six (15 ± 5.7 per cent) are found to have rotation speeds v_e,rot 5 × 10¹⁰ M⊙), supporting the hypothesis that galaxy-galaxy or galaxy-group interactions quench star formation in low-mass galaxies. The local bright galaxy density for our sample is ρ_proj = 8.2 ± 2.0 Mpc⁻², compared to ρ_proj = 2.1 ± 0.4 Mpc⁻² for a star-forming comparison sample, confirming that the quenched low-mass galaxies are preferentially found in higher density environments.

  14. Simulation of selected genealogies.

    Science.gov (United States)

    Slade, P F

    2000-02-01

    Algorithms for generating genealogies with selection conditional on the sample configuration of n genes in one-locus, two-allele haploid and diploid models are presented. Enhanced integro-recursions using the ancestral selection graph, introduced by S. M. Krone and C. Neuhauser (1997, Theor. Popul. Biol. 51, 210-237), which is the non-neutral analogue of the coalescent, enable accessible simulation of the embedded genealogy. A Monte Carlo simulation scheme based on that of R. C. Griffiths and S. Tavaré (1996, Math. Comput. Modelling 23, 141-158) is adopted to consider the estimation of ancestral times under selection. Simulations show that selection alters the expected depth of the conditional ancestral trees, depending on a mutation-selection balance. As a consequence, branch lengths are shown to be an ineffective criterion for detecting the presence of selection. Several examples are given which quantify the effects of selection on the conditional expected time to the most recent common ancestor. Copyright 2000 Academic Press.

  15. FiGS: a filter-based gene selection workbench for microarray data

    Directory of Open Access Journals (Sweden)

    Yun Taegyun

    2010-01-01

    Full Text Available Abstract Background The selection of genes that discriminate disease classes from microarray data is widely used for the identification of diagnostic biomarkers. Although various gene selection methods are currently available and some of them have shown excellent performance, no single method can retain the best performance for all types of microarray datasets. It is desirable to use a comparative approach to find the best gene selection result after rigorous testing of different methodological strategies for a given microarray dataset. Results FiGS is a web-based workbench that automatically compares various gene selection procedures and provides the optimal gene selection result for an input microarray dataset. FiGS builds up diverse gene selection procedures by aligning different feature selection techniques and classifiers. In addition to the highly reputed techniques, FiGS diversifies the gene selection procedures by incorporating gene clustering options in the feature selection step and different data pre-processing options in the classifier training step. All candidate gene selection procedures are evaluated by the .632+ bootstrap errors and listed with their classification accuracies and selected gene sets. FiGS runs on parallelized computing nodes that accommodate heavy computations. FiGS is freely accessible at http://gexp.kaist.ac.kr/figs. Conclusion FiGS is a web-based application that automates an extensive search for the optimal gene selection analysis for a microarray dataset in a parallel computing environment. FiGS will provide both an efficient and comprehensive means of acquiring optimal gene sets that discriminate disease states from microarray datasets.

  16. Different cortical mechanisms for spatial vs. feature-based attentional selection in visual working memory

    Directory of Open Access Journals (Sweden)

    Anna Heuer

    2016-08-01

    Full Text Available The limited capacity of visual working memory necessitates attentional mechanisms that selectively update and maintain only the most task-relevant content. Psychophysical experiments have shown that the retroactive selection of memory content can be based on visual properties such as location or shape, but the neural basis for such differential selection is unknown. For example, it is not known if there are different cortical modules specialized for spatial versus feature-based mnemonic attention, in the same way that has been demonstrated for attention to perceptual input. Here, we used transcranial magnetic stimulation (TMS to identify areas in human parietal and occipital cortex involved in the selection of objects from memory based on cues to their location (spatial information or their shape (featural information. We found that TMS over the supramarginal gyrus (SMG selectively facilitated spatial selection, whereas TMS over the lateral occipital cortex selectively enhanced feature-based selection for remembered objects in the contralateral visual field. Thus, different cortical regions are responsible for spatial vs. feature-based selection of working memory representations. Since the same regions are involved in attention to external events, these new findings indicate overlapping mechanisms for attentional control over perceptual input and mnemonic representations.

  17. Structure and mechanical properties of parts obtained by selective laser melting of metal powder based on intermetallic compounds Ni3Al

    Science.gov (United States)

    Smelov, V. G.; Sotov, A. V.; Agapovichev, A. V.; Nosova, E. A.

    2018-03-01

    The structure and mechanical properties of samples produced by selective laser melting of a metal powder based on the intermetallic compound Ni3Al are investigated. Chemical analysis of the raw material and static tensile tests of specimens were performed. Changes in the samples' structure and mechanical properties after homogenization for four and for twenty-four hours were investigated. A small-sized combustion chamber of a gas turbine engine was manufactured by the selective laser melting method. The printed combustion chamber was subjected to gas-dynamic testing over a range of temperatures and times.

  18. An active learning representative subset selection method using net analyte signal

    Science.gov (United States)

    He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi

    2018-05-01

    To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference in the Euclidean norm of the net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vectors, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying the projection matrix with the spectra of the samples. The scalar value of the NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to the selected set sequentially. Last, the concentration of the analyte is measured so that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced.
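
    The selection rule described above can be sketched as follows: score each candidate by the Euclidean norm of its NAS vector and grow the calibration set greedily with the candidate farthest (in NAS norm) from the already-selected samples. The interferent matrix and all data here are synthetic placeholders, and the greedy max-min step is one plausible reading of "largest distance".

    ```python
    # Sketch: NAS-norm-based greedy selection of calibration samples.
    import numpy as np

    rng = np.random.default_rng(3)
    n_wl = 200
    Z = rng.normal(size=(n_wl, 4))               # spectra of interfering constituents
    P = np.eye(n_wl) - Z @ np.linalg.pinv(Z)     # projects out the interferents

    candidates = rng.normal(size=(100, n_wl))
    nas_scalars = np.linalg.norm(candidates @ P.T, axis=1)  # NAS norm per sample

    selected = [int(np.argmax(nas_scalars))]
    for _ in range(9):                           # pick 10 samples in total
        d = np.abs(nas_scalars[:, None] - nas_scalars[selected][None, :])
        gap = d.min(axis=1)                      # distance to closest selected one
        gap[selected] = -np.inf                  # never re-pick a selected sample
        selected.append(int(np.argmax(gap)))
    print("calibration samples:", selected)
    ```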

  19. New procedure of selected biogenic amines determination in wine samples by HPLC

    Energy Technology Data Exchange (ETDEWEB)

    Piasta, Anna M.; Jastrzębska, Aneta, E-mail: aj@chem.uni.torun.pl; Krzemiński, Marek P.; Muzioł, Tadeusz M.; Szłyk, Edward

    2014-06-27

    Highlights: • We proposed a new procedure for derivatization of biogenic amines. • The NMR and XRD analysis confirmed the purity and uniqueness of the derivatives. • Concentrations of biogenic amines in wine samples were analyzed by RP-HPLC. • Sample contamination and derivatization reaction interferences were minimized. - Abstract: A new procedure for determination of biogenic amines (BA): histamine, phenethylamine, tyramine and tryptamine, based on the derivatization reaction with 2-chloro-1,3-dinitro-5-(trifluoromethyl)-benzene (CNBF), is proposed. The amine derivatives with CNBF were isolated and characterized by X-ray crystallography and ¹H, ¹³C, ¹⁹F NMR spectroscopy in solution. The novelty of the procedure lies in the pure and well-characterized products of the amine derivatization reaction. The method was applied for the simultaneous analysis of the above-mentioned biogenic amines in wine samples by reversed-phase high performance liquid chromatography. The procedure revealed correlation coefficients (R²) between 0.9997 and 0.9999, and linear ranges of 0.10–9.00 mg L⁻¹ (histamine), 0.10–9.36 mg L⁻¹ (tyramine), 0.09–8.64 mg L⁻¹ (tryptamine) and 0.10–8.64 mg L⁻¹ (phenethylamine), whereas accuracy was 97%–102% (recovery test). The detection limit of biogenic amines in wine samples was 0.02–0.03 mg L⁻¹, whereas the quantification limit ranged from 0.05 to 0.10 mg L⁻¹. The variation coefficients for the analyzed amines ranged between 0.49% and 3.92%. The obtained BA derivatives enhanced separation of the analytes on chromatograms due to the inhibition of the hydrolysis reaction and the reduction of by-product formation.

  20. Biomolecule-free, selective detection of o-diphenol and its derivatives with WS2/TiO2-based photoelectrochemical platform.

    Science.gov (United States)

    Ma, Weiguang; Wang, Lingnan; Zhang, Nan; Han, Dongxue; Dong, Xiandui; Niu, Li

    2015-01-01

    Herein, a novel photoelectrochemical platform with WS2/TiO2 composites as optoelectronic materials was designed for selective detection of o-diphenol and its derivatives without any biomolecule auxiliary. First, catechol was chosen as a model compound for discrimination from resorcinol and hydroquinone; then several o-diphenol derivatives such as dopamine, caffeic acid, and catechin were also detected by employing this proposed photoelectrochemical sensor. Finally, the mechanism of such selective detection has been elaborately explored. The excellent selectivity and high sensitivity are attributed to two aspects: (i) the chelate effect of the adjacent oxygen atoms in the o-diphenol with the Ti(IV) surface site, forming a five/six-atom ring structure, which is considered the key point for distinction and selective detection; (ii) the selected WS2/TiO2 composites with a proper band alignment between WS2 and TiO2, which allows the photogenerated electrons and holes to be easily separated and results in a great improvement in sensitivity. By employing such a photoelectrochemical platform, analyses of practical samples, including commercial clinical drugs and human urine, were successfully performed for dopamine detection. This biomolecule-free WS2/TiO2 based photoelectrochemical platform demonstrates excellent stability and reproducibility, remarkable convenience, and cost-effectiveness, as well as a low detection limit (e.g., 0.32 μmol L⁻¹ for dopamine). It holds great promise for detection of o-diphenol-type species in the environment and food fields.

  1. Ranking and selection of commercial off-the-shelf using fuzzy distance based approach

    Directory of Open Access Journals (Sweden)

    Rakesh Garg

    2015-06-01

    Full Text Available There has been tremendous growth in the use of the component based software engineering (CBSE) approach for the development of software systems. The selection of the best-suited COTS components that fulfil the necessary requirements for the development of software has become a major challenge for software developers. The complexity of the optimal selection problem increases with an increase in alternative potential COTS components and the corresponding selection criteria. In this research paper, the problem of ranking and selection of Data Base Management System (DBMS) components is modeled as a multi-criteria decision making problem. A ‘Fuzzy Distance Based Approach (FDBA)’ method is proposed for the optimal ranking and selection of DBMS COTS components of an e-payment system based on 14 selection criteria grouped under three major categories, i.e., ‘Vendor Capabilities’, ‘Business Issues’ and ‘Cost’. The results of this method are compared with those of the Analytical Hierarchy Process (AHP), a typical multi-criteria decision making approach. The proposed methodology is explained with an illustrative example.

  2. Using machine learning to accelerate sampling-based inversion

    Science.gov (United States)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods, such as the Neighbourhood Algorithm, and bridges the gap between prior- and posterior-sampling frameworks.
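
    A minimal sketch of the idea, assuming a toy scalar forward model: a Gaussian-process surrogate replaces the expensive forward calculation inside a Metropolis sampler and is refit whenever its predictive uncertainty at the proposal is large. Thresholds, priors, and the forward function are illustrative.

    ```python
    # Sketch: surrogate-accelerated Metropolis sampling with online refinement.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def forward(m):                      # stand-in for a costly simulation
        return np.sin(3 * m) + m ** 2

    rng = np.random.default_rng(7)
    X = rng.uniform(-2, 2, 8).reshape(-1, 1)   # small initial design
    y = forward(X).ravel()
    gp = GaussianProcessRegressor().fit(X, y)

    obs, sigma = forward(np.array([[0.5]])).item(), 0.1
    m, samples = 0.0, []
    for _ in range(5000):
        prop = m + 0.3 * rng.normal()
        pred, std = gp.predict([[prop]], return_std=True)
        if std[0] > 0.05:                # surrogate too uncertain: refine it
            X = np.vstack([X, [[prop]]])
            y = np.append(y, forward(prop))
            gp.fit(X, y)
            pred = gp.predict([[prop]])
        # Gaussian log-acceptance ratio using surrogate predictions only.
        log_a = -0.5 * ((obs - pred[0]) ** 2
                        - (obs - gp.predict([[m]])[0]) ** 2) / sigma ** 2
        if np.log(rng.random()) < log_a:
            m = prop
        samples.append(m)
    print(f"{len(X)} expensive forward evaluations for {len(samples)} samples")
    ```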

  3. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    Support vector machine (SVM) has been proved to be a powerful tool for face recognition. The generalization capacity of an SVM depends on the model with optimal hyperparameters. The computational cost of SVM model selection results in application difficulty in face recognition. In order to overcome this shortcoming, we utilize the advantages of uniform design, i.e., space-filling designs and uniform scattering theory, to seek optimal SVM hyperparameters. We then propose a face recognition scheme based on the SVM with the optimal model, obtained by replacing the grid and gradient-based methods with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.

  4. Representing major soil variability at regional scale by constrained Latin Hypercube Sampling of remote sensing data

    NARCIS (Netherlands)

    Mulder, V.L.; Bruin, de S.; Schaepman, M.E.

    2013-01-01

    This paper presents a sparse, remote sensing-based sampling approach making use of conditioned Latin Hypercube Sampling (cLHS) to assess variability in soil properties at regional scale. The method optimizes the sampling scheme for a defined spatial population based on selected covariates, which are

  5. Sampling guidelines for oral fluid-based surveys of group-housed animals.

    Science.gov (United States)

    Rotolo, Marisa L; Sun, Yaxuan; Wang, Chong; Giménez-Lirola, Luis; Baum, David H; Gauger, Phillip C; Harmon, Karen M; Hoogland, Marlin; Main, Rodger; Zimmerman, Jeffrey J

    2017-09-01

    Formulas and software for calculating sample size for surveys based on individual animal samples are readily available. However, sample size formulas are not available for oral fluids and other aggregate samples that are increasingly used in production settings. Therefore, the objective of this study was to develop sampling guidelines for oral fluid-based porcine reproductive and respiratory syndrome virus (PRRSV) surveys in commercial swine farms. Oral fluid samples were collected in 9 weekly samplings from all pens in 3 barns on one production site beginning shortly after placement of weaned pigs. Samples (n=972) were tested by real-time reverse-transcription PCR (RT-rtPCR) and the binary results analyzed using a piecewise exponential survival model for interval-censored, time-to-event data with misclassification. Thereafter, simulation studies were used to study the barn-level probability of PRRSV detection as a function of sample size, sample allocation (simple random sampling vs fixed spatial sampling), assay diagnostic sensitivity and specificity, and pen-level prevalence. These studies provided estimates of the probability of detection by sample size and within-barn prevalence. Detection using fixed spatial sampling was as good as, or better than, simple random sampling. Sampling multiple barns on a site increased the probability of detection with the number of barns sampled. These results are relevant to PRRSV control or elimination projects at the herd, regional, or national levels, but the results are also broadly applicable to contagious pathogens of swine for which oral fluid tests of equivalent performance are available. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
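
    The simulation studies described above can be imitated with a few lines of code. The sketch below estimates the barn-level probability of detecting at least one positive pen as a function of the number of pens sampled; pen count, prevalence, and test sensitivity are illustrative values, not those of the study.

    ```python
    # Sketch: simulated probability of detection for pen-level aggregate samples.
    import numpy as np

    rng = np.random.default_rng(11)

    def detection_probability(n_pens=40, prevalence=0.10, sens=0.95,
                              n_sampled=10, trials=20_000):
        hits = 0
        for _ in range(trials):
            status = rng.random(n_pens) < prevalence          # true pen status
            picked = rng.choice(n_pens, n_sampled, replace=False)
            positives = status[picked] & (rng.random(n_sampled) < sens)
            hits += positives.any()                           # barn-level detection
        return hits / trials

    for n in (5, 10, 20):
        print(f"{n} pens sampled -> P(detect) = {detection_probability(n_sampled=n):.3f}")
    ```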

  6. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institute, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was assumed to be an importance sampling function. A Kriging metamodel was constructed in more detail in the vicinity of a limit state. The failure probability was calculated based on importance sampling, which was performed for the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possible change in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
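
    A rough sketch of the overall scheme, under strong simplifications: a Gaussian-process (Kriging) metamodel of a toy limit-state function g is built, a kernel density fitted near the predicted limit state serves as the importance-sampling function, and the failure probability follows from weighted indicator averaging. The limit state, densities, and sample sizes are all assumptions.

    ```python
    # Sketch: Kriging-metamodel importance sampling for failure probability.
    import numpy as np
    from scipy import stats
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(5)
    g = lambda x: 3.0 - x[:, 0] - x[:, 1]        # failure when g(x) <= 0

    X_train = rng.normal(size=(60, 2)) * 1.5     # spread design toward the limit state
    gp = GaussianProcessRegressor().fit(X_train, g(X_train))

    # Importance density centred where the metamodel predicts near-failure
    # (assumes some design points fall in that region).
    fail_pts = X_train[gp.predict(X_train) <= 0.5]
    q = stats.gaussian_kde(fail_pts.T)           # kernel density as IS function
    f = stats.multivariate_normal(mean=[0, 0])   # nominal input density

    Xs = q.resample(20_000, seed=1).T
    w = f.pdf(Xs) / q(Xs.T)                      # importance weights f/q
    p_f = np.mean((gp.predict(Xs) <= 0) * w)     # weighted failure indicator
    print(f"estimated failure probability: {p_f:.4f}")
    ```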

  7. Adaptive list sequential sampling method for population-based observational studies

    NARCIS (Netherlands)

    Hof, Michel H.; Ravelli, Anita C. J.; Zwinderman, Aeilko H.

    2014-01-01

    In population-based observational studies, non-participation and delayed response to the invitation to participate are complications that often arise during the recruitment of a sample. When both are not properly dealt with, the composition of the sample can be different from the desired

  8. Attribute based selection of thermoplastic resin for vacuum infusion process

    DEFF Research Database (Denmark)

    Prabhakaran, R.T. Durai; Lystrup, Aage; Løgstrup Andersen, Tom

    2011-01-01

    The composite industry looks toward new material systems (resins) based on thermoplastic polymers for the vacuum infusion process, similar to the infusion process using thermosetting polymers. A large number of thermoplastics are available in the market with a variety of properties suitable for different engineering applications, and few of those are available in a not-yet-polymerised form suitable for resin infusion. The proper selection of a new resin system among these thermoplastic polymers is a concern for manufacturers in the current scenario, and a special mathematical tool would be beneficial. In this paper, the authors introduce a new decision making tool for resin selection based on significant attributes. This article provides a broad overview of suitable thermoplastic material systems for the vacuum infusion process available in today’s market. An illustrative example—resin selection

  9. Robot soccer action selection based on Q learning

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper studies robot soccer action selection based on Q learning. The robots learn to activate particular behaviours given their current situation and reward signal. We adopt neural networks to implement Q learning because of their generalization properties and limited computer memory requirements
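
    A tabular sketch of the Q-learning update used for action selection is shown below; the paper uses a neural-network approximation instead, and the grid-world environment and reward here are purely illustrative.

    ```python
    # Sketch: tabular Q-learning with epsilon-greedy action selection.
    import numpy as np

    rng = np.random.default_rng(2)
    n_states, n_actions = 25, 4
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.9, 0.1

    def step(s, a):
        """Toy environment: move on a 5x5 grid, reward 1 at the last cell."""
        r, c = divmod(s, 5)
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][a]
        r, c = min(max(r + dr, 0), 4), min(max(c + dc, 0), 4)
        s2 = 5 * r + c
        return s2, float(s2 == 24)

    for _ in range(2000):                        # training episodes
        s = 0
        for _ in range(50):
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s2, reward = step(s, a)
            # Standard Q-learning temporal-difference update.
            Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])
            s = s2
            if s == 24:
                break
    print("greedy policy per state:", Q.argmax(axis=1))
    ```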

  10. Structure and Mechanical Properties of the AlSi10Mg Alloy Samples Manufactured by Selective Laser Melting

    Science.gov (United States)

    Li, Xiaodan; Ni, Jiaqiang; Zhu, Qingfeng; Su, Hang; Cui, Jianzhong; Zhang, Yifei; Li, Jianzhong

    2017-11-01

    AlSi10Mg alloy samples with dimensions of 14 × 14 × 91 mm were produced by the selective laser melting (SLM) method in different building directions. The structures and the properties at -70 °C of the samples built in different directions were investigated. The results show that the structure exhibits a different morphology in each building direction: fish-scale structures appear on the side parallel to the building direction, and oval structures appear on the side perpendicular to the building direction. Pores with a maximum size of 100 μm exist in the structure. Build orientation has no major influence on the tensile properties. The tensile strength and the elongation of the sample along the building direction are 340 MPa and 11.2%, respectively, and those of the sample perpendicular to the building direction are 350 MPa and 13.4%, respectively

  11. Stochastic bounded consensus tracking of leader-follower multi-agent systems with measurement noises based on sampled data with general sampling delay

    International Nuclear Information System (INIS)

    Wu Zhi-Hai; Peng Li; Xie Lin-Bo; Wen Ji-Wei

    2013-01-01

    In this paper we provide a unified framework for consensus tracking of leader-follower multi-agent systems with measurement noises based on sampled data with a general sampling delay. First, a stochastic bounded consensus tracking protocol based on sampled data with a general sampling delay is presented by employing the delay decomposition technique. Then, necessary and sufficient conditions are derived for guaranteeing leader-follower multi-agent systems with measurement noises and a time-varying reference state to achieve mean square bounded consensus tracking. The obtained results cover no sampling delay, a small sampling delay and a large sampling delay as three special cases. Last, simulations are provided to demonstrate the effectiveness of the theoretical results. (interdisciplinary physics and related areas of science and technology)

  12. Space-based visual attention: a marker of immature selective attention in toddlers?

    Science.gov (United States)

    Rivière, James; Brisson, Julie

    2014-11-01

    Various studies suggested that attentional difficulties cause toddlers' failure in some spatial search tasks. However, attention is not a unitary construct and this study investigated two attentional mechanisms: location selection (space-based attention) and object selection (object-based attention). We investigated how toddlers' attention is distributed in the visual field during a manual search task for objects moving out of sight, namely the moving boxes task. Results show that 2.5-year-olds who failed this task allocated more attention to the location of the relevant object than to the object itself. These findings suggest that in some manual search tasks the primacy of space-based attention over object-based attention could be a marker of immature selective attention in toddlers. © 2014 Wiley Periodicals, Inc.

  13. ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition.

    Science.gov (United States)

    Zhang, Jianhai; Chen, Ming; Zhao, Shaokai; Hu, Sanqing; Shi, Zhiguo; Cao, Yu

    2016-09-22

    Electroencephalogram (EEG) signals recorded from sensor electrodes on the scalp can directly detect the brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the needs of a more immersive human-computer interface (HCI) environment. To improve the recognition performance, multi-channel EEG signals are usually used. A large set of EEG sensor channels will add to the computational complexity and cause users inconvenience. ReliefF-based channel selection methods were systematically investigated for EEG-based emotion recognition on a database for emotion analysis using physiological signals (DEAP). Three strategies were employed to select the best channels in classifying four emotional states (joy, fear, sadness and relaxation). Furthermore, support vector machine (SVM) was used as a classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods and the comparison with the similar strategies, based on the F-score, was given. Strategies to evaluate a channel as a unity gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting channels' weights according to their contribution to the classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy 59.13% ± 11.00% using 19 channels). In addition, the study of selecting subject-independent channels, related to emotion processing, was also implemented. The sensors, selected subject-independently from frontal, parietal lobes, have been identified to provide more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. The results will make a contribution to the
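
    The core ReliefF weighting that drives such channel ranking can be sketched in a few lines (shown here in its simplest k = 1, two-class variant; the DEAP features and SVM evaluation stage are omitted, and the synthetic data are placeholders):

    ```python
    # Sketch: Relief-style feature (channel) weighting, k = 1 variant.
    import numpy as np

    def relieff_weights(X, y, n_iters=200, rng=np.random.default_rng(0)):
        X = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)   # scale to [0, 1]
        w = np.zeros(X.shape[1])
        for _ in range(n_iters):
            i = rng.integers(len(X))
            d = np.abs(X - X[i]).sum(axis=1)                 # L1 distances
            d[i] = np.inf                                    # exclude the point itself
            same, diff = y == y[i], y != y[i]
            hit = np.argmin(np.where(same, d, np.inf))       # nearest same-class point
            miss = np.argmin(np.where(diff, d, np.inf))      # nearest other-class point
            # Features that separate the miss but not the hit gain weight.
            w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
        return w / n_iters

    X = np.random.default_rng(1).normal(size=(300, 19))      # e.g. 19 EEG channels
    y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)            # labels tied to 2 channels
    print(np.argsort(relieff_weights(X, y))[::-1][:5])       # top-ranked channels
    ```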

  14. ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Jianhai Zhang

    2016-09-01

    Full Text Available Electroencephalogram (EEG signals recorded from sensor electrodes on the scalp can directly detect the brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the needs of a more immersive human-computer interface (HCI environment. To improve the recognition performance, multi-channel EEG signals are usually used. A large set of EEG sensor channels will add to the computational complexity and cause users inconvenience. ReliefF-based channel selection methods were systematically investigated for EEG-based emotion recognition on a database for emotion analysis using physiological signals (DEAP. Three strategies were employed to select the best channels in classifying four emotional states (joy, fear, sadness and relaxation. Furthermore, support vector machine (SVM was used as a classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods and the comparison with the similar strategies, based on the F-score, was given. Strategies to evaluate a channel as a unity gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting channels’ weights according to their contribution to the classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy 59.13% ± 11.00% using 19 channels. In addition, the study of selecting subject-independent channels, related to emotion processing, was also implemented. The sensors, selected subject-independently from frontal, parietal lobes, have been identified to provide more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. The results will make a

  15. Highly selective gas sensor arrays based on thermally reduced graphene oxide.

    Science.gov (United States)

    Lipatov, Alexey; Varezhnikov, Alexey; Wilson, Peter; Sysoev, Victor; Kolmakov, Andrei; Sinitskii, Alexander

    2013-06-21

    The electrical properties of reduced graphene oxide (rGO) have been previously shown to be very sensitive to surface adsorbates, thus making rGO a very promising platform for highly sensitive gas sensors. However, poor selectivity of rGO-based gas sensors remains a major problem for their practical use. In this paper, we address the selectivity problem by employing an array of rGO-based integrated sensors instead of focusing on the performance of a single sensing element. Each rGO-based device in such an array has a unique sensor response due to the irregular structure of rGO films at different levels of organization, ranging from nanoscale to macroscale. The resulting rGO-based gas sensing system could reliably recognize analytes of nearly the same chemical nature. In our experiments rGO-based sensor arrays demonstrated a high selectivity that was sufficient to discriminate between different alcohols, such as methanol, ethanol and isopropanol, at a 100% success rate. We also discuss a possible sensing mechanism that provides the basis for analyte differentiation.

  16. Molecularly imprinted polymer microspheres prepared by Pickering emulsion polymerization for selective solid-phase extraction of eight bisphenols from human urine samples

    International Nuclear Information System (INIS)

    Yang, Jiajia; Li, Yun; Wang, Jincheng; Sun, Xiaoli; Cao, Rong; Sun, Hao; Huang, Chaonan; Chen, Jiping

    2015-01-01

    Highlights: • BPA imprinted polymer microspheres were prepared by Pickering emulsion polymerization. • Regular spherical shape and narrow diameter distribution. • Good specific adsorption capacity for BPA. • Good class-selectivity and clean-up efficiency for bisphenols in human urine under SPE mode. • Good recoveries and sensitivity for bisphenols using the MIPMS-SPE coupled with HPLC-DAD method. - Abstract: The bisphenol A (BPA) imprinted polymer microspheres were prepared by simple Pickering emulsion polymerization. Compared to traditional bulk polymerization, both high yields of polymer and good control of particle sizes were achieved. The characterization results of scanning electron microscopy and nitrogen adsorption–desorption measurements showed that the obtained molecularly imprinted polymer microsphere (MIPMS) particles possessed regular spherical shape, narrow diameter distribution (30–60 μm), a specific surface area (S_BET) of 281.26 m² g⁻¹ and a total pore volume (V_t) of 0.459 cm³ g⁻¹. Good specific adsorption capacity for BPA was obtained in the sorption experiment and good class selectivity for BPA and its seven structural analogs (bisphenol F, bisphenol B, bisphenol E, bisphenol AF, bisphenol S, bisphenol AP and bisphenol Z) was demonstrated by the chromatographic evaluation experiment. The MIPMS as a solid-phase extraction (SPE) packing material was then evaluated for extraction and clean-up of these bisphenols (BPs) from human urine samples. An accurate and sensitive analytical method based on the MIPMS-SPE coupled with HPLC-DAD has been successfully established for simultaneous determination of eight BPs from human urine samples with detection limits of 1.2–2.2 ng mL⁻¹. The recoveries of BPs for urine samples at two spiking levels (100 and 500 ng mL⁻¹ for each BP) were in the range of 81.3–106.7% with RSD values below 8.3%

  17. Selective detection of antibodies in microstructured polymer optical fibers

    DEFF Research Database (Denmark)

    Jensen, Jesper Bo Damm; Hoiby, P.E.; Emiliyanov, Grigoriy Andreev

    2005-01-01

    We demonstrate selective detection of fluorophore labeled antibodies from minute samples probed by a sensor layer of complementary biomolecules immobilized inside the air holes of microstructured Polymer Optical Fiber (mPOF). The fiber core is defined by a ring of 6 air holes, and a simple procedure was applied to selectively capture either α-streptavidin or α-CRP antibodies inside these air holes. A sensitive and easy-to-use fluorescence method was used for the optical detection. Our results show that mPOF based biosensors can provide reliable and selective antibody detection in ultra small sample volumes.

  18. Clinical impact of strict criteria for selectivity and lateralization in adrenal vein sampling.

    Science.gov (United States)

    Gasparetto, Alessandro; Angle, John F; Darvishi, Pasha; Freeman, Colbey W; Norby, Ray G; Carey, Robert M

    2015-04-01

    Selectivity index (SI) and lateralization index (LI) thresholds determine the adequacy of adrenal vein sampling (AVS) and the degree of lateralization. The purpose of this study was to investigate the clinical outcome of patients whose adrenal vein sampling was interpreted using "strict criteria" (SC) (SIpre-stimuli ≥ 3, SIpost-stimuli ≥ 5 and LIpre-stimuli ≥ 4, LIpost-stimuli ≥ 4). A retrospective review of 73 consecutive AVS procedures was performed, and 67 were technically successful. Forty-three patients showed lateralization and underwent surgery, while 24 did not lateralize and were managed conservatively. Systolic blood pressure (SBP), diastolic blood pressure (DBP), kalemia (K(+)), and the change in the number of blood pressure (BP) medications were recorded for each patient before and after AVS and potential surgery were performed. In the surgery group, BP changed from 160 ± 5.3/100 ± 2.0 mmHg to 127 ± 3.3/80 ± 1.9 mmHg, and K(+) also improved (p < 0.001); the changes in the number of blood pressure medications involved six patients (14.0%) in the lateralized group and 22 (91.7%) in the non-lateralized group (p < 0.001). AVS interpretation with SC leads to significant clinical improvement in both patients who underwent surgery and those managed conservatively.

  19. Enhanced and selective ammonia sensing of reduced graphene oxide based chemo resistive sensor at room temperature

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Ramesh, E-mail: rameshphysicsdu@gmail.com; Kaur, Amarjeet, E-mail: amarkaur@physics.du.ac.in [Department of Physics and Astrophysics, University of Delhi, Delhi-110007 (India)

    2016-05-06

    The reduced graphene oxide thin films were fabricated using the spin coating method. The reduced graphene oxide samples were characterised by Raman spectroscopy, showing the corresponding D and G bands at 1360 and 1590 cm⁻¹, respectively. The Fourier transform infrared (FTIR) spectrum contains a peak corresponding to sp² hybridisation of carbon atoms at 1560 cm⁻¹. The reduced graphene oxide based chemoresistive sensor exhibited p-type semiconductor behaviour in ambient conditions and showed good sensitivity to different concentrations of ammonia from 25 ppm to 500 ppm and excellent selectivity at room temperature. The sensor displays selectivity for ammonia over several hazardous vapours such as methanol, ethanol, acetone and hydrazine hydrate. The sensor demonstrated a sensitivity of 9.8 at a 25 ppm concentration of ammonia with a response time of 163 seconds.

  20. Greedy Sampling and Incremental Surrogate Model-Based Tailoring of Aeroservoelastic Model Database for Flexible Aircraft

    Science.gov (United States)

    Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.

    2018-01-01

    This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of the grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. A greedy sampling algorithm is developed to select the next sample point as the one that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool, with the worst relative error far below 1%. The interpolated ASE model exhibits continuously varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility of adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide ASE model development, model order reduction, robust control synthesis and novel vehicle design for flexible aircraft.
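
    The greedy loop described above can be sketched as follows, with a scalar toy response standing in for the frequency-domain ASE model error and a generic Gaussian-process regressor standing in for the Kriging surrogate; grid, tolerance, and seeds are illustrative.

    ```python
    # Sketch: greedy worst-error sampling for surrogate database tailoring.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def benchmark(p):                     # placeholder for the model database
        return np.sin(p[:, 0]) * np.cos(2 * p[:, 1])

    grid = np.array([(a, m) for a in np.linspace(0, 3, 20)
                            for m in np.linspace(0, 3, 20)])
    truth = benchmark(grid)

    chosen = [0, len(grid) - 1]           # seed with two corner points
    for _ in range(30):
        gp = GaussianProcessRegressor().fit(grid[chosen], truth[chosen])
        rel_err = np.abs(gp.predict(grid) - truth) / (np.abs(truth) + 1e-6)
        rel_err[chosen] = -np.inf         # never re-select a kept point
        worst = int(np.argmax(rel_err))
        if rel_err[worst] < 1e-2:         # tolerance met: stop refining
            break
        chosen.append(worst)              # add the worst-error grid point
    print(f"kept {len(chosen)} of {len(grid)} grid points")
    ```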

  1. How does observation uncertainty influence which stream water samples are most informative for model calibration?

    Science.gov (United States)

    Wang, Ling; van Meerveld, Ilja; Seibert, Jan

    2016-04-01

    Streamflow isotope samples taken during rainfall-runoff events are very useful for multi-criteria model calibration because they can help decrease parameter uncertainty and improve internal model consistency. However, the number of samples that can be collected and analysed is often restricted by practical and financial constraints. It is, therefore, important to choose an appropriate sampling strategy and to obtain samples that have the highest information content for model calibration. We used the Birkenes hydrochemical model and synthetic rainfall, streamflow and isotope data to explore which samples are most informative for model calibration. Starting with error-free observations, we investigated how many samples are needed to obtain a certain model fit. Based on different parameter sets, representing different catchments, and different rainfall events, we also determined which sampling times provide the most informative data for model calibration. Our results show that simulation performance for models calibrated with the isotopic data from two intelligently selected samples was comparable to that of simulations based on isotopic data for all 100 time steps. The models calibrated with the intelligently selected samples also performed better than the model calibrations with two benchmark sampling strategies (random selection and selection based on hydrologic information). Surprisingly, samples on the rising limb and at the peak were less informative than expected and, generally, samples taken at the end of the event were most informative. The timing of the most informative samples depends on the proportion of the different flow components (baseflow, slow response flow, fast response flow and overflow). For events dominated by baseflow and slow response flow, samples taken at the end of the event after the fast response flow has ended were most informative; when the fast response flow was dominant, samples taken near the peak were most informative. However, when overflow

  2. A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.

    Science.gov (United States)

    Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo

    2018-04-01

    Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on the convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with low computational complexity. When the data dimension is high, an approximate, instead of exact, convex hull is allowed to be selected by setting an appropriate termination condition in order to delete more nonimportant samples. In addition, the impact of outliers is also considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension conversion technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines based on the selected approximate convex hull vertices and those based on all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.
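
    A minimal sketch of the core subproblem, the distance from a query point to the convex hull of a sample set, posed as the small quadratic program min_w ||S^T w - x||^2 subject to w >= 0 and sum(w) = 1, solved here with a generic SLSQP call; the surrounding one-pass filter is a simplified stand-in for the paper's decomposition-based algorithm:

        import numpy as np
        from scipy.optimize import minimize

        def dist_to_hull(x, S):
            """Euclidean distance from x to conv(rows of S)."""
            m = len(S)
            res = minimize(lambda w: np.sum((S.T @ w - x) ** 2),
                           np.full(m, 1.0 / m),
                           bounds=[(0.0, 1.0)] * m,
                           constraints=[{'type': 'eq',
                                         'fun': lambda w: w.sum() - 1.0}],
                           method='SLSQP')
            return float(np.sqrt(max(res.fun, 0.0)))

        def approx_hull_vertices(X, tol=1e-6):
            """One-pass filter: keep a point only if it lies outside the
            hull of the points kept so far (order-dependent, approximate)."""
            keep = [0, 1]                      # seed with two arbitrary points
            for i in range(2, len(X)):
                if dist_to_hull(X[i], X[keep]) > tol:
                    keep.append(i)
            return keep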

  3. The size selectivity of the main body of a sampling pelagic pair trawl in freshwater reservoirs during the night

    Czech Academy of Sciences Publication Activity Database

    Říha, Milan; Jůza, Tomáš; Prchalová, Marie; Mrkvička, Tomáš; Čech, Martin; Draštík, Vladislav; Muška, Milan; Kratochvíl, Michal; Peterka, Jiří; Tušer, Michal; Vašek, Mojmír; Kubečka, Jan

    2012-01-01

    Roč. 127, September (2012), s. 56-60 ISSN 0165-7836 R&D Projects: GA MZe(CZ) QH81046 Institutional support: RVO:60077344 Keywords : quantitative sampling * gear selectivity * trawl * reservoirs Subject RIV: GL - Fishing Impact factor: 1.695, year: 2012

  4. Variable Selection via Partial Correlation.

    Science.gov (United States)

    Li, Runze; Liu, Jingyuan; Lou, Lejia

    2017-07-01

    A partial correlation based variable selection method was proposed for normal linear regression models by Bühlmann, Kalisch and Maathuis (2010) as a comparable alternative to regularization methods for variable selection. This paper addresses two important issues related to the partial correlation based variable selection method: (a) whether this method is sensitive to the normality assumption, and (b) whether this method is valid when the dimension of the predictor increases at an exponential rate of the sample size. To address issue (a), we systematically study this method for elliptical linear regression models. Our finding indicates that the original proposal may lead to inferior performance when the marginal kurtosis of the predictor is not close to that of the normal distribution. Our simulation results further confirm this finding. To ensure the superior performance of the partial correlation based variable selection procedure, we propose a thresholded partial correlation (TPC) approach to select significant variables in linear regression models. We establish the selection consistency of the TPC in the presence of ultrahigh dimensional predictors. Since the TPC procedure includes the original proposal as a special case, our theoretical results address issue (b) directly. As a by-product, the sure screening property of the first step of the TPC is obtained. The numerical examples also illustrate that the TPC is competitively comparable to the commonly-used regularization methods for variable selection.
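
    A minimal sketch of the thresholding idea, assuming partial correlations are computed from regression residuals and applied in a single forward pass; the actual TPC procedure and its threshold choice follow the paper, not this simplification:

        import numpy as np

        def partial_corr(y, xj, Z):
            """Sample partial correlation of y and xj given the columns of Z."""
            def resid(v):
                A = np.column_stack([np.ones(len(v)), Z])
                beta, *_ = np.linalg.lstsq(A, v, rcond=None)
                return v - A @ beta
            ry, rx = resid(y), resid(xj)
            return (ry @ rx) / np.sqrt((ry @ ry) * (rx @ rx))

        def tpc_select(X, y, thresh=0.1):
            """Keep predictor j if its partial correlation with y, given the
            currently active predictors, exceeds the threshold in magnitude."""
            active = []
            for j in range(X.shape[1]):
                Z = X[:, active] if active else np.empty((len(y), 0))
                if abs(partial_corr(y, X[:, j], Z)) > thresh:
                    active.append(j)
            return active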

  5. The structured ancestral selection graph and the many-demes limit.

    Science.gov (United States)

    Slade, Paul F; Wakeley, John

    2005-02-01

    We show that the unstructured ancestral selection graph applies to part of the history of a sample from a population structured by restricted migration among subpopulations, or demes. The result holds in the limit as the number of demes tends to infinity with proportionately weak selection, and we have also made the assumptions of island-type migration and that demes are equivalent in size. After an instantaneous sample-size adjustment, this structured ancestral selection graph converges to an unstructured ancestral selection graph with a mutation parameter that depends inversely on the migration rate. In contrast, the selection parameter for the population is independent of the migration rate and is identical to the selection parameter in an unstructured population. We show analytically that estimators of the migration rate, based on pairwise sequence differences, derived under the assumption of neutrality should perform equally well in the presence of weak selection. We also modify an algorithm for simulating genealogies conditional on the frequencies of two selected alleles in a sample. This permits efficient simulation of stronger selection than was previously possible. Using this new algorithm, we simulate gene genealogies under the many-demes ancestral selection graph and identify some situations in which migration has a strong effect on the time to the most recent common ancestor of the sample. We find that a similar effect also increases the sensitivity of the genealogy to selection.

  6. A threshold-based multiple optical signal selection scheme for WDM FSO systems

    KAUST Repository

    Nam, Sung Sik

    2017-07-20

    In this paper, we propose a threshold-based multiple optical signal selection scheme (TMOS) for free-space optical systems based on wavelength division multiplexing. With the proposed TMOS, we can obtain higher spectral efficiency while reducing the potential increase in implementation complexity caused by applying a selection-based beam selection scheme, without considerable performance loss. Here, to accurately characterize the performance of the proposed TMOS, we statistically analyze its characteristics with a heterodyne detection technique over independent and identically distributed log-normal turbulence conditions, taking into consideration the impact of pointing error. Specifically, we derive exact closed-form expressions for the average bit error rate and the average spectral efficiency by adopting adaptive modulation. Selected results show that the average spectral efficiency can be increased with TMOS while the system requirement is satisfied.

  7. A threshold-based multiple optical signal selection scheme for WDM FSO systems

    KAUST Repository

    Nam, Sung Sik; Alouini, Mohamed-Slim; Ko, Young-Chai; Cho, Sung Ho

    2017-01-01

    In this paper, we propose a threshold-based multiple optical signal selection scheme (TMOS) for free-space optical systems based on wavelength division multiplexing. With the proposed TMOS, we can obtain higher spectral efficiency while reducing the potential increase in implementation complexity caused by applying a selection-based beam selection scheme, without considerable performance loss. Here, to accurately characterize the performance of the proposed TMOS, we statistically analyze its characteristics with a heterodyne detection technique over independent and identically distributed log-normal turbulence conditions, taking into consideration the impact of pointing error. Specifically, we derive exact closed-form expressions for the average bit error rate and the average spectral efficiency by adopting adaptive modulation. Selected results show that the average spectral efficiency can be increased with TMOS while the system requirement is satisfied.
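
    A minimal Monte Carlo sketch of the threshold-based selection rule, assuming per-beam SNRs under i.i.d. log-normal turbulence and illustrative parameter values (threshold, beam count); the closed-form analysis in the record is not reproduced here:

        import numpy as np

        def tmos_select(snr_db, thresh_db, max_beams):
            """Use the beams whose SNR clears the threshold (up to max_beams);
            fall back to the single best beam if none qualifies."""
            above = np.flatnonzero(snr_db >= thresh_db)
            if above.size == 0:
                return np.array([int(np.argmax(snr_db))])
            order = above[np.argsort(snr_db[above])[::-1]]
            return order[:max_beams]

        rng = np.random.default_rng(1)
        n_beams, trials, used = 8, 100_000, 0
        for _ in range(trials):
            snr_db = rng.normal(3.0, 4.0, n_beams)   # log-normal fading in dB
            used += tmos_select(snr_db, thresh_db=5.0, max_beams=3).size
        print("average number of active beams:", used / trials)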

  8. Transmit antenna selection based on shadowing side information

    KAUST Repository

    Yilmaz, Ferkan

    2011-05-01

    In this paper, we propose a new transmit antenna selection scheme based on shadowing side information. In the proposed scheme, the single transmit antenna with the highest shadowing coefficient is selected. With the proposed technique, usage of the feedback channel and the channel estimation complexity at the receiver can be reduced. We consider an independent but not identically distributed Generalized-K composite fading model, which is a general composite fading and shadowing channel model for wireless environments. Exact closed-form outage probability, moment generating function and symbol error probability expressions are derived. In addition, the theoretical performance results are validated by Monte Carlo simulations.

  9. Transmit antenna selection based on shadowing side information

    KAUST Repository

    Yilmaz, Ferkan; Yilmaz, Ahmet Oǧuz; Alouini, Mohamed-Slim; Kucur, Oǧuz

    2011-01-01

    In this paper, we propose a new transmit antenna selection scheme based on shadowing side information. In the proposed scheme, the single transmit antenna with the highest shadowing coefficient is selected. With the proposed technique, usage of the feedback channel and the channel estimation complexity at the receiver can be reduced. We consider an independent but not identically distributed Generalized-K composite fading model, which is a general composite fading and shadowing channel model for wireless environments. Exact closed-form outage probability, moment generating function and symbol error probability expressions are derived. In addition, the theoretical performance results are validated by Monte Carlo simulations.
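
    A minimal Monte Carlo sketch of the selection rule, modelling each branch's Generalized-K power as the product of two gamma variates (one for shadowing, one for multipath) with assumed shape parameters; the antenna is picked on the shadowing coefficient alone, as in the scheme above:

        import numpy as np

        rng = np.random.default_rng(2)
        n_tx, trials = 4, 200_000
        m, k = 2.0, 1.5          # multipath / shadowing shapes (assumed values)
        gamma_th = 0.4           # outage threshold on received power (assumed)

        outages = 0
        for _ in range(trials):
            shadow = rng.gamma(k, 1.0 / k, n_tx)   # slow shadowing per antenna
            fading = rng.gamma(m, 1.0 / m, n_tx)   # fast multipath per antenna
            sel = int(np.argmax(shadow))           # highest shadowing coefficient
            if shadow[sel] * fading[sel] < gamma_th:
                outages += 1
        print("simulated outage probability:", outages / trials)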

  10. Total arsenic in selected food samples from Argentina: Estimation of their contribution to inorganic arsenic dietary intake.

    Science.gov (United States)

    Sigrist, Mirna; Hilbe, Nandi; Brusa, Lucila; Campagnoli, Darío; Beldoménico, Horacio

    2016-11-01

    An optimized flow injection hydride generation atomic absorption spectroscopy (FI-HGAAS) method was used to determine total arsenic in selected food samples (beef, chicken, fish, milk, cheese, egg, rice, rice-based products, wheat flour, corn flour, oats, breakfast cereals, legumes and potatoes) and to estimate their contributions to inorganic arsenic dietary intake. The limit of detection (LOD) and limit of quantification (LOQ) values obtained were 6 μg kg⁻¹ and 18 μg kg⁻¹, respectively. The mean recovery range obtained for all foods at a fortification level of 200 μg kg⁻¹ was 85-110%. Accuracy was evaluated using a dogfish liver certified reference material for trace metals (DOLT-3 NRC). The highest total arsenic concentrations (in μg kg⁻¹) were found in fish (152-439), rice (87-316) and rice-based products (52-201). The contribution to inorganic arsenic (i-As) intake was calculated from the mean i-As content of each food (obtained by applying conversion factors to the total arsenic data) and the mean consumption per day. The primary contributors to inorganic arsenic intake were wheat flour, including its proportion in wheat flour-based products (breads, pasta and cookies), followed by rice; these foods account for close to 53% and 17% of the intake, respectively. The i-As dietary intake, estimated as 10.7 μg day⁻¹, was significantly lower than that from drinking water in vast regions of Argentina.
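
    A worked example of the intake estimate, contribution = mean i-As content × mean daily consumption, summed over foods; the numbers below are illustrative placeholders, not the study's data:

        # food: (mean i-As content in ug/kg, mean consumption in kg/day)
        foods = {
            "wheat flour products": (40.0, 0.140),
            "rice":                 (70.0, 0.026),
            "potatoes":             (10.0, 0.100),
        }
        total = sum(c * q for c, q in foods.values())
        for name, (c, q) in foods.items():
            share = 100.0 * c * q / total
            print(f"{name:22s} {c * q:5.2f} ug/day ({share:4.1f}%)")
        print(f"{'total i-As intake':22s} {total:5.2f} ug/day")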

  11. Determination and optimization of spatial samples for distributed measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Huo, Xiaoming (Georgia Institute of Technology, Atlanta, GA); Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong (Georgia Institute of Technology, Atlanta, GA)

    2010-10-01

    There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.
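
    A minimal sketch of the wavelet idea, using single-level Haar detail magnitudes as a rough proxy for where a profile varies most and therefore where extra measurement points pay off; this illustrates the principle only and is not the report's model:

        import numpy as np

        def haar_detail_points(profile, n_points):
            """Return indices of the n_points locations with the largest
            single-level Haar detail coefficients."""
            pairs = profile[: len(profile) // 2 * 2].reshape(-1, 2)
            detail = np.abs(pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
            ranked = np.argsort(detail)[::-1][:n_points]
            return np.sort(ranked * 2)       # map back to profile indices

        rng = np.random.default_rng(3)
        z = np.sin(np.linspace(0.0, 6.0 * np.pi, 256)) + 0.05 * rng.normal(size=256)
        print(haar_detail_points(z, 10))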

  12. Observer-Based Stabilization of Spacecraft Rendezvous with Variable Sampling and Sensor Nonlinearity

    Directory of Open Access Journals (Sweden)

    Zhuoshi Li

    2013-01-01

    This paper addresses the observer-based control problem of spacecraft rendezvous with a nonuniform sampling period. The relative dynamic model is based on the classical Clohessy-Wiltshire equations, and sensor nonlinearity and sampling are considered together in a unified framework. The purpose of this paper is to perform observer-based controller synthesis using sampled and saturated output measurements, such that the resulting closed-loop system is exponentially stable. A time-dependent Lyapunov functional is developed which depends on time and on the upper bound of the sampling period, and which does not grow along the input update times. The controller design problem is solved in terms of the linear matrix inequality method, and the obtained results are less conservative than those using traditional Lyapunov functionals. Finally, a numerical simulation example is built to show the validity of the developed sampled-data control strategy.
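
    A minimal sketch of the relative dynamics and a sampled-data observer update, using the standard in-plane Clohessy-Wiltshire matrices with an assumed orbital rate and a placeholder observer gain; the paper's gain comes from an LMI synthesis, which is not reproduced here:

        import numpy as np

        n = 0.0011                       # orbital rate, rad/s (assumed LEO value)
        # state x = [dx, dy, dx_dot, dy_dot]: in-plane Clohessy-Wiltshire model
        A = np.array([[0.0, 0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0],
                      [3 * n**2, 0.0, 0.0, 2 * n],
                      [0.0, 0.0, -2 * n, 0.0]])
        B = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
        C = np.eye(2, 4)                 # sampled position measurements only
        L = 0.05 * np.eye(4, 2)          # placeholder gain, not an LMI design

        def observer_step(xhat, u, y_k, h):
            """Euler-propagate the observer over one (possibly nonuniform)
            sampling interval h, holding the last measurement y_k."""
            xhat_dot = A @ xhat + B @ u + L @ (y_k - C @ xhat)
            return xhat + h * xhat_dot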

  13. Construction and evaluation of As(V) selective electrodes based on iron oxyhydroxide embedded in silica gel membrane

    International Nuclear Information System (INIS)

    Rodriguez, J.A.; Barrado, E.; Vega, M.; Prieto, F.; Lima, J.L.F.C.

    2005-01-01

    Two As(V) selective electrodes (with and without an inner reference solution) have been developed using selective membranes based on iron oxyhydroxide embedded in silica gel mixed with ultrapure graphite at a 2/98 (w/w) ratio. The active component of the membrane was synthesised by means of the sol-gel technique and characterized by X-ray and FTIR spectroscopy. This compound shows a great affinity towards As(V) ions, adsorbing 408 mg g⁻¹. Using 1 mol l⁻¹ phosphate buffer (at a 1/1, v/v ratio) to adjust the pH and the ionic strength to 7 and 0.5 mol l⁻¹, respectively, the calibration curve is linear from 1.0 x 10⁻¹ to 1.0 x 10⁻⁶ mol l⁻¹ As(V), with a practical detection limit of 4 x 10⁻⁷ mol l⁻¹ (0.03 mg l⁻¹) and a slope close to 30 mV decade⁻¹. The effect of potentially interfering ions was investigated. The accuracy and precision of the procedure have been tested on arsenic-free drinking water samples spiked with known amounts of arsenic and on groundwater samples containing high levels of arsenic. Total arsenic in these samples was determined after oxidation of As(III) with iodine at pH 7. The results for total As were comparable to those generated by ET-AAS.
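
    A minimal sketch of checking the reported slope from a calibration run: fit potential against log10 concentration and compare the magnitude to ~30 mV per decade (the readings below are made up for illustration):

        import numpy as np

        conc = np.array([1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1])      # mol/L
        emf = np.array([210.0, 181.0, 150.5, 120.2, 90.8, 60.1])   # mV (made up)
        slope, intercept = np.polyfit(np.log10(conc), emf, 1)
        print(f"slope = {slope:.1f} mV/decade")   # ~ -30 for this anion response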

  14. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts, instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation on real and simulated data sets, bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small to moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower due to the increased time spent generating weight matrices via multinomial sampling.
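
    A minimal sketch of the multinomial-weighting formulation for Pearson's correlation, written in Python/NumPy rather than the paper's R; each bootstrap replicate is one row of normalized multinomial counts, and all replicates come from a handful of matrix products:

        import numpy as np

        def vectorized_boot_corr(x, y, n_boot=10_000, seed=0):
            """Bootstrap replicates of Pearson's r without resampling rows."""
            rng = np.random.default_rng(seed)
            n = len(x)
            # (n_boot, n) multinomial counts, normalized into weights per row
            W = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot) / n
            mx, my = W @ x, W @ y
            cov = W @ (x * y) - mx * my
            sx = np.sqrt(W @ (x * x) - mx**2)
            sy = np.sqrt(W @ (y * y) - my**2)
            return cov / (sx * sy)

        rng = np.random.default_rng(1)
        x = rng.normal(size=50)
        y = 0.6 * x + 0.8 * rng.normal(size=50)
        reps = vectorized_boot_corr(x, y)
        print(np.percentile(reps, [2.5, 97.5]))   # simple percentile interval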

  15. A novel selection method of seismic attributes based on gray relational degree and support vector machine.

    Directory of Open Access Journals (Sweden)

    Yaping Huang

    The selection of seismic attributes is a key process in reservoir prediction because the prediction accuracy relies on the reliability and credibility of the seismic attributes. However, an effective selection method for useful seismic attributes is still a challenge. This paper presents a novel selection method of seismic attributes for reservoir prediction based on the gray relational degree (GRD) and support vector machine (SVM). The proposed method has a two-hierarchy structure. In the first hierarchy, the primary selection of seismic attributes is achieved by calculating the GRD between seismic attributes and reservoir parameters, and the GRD between the seismic attributes themselves. The principle of the primary selection is that seismic attributes with higher GRD to the reservoir parameters will have smaller GRD between themselves compared to those with lower GRD to the reservoir parameters. The SVM is then employed in the second hierarchy to perform an interactive error verification using training samples for the purpose of determining the final seismic attributes. A real-world case study was conducted to evaluate the proposed GRD-SVM method. Reliable seismic attributes were selected to predict the coalbed methane (CBM) content in the southern Qinshui basin, China. In the analysis, the instantaneous amplitude, instantaneous bandwidth, instantaneous frequency, and minimum negative curvature were selected, and the predicted CBM content was fundamentally consistent with the measured CBM content. This real-world case study demonstrates that the proposed method is able to effectively select seismic attributes and improve the prediction accuracy. Thus, the proposed GRD-SVM method can be used for the selection of seismic attributes in practice.
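
    A minimal sketch of the gray relational degree between one reservoir parameter and one seismic attribute, using the standard GRA formula with min-max normalization and distinguishing coefficient rho = 0.5; the two-hierarchy selection and SVM verification described above are not reproduced:

        import numpy as np

        def gray_relational_degree(ref, attr, rho=0.5):
            """GRD of a comparison sequence (seismic attribute) against a
            reference sequence (reservoir parameter)."""
            norm = lambda s: (s - s.min()) / (s.max() - s.min())
            delta = np.abs(norm(np.asarray(ref)) - norm(np.asarray(attr)))
            gamma = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
            return float(gamma.mean())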

  16. Product-selective blot: a technique for measuring enzyme activities in large numbers of samples and in native electrophoresis gels

    International Nuclear Information System (INIS)

    Thompson, G.A.; Davies, H.M.; McDonald, N.

    1985-01-01

    A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to the detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin, under conditions favoring binding of the product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application.

  17. ITOUGH2 sample problems

    International Nuclear Information System (INIS)

    Finsterle, S.

    1997-11-01

    This report contains a collection of ITOUGH2 sample problems. It complements the ITOUGH2 User's Guide [Finsterle, 1997a] and the ITOUGH2 Command Reference [Finsterle, 1997b]. ITOUGH2 is a program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis. It is based on the TOUGH2 simulator for non-isothermal multiphase flow in fractured and porous media [Pruess, 1987, 1991a]. The ITOUGH2 User's Guide [Finsterle, 1997a] describes the inverse modeling framework and provides the theoretical background. The ITOUGH2 Command Reference [Finsterle, 1997b] contains the syntax of all ITOUGH2 commands. This report describes a variety of sample problems solved by ITOUGH2. Table 1.1 contains a short description of the seven sample problems discussed in this report. The TOUGH2 equation-of-state (EOS) module that needs to be linked to ITOUGH2 is also indicated. Each sample problem focuses on a few selected issues, shown in Table 1.2. ITOUGH2 input features and the usage of program options are described. Furthermore, interpretations of selected inverse modeling results are given. Problem 1 is a multipart tutorial describing basic ITOUGH2 input files for the main ITOUGH2 application modes; no interpretation of results is given. Problem 2 focuses on non-uniqueness, residual analysis, and correlation structure. Problem 3 illustrates a variety of parameter and observation types, and describes parameter selection strategies. Problem 4 compares the performance of minimization algorithms and discusses model identification. Problem 5 explains how to set up a combined inversion of steady-state and transient data. Problem 6 provides a detailed residual and error analysis. Finally, Problem 7 illustrates how the estimation of model-related parameters may help compensate for errors in that model.

  18. Selection of examples in case-based computer-aided decision systems

    International Nuclear Information System (INIS)

    Mazurowski, Maciej A; Zurada, Jacek M; Tourassi, Georgia D

    2008-01-01

    Case-based computer-aided decision (CB-CAD) systems rely on a database of previously stored, known examples when classifying new, incoming queries. Such systems can be particularly useful since they do not need retraining every time a new example is deposited in the case base. The adaptive nature of case-based systems is well suited to the current trend of continuously expanding digital databases in the medical domain. To maintain efficiency, however, such systems need sophisticated strategies to effectively manage the available evidence database. In this paper, we discuss the general problem of building an evidence database by selecting the most useful examples to store while satisfying existing storage requirements. We evaluate three intelligent techniques for this purpose: genetic algorithm-based selection, greedy selection and random mutation hill climbing. These techniques are compared to a random selection strategy used as the baseline. The study is performed with a previously presented CB-CAD system applied for false positive reduction in screening mammograms. The experimental evaluation shows that when the development goal is to maximize the system's diagnostic performance, the intelligent techniques are able to reduce the size of the evidence database to 37% of the original database by eliminating superfluous and/or detrimental examples while at the same time significantly improving the CAD system's performance. Furthermore, if the case-base size is a main concern, the total number of examples stored in the system can be reduced to only 2-4% of the original database without a decrease in the diagnostic performance. Comparison of the techniques shows that random mutation hill climbing provides the best balance between the diagnostic performance and computational efficiency when building the evidence database of the CB-CAD system.
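
    A minimal sketch of random mutation hill climbing over fixed-size case subsets, with a caller-supplied score function standing in for the CAD system's diagnostic performance on a validation set (a hypothetical interface, not the paper's code):

        import numpy as np

        def rmhc_select(n_cases, budget, score, iters=2000, seed=4):
            """Swap one stored case for one candidate per step; keep the
            mutation whenever the score does not decrease."""
            rng = np.random.default_rng(seed)
            mask = np.zeros(n_cases, dtype=bool)
            mask[rng.choice(n_cases, size=budget, replace=False)] = True
            best = score(mask)
            for _ in range(iters):
                out_i = rng.choice(np.flatnonzero(mask))
                in_i = rng.choice(np.flatnonzero(~mask))
                mask[out_i], mask[in_i] = False, True
                s = score(mask)
                if s >= best:
                    best = s
                else:                          # revert a harmful mutation
                    mask[out_i], mask[in_i] = True, False
            return mask, best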

  19. Information Source Selection and Management Framework in Wireless Sensor Network

    DEFF Research Database (Denmark)

    Tobgay, Sonam; Olsen, Rasmus Løvenstein; Prasad, Ramjee

    2013-01-01

    This paper proposes an information source selection and management framework and presents an algorithm which selects the information source based on the information mismatch probability [1]. The sampling rate for each access is decided according to the maximum allowable power consumption limit. Index Terms: wireless sensor network...

  20. Highly selective single-use fluoride ion optical sensor based on aluminum(III)-salen complex in thin polymeric film

    International Nuclear Information System (INIS)

    Badr, Ibrahim H.A.; Meyerhoff, Mark E.

    2005-01-01

    A highly selective optical sensor for fluoride ion, based on the use of an aluminum(III)-salen complex as an ionophore within a thin polymeric film, is described. The sensor is prepared by embedding the aluminum(III)-salen ionophore and a suitable lipophilic pH-sensitive indicator (ETH-7075) in a plasticized poly(vinyl chloride) (PVC) film. Optical response to fluoride occurs due to fluoride extraction into the polymer via formation of a strong complex with the aluminum(III)-salen species. Co-extraction of protons occurs simultaneously, with protonation of the indicator dye yielding the optical response at 529 nm. Films prepared using dioctylsebacate (DOS) are shown to exhibit better response (e.g., linear range, detection limit, and optical signal stability) compared to those prepared using ortho-nitrophenyloctyl ether (o-NPOE). Films formulated with aluminum(III)-salen and ETH-7075 indicator in 2 DOS:1 PVC exhibit a significantly enhanced selectivity for fluoride over a wide range of lipophilic anions including salicylate, perchlorate, nitrate, and thiocyanate. The optimized films exhibit a sub-micromolar detection limit, using glycine-phosphate buffer, pH 3.00, as the test sample. The response times of the fluoride optical sensing films are in the range of 1-10 min depending on the fluoride ion concentration in the sample. The sensor exhibits very poor reversibility owing to a high co-extraction constant (log K = 8.5 ± 0.4), indicating that it is best employed as a single-use transduction device. The utility of the aluminum(III)-salen based fluoride sensitive films as single-use sensors is demonstrated by casting polymeric films on the bottom of standard polypropylene microtiter plate wells (96 wells/plate). The modified microtiter plate optode format sensors exhibit response characteristics comparable to the classical optode films cast on quartz slides. The modified microtiter plate is utilized for the analysis of fluoride in diluted anti-cavity fluoride rinse