WorldWideScience

Sample records for select samples based

  1. Patch-based visual tracking with online representative sample selection

    Science.gov (United States)

    Ou, Weihua; Yuan, Di; Li, Donghao; Liu, Bin; Xia, Daoxun; Zeng, Wu

    2017-05-01

Occlusion is one of the most challenging problems in visual object tracking. Recently, many discriminative methods have been proposed to deal with this problem. For discriminative methods, it is difficult to select representative samples for updating the target template. In general, the holistic bounding boxes that contain the tracked results are selected as positive samples. However, when the object is occluded, this simple strategy easily introduces noise into the training data set and the target template, causing the tracker to drift away from the target. To address this problem, we propose a robust patch-based visual tracker with online representative sample selection. Unlike previous works, we divide the object and the candidates uniformly into several patches and propose a score function to calculate the score of each patch independently. The average score is then used to determine the optimal candidate. Finally, we utilize the non-negative least squares method to find the representative samples, which are used to update the target template. Experimental results on Object Tracking Benchmark 2013 and on 13 challenging sequences show that the proposed method is robust to occlusion and achieves promising results.
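The representative-sample step above can be sketched with non-negative least squares: express the current template as a non-negative combination of candidate samples and keep the samples that receive non-zero weight. This is a toy illustration on random data (the helper name is hypothetical), not the authors' implementation:

```python
import numpy as np
from scipy.optimize import nnls

def select_representative(samples, template):
    """Pick representative samples via non-negative least squares.

    samples:  (d, n) matrix, one candidate sample per column
    template: (d,) current target template
    Returns (indices of samples with non-zero weight, the weights).
    """
    weights, _ = nnls(samples, template)
    return np.flatnonzero(weights > 1e-8), weights

# Toy data: the template is a mix of candidate samples 0 and 2 only,
# so NNLS should assign all weight to those two columns.
rng = np.random.default_rng(0)
S = rng.random((16, 5))
t = 0.7 * S[:, 0] + 0.3 * S[:, 2]
idx, w = select_representative(S, t)  # idx -> [0, 2]
```

Because the weights are constrained to be non-negative, occluded or noisy candidates that do not help explain the template tend to receive exactly zero weight, which is what makes NNLS attractive for this selection step.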

  2. Selecting Sample Preparation Workflows for Mass Spectrometry-Based Proteomic and Phosphoproteomic Analysis of Patient Samples with Acute Myeloid Leukemia.

    Science.gov (United States)

    Hernandez-Valladares, Maria; Aasebø, Elise; Selheim, Frode; Berven, Frode S; Bruserud, Øystein

    2016-08-22

    Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.

  3. Selecting Sample Preparation Workflows for Mass Spectrometry-Based Proteomic and Phosphoproteomic Analysis of Patient Samples with Acute Myeloid Leukemia

    Directory of Open Access Journals (Sweden)

    Maria Hernandez-Valladares

    2016-08-01

Full Text Available Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.

  4. Sample selection based on kernel-subclustering for the signal reconstruction of multifunctional sensors

    International Nuclear Information System (INIS)

    Wang, Xin; Wei, Guo; Sun, Jinwei

    2013-01-01

Signal reconstruction methods based on inverse modeling for multifunctional sensors have been widely studied in recent years. To improve accuracy, these reconstruction methods have become more and more complicated as the numbers of model parameters and sample points increase. However, another factor that affects reconstruction accuracy, the position of the sample points, has not been studied. A reasonable selection of the sample points could improve the signal reconstruction quality in at least two ways: improved accuracy with the same number of sample points, or the same accuracy obtained with a smaller number of sample points. Both ways are valuable for improving accuracy and decreasing workload, especially for large batches of multifunctional sensors. In this paper, we propose a sample selection method based on kernel-subclustering to distill groupings of the sample data and produce a representation of the data set for inverse modeling. The method calculates the distance between two data points based on the kernel-induced distance instead of the conventional distance. The kernel function generalizes the distance metric by mapping data that are non-separable in the original space into homogeneous groups in a high-dimensional space. In simulations, the method obtained the best results compared with three other methods. (paper)
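The kernel-induced distance that replaces the conventional distance follows directly from the kernel itself, since ||φ(x) − φ(y)||² = k(x,x) − 2k(x,y) + k(y,y). A minimal sketch with an RBF kernel (function names are illustrative, not from the paper):

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    # Gaussian (RBF) kernel k(x, y).
    return float(np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2)))

def kernel_distance(x, y, kernel=rbf):
    # Distance between the images of x and y in the kernel-induced
    # feature space: ||phi(x)-phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y).
    return np.sqrt(kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y))

x = np.array([0.0, 0.0])
y = np.array([3.0, 4.0])
d = kernel_distance(x, y)  # saturates near sqrt(2) for distant points
```

With an RBF kernel, k(x,x) = 1 for every point, so the distance is bounded by √2; nearby points get small distances while all far-away points look roughly equally far, which is what lets the subclustering form homogeneous groups.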

  5. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    Science.gov (United States)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

A typical selective plane illumination microscopy (SPIM) image is fundamentally limited in size by the field of view, a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount in which uncertainties exist for both the translational and the rotational motions. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, which quantifies the constellations of, and measures the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of sample rotation occurring during translational motion in the sample mount is also discussed.

  6. Transfer function design based on user selected samples for intuitive multivariate volume exploration

    KAUST Repository

    Zhou, Liang; Hansen, Charles

    2013-01-01

    Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine these features using segmentation brushes. Interactivity is achieved in our system and different views are tightly linked. Use cases show that our system has been successfully applied for simulation and complicated seismic data sets. © 2013 IEEE.
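The KDE step can be sketched as follows: the density of the user-picked sample values defines an opacity in attribute space. A toy two-variable illustration using `scipy.stats.gaussian_kde`; the picked values are synthetic and the peak-normalisation is an assumption for the sketch, not the paper's exact HDTF construction:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical queried values (two attributes) at user-probed voxels.
rng = np.random.default_rng(1)
picked = rng.normal(loc=[0.2, 0.8], scale=0.05, size=(200, 2))

# A KDE over the picked samples defines a density in attribute space;
# normalising by the density at the probed feature gives an opacity.
kde = gaussian_kde(picked.T)
peak = kde(np.array([[0.2], [0.8]]))[0]

def opacity(values):
    # values: (n, 2) attribute vectors -> opacity in [0, 1]
    return np.clip(kde(values.T) / peak, 0.0, 1.0)

inside = opacity(np.array([[0.2, 0.8]]))   # at the probed feature
outside = opacity(np.array([[0.9, 0.1]]))  # far from it
```

Voxels whose attribute vectors fall inside the picked cluster get high opacity and are rendered; everything else stays transparent, which is the essence of turning a handful of probed samples into a high-dimensional transfer function.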

  7. Transfer function design based on user selected samples for intuitive multivariate volume exploration

    KAUST Repository

    Zhou, Liang

    2013-02-01

    Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine these features using segmentation brushes. Interactivity is achieved in our system and different views are tightly linked. Use cases show that our system has been successfully applied for simulation and complicated seismic data sets. © 2013 IEEE.

  8. A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model

    Directory of Open Access Journals (Sweden)

    Michael Pfaffermayr

    2014-10-01

Full Text Available The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test of whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate, and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.
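A simplified moment check in the same spirit, comparing sample skewness and excess kurtosis with the values implied by normality, can be sketched as below. Note this is the classical Jarque-Bera form, not the GMM pseudo-score statistic derived in the paper, which accounts for the truncation in the Heckman outcome equation:

```python
import numpy as np

def moment_statistic(residuals):
    """Jarque-Bera-style moment statistic: n*(skew^2/6 + ex_kurt^2/24),
    approximately chi-squared(2) under normality."""
    e = residuals - residuals.mean()
    s2 = np.mean(e ** 2)
    skew = np.mean(e ** 3) / s2 ** 1.5
    ex_kurt = np.mean(e ** 4) / s2 ** 2 - 3.0
    return len(residuals) * (skew ** 2 / 6.0 + ex_kurt ** 2 / 24.0)

rng = np.random.default_rng(5)
jb_normal = moment_statistic(rng.normal(size=2000))       # small under H0
jb_skewed = moment_statistic(rng.exponential(size=2000))  # very large
```

Under normality the statistic stays near its chi-squared(2) range, while skewed disturbances blow it up, which is the intuition behind testing third and fourth moments before trusting the Heckman estimates.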

  9. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected, and the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, various automatic selection methods for algorithms and/or hyper-parameter values have been proposed, but existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method and show that, compared to a state-of-the-art automatic selection method, it can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying the high-quality solutions required by many machine learning-based clinical data analysis tasks.
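The progressive sampling idea can be sketched as successive halving: score candidate configurations on progressively larger training subsets and keep the better half each round, so that poor candidates are eliminated cheaply on small samples. A toy sketch; the evaluation function and candidates are illustrative and the Bayesian-optimization machinery of the paper is omitted:

```python
import numpy as np

def progressive_selection(candidates, X, y, evaluate, start=64):
    """Successive-halving sketch of progressive sampling: score the
    candidates on progressively larger training subsets and keep the
    better-scoring half after each round (lower score = better)."""
    survivors = list(candidates)
    n = start
    while len(survivors) > 1 and n <= len(X):
        scored = sorted(survivors, key=lambda c: evaluate(c, X[:n], y[:n]))
        survivors = scored[: max(1, len(scored) // 2)]
        n *= 2  # progressive sampling: double the subset size
    return survivors[0]

# Toy model selection: pick the slope c minimizing MSE of y = c * x.
rng = np.random.default_rng(2)
X = rng.random(512)
y = 2.0 * X
mse = lambda c, X, y: float(np.mean((y - c * X) ** 2))
best = progressive_selection([0.5, 1.0, 2.0, 3.0], X, y, mse)  # -> 2.0
```

Most of the evaluation budget is spent on the few surviving candidates at large sample sizes, which is where the efficiency gain on large clinical data sets comes from.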

  10. Field-based random sampling without a sampling frame: control selection for a case-control study in rural Africa.

    Science.gov (United States)

    Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E

    2001-01-01

    Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.

  11. Acrylamide exposure among Turkish toddlers from selected cereal-based baby food samples.

    Science.gov (United States)

    Cengiz, Mehmet Fatih; Gündüz, Cennet Pelin Boyacı

    2013-10-01

In this study, acrylamide exposure from selected cereal-based baby food samples was investigated among toddlers aged 1-3 years in Turkey. The study comprised three steps. The first step was collecting food consumption data and the toddlers' physical characteristics, such as gender, age and body weight, using a questionnaire administered to parents by a trained interviewer between January and March 2012. The second step was determining the acrylamide levels in the food samples reported by the parents in the questionnaire, using a gas chromatography-mass spectrometry (GC-MS) method. The last step was combining the determined acrylamide levels in the selected food samples with the individual food consumption and body weight data, using a deterministic approach, to estimate acrylamide exposure levels. The mean acrylamide levels of baby biscuits, breads, baby bread-rusks, crackers, biscuits, breakfast cereals and powdered cereal-based baby foods were 153, 225, 121, 604, 495, 290 and 36 μg/kg, respectively. The minimum, mean and maximum acrylamide exposures were estimated to be 0.06, 1.43 and 6.41 μg/kg BW per day, respectively. The foods contributing to acrylamide exposure, ranked from high to low, were bread, crackers, biscuits, baby biscuits, powdered cereal-based baby foods, baby bread-rusks and breakfast cereals. Copyright © 2013 Elsevier Ltd. All rights reserved.
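The deterministic exposure combination reduces to simple arithmetic: exposure (μg/kg BW per day) = Σ levelᵢ × intakeᵢ / body weight. A toy calculation using the mean acrylamide levels quoted above with hypothetical intake figures and a hypothetical toddler body weight:

```python
# Mean acrylamide levels from the abstract (ug per kg of food) combined
# with hypothetical daily intakes and a hypothetical toddler body weight.
levels_ug_per_kg = {"bread": 225, "crackers": 604, "baby_biscuits": 153}
intake_kg_per_day = {"bread": 0.050, "crackers": 0.010, "baby_biscuits": 0.015}
body_weight_kg = 12.0

# Deterministic estimate: sum(level_i * intake_i) / body weight.
exposure_ug_per_kg_bw = sum(
    levels_ug_per_kg[f] * intake_kg_per_day[f] for f in levels_ug_per_kg
) / body_weight_kg  # ~1.63 ug/kg BW per day
```

With these invented intakes the estimate lands near the study's reported mean of 1.43 μg/kg BW per day, which is only a coincidence of the chosen numbers; the point is the shape of the calculation.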

  12. Selective information sampling

    Directory of Open Access Journals (Sweden)

    Peter A. F. Fraser-Mackenzie

    2009-06-01

Full Text Available This study investigates the amount and valence of information selected during single-item evaluation. One hundred and thirty-five participants evaluated a cell phone by reading hypothetical customer reports. Some participants were first asked to provide a preliminary rating based on a picture of the phone and some technical specifications. The participants who were given the customer reports only after they made a preliminary rating exhibited valence bias in their selection of reports. In contrast, the participants who did not make an initial rating sought subsequent information in a more balanced, albeit still selective, manner. The preliminary raters used the least amount of information in their final decision, resulting in faster decision times. The study appears to support the notion that selective exposure is utilized in order to develop cognitive coherence.

  13. A genetic algorithm-based framework for wavelength selection on sample categorization.

    Science.gov (United States)

    Anzanello, Michel J; Yamashita, Gabrielli; Marcelo, Marcelo; Fogliatto, Flávio S; Ortiz, Rafael S; Mariotti, Kristiane; Ferrão, Marco F

    2017-08-01

In forensic and pharmaceutical scenarios, the application of chemometrics and optimization techniques has unveiled common and peculiar features of seized medicine and drug samples, helping investigative forces to track illegal operations. This paper proposes a novel framework aimed at identifying relevant subsets of attenuated total reflectance Fourier transform infrared (ATR-FTIR) wavelengths for classifying samples into two classes, for example authentic or forged categories in the case of medicines, or salt or base form in cocaine analysis. In the first step of the framework, the ATR-FTIR spectra were partitioned into equidistant intervals and the k-nearest neighbour (KNN) classification technique was applied to each interval to assign samples to the proper classes. In the next step, the selected intervals were refined through the genetic algorithm (GA), which identifies a limited number of wavelengths from the previously selected intervals with the aim of maximizing classification accuracy. When applied to Cialis®, Viagra®, and cocaine ATR-FTIR datasets, the proposed method substantially decreased the number of wavelengths needed for categorization and increased the classification accuracy. From a practical perspective, the proposed method provides investigative forces with valuable information for monitoring the illegal production of drugs and medicines. In addition, focusing on a reduced subset of wavelengths allows the development of portable devices capable of testing the authenticity of samples during police checks, avoiding the need for later laboratory analyses and reducing equipment expenses. Theoretically, the proposed GA-based approach yields more refined solutions than current methods relying on interval approaches, which tend to retain irrelevant wavelengths in the selected intervals. Copyright © 2016 John Wiley & Sons, Ltd.
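The first step of the framework — splitting the spectra into equidistant intervals and scoring each with a KNN classifier — can be sketched as below. Synthetic spectra and a leave-one-out 1-NN scorer stand in for the paper's data and KNN setup:

```python
import numpy as np

def loo_1nn_accuracy(X, y):
    # Leave-one-out 1-nearest-neighbour classification accuracy.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)
    return float(np.mean(y[np.argmin(D, axis=1)] == y))

def score_intervals(spectra, y, n_intervals):
    # Split the wavelength axis into equidistant intervals and score
    # each interval with a nearest-neighbour classifier.
    chunks = np.array_split(np.arange(spectra.shape[1]), n_intervals)
    return [loo_1nn_accuracy(spectra[:, c], y) for c in chunks]

# Synthetic "spectra": only wavelengths 20-29 separate the two classes,
# so the third of six intervals should score best.
rng = np.random.default_rng(3)
X = rng.normal(size=(40, 60))
y = np.repeat([0, 1], 20)
X[y == 1, 20:30] += 3.0
scores = score_intervals(X, y, n_intervals=6)
```

The best-scoring intervals would then be handed to the GA, which searches for the small subset of individual wavelengths inside them that maximizes classification accuracy.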

  14. Polymeric membrane sensors based on Cd(II) Schiff base complexes for selective iodide determination in environmental and medicinal samples.

    Science.gov (United States)

    Singh, Ashok Kumar; Mehtab, Sameena

    2008-01-15

Two cadmium chelates of Schiff bases, N,N'-bis(salicylidene)-1,4-diaminobutane (Cd-S(1)) and N,N'-bis(salicylidene)-3,4-diaminotoluene (Cd-S(2)), have been synthesized and explored as ionophores for preparing PVC-based membrane sensors selective to the iodide ion. Potentiometric investigations indicate a high affinity of these receptors for the iodide ion. Polyvinyl chloride (PVC)-based membranes of Cd-S(1) and Cd-S(2), using hexadecyltrimethylammonium bromide (HTAB) as cation discriminator and o-nitrophenyloctyl ether (o-NPOE), dibutylphthalate (DBP), acetophenone (AP) and tributylphosphate (TBP) as plasticizing solvent mediators, were prepared and investigated as iodide-selective sensors. The best performance was shown by the membrane of composition (w/w) Cd-S(1) (7%):PVC (31%):DBP (60%):HTAB (2%). The sensor works well over a wide concentration range, 5.3×10⁻⁷ to 1.0×10⁻² M, with Nernstian compliance (59.2 mV decade⁻¹ of activity) within the pH range 2.5-9.0, has a response time of 11 s, and shows good selectivity for iodide over a number of anions. The sensor exhibits an adequate lifetime (3 months) with good reproducibility (S.D. ±0.24 mV) and could be used successfully for the determination of iodide content in environmental water and mouthwash samples.

  15. A quick method based on SIMPLISMA-KPLS for simultaneously selecting outlier samples and informative samples for model standardization in near infrared spectroscopy

    Science.gov (United States)

    Li, Li-Na; Ma, Chang-Ming; Chang, Ming; Zhang, Ren-Cheng

    2017-12-01

A novel method based on SIMPLe-to-use Interactive Self-modeling Mixture Analysis (SIMPLISMA) and Kernel Partial Least Squares (KPLS), named SIMPLISMA-KPLS, is proposed in this paper for the simultaneous selection of outlier samples and informative samples. It is a quick algorithm for model standardization (also known as model transfer) in near infrared (NIR) spectroscopy. NIR data from corn samples, analyzed for protein content, are used to evaluate the proposed method. Piecewise direct standardization (PDS) is employed in model transfer, and SIMPLISMA-PDS-KPLS and KS-PDS-KPLS are compared in terms of protein-content prediction accuracy and the calculation speed of each algorithm. The conclusions are that SIMPLISMA-KPLS can be utilized as an alternative sample selection method for model transfer. Although it has accuracy similar to Kennard-Stone (KS), it differs from KS in that it employs concentration information in the selection procedure. This ensures that analyte information is involved in the analysis and that the spectra (X) of the selected samples are correlated with the concentrations (y). It can also be used for simultaneous outlier elimination through validation of the calibration. According to the run-time statistics, the sample selection process is more rapid when using KPLS. The quick SIMPLISMA-KPLS algorithm is beneficial for improving the speed of online measurement using NIR spectroscopy.
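The Kennard-Stone benchmark mentioned above is itself easy to sketch: start from the two most distant samples, then repeatedly add the sample farthest from the already-chosen set, so the selection spreads over the spectral space. A minimal illustration on toy 2-D points:

```python
import numpy as np

def kennard_stone(X, k):
    """Kennard-Stone selection: start from the two most distant samples,
    then repeatedly add the sample whose minimum distance to the chosen
    set is largest, spreading the selection over the data space."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    i, j = np.unravel_index(np.argmax(D), D.shape)
    chosen = [int(i), int(j)]
    while len(chosen) < k:
        rest = [p for p in range(len(X)) if p not in chosen]
        dmin = D[np.ix_(rest, chosen)].min(axis=1)
        chosen.append(rest[int(np.argmax(dmin))])
    return chosen

# Three tight clusters: KS should pick one sample from each cluster.
pts = np.array([[0, 0], [0.1, 0], [5, 5], [5.1, 5], [10, 0], [10, 0.1]], float)
sel = kennard_stone(pts, 3)
```

Note that KS uses only the spectra (X); the point of SIMPLISMA-KPLS is precisely that it also brings the concentration information (y) into the selection.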

  16. Selection bias in population-based cancer case-control studies due to incomplete sampling frame coverage.

    Science.gov (United States)

    Walsh, Matthew C; Trentham-Dietz, Amy; Gangnon, Ronald E; Nieto, F Javier; Newcomb, Polly A; Palta, Mari

    2012-06-01

Increasing numbers of individuals are choosing to opt out of population-based sampling frames due to privacy concerns. This is especially a problem in the selection of controls for case-control studies, as the cases often arise from relatively complete population-based registries, whereas control selection requires a sampling frame. If opt-out is also related to risk factors, bias can arise. We linked breast cancer cases who reported having a valid driver's license from the 2004-2008 Wisconsin women's health study (N = 2,988) with a master list of licensed drivers from the Wisconsin Department of Transportation (WDOT). This master list excludes Wisconsin drivers who requested that their information not be sold by the state. Multivariate-adjusted selection probability ratios (SPRs) were calculated to estimate the potential bias of using this driver's license sampling frame to select controls. A total of 962 cases (32%) had opted out of the WDOT sampling frame. Cases aged <40 (SPR = 0.90), with income either unreported (SPR = 0.89) or greater than $50,000 (SPR = 0.94), with lower parity (SPR = 0.96 per one-child decrease), and with hormone use (SPR = 0.93) were significantly less likely to be covered by the WDOT sampling frame (α = 0.05 level). Our results indicate the potential for selection bias due to differential opt-out between various demographic and behavioral subgroups of controls. As selection bias may differ by exposure and study base, the assessment of potential bias needs to be ongoing. SPRs can be used to predict the direction of bias when cases and controls stem from different sampling frames in population-based case-control studies.
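An unadjusted selection probability ratio is just a ratio of coverage proportions. The study's SPRs are multivariate-adjusted; this sketch with hypothetical counts only illustrates the ratio itself:

```python
def selection_probability_ratio(covered_group, total_group, covered_ref, total_ref):
    # Ratio of sampling-frame coverage in a subgroup versus a reference
    # group; values below 1 mean the subgroup is under-covered.
    return (covered_group / total_group) / (covered_ref / total_ref)

# Hypothetical counts: 60 of 100 younger cases vs 68 of 100 older cases
# are found in the driver's-license frame.
spr = selection_probability_ratio(60, 100, 68, 100)  # < 1: under-coverage
```

An SPR below 1 for a subgroup that is also associated with the exposure of interest signals the direction in which control selection from that frame would bias the odds ratio.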

  17. Recurrent pain is associated with decreased selective attention in a population-based sample.

    Science.gov (United States)

    Gijsen, C P; Dijkstra, J B; van Boxtel, M P J

    2011-01-01

Studies that have examined the impact of pain on cognitive functioning in the general population are scarce. In the present study we assessed the predictive value of recurrent pain for cognitive functioning in a population-based study (N=1400). Furthermore, we investigated the effect of pain on cognitive functioning in individuals with specific pain complaints (i.e. back pain, gastric pain, muscle pain and headache). Cognitive functioning was assessed using the Stroop Color-Word Interference test (Stroop interference), the Letter-Digit-Substitution test (LDST) and the Visual Verbal Learning Task (VVLT). Pain was measured with the COOP/WONCA pain scale (Dartmouth Primary Care Cooperative Information Project/World Organization of National Colleges, Academies, and Academic Associations of General Practice/Family Physicians). We controlled for the effects of age, sex, level of education and depressive symptoms. Pain was shown to have a negative impact on Stroop interference performance, but not on the VVLT or the LDST. This indicates that subjects who reported extreme pain had more problems with selective attention and were more easily distracted. Effects were in general larger in the specific pain groups than the associations found in the total group. Implications of these findings are discussed. The experience of recurrent pain has a negative influence on selective attention in a healthy population. Copyright © 2010 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  18. Colorimetric biomimetic sensor systems based on molecularly imprinted polymer membranes for highly-selective detection of phenol in environmental samples

    Directory of Open Access Journals (Sweden)

    Sergeyeva T. A.

    2014-05-01

Full Text Available Aim. Development of an easy-to-use colorimetric sensor system for fast and accurate detection of phenol in environmental samples. Methods. Technique of molecular imprinting, method of in situ polymerization of molecularly imprinted polymer membranes. Results. The proposed sensor is based on free-standing molecularly imprinted polymer (MIP) membranes, synthesized by in situ polymerization and having in their structure artificial binding sites capable of selective phenol recognition. The quantitative detection of phenol, selectively adsorbed by the MIP membranes, is based on its reaction with 4-aminoantipyrine, which gives a pink-colored product. The intensity of staining of the MIP membrane is proportional to the phenol concentration in the analyzed sample. Phenol can be detected within the range 50 nM–10 mM with a limit of detection of 50 nM, which corresponds to the concentrations that have to be detected in natural and waste waters in accordance with environmental protection standards. Stability of the MIP-membrane-based sensors was assessed during 12 months of storage at room temperature. Conclusions. The sensor system provides highly selective and sensitive detection of phenol in both model and real (drinking, natural, and waste) water samples. As compared to traditional methods of phenol detection, the proposed system is characterized by simplicity of operation and can be used in non-laboratory conditions.

  19. Study on the effects of sample selection on spectral reflectance reconstruction based on the algorithm of compressive sensing

    International Nuclear Information System (INIS)

    Zhang, Leihong; Liang, Dong

    2016-01-01

To address the problem that reconstruction efficiency and precision are not high, in this paper different samples are selected for reconstructing spectral reflectance, and a new spectral reflectance reconstruction method based on the compressive sensing algorithm is provided. Four matte color cards with different numbers of colors — the ColorChecker Color Rendition Chart, the ColorChecker SG, the Pantone copperplate-paper spot color card, and the Munsell color card — are chosen as training samples; the spectral image is reconstructed with the compressive sensing algorithm and with the pseudo-inverse and Wiener methods, and the results are compared. The spectral reconstruction methods are evaluated by root mean square error and color difference accuracy. The experiments show that, under the same reconstruction conditions, the cumulative contribution rate and color difference of the Munsell color card are better than those of the other three color cards, and that the accuracy of the spectral reconstruction is affected by the number of colors in the training sample. The key point is that the uniformity and representativeness of the training sample selection have important significance for reconstruction. In this paper, the influence of sample selection on spectral image reconstruction is studied. The precision of spectral reconstruction based on the compressive sensing algorithm is higher than that of the traditional spectral reconstruction algorithms. The MATLAB simulation results show that spectral reconstruction precision and efficiency are affected by the different numbers of colors in the training samples. (paper)
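Compressive-sensing reconstruction recovers a sparse representation from a small number of measurements. A minimal sketch using orthogonal matching pursuit on a synthetic 3-sparse "reflectance" vector; this is illustrative only, and the paper's reconstruction operates on real spectral data rather than a random dictionary:

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit by least squares."""
    residual, support = b.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# 64 random projections of a 3-sparse, 128-dimensional vector.
rng = np.random.default_rng(4)
A = rng.normal(size=(64, 128)) / np.sqrt(64)
x_true = np.zeros(128)
x_true[[5, 60, 100]] = [1.0, -0.8, 0.5]
x_hat = omp(A, A @ x_true, k=3)
```

Because the unknown vector is sparse, far fewer measurements than unknowns suffice for exact recovery, which is why the choice and number of training color samples matters so much for the reconstruction quality discussed above.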

  20. A simple highly sensitive and selective aptamer-based colorimetric sensor for environmental toxins microcystin-LR in water samples.

    Science.gov (United States)

    Li, Xiuyan; Cheng, Ruojie; Shi, Huijie; Tang, Bo; Xiao, Hanshuang; Zhao, Guohua

    2016-03-05

A simple and highly sensitive aptamer-based colorimetric sensor was developed for the selective detection of microcystin-LR (MC-LR). The aptamer (ABA) was employed as the recognition element, binding MC-LR with high affinity, while gold nanoparticles (AuNPs) served as the sensing material, whose plasmon resonance absorption peaks red-shift upon binding of the targets at a high concentration of sodium chloride. With the addition of MC-LR, the random-coil aptamer adsorbed on the AuNPs folded into an ordered structure to form MC-LR-aptamer complexes and detached from the AuNP surface, leading to aggregation of the AuNPs; the color changed from red to blue due to interparticle plasmon coupling. Results showed that the aptamer-based colorimetric sensor exhibited rapid and sensitive detection of MC-LR, with a linear range from 0.5 nM to 7.5 μM and a detection limit of 0.37 nM. Meanwhile, pollutants that usually coexist with MC-LR in contaminated water samples did not interfere with its detection. A mechanism was also proposed, suggesting that the high-affinity interaction between the aptamer and MC-LR significantly enhances the sensitivity and selectivity of MC-LR detection. The established method was also applied to the analysis of real water samples, where excellent sensitivity and selectivity were likewise obtained. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. An improved selective sampling method

    International Nuclear Information System (INIS)

    Miyahara, Hiroshi; Iida, Nobuyuki; Watanabe, Tamaki

    1986-01-01

The coincidence methods currently used for the accurate activity standardisation of radionuclides require dead time and resolving time corrections, which become increasingly uncertain as count rates exceed about 10 kcps. To reduce the dependence on such corrections, Muller, in 1981, proposed the selective sampling method, using a fast multichannel analyser (50 ns ch⁻¹) to measure the count rates. It is, in many ways, more convenient and potentially more reliable to replace the MCA with scalers, and a circuit is described employing five scalers, two of which serve to measure the background correction. Comparisons of our new method with the coincidence method for measuring the activity of ⁶⁰Co sources yielded agreement within statistical uncertainties. (author)
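The growing weight of dead-time corrections at high count rates can be seen from the standard non-paralyzable correction n = m / (1 − mτ), a textbook formula rather than anything specific to the paper's circuit:

```python
def true_rate(measured_cps, dead_time_s):
    # Non-paralyzable dead-time correction: n = m / (1 - m * tau).
    return measured_cps / (1.0 - measured_cps * dead_time_s)

# At 10 kcps with a 5-microsecond dead time, the correction is already
# about 5% -- and the uncertainty of tau propagates into the result.
m = 10_000.0
n = true_rate(m, 5e-6)
```

Since the correction grows nonlinearly with m·τ, any uncertainty in the dead time τ is amplified at high count rates, which is exactly the dependence that selective sampling is designed to reduce.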

  2. Sample Selection for Training Cascade Detectors.

    Science.gov (United States)

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains, on average, a better partial AUC and a smaller standard deviation than the other cascade detectors compared.
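    The stage-to-stage negative selection described above can be sketched in a few lines: score candidate negatives with the current stage, keep the false positives it wrongly accepts, and feed the highest-scoring ones to the next stage. This is a simplified reading of the idea; the `stage_score` classifier, the synthetic windows, and all thresholds below are hypothetical stand-ins, not the paper's detector.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage_score(windows):
    """Hypothetical stage classifier: higher score = more object-like.
    A stand-in for the sum of boosted weak-classifier responses."""
    return windows.mean(axis=1)

def select_hard_negatives(neg_windows, threshold, k):
    """Keep the k most informative false positives of the current stage."""
    scores = stage_score(neg_windows)
    false_pos = np.flatnonzero(scores >= threshold)   # wrongly accepted negatives
    # "most informative" = the false positives the stage is most confident about
    hardest = false_pos[np.argsort(scores[false_pos])[::-1][:k]]
    return neg_windows[hardest]

neg_windows = rng.random((10000, 16))     # synthetic negative patches
next_stage_negs = select_hard_negatives(neg_windows, threshold=0.55, k=500)
print(next_stage_negs.shape)              # (500, 16)
```

Feeding only these hard negatives to the next stage keeps the positive/negative ratio balanced while concentrating training effort on the mistakes the cascade has not yet learned to reject.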

  3. Sample Selection for Training Cascade Detectors.

    Directory of Open Access Journals (Sweden)

    Noelia Vállez

    Full Text Available Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains, on average, a better partial AUC and a smaller standard deviation than the other cascade detectors compared.

  4. Highly selective ionic liquid-based microextraction method for sensitive trace cobalt determination in environmental and biological samples

    International Nuclear Information System (INIS)

    Berton, Paula; Wuilloud, Rodolfo G.

    2010-01-01

    A simple and rapid dispersive liquid-liquid microextraction procedure based on an ionic liquid (IL-DLLME) was developed for the selective determination of cobalt (Co) with electrothermal atomic absorption spectrometry (ETAAS) detection. Cobalt was initially complexed with the 1-nitroso-2-naphthol (1N2N) reagent at pH 4.0. The IL-DLLME procedure was then performed using a few microliters of the room temperature ionic liquid (RTIL) 1-hexyl-3-methylimidazolium hexafluorophosphate [C6mim][PF6] as extractant, with methanol as the dispersant solvent. After the microextraction procedure, the Co-enriched RTIL phase was solubilized in methanol and directly injected into the graphite furnace. The effects of several variables on Co-1N2N complex formation, extraction with the dispersed RTIL phase, and analyte detection with ETAAS were carefully studied in this work. An enrichment factor of 120 was obtained with only 6 mL of sample solution under optimal experimental conditions. The resulting limit of detection (LOD) was 3.8 ng L⁻¹, while the relative standard deviation (RSD) was 3.4% (at the 1 μg L⁻¹ Co level, n = 10), calculated from the peak height of the absorbance signals. The accuracy of the proposed methodology was tested by analysis of a certified reference material. The method was successfully applied to the determination of Co in environmental and biological samples.

  5. Microbiological sampling plan based on risk classification to verify supplier selection and production of served meals in food service operation.

    Science.gov (United States)

    Lahou, Evy; Jacxsens, Liesbeth; Van Landeghem, Filip; Uyttendaele, Mieke

    2014-08-01

    Food service operations are confronted with a diverse range of raw materials and served meals. The implementation of a microbial sampling plan in the framework of verification of suppliers and of their own production process (functionality of the prerequisite and HACCP programs) demands selection of food products and sampling frequencies. However, these are often selected without a well-described, scientifically underpinned sampling plan. Therefore, an approach to setting up a focused sampling plan, enabled by a microbial risk categorization of food products, for both incoming raw materials and meals served to consumers is presented. The sampling plan was implemented as a case study during a one-year period in an institutional food service operation to test the feasibility of the chosen approach. This resulted in 123 samples of raw materials and 87 samples of meal servings (focused on high-risk-categorized food products), which were analyzed for spoilage bacteria, hygiene indicators and foodborne pathogens. Although sampling plans are intrinsically limited in assessing the quality and safety of sampled foods, the plan was shown to be useful in revealing major non-compliances and opportunities to improve the food safety management system in place. Points of attention deduced in the case study were control of Listeria monocytogenes in raw meat spread and raw fish, as well as the overall microbial quality of served sandwiches and salads. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. 40 CFR 89.507 - Sample selection.

    Science.gov (United States)

    2010-07-01

    ... Auditing § 89.507 Sample selection. (a) Engines comprising a test sample will be selected at the location...). However, once the manufacturer ships any test engine, it relinquishes the prerogative to conduct retests...

  7. 40 CFR 90.507 - Sample selection.

    Science.gov (United States)

    2010-07-01

    ... Auditing § 90.507 Sample selection. (a) Engines comprising a test sample will be selected at the location... manufacturer ships any test engine, it relinquishes the prerogative to conduct retests as provided in § 90.508...

  8. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    Directory of Open Access Journals (Sweden)

    D. Ramyachitra

    2015-09-01

    Full Text Available Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples. The difficulty thus lies in the high dimensionality of the data combined with the small sample size. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  9. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    Science.gov (United States)

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples. The difficulty thus lies in the high dimensionality of the data combined with the small sample size. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.
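    As a rough illustration of swarm-based gene selection, the sketch below runs a plain binary PSO (with a sigmoid transfer function) over a synthetic two-class expression matrix. It is a generic stand-in, not the interval-valued IVPSO of the paper: the fitness criterion, the data, and all constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "expression" data: 60 samples x 100 genes, two classes;
# the first 10 genes carry a class-dependent shift.
n_samples, n_genes = 60, 100
informative = np.arange(10)
y = rng.integers(0, 2, n_samples)
X = rng.normal(size=(n_samples, n_genes))
X[:, informative] += 2.0 * y[:, None]

def fitness(mask):
    """Hypothetical criterion: mean between-class separation of the
    selected genes, minus a small penalty on subset size."""
    if mask.sum() == 0:
        return -np.inf
    sel = X[:, mask.astype(bool)]
    diff = np.abs(sel[y == 1].mean(axis=0) - sel[y == 0].mean(axis=0))
    return diff.mean() - 0.01 * mask.sum()

# Plain binary PSO: particles are 0/1 gene masks.
n_particles, n_iter = 20, 40
pos = (rng.random((n_particles, n_genes)) < 0.5).astype(float)
vel = rng.normal(scale=0.1, size=(n_particles, n_genes))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()
init_fit = pbest_fit.max()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, n_genes))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))            # velocity -> bit probability
    pos = (rng.random((n_particles, n_genes)) < prob).astype(float)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

selected = np.flatnonzero(gbest)
print(len(selected))
```

Because personal bests never degrade, the final global best is at least as fit as the best initial random mask; the size penalty in the fitness pushes the swarm toward small subsets dominated by the discriminative genes.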

  10. Memory and selective attention in multiple sclerosis: cross-sectional computer-based assessment in a large outpatient sample.

    Science.gov (United States)

    Adler, Georg; Lembach, Yvonne

    2015-08-01

    Cognitive impairments may have a severe impact on everyday functioning and quality of life of patients with multiple sclerosis (MS). However, there are some methodological problems in their assessment, and only a few studies allow a representative estimate of the prevalence and severity of cognitive impairments in MS patients. We applied a computer-based method, the memory and attention test (MAT), in 531 outpatients with MS, who were assessed at nine neurological practices or specialized outpatient clinics. The findings were compared with those obtained in an age-, sex- and education-matched control group of 84 healthy subjects. Episodic short-term memory was substantially decreased in the MS patients; about 20% of them scored more than two standard deviations below the mean of the control group. The episodic short-term memory score was negatively correlated with the EDSS score. Minor but still significant impairments in the MS patients were found for verbal short-term memory, episodic working memory and selective attention. The computer-based MAT was found to be useful for routine assessment of cognition in MS outpatients.

  11. Tripodal chelating ligand-based sensor for selective determination of Zn(II) in biological and environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Kumar Singh, Ashok; Mehtab, Sameena; Singh, Udai P.; Aggarwal, Vaibhave [Indian Institute of Technology-Roorkee, Department of Chemistry, Roorkee (India)

    2007-08-15

    Potassium hydrotris(N-tert-butyl-2-thioimidazolyl)borate [KTt{sup t-Bu}] and potassium hydrotris(3-tert-butyl-5-isopropyl-1-pyrazolyl)borate [KTp{sup t-Bu,i-Pr}] have been synthesized and evaluated as ionophores for the preparation of a poly(vinyl chloride) (PVC) membrane sensor for Zn(II) ions. The effects of different plasticizers, viz. benzyl acetate (BA), dioctyl phthalate (DOP), dibutyl phthalate (DBP), tributyl phosphate (TBP), and o-nitrophenyl octyl ether (o-NPOE), and of the anion excluders sodium tetraphenylborate (NaTPB), potassium tetrakis(p-chlorophenyl)borate (KTpClPB), and oleic acid (OA) were studied to improve the performance of the membrane sensor. The best performance was obtained from a sensor with a [KTt{sup t-Bu}] membrane of composition (mg): [KTt{sup t-Bu}] (15), PVC (150), DBP (275), and NaTPB (4). This sensor had a Nernstian response (slope 29.4 {+-} 0.2 mV per decade of activity) for Zn{sup 2+} ions over a wide concentration range (1.4 x 10{sup -7} to 1.0 x 10{sup -1} mol L{sup -1}) with a limit of detection of 9.5 x 10{sup -8} mol L{sup -1}. It had a relatively fast response time (12 s) and could be used for 3 months without substantial change of the potential. The membrane sensor had very good selectivity for Zn{sup 2+} ions over a wide variety of other cations and could be used over a working pH range of 3.5-7.8. The sensor was also found to work satisfactorily in partially non-aqueous media and could be successfully used for the estimation of zinc at trace levels in biological and environmental samples. (orig.)

  12. Titanium(III) cation-selective electrode based on a synthesized tris(2-pyridyl)methylamine ionophore and its application in water samples

    Science.gov (United States)

    Rezayi, Majid; Karazhian, Reza; Abdollahi, Yadollah; Narimani, Leila; Sany, Seyedeh Belin Tavakoly; Ahmadzadeh, Saeid; Alias, Yatimah

    2014-04-01

    The introduction of low-detection-limit ion selective electrodes (ISEs) may well pave the way for the determination of trace cationic targets. This research focuses on the detection of the titanium (III) cation using a new PVC-membrane sensor based on a synthesized tris(2-pyridyl)methylamine (tpm) ionophore. The application and validation of the proposed sensor were carried out using potentiometric titration, inductively coupled plasma atomic emission spectrometry (ICP-AES), and atomic absorption spectrometry (AAS). The membrane sensor exhibited a Nernstian response to the titanium (III) cation over a concentration range of 1.0 × 10⁻⁶ to 1.0 × 10⁻² M and a pH range from 1 to 2.5. The Nernstian slope, the limit of detection (LOD), and the response time (t95%) of the proposed sensor were 29.17 ± 0.24 mV/decade, 7.9 × 10⁻⁷ M, and 20 s, respectively. The direct determination of 4-39 μg/mL of titanium (III) standard solution showed an average recovery of 94.60% and a mean relative standard deviation of 1.8% at 100.0 μg/mL. Finally, the electrode was successfully used as an end-point indicator in the potentiometric titration of titanium (III) with EDTA solutions.

  13. The profile of selected samples of Croatian athletes based on the items of sport jealousy scale (SJS

    Directory of Open Access Journals (Sweden)

    Sindik Joško

    2016-01-01

    Full Text Available The role of jealousy in sport, as a negative emotional reaction accompanied by thoughts of inadequacy when compared to others, is the issue of this article. The study aimed to define characteristic profiles of Croatian athletes, based on single items of the Sport Jealousy Scale (SJS-II), labeled by several variables: gender, type of sport, and age group. A purposive sample of 73 athletes competing at Croatian championships in different sports (football, bowling, volleyball and handball) was examined with the Croatian version of the SJS-II. The three clusters obtained are similarly balanced in the number of cases per cluster. Put simply, the clusters clearly differentiate the most jealous, moderately jealous and slightly/low jealous athletes. Among the features of the athletes in each cluster: the most jealous (first) cluster contains athletes from team sports, women and older athletes. Females, bowling athletes, athletes from individual (coactive) sports and the youngest athletes are the least jealous (grouped in the third cluster).

  14. Electroanalytical performance of a terbium(III)-selective sensor based on a neutral ionophore in environmental and medicinal samples

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, V.K.; Singh, A.K.; Gupta, Barkha [Indian Institute of Technology-Roorkee, Department of Chemistry, Roorkee (India)

    2008-04-15

    A new highly selective terbium(III) electrode was prepared with a polymeric film doped using S-2-benzothiazolyl-2-amino-{alpha}-(methoxyimino)-4-thiazolethiol acetate as the electroactive material, benzyl acetate (BA) as plasticizer, and potassium tetrakis(4-chlorophenyl) borate (KTpClPB) as anionic site, in the percentage ratio 3.17:1.58:63.4:31.7 (ionophore-KTpClPB-BA-PVC, w/w). The electrode exhibited a linear response with a near-Nernstian slope of 19.5 mV/decade within the concentration range 1.5 x 10{sup -7}-1.0 x 10{sup -2} M terbium ions, a working pH range from 2.0 to 8.0, a fast response time of 10 s, and satisfactory reproducibility. The limit of detection was 9.3 x 10{sup -8} M. The results show that this electrode can be used in ethanol media up to 30% (v/v) concentration without interference. It can be used for 3 months without any considerable divergence in the potentials. Selectivity coefficients for terbium(III) with respect to many cations were investigated. The electrode is highly selective for terbium(III) ions over a large number of monovalent, bivalent, and trivalent cations, a valuable property of the proposed electrode. The stability constant of the ionophore towards Tb{sup 3+} ions was determined with the sandwich membrane method. The electrode was successfully used as an indicator electrode in the potentiometric determination of terbium(III) ions with EDTA, and in direct determination in tap water and binary mixtures, with quantitative results. The utility of the proposed electrode was also demonstrated in the presence of ionic and nonionic surfactants and in the presence of fluoride ions in four pharmaceutical (mouthwash) preparations. (orig.)

  15. Electroanalytical performance of a terbium(III)-selective sensor based on a neutral ionophore in environmental and medicinal samples

    International Nuclear Information System (INIS)

    Gupta, V.K.; Singh, A.K.; Gupta, Barkha

    2008-01-01

    A new highly selective terbium(III) electrode was prepared with a polymeric film doped using S-2-benzothiazolyl-2-amino-α-(methoxyimino)-4-thiazolethiol acetate as the electroactive material, benzyl acetate (BA) as plasticizer, and potassium tetrakis(4-chlorophenyl) borate (KTpClPB) as anionic site, in the percentage ratio 3.17:1.58:63.4:31.7 (ionophore-KTpClPB-BA-PVC, w/w). The electrode exhibited a linear response with a near-Nernstian slope of 19.5 mV/decade within the concentration range 1.5 x 10⁻⁷-1.0 x 10⁻² M terbium ions, a working pH range from 2.0 to 8.0, a fast response time of 10 s, and satisfactory reproducibility. The limit of detection was 9.3 x 10⁻⁸ M. The results show that this electrode can be used in ethanol media up to 30% (v/v) concentration without interference. It can be used for 3 months without any considerable divergence in the potentials. Selectivity coefficients for terbium(III) with respect to many cations were investigated. The electrode is highly selective for terbium(III) ions over a large number of monovalent, bivalent, and trivalent cations, a valuable property of the proposed electrode. The stability constant of the ionophore towards Tb³⁺ ions was determined with the sandwich membrane method. The electrode was successfully used as an indicator electrode in the potentiometric determination of terbium(III) ions with EDTA, and in direct determination in tap water and binary mixtures, with quantitative results. The utility of the proposed electrode was also demonstrated in the presence of ionic and nonionic surfactants and in the presence of fluoride ions in four pharmaceutical (mouthwash) preparations. (orig.)
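    The "near-Nernstian slope of 19.5 mV/decade" reported above can be sanity-checked against the theoretical Nernstian slope 2.303RT/(zF) for a trivalent cation at 25 °C. A quick calculation, using standard physical constants (not values from the paper):

```python
# Theoretical Nernstian slope for a trivalent ion at 25 degC.
R = 8.314       # gas constant, J mol^-1 K^-1
T = 298.15      # temperature, K
F = 96485.0     # Faraday constant, C mol^-1
z = 3           # charge of Tb(III)

slope_mV = 2.303 * R * T / (z * F) * 1000   # volts -> millivolts
print(round(slope_mV, 2))   # ~19.72 mV/decade, close to the reported 19.5
```

The closeness of the measured 19.5 mV/decade to the ideal ~19.7 mV/decade is what justifies calling the response "near-Nernstian" for a trivalent ion.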

  16. Highly selective solid phase extraction and preconcentration of Azathioprine with nano-sized imprinted polymer based on multivariate optimization and its trace determination in biological and pharmaceutical samples

    International Nuclear Information System (INIS)

    Davarani, Saied Saeed Hosseiny; Rezayati zad, Zeinab; Taheri, Ali Reza; Rahmatian, Nasrin

    2017-01-01

    In this research, selective separation and determination of Azathioprine is demonstrated for the first time, using a molecularly imprinted polymer as the solid-phase extraction adsorbent and spectrophotometric measurement at λmax 286 nm. The selective molecularly imprinted polymer was produced using Azathioprine as the template molecule and methacrylic acid as the monomer. A molecularly imprinted solid-phase extraction procedure was performed in column for the analyte from pharmaceutical and serum samples. The synthesized polymers were characterized by infrared spectroscopy (IR) and field emission scanning electron microscopy (FESEM). In order to investigate the effect of independent variables on the extraction efficiency, response surface methodology (RSM) based on a Box–Behnken design (BBD) was employed. Analytical parameters such as precision, accuracy and linear working range were also determined under optimal experimental conditions, and the proposed method was applied to the analysis of Azathioprine. The linear dynamic range and limit of detection were 0.01–2.5 mg L⁻¹ and 0.008 mg L⁻¹, respectively. The recoveries for the analyte were higher than 95%, and relative standard deviation values were in the range of 0.83–4.15%. This method was successfully applied to the determination of Azathioprine in biological and pharmaceutical samples. - Graphical abstract: A new nano-sized imprinted polymer was synthesized and applied as sorbent in SPE for the selective recognition, preconcentration and determination of Azathioprine, with response surface methodology based on a Box–Behnken design, and was successfully investigated for the clean-up of human blood serum and pharmaceutical samples. - Highlights: • The nano-sized imprinted polymer was synthesized by a precipitation polymerization technique. • A molecularly imprinted solid-phase extraction procedure was performed for determination of Azathioprine. • The Azathioprine-molecular imprinting

  17. The Multi-Template Molecularly Imprinted Polymer Based on SBA-15 for Selective Separation and Determination of Panax notoginseng Saponins Simultaneously in Biological Samples

    Directory of Open Access Journals (Sweden)

    Chenghong Sun

    2017-11-01

    Full Text Available The feasible, reliable and selective multi-template molecularly imprinted polymers (MT-MIPs) based on SBA-15 (SBA-15@MT-MIPs) were developed for the selective separation and determination of trace levels of ginsenoside Rb1 (Rb1), ginsenoside Rg1 (Rg1) and notoginsenoside R1 (R1) simultaneously from biological samples. The polymers were constructed with SBA-15 as support, Rb1, Rg1 and R1 as multi-template, acrylamide (AM) as functional monomer and ethylene glycol dimethacrylate (EGDMA) as cross-linker. The new synthetic SBA-15@MT-MIPs were satisfactorily applied to solid-phase extraction (SPE) coupled with high performance liquid chromatography (HPLC) for the separation and determination of trace Rb1, Rg1 and R1 in plasma samples. Under the optimized conditions, the limits of detection (LODs) and quantitation (LOQs) of the proposed method for Rb1, Rg1 and R1 were in the range of 0.63–0.75 ng·mL⁻¹ and 2.1–2.5 ng·mL⁻¹, respectively. The recoveries of R1, Rb1 and Rg1 were between 93.4% and 104.3%, with relative standard deviations (RSDs) in the range of 3.3–4.2%. All results show that the obtained SBA-15@MT-MIPs hold promise for practical application in the selective separation and enrichment of trace Panax notoginseng saponins (PNS) in biological samples.

  18. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2015-01-01

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
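    The two-stage estimator under discussion can be sketched on simulated data: a probit for the selection equation, then OLS on the selected subsample augmented with the inverse Mills ratio. This is the classical (non-robust) procedure only; the paper's robustified version is not reproduced, and all data-generating values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(size=n)                       # outcome regressor
w = rng.normal(size=n)                       # instrument (exclusion restriction)
cov = [[1.0, 0.5], [0.5, 1.0]]               # corr(u, eps) = 0.5 -> selection bias
u, eps = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

z = 0.5 + 1.0 * w + 0.5 * x + u > 0          # selection indicator
y = 1.0 + 2.0 * x + eps                      # outcome, observed only when z

# Stage 1: probit of selection on (1, w, x) by maximum likelihood.
W = np.column_stack([np.ones(n), w, x])
def negloglik(g):
    p = norm.cdf(W @ g).clip(1e-10, 1 - 1e-10)
    return -(z * np.log(p) + (~z) * np.log(1 - p)).sum()
gamma = minimize(negloglik, np.zeros(3), method="BFGS").x

# Stage 2: OLS of y on (1, x, inverse Mills ratio) over the selected sample.
imr = norm.pdf(W @ gamma) / norm.cdf(W @ gamma)
Xs = np.column_stack([np.ones(n), x, imr])[z]
beta = np.linalg.lstsq(Xs, y[z], rcond=None)[0]
print(beta[:2])   # close to the true intercept 1.0 and slope 2.0
```

Dropping the `imr` column reproduces the familiar selection bias (naive OLS on the selected subsample drifts away from the true coefficients), which is exactly the sensitivity the robust framework in the abstract is designed to diagnose and contain.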

  19. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman\\'s two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.

  20. Selective ionic liquid ferrofluid based dispersive-solid phase extraction for simultaneous preconcentration/separation of lead and cadmium in milk and biological samples.

    Science.gov (United States)

    Fasih Ramandi, Negin; Shemirani, Farzaneh

    2015-01-01

    For the first time, a selective ionic liquid ferrofluid has been used in dispersive solid phase extraction (IL-FF-D-SPE) for the simultaneous preconcentration and separation of lead and cadmium in milk and biological samples, combined with flame atomic absorption spectrometry. To improve the selectivity of the ionic liquid ferrofluid, the surface of the TiO2 nanoparticles with a magnetic core used as sorbent was modified by loading 1-(2-pyridylazo)-2-naphthol. Owing to the rapid injection of an appropriate amount of ionic liquid ferrofluid into the aqueous sample by syringe, extraction can be achieved within a few seconds. In addition, because the ionic liquid ferrofluid is attracted to a magnet, no centrifugation step is needed for phase separation. The experimental parameters of IL-FF-D-SPE were optimized using a Box-Behnken design (BBD) after a Plackett-Burman screening design. Under the optimum conditions, relative standard deviations of 2.2% and 2.4% were obtained for lead and cadmium, respectively (n=7). The limits of detection were 1.21 µg L⁻¹ for Pb(II) and 0.21 µg L⁻¹ for Cd(II). The preconcentration factors were 250 for lead and 200 for cadmium, and the maximum adsorption capacities of the sorbent were 11.18 and 9.34 mg g⁻¹ for lead and cadmium, respectively. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Automated sample plan selection for OPC modeling

    Science.gov (United States)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models while maintaining or improving how well the collected data represent the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not best represent the product. Formulating pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of quality equivalent to the traditional plan-of-record (POR) set, but in less time.

  2. Highly selective solid phase extraction and preconcentration of Azathioprine with nano-sized imprinted polymer based on multivariate optimization and its trace determination in biological and pharmaceutical samples

    Energy Technology Data Exchange (ETDEWEB)

    Davarani, Saied Saeed Hosseiny, E-mail: ss-hosseiny@cc.sbu.ac.ir [Faculty of Chemistry, Shahid Beheshti University, G. C., P.O. Box 19839-4716, Tehran (Iran, Islamic Republic of); Rezayati zad, Zeinab [Faculty of Chemistry, Shahid Beheshti University, G. C., P.O. Box 19839-4716, Tehran (Iran, Islamic Republic of); Taheri, Ali Reza; Rahmatian, Nasrin [Islamic Azad University, Ilam Branch, Ilam (Iran, Islamic Republic of)

    2017-02-01

    In this research, selective separation and determination of Azathioprine is demonstrated for the first time, using a molecularly imprinted polymer as the solid-phase extraction adsorbent, measured by spectrophotometry at λ{sub max} 286 nm. The selective molecularly imprinted polymer was produced using Azathioprine as the template molecule and methacrylic acid as the monomer. A molecularly imprinted solid-phase extraction procedure was performed in column for the analyte from pharmaceutical and serum samples. The synthesized polymers were characterized by infrared spectroscopy (IR) and field emission scanning electron microscopy (FESEM). In order to investigate the effect of independent variables on the extraction efficiency, response surface methodology (RSM) based on a Box–Behnken design (BBD) was employed. Analytical parameters such as precision, accuracy and linear working range were also determined under optimal experimental conditions, and the proposed method was applied to the analysis of Azathioprine. The linear dynamic range and limit of detection were 0.01–2.5 mg L{sup -1} and 0.008 mg L{sup -1}, respectively. The recoveries for the analyte were higher than 95%, and relative standard deviation values were in the range of 0.83–4.15%. This method was successfully applied to the determination of Azathioprine in biological and pharmaceutical samples. - Graphical abstract: A new nano-sized imprinted polymer was synthesized and applied as sorbent in SPE for the selective recognition, preconcentration and determination of Azathioprine, with response surface methodology based on a Box–Behnken design, and was successfully investigated for the clean-up of human blood serum and pharmaceutical samples. - Highlights: • The nano-sized imprinted polymer was synthesized by a precipitation polymerization technique. • A molecularly imprinted solid-phase extraction procedure was performed for determination of Azathioprine. • The Azathioprine

  3. Privacy problems in the small sample selection

    Directory of Open Access Journals (Sweden)

    Loredana Cerbara

    2013-05-01

    Full Text Available The side of social research that uses small samples for the production of micro data today faces operational difficulties due to the privacy law. The privacy code is an important and necessary law, because it guarantees the rights of Italian citizens, as already happens in other countries of the world. However, it does not seem appropriate to limit yet further the data-production possibilities of the national research centres. Moreover, those possibilities are already compromised by insufficient funds, a problem that is becoming more and more frequent in the research field. It would therefore be necessary to include in the law the possibility of using telephone lists to select samples for activities directly of interest and importance to the citizen, such as data collection carried out on the basis of opinion polls by the research centres of the Italian CNR and some universities.

  4. Selective determination of four arsenic species in rice and water samples by modified graphite electrode-based electrolytic hydride generation coupled with atomic fluorescence spectrometry.

    Science.gov (United States)

    Yang, Xin-An; Lu, Xiao-Ping; Liu, Lin; Chi, Miao-Bin; Hu, Hui-Hui; Zhang, Wang-Bing

    2016-10-01

    This work describes a novel non-chromatographic approach for the accurate and selective determination of As species by modified graphite electrode-based electrolytic hydride generation (EHG) for sample introduction, coupled with atomic fluorescence spectrometry (AFS) detection. Two sulfydryl-containing modifiers, l-cysteine (Cys) and glutathione (GSH), are used to modify the cathode. The EHG behavior of As changes greatly at the modified cathode, which has not been reported before. Arsenite [As(III)] on the GSH-modified graphite electrode (GSH/GE)-based EHG can be selectively and quantitatively converted to AsH3 at an applied current of 0.4 A. As(III) and arsenate [As(V)] on the Cys-modified graphite electrode (Cys/GE) EHG can be selectively and efficiently converted to arsine at an applied current of 0.6 A, whereas monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA) form no or only less volatile hydrides under this condition. By changing the analytical conditions, the analysis of total As (tAs) and DMA is also achieved. Under the optimal conditions, the detection limits (3s) of As(III), iAs and tAs in aqueous solutions are 0.25 μg L−1, 0.22 μg L−1 and 0.10 μg L−1, respectively. The accuracy of the method is verified through the analysis of standard reference materials (SRM 1568a). Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of error) of the study. The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over the nonprobability sampling techniques because the results of the study can be generalized to the target population.
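
    The proportion-based sample size calculation described above can be sketched in a few lines. The formula n = z²·p·(1−p)/d², with an optional finite-population correction, is the standard one; the function name and defaults below are illustrative, not taken from the article:

```python
import math

def sample_size_proportion(p, margin, confidence_z=1.96, population=None):
    """Sample size for estimating a proportion with a given margin of error.

    p            expected proportion of the outcome (use 0.5 when unknown)
    margin       required precision (absolute margin of error, e.g. 0.05)
    confidence_z z-score for the confidence level (1.96 ~ 95%)
    population   finite population size, or None for an 'infinite' population
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        # finite population correction shrinks n for small populations
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(sample_size_proportion(0.5, 0.05))                   # worst case: p = 0.5
print(sample_size_proportion(0.5, 0.05, population=2000))  # smaller for a finite population
```

    As the abstract notes, tightening the margin of error (say, from 0.05 to 0.02) sharply increases the required sample size.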

  6. Mineral Composition of Selected Serbian Propolis Samples

    Directory of Open Access Journals (Sweden)

    Tosic Snezana

    2017-06-01

    Full Text Available The aim of this work was to determine the content of 22 macro- and microelements in ten raw Serbian propolis samples, which differ in geographical and botanical origin as well as in pollutant content, by atomic emission spectrometry with inductively coupled plasma (ICP-OES). The macroelements were the more abundant; among them, Ca content was the highest and Na content the lowest. Among the studied essential trace elements, Fe was the most abundant. The levels of the toxic elements Pb, Cd, As and Hg were also analyzed, since these are possible environmental contaminants that could be transferred into propolis products for human consumption. As and Hg were not detected in any of the analyzed samples, but a high level of Pb (2.0-9.7 mg/kg) was detected, so only selected portions of raw propolis could be used to produce natural medicines and dietary supplements for humans. The obtained results were statistically analyzed, and the examined samples showed a wide range of element contents.

  7. A Highly Sensitive and Selective Method for the Determination of an Iodate in Table-salt Samples Using Malachite Green-based Spectrophotometry.

    Science.gov (United States)

    Konkayan, Mongkol; Limchoowong, Nunticha; Sricharoen, Phitchan; Chanthai, Saksit

    2016-01-01

    A simple, rapid, and sensitive malachite green-based spectrophotometric method for the selective trace determination of iodate has been developed and presented for the first time. The reaction specifically involves the liberation of iodine in the presence of an excess of iodide under acidic conditions, followed by an instantaneous reaction between the liberated iodine and malachite green dye. The optimum conditions were a buffer solution of pH 5.2 in the presence of 40 mg L−1 potassium iodide and 1.5 × 10−5 M malachite green with a 5-min incubation time. The iodate contents of some table-salt samples were in the range of 26 to 45 mg kg−1, while those of drinking water, tap water, canal water, and seawater samples were below the limit of detection (96 ng mL−1), with satisfactory recoveries of between 93 and 108%. The results agreed with those obtained using ICP-OES for comparison.

  8. Spectroelectrochemical Sensing Based on Multimode Selectivity simultaneously Achievable in a Single Device. 11. Design and Evaluation of a Small Portable Sensor for the Determination of Ferrocyanide in Hanford Waste Samples

    International Nuclear Information System (INIS)

    Stegemiller, Michael L.; Heineman, William R.; Seliskar, Carl J.; Ridgway, Thomas H.; Bryan, Samuel A.; Hubler, Timothy L.; Sell, Richard L.

    2003-01-01

    Spectroelectrochemical sensing based on multimode selectivity simultaneously achievable in a single device. 11. Design and evaluation of a small portable sensor for the determination of ferrocyanide in Hanford waste samples

  9. Thermal properties of selected cheeses samples

    Directory of Open Access Journals (Sweden)

    Monika BOŽIKOVÁ

    2016-02-01

    Full Text Available The thermophysical parameters of selected cheeses (processed cheese and half-hard cheese) are presented in the article. Cheese is a generic term for a diverse group of milk-based food products, produced throughout the world in a wide range of flavors, textures, and forms. During processing, cheese undergoes thermal and mechanical manipulation, so its thermal properties are among the most important. Knowledge of the thermal parameters of cheeses can be used in quality evaluation. On this basis, the thermal properties of selected cheeses produced by Slovak producers were measured. The theoretical part of the article describes the cheeses and the plane source method, which was used for the detection of thermal parameters. Thermophysical parameters such as thermal conductivity, thermal diffusivity and volumetric specific heat were measured during temperature stabilisation. The results are presented as relations of the thermophysical parameters to temperature in the range from 13.5°C to 24°C. Every point of the graphic relation was obtained as the arithmetic average of the values measured at the same temperature. The obtained results were statistically processed, and the presented graphical relations were chosen according to the results of the statistical evaluation and the coefficients of determination for every relation. The results for the thermal parameters are in good agreement with values measured by other authors for similar types of cheeses.

  10. New competitive dendrimer-based and highly selective immunosensor for determination of atrazine in environmental, feed and food samples: the importance of antibody selectivity for discrimination among related triazinic metabolites.

    Science.gov (United States)

    Giannetto, Marco; Umiltà, Eleonora; Careri, Maria

    2014-01-02

    A new voltammetric competitive immunosensor selective for atrazine, based on the immobilization of an atrazine-bovine serum albumin conjugate on a nanostructured gold substrate previously functionalized with polyamidoamine dendrimers, was realized, characterized, and validated in different real samples of environmental and food concern. The response of the sensor was reliable, highly selective and suitable for the detection and quantification of atrazine at trace levels in complex matrices such as territorial waters, corn-cultivated soils, corn-containing poultry and bovine feeds, and corn flakes for human consumption. Selectivity studies focused on desethylatrazine, the principal metabolite generated by long-term microbiological degradation of atrazine, as well as terbuthylazine-2-hydroxy and simazine as potential interferents. The response of the developed immunosensor for atrazine was explored over the 10−2-10³ ng mL−1 range. Good sensitivity was proved, as a limit of detection of 1.2 ng mL−1 and a limit of quantitation of 5 ng mL−1 were estimated for atrazine. RSD values <5% over the entire explored range attested to the good precision of the device. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with a different level of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail and discuss the conditions necessary to ensure that the required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
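
    As an illustration of the kind of inference described above, the following sketch computes the posterior probability that a given fraction of unsampled items is acceptable under a deliberately simplified single-group beta-binomial model; the paper's two-group, judgment-plus-random model is richer, and the function name and priors here are assumptions:

```python
import math

def log_beta(a, b):
    """log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def prob_fraction_acceptable(n_sampled, m_unsampled, fraction, a=1.0, b=1.0):
    """Posterior probability that at least `fraction` of the m unsampled items
    are acceptable, after all n_sampled observed items were acceptable.

    Single-group simplification: a Beta(a, b) prior on the per-item
    acceptability probability, posterior Beta(a + n_sampled, b), and a
    beta-binomial predictive distribution for the unsampled items.
    """
    a_post, b_post = a + n_sampled, b
    k_min = math.ceil(fraction * m_unsampled)
    total = 0.0
    for k in range(k_min, m_unsampled + 1):
        # beta-binomial pmf: C(m, k) * B(a+k, b+m-k) / B(a, b)
        total += math.comb(m_unsampled, k) * math.exp(
            log_beta(a_post + k, b_post + m_unsampled - k)
            - log_beta(a_post, b_post)
        )
    return total

print(prob_fraction_acceptable(50, 100, fraction=0.95))
```

    With a uniform prior, observing more all-acceptable samples raises this probability, matching the intuition that larger clean samples give stronger assurance about the rest of the population.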

  12. Robust online tracking via adaptive samples selection with saliency detection

    Science.gov (United States)

    Yan, Jia; Chen, Xi; Zhu, QiuPing

    2013-12-01

    Online tracking has been shown to be successful in tracking previously unknown objects. However, two important factors lead to the drift problem in online tracking: one is how to select correctly labeled samples even when the target locations are inaccurate, and the other is how to handle confusors which have features similar to the target. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To avoid degrading the classifiers with mis-aligned samples, we introduce a saliency detection method into our tracking problem: saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as the negative samples, we propose a reasonable selection criterion in which both saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as binary classification via an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.

  13. Vis-NIR spectrometric determination of Brix and sucrose in sugar production samples using kernel partial least squares with interval selection based on the successive projections algorithm.

    Science.gov (United States)

    de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino

    2018-05-01

    This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.

  14. Heterogeneous Causal Effects and Sample Selection Bias

    DEFF Research Database (Denmark)

    Breen, Richard; Choi, Seongsoo; Holm, Anders

    2015-01-01

    The role of education in the process of socioeconomic attainment is a topic of long-standing interest to sociologists and economists. Recently there has been growing interest not only in estimating the average causal effect of education on outcomes such as earnings, but also in estimating how causal effects might vary over individuals or groups. In this paper we point out one of the under-appreciated hazards of seeking to estimate heterogeneous causal effects: conventional selection bias (that is, selection on baseline differences) can easily be mistaken for heterogeneity of causal effects. This might lead us to find heterogeneous effects when the true effect is homogeneous, or to wrongly estimate not only the magnitude but also the sign of heterogeneous effects. We apply a test for the robustness of heterogeneous causal effects in the face of varying degrees and patterns of selection bias...

  15. Sample Selection for Training Cascade Detectors

    OpenAIRE

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our a...

  16. 40 CFR 205.171-3 - Test motorcycle sample selection.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Test motorcycle sample selection. 205... ABATEMENT PROGRAMS TRANSPORTATION EQUIPMENT NOISE EMISSION CONTROLS Motorcycle Exhaust Systems § 205.171-3 Test motorcycle sample selection. A test motorcycle to be used for selective enforcement audit testing...

  17. Enhancement of the spectral selectivity of complex samples by measuring them in a frozen state at low temperatures in order to improve accuracy for quantitative analysis. Part II. Determination of viscosity for lube base oils using Raman spectroscopy.

    Science.gov (United States)

    Kim, Mooeung; Chung, Hoeil

    2013-03-07

    The use of selectivity-enhanced Raman spectra of lube base oil (LBO) samples achieved by the spectral collection under frozen conditions at low temperatures was effective for improving accuracy for the determination of the kinematic viscosity at 40 °C (KV@40). A collection of Raman spectra from samples cooled around -160 °C provided the most accurate measurement of KV@40. Components of the LBO samples were mainly long-chain hydrocarbons with molecular structures that were deformable when these were frozen, and the different structural deformabilities of the components enhanced spectral selectivity among the samples. To study the structural variation of components according to the change of sample temperature from cryogenic to ambient condition, n-heptadecane and pristane (2,6,10,14-tetramethylpentadecane) were selected as representative components of LBO samples, and their temperature-induced spectral features as well as the corresponding spectral loadings were investigated. A two-dimensional (2D) correlation analysis was also employed to explain the origin for the improved accuracy. The asynchronous 2D correlation pattern was simplest at the optimal temperature, indicating the occurrence of distinct and selective spectral variations, which enabled the variation of KV@40 of LBO samples to be more accurately assessed.

  18. Forward selection two sample binomial test

    Science.gov (United States)

    Wong, Kam-Fai; Wong, Weng-Kee; Lin, Miao-Shan

    2016-01-01

    Fisher’s exact test (FET) is a conditional method that is frequently used to analyze data in a 2 × 2 table for small samples. This test is conservative, and attempts have been made to modify it to make it less conservative. For example, Crans and Shuster (2008) proposed adding more points to the rejection region to make the test more powerful. We provide another way to make the test less conservative by using two independent binomial distributions as the reference distribution for the test statistic. We compare our new test with several methods and show that it has advantages over existing methods in terms of control of the type I and type II errors. We reanalyze results from an oncology trial using our proposed method and our software, which is freely available to the reader. PMID:27335577
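
    The contrast the authors draw, a conditional test versus an unconditional one built from two independent binomials, can be illustrated with SciPy. Barnard's exact test is used below as a readily available unconditional test, not as the authors' proposed statistic:

```python
import numpy as np
from scipy.stats import fisher_exact, barnard_exact

# 2x2 table: rows = treatment groups, columns = (successes, failures)
table = np.array([[7, 3],
                  [2, 8]])

# conditional test (conditions on the table margins)
_, p_fisher = fisher_exact(table, alternative="two-sided")

# unconditional test (models the rows as two independent binomials)
p_barnard = barnard_exact(table, alternative="two-sided").pvalue

print(f"Fisher (conditional):    p = {p_fisher:.4f}")
print(f"Barnard (unconditional): p = {p_barnard:.4f}")
```

    Unconditional tests of this kind are typically less conservative than FET for small 2 × 2 tables, which is the motivation the abstract describes.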

  19. Learning from Past Classification Errors: Exploring Methods for Improving the Performance of a Deep Learning-based Building Extraction Model through Quantitative Analysis of Commission Errors for Optimal Sample Selection

    Science.gov (United States)

    Swan, B.; Laverdiere, M.; Yang, L.

    2017-12-01

    In the past five years, deep Convolutional Neural Networks (CNNs) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well on very complex problems; however, details of how they function, and in turn how they may be optimized, are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as its mathematical implications, presents open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with commission errors at the image-tile level and grouped these tiles using affinity propagation. Highly representative members of each commission-error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow an assessment of how the addition of different types of samples affects model performance, such as precision and recall rates. 
By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of training process
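
    The tile-clustering step described above can be sketched with scikit-learn's AffinityPropagation, which selects one exemplar per cluster. The per-tile statistics below are synthetic stand-ins for the real spectral summaries:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import AffinityPropagation

# hypothetical per-tile spectral summaries (4 statistics per image tile);
# make_blobs stands in for real tile-level statistics
tiles, _ = make_blobs(n_samples=150, n_features=4, centers=4,
                      cluster_std=0.8, random_state=0)

ap = AffinityPropagation(damping=0.9, random_state=0).fit(tiles)
exemplars = ap.cluster_centers_indices_   # one representative tile per cluster
labels = ap.labels_

print(f"{len(exemplars)} exemplar tiles selected from {len(tiles)}")
```

    The exemplar indices point directly at the most representative tiles, which is why affinity propagation suits this selection task: no separate "closest to centroid" step is needed.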

  20. A novel graphene-based label-free fluorescence 'turn-on' nanosensor for selective and sensitive detection of phosphorylated species in biological samples and living cells.

    Science.gov (United States)

    Ke, Yaotang; Garg, Bhaskar; Ling, Yong-Chien

    2016-02-28

    A novel label-free fluorescence 'turn-on' nanosensor has been developed for the highly selective and sensitive detection of phosphorylated species (Ps) in biological samples and living cells. The design strategy relies on the use of Ti(4+)-immobilized polydopamine (PDA)-coated reduced graphene oxide (rGO@PDA-Ti(4+)), which serves as an attractive platform to bind riboflavin 5'-monophosphate molecules (FMNs) through ion-pair interactions between the phosphate groups and Ti(4+). The as-prepared rGO@PDA-Ti(4+)-FMNs (the nanosensor) fluoresce only weakly due to Förster resonance energy transfer between the FMNs and rGO@PDA-Ti(4+). The experimental findings revealed that the microwave-assisted interaction of the nanosensor with α- and β-casein, ovalbumin, human serum, non-fat milk, egg white, and living cells (all containing Ps) releases the FMNs (owing to the high formation constant between phosphate groups and Ti(4+)), leading to an excellent fluorescence 'turn-on' response. Fluorescence spectroscopy, confocal microscopy, and MALDI-TOF mass spectrometry were used to detect Ps both qualitatively and quantitatively. Under the optimized conditions, the nanosensor showed detection limits of ca. 118.5, 28.9, and 54.8 nM for the tryptic digests of α-casein, β-casein and ovalbumin, respectively. Furthermore, the standard addition method was used as a benchmark proof for phosphopeptide quantification in egg white samples. We postulate that the present quantitative assay for Ps holds tremendous potential and may pave the way to disease diagnostics in the near future.

  1. Selection of the Sample for Data-Driven $Z \to \nu\bar{\nu}$ Background Estimation

    CERN Document Server

    Krauss, Martin

    2009-01-01

    The topic of this study was to improve the selection of the sample for data-driven Z → νν̄ background estimation, which is a major contribution in supersymmetric searches in a no-lepton search mode. The data are based on Z → ℓ+ℓ− samples created with ATLAS simulation software. This method works if two leptons are reconstructed, but with cuts that are typical for SUSY searches the reconstruction efficiency for electrons and muons is rather low. For this reason, it was attempted to enhance the data sample by considering events in which only one electron was reconstructed. In this case the invariant mass of the electron and each jet was computed to select the jet that best matches the Z boson mass in place of the non-reconstructed electron. In this way the sample can be extended, but it significantly loses purity because background events are also reconstructed. To improve this method, other variables would have to be considered which were not available for this study. Applying a similar method to muons using ...

  2. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Full Text Available Feature extraction plays a key role in the classification of hyperspectral images. By using unlabeled samples, which are often available in practically unlimited numbers, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting the appropriate unlabeled samples used in feature extraction methods, and proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification and sample selection. As the hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled sample selection in unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.
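
    A loose, hypothetical sketch of the pipeline's first stages (PCA, then a "prior" classification that flags confidently classified unlabeled samples) might look like the following. The data, classifier and confidence threshold are all stand-ins, and the paper's posterior-classification and spatial steps are omitted:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

# stand-in for hyperspectral data: 2000 'pixels' x 50 'bands', 3 classes
X, y = make_classification(n_samples=2000, n_features=50, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
labeled = np.arange(30)            # only a handful of labeled samples
unlabeled = np.arange(30, 2000)

# 1) PCA on all samples to reduce the spectral dimension
Z = PCA(n_components=10, random_state=0).fit_transform(X)

# 2) 'prior' classification trained on the scarce labeled set
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(Z[labeled], y[labeled])
proba = clf.predict_proba(Z[unlabeled])

# 3) keep only unlabeled samples predicted with high confidence;
#    these would then feed a feature extraction method
confident = unlabeled[proba.max(axis=1) > 0.8]
```

    The selected `confident` indices could then be passed, together with the labeled set, to any unsupervised or semisupervised feature extraction method, which is the role the selected unlabeled samples play in the paper.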

  3. Synthesis of surface molecular imprinted polymers based on carboxyl-modified silica nanoparticles with the selective detection of dibutyl phthalate from tap water samples

    Science.gov (United States)

    Xu, Wanzhen; Zhang, Xiaoming; Huang, Weihong; Luan, Yu; Yang, Yanfei; Zhu, Maiyong; Yang, Wenming

    2017-12-01

    In this work, molecularly imprinted polymers for dibutyl phthalate (DBP) were synthesized at low monomer concentrations. The polymers were prepared on carboxyl-modified silica nanoparticles, using methacrylic acid as the functional monomer, ethylene glycol dimethacrylate as the cross-linking agent and azoisobutyronitrile as the initiator. Various techniques were used to characterize the structure and morphology in order to obtain the optimal polymer; the characterization results show that the optimal polymer has suitable features for the subsequent adsorption process. Adsorption capacity experiments, including adsorption isotherms/kinetics, selective adsorption, and desorption and regeneration experiments, were carried out to evaluate the adsorption performance. These results showed that the molecularly imprinted polymers had a short equilibrium time of about 60 min and high stability, retaining 88% of capacity after six cycles. Furthermore, the molecularly imprinted polymers were successfully applied to the removal of dibutyl phthalate. The concentration range was 5.0-30.0 μmol L-1, and the limit of detection was 0.06 μmol L-1 in tap water samples.

  4. A facile and selective approach for enrichment of l-cysteine in human plasma sample based on zinc organic polymer: Optimization by response surface methodology.

    Science.gov (United States)

    Bahrani, Sonia; Ghaedi, Mehrorang; Ostovan, Abbas; Javadian, Hamedreza; Mansoorkhani, Mohammad Javad Khoshnood; Taghipour, Tahere

    2018-02-05

    In this research, a facile and selective method is described to extract l-cysteine (l-Cys), an essential α-amino acid with anti-ageing properties that plays an important role in human health, from a human blood plasma sample. The importance of this research lies in the mild and time-saving synthesis of a zinc organic polymer (Zn-MOP) as an adsorbent and the evaluation of its ability for efficient enrichment of l-Cys by the ultrasound-assisted dispersive micro solid-phase extraction (UA-DMSPE) method. The structure of Zn-MOP was investigated by FT-IR, XRD and SEM. Analysis of variance (ANOVA) was applied to the experimental data to find the optimum conditions. The quantification of l-Cys was carried out by high-performance liquid chromatography with UV detection set at λ = 230 nm. The calibration graph showed a reasonable linear response towards l-Cys concentrations in the range of 4.0-1000 μg/L (r2 = 0.999) with a low limit of detection (0.76 μg/L, S/N = 3) and RSD ≤ 2.18% (n = 3). The results revealed the applicability and high performance of this novel strategy for detecting trace l-Cys with Zn-MOP in complicated matrices. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Eco-friendly ionic liquid based ultrasonic assisted selective extraction coupled with a simple liquid chromatography for the reliable determination of acrylamide in food samples.

    Science.gov (United States)

    Albishri, Hassan M; El-Hady, Deia Abd

    2014-01-01

    Acrylamide in food has drawn worldwide attention since 2002 due to its neurotoxic and carcinogenic effects. These effects arise from the dual polar and non-polar character of acrylamide, which enables it to dissolve in the aqueous blood medium or penetrate the non-polar plasma membrane. In the current work, a simple HPLC/UV system was used to show that the penetration of acrylamide into the non-polar phase was stronger than its dissolution in the polar phase. The presence of phosphate salts in the polar phase reduced the interaction of acrylamide with the non-polar phase. Furthermore, an eco-friendly and low-cost coupling of HPLC/UV with ionic liquid-based ultrasonic-assisted extraction (ILUAE) was developed to determine the acrylamide content in food samples. ILUAE was proposed for the efficient extraction of acrylamide from bread and potato chip samples. The extracts were obtained by soaking potato chip and bread samples in 1.5 mol L(-1) 1-butyl-3-methylimidazolium bromide (BMIMBr) for 30.0 and 60.0 min, respectively, with subsequent chromatographic separation within 12.0 min using a Luna C18 column and a 100% water mobile phase at 0.5 mL min(-1), a 25 °C column temperature and detection at 250 nm. The extraction and analysis of acrylamide could be achieved within 2 h. The mean extraction efficiency of acrylamide showed adequate repeatability, with a relative standard deviation (RSD) of 4.5%. The limit of detection and limit of quantitation were 25.0 and 80.0 ng mL(-1), respectively. The accuracy of the proposed method was tested by recovery in seven food samples, giving values ranging between 90.6% and 109.8%. The methodology was thus successfully validated by official guidelines, indicating its reliability for the analysis of real samples and proving useful for its intended purpose. Moreover, it serves as a simple, eco-friendly and low-cost alternative to hitherto reported methods. © 2013 Elsevier B.V. All rights reserved.

  6. Sensitive and selective determination of gallic acid in green tea samples based on an electrochemical platform of poly(melamine) film

    Energy Technology Data Exchange (ETDEWEB)

    Su, Ya-Ling; Cheng, Shu-Hua, E-mail: shcheng@ncnu.edu.tw

    2015-12-11

    In this work, an electrochemical sensor coupled with an effective flow-injection amperometry (FIA) system is developed, targeting the determination of gallic acid (GA) in a mild neutral condition, in contrast to the existing electrochemical methods. The sensor is based on a thin electroactive poly(melamine) film immobilized on a pre-anodized screen-printed carbon electrode (SPCE*/PME). The characteristics of the sensing surface are well-characterized by field emission scanning electron microscopy (FE-SEM), X-ray photoelectron spectroscopy (XPS) and surface water contact angle experiments. The proposed assay exhibits a wide linear response to GA in both pH 3 and pH 7.0 phosphate buffer solutions (PBS) under the optimized flow-injection amperometry. The detection limit (S/N = 3) is 0.076 μM and 0.21 μM in the pH 3 and pH 7 solutions, respectively. A relative standard deviation (RSD) of 3.9% is obtained for 57 successive measurements of 50 μM GA in pH 7 solutions. Interference studies indicate that some inorganic salts, catechol, caffeine and ascorbic acid do not interfere with the GA assay. The interference effects from some orthodiphenolic compounds are also investigated. The proposed method and a conventional Folin–Ciocalteu method are applied to detect GA in green tea samples using the standard addition method, and satisfactory spiked recoveries are obtained. - Highlights: • A nitrogen-rich conducting polymer was used for electroanalysis of gallic acid. • The sensor exhibits excellent electrochemical activity in both acidic and neutral media. • Good analytical results in terms of low detection limit and wide linear range are obtained. • The flow-injection amperometric assay is highly stable for continuous 57 replicates measurement (RSD = 3.9%). • The assay shows good recovery for green tea samples.
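
    The standard addition method used for the tea samples extrapolates the spiked calibration line back to zero signal; the magnitude of the x-intercept gives the analyte concentration in the un-spiked sample. A minimal sketch with hypothetical readings (the numbers are illustrative, not the paper's data):

```python
import numpy as np

# hypothetical standard-addition data for one tea extract:
# signal measured after spiking with known gallic acid concentrations (μM)
added = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
signal = np.array([0.52, 0.81, 1.10, 1.39, 1.68])   # assumes a linear response

slope, intercept = np.polyfit(added, signal, 1)
# extrapolate the line to zero signal: unknown = intercept / slope
concentration = intercept / slope
print(f"estimated concentration in extract: {concentration:.1f} μM")  # ≈ 17.9 μM
```

    This is why standard addition suits complex matrices such as tea: matrix effects scale the slope but cancel out of the intercept/slope ratio.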

  8. Diets and selected lifestyle practices of self-defined adult vegetarians from a population-based sample suggest they are more 'health conscious'

    Directory of Open Access Journals (Sweden)

    Barr Susan I

    2005-04-01

    Abstract Background Few population-based studies of vegetarians have been published. Thus we compared self-reported vegetarians to non-vegetarians in a representative sample of British Columbia (BC) adults, weighted to reflect the BC population. Methods Questionnaires, 24-hr recalls and anthropometric measures were completed during in-person interviews with 1817 community-dwelling residents, 19–84 years, recruited using a population-based health registry. Vegetarian status was self-defined. ANOVA with age as a covariate was used to analyze continuous variables, and chi-square was used for categorical variables. Supplement intakes were compared using the Mann-Whitney test. Results Approximately 6% (n = 106) stated that they were vegetarian, and most did not adhere rigidly to a flesh-free diet. Vegetarians were more likely female (71% vs. 49%), single, of low-income status, and tended to be younger. Female vegetarians had lower BMI than non-vegetarians (23.1 ± 0.7 (mean ± SE) vs. 25.7 ± 0.2 kg/m2), and also had lower waist circumference (75.0 ± 1.5 vs. 79.8 ± 0.5 cm). Male vegetarians and non-vegetarians had similar BMI (25.9 ± 0.8 vs. 26.7 ± 0.2 kg/m2) and waist circumference (92.5 ± 2.3 vs. 91.7 ± 0.4 cm). Female vegetarians were more physically active (69% vs. 42% active ≥4/wk) while male vegetarians were more likely to use nutritive supplements (71% vs. 51%). Energy intakes were similar, but vegetarians reported higher % energy as carbohydrate (56% vs. 50%), and lower % protein (men only; 13% vs. 17%) or % fat (women only; 27% vs. 33%). Vegetarians had higher fiber, magnesium and potassium intakes. For several other nutrients, differences by vegetarian status differed by gender. The prevalence of inadequate magnesium intake (% below Estimated Average Requirement) was lower in vegetarians than non-vegetarians (15% vs. 34%). Female vegetarians also had a lower prevalence of inadequate thiamin, folate, vitamin B6 and C intakes. Vegetarians were

  9. Rapid Detection of Clostridium botulinum Toxins A, B, E, and F in Clinical Samples, Selected Food Matrices, and Buffer Using Paramagnetic Bead-Based Electrochemiluminescence Detection

    National Research Council Canada - National Science Library

    Rivera, Victor R; Gamez, Frank J; Keener, William K; White, Jill A; Poli, Mark A

    2006-01-01

    Sensitive and specific electrochemiluminescence (ECL) assays were used to detect Clostridium botulinum neurotoxins serotypes A, B, E, and F in undiluted human serum, undiluted human urine, assay buffer, and selected food matrices...

  11. Effective traffic features selection algorithm for cyber-attacks samples

    Science.gov (United States)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

    By studying defense schemes against network attacks, this paper proposes an effective traffic feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack traffic feature set and a background traffic feature set by clustering. Then, it calculates the change in clustering performance after removing each feature. Finally, it evaluates the degree of distinctiveness of each feature vector according to this result; an effective feature vector is one whose degree of distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set, thereby reducing the dimensionality of the features and hence the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has some advantages over other selection algorithms.
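
    The steps described above (split the samples into attack and background groups, score each feature by how much the group separability drops when it is removed, keep features above a threshold) can be sketched as follows. This is an illustrative leave-one-out scheme using a simple centroid-separation score in place of the paper's k-means++ clustering-performance measure; the data and threshold are made up:

```python
import statistics

def separation_score(samples, labels, feature_idxs):
    """Crude cluster-separation proxy: distance between the two class
    centroids divided by the mean within-class spread, over the chosen
    features. Assumes exactly two classes (attack vs. background)."""
    groups = {}
    for x, y in zip(samples, labels):
        groups.setdefault(y, []).append([x[i] for i in feature_idxs])
    a, b = groups.values()
    ca = [statistics.mean(col) for col in zip(*a)]
    cb = [statistics.mean(col) for col in zip(*b)]
    between = sum((p - q) ** 2 for p, q in zip(ca, cb)) ** 0.5
    spread = statistics.mean(
        sum((v - c) ** 2 for v, c in zip(row, cent)) ** 0.5
        for rows, cent in ((a, ca), (b, cb)) for row in rows)
    return between / (spread + 1e-12)

def select_features(samples, labels, threshold=0.1):
    """Keep a feature if removing it degrades separation by > threshold."""
    all_idx = list(range(len(samples[0])))
    base = separation_score(samples, labels, all_idx)
    kept = []
    for i in all_idx:
        rest = [j for j in all_idx if j != i]
        if base - separation_score(samples, labels, rest) > threshold:
            kept.append(i)
    return kept

# Feature 0 separates the classes; feature 1 is noise and gets dropped
samples = [[0.0, 5.1], [0.2, 4.9], [5.0, 5.0], [5.2, 5.2]]
print(select_features(samples, [0, 0, 1, 1]))
```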

  12. 40 CFR 205.57-2 - Test vehicle sample selection.

    Science.gov (United States)

    2010-07-01

    ... pursuant to a test request in accordance with this subpart will be selected in the manner specified in the... then using a table of random numbers to select the number of vehicles as specified in paragraph (c) of... with the designated AQL are contained in Appendix I, Table II. (c) The appropriate batch sample size...

  13. A novel highly sensitive and selective optical sensor based on a symmetric tetradentate Schiff-base embedded in PVC polymeric film for determination of Zn{sup 2+} ion in real samples

    Energy Technology Data Exchange (ETDEWEB)

    Abdel Aziz, Ayman A., E-mail: aymanaziz31@gmail.com [Chemistry Department, Faculty of Science, Ain Shams University, 11566 Cairo (Egypt); Chemistry Department, Faculty of Science, University of Tabuk, 71421, Tabuk (Saudi Arabia)

    2013-11-15

    A Zn{sup 2+} ion PVC membrane sensor based on a novel Schiff base, N,N′-bis(salicylaldehyde)2,3-diaminonaphthalene (SDN), for the determination of Zn{sup 2+} ion is described. The chemosensor was synthesized under microwave irradiation via condensation of 2,3-diaminonaphthalene and salicylaldehyde. Photoluminescence characteristics of the novel Schiff base ligand were investigated in different solvents including dichloromethane (DCM), tetrahydrofuran (THF) and ethanol (EtOH). SDN was found to have higher emission intensity and Stokes shift value (Δλ{sub ST}) in EtOH solution. The sensor exhibited a specific fluorescence turn-on response to Zn{sup 2+}. The response of the sensor is based on the fluorescence enhancement of SDN (LH{sub 2}) by Zn{sup 2+} ion as a result of the formation of the rigid L-Zn complex structure. The experimental results also show that the response behavior of SDN to Zn{sup 2+} is pH independent in the range of pH 6.0–8.0. At pH 7.0, the proposed sensor displays a calibration response for Zn{sup 2+} over a wide concentration range of 1.0×10{sup −9}–2.0×10{sup −3} mol L{sup −1} with a limit of detection (LOD) of 8.1×10{sup −10} mol L{sup −1} (0.0529659 μg L{sup −1}). The sensor shows excellent selectivity toward Zn{sup 2+} with respect to common coexisting cations. The proposed fluorescence optode was successfully applied to detect Zn{sup 2+} in human hair samples, different brands of powdered milk and some pharmaceuticals. -- Highlights: • A novel Zn(II) chemosensor has been developed. • Wide linear concentration range of 1.0×10{sup −9}–2.0×10{sup −3} mol L{sup −1}. • Application for determination of Zn(II) in real samples.
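
    As a consistency check on the quoted detection limit, converting the molar LOD to a mass concentration needs only the molar mass of zinc (the abstract's 0.0529659 μg L⁻¹ corresponds to a slightly older molar-mass value; the current 65.38 g/mol gives essentially the same figure):

```python
ZN_MOLAR_MASS = 65.38  # g/mol

def lod_to_mass_conc(lod_mol_per_l, molar_mass):
    """Convert a molar detection limit (mol/L) to micrograms per litre."""
    return lod_mol_per_l * molar_mass * 1e6

print(lod_to_mass_conc(8.1e-10, ZN_MOLAR_MASS))  # ~0.053 ug/L
```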

  14. The quasar luminosity function from a variability-selected sample

    Science.gov (United States)

    Hawkins, M. R. S.; Veron, P.

    1993-01-01

    A sample of quasars is selected from a 10-yr sequence of 30 UK Schmidt plates. Luminosity functions are derived in several redshift intervals, which in each case show a featureless power-law rise towards low luminosities. There is no sign of the 'break' found in the recent UVX sample of Boyle et al. It is suggested that reasons for the disagreement are connected with biases in the selection of the UVX sample. The question of the nature of quasar evolution appears to be still unresolved.

  15. Design-based estimators for snowball sampling

    OpenAIRE

    Shafie, Termeh

    2010-01-01

    Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes d...
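
    The weighting idea the abstract describes (units with many in-links are over-sampled, so each sampled unit is down-weighted by its selection probability) is the classical Horvitz-Thompson estimator; a toy sketch with made-up in-degrees standing in for the approximated inclusion probabilities:

```python
def ht_estimate(sample_values, inclusion_probs):
    """Horvitz-Thompson estimator of a population total: weight each
    sampled unit by the inverse of its (approximated) inclusion probability."""
    return sum(y / p for y, p in zip(sample_values, inclusion_probs))

# Toy snowball sample: inclusion probability approximated as proportional
# to in-degree (all numbers are illustrative assumptions)
in_degree = [8, 4, 2, 2]            # in-links of the sampled units
total_links = 40                    # total in-links in the network
probs = [d / total_links for d in in_degree]
values = [1, 1, 0, 1]               # e.g. indicator of some trait
print(ht_estimate(values, probs))   # estimated population total
```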

  16. Unbiased Selective Isolation of Protein N-Terminal Peptides from Complex Proteome Samples Using Phospho Tagging (PTAG) and TiO2-based Depletion

    NARCIS (Netherlands)

    Mommen, G.P.M.; Waterbeemd, van de B.; Meiring, H.D.; Kersten, G.; Heck, A.J.R.; Jong, de A.P.J.M.

    2012-01-01

    A positional proteomics strategy for global N-proteome analysis is presented based on phospho tagging (PTAG) of internal peptides followed by depletion by titanium dioxide (TiO2) affinity chromatography. Therefore, N-terminal and lysine amino groups are initially completely dimethylated with

  17. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
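
    For reference, the neutral baseline mentioned above (Kingman's coalescent) is straightforward to simulate: with k ancestral lineages, the waiting time to the next coalescence is exponential with rate k(k-1)/2, so the expected time to the most recent common ancestor of n genes is 2(1 - 1/n) in coalescent time units. A minimal sketch of that neutral case only (the ancestral selection graph adds branching events on top of this):

```python
import random

def time_to_mrca(n, rng):
    """Simulate the neutral (Kingman) coalescent for a sample of n genes:
    with k lineages, the next merger occurs after an Exp(k*(k-1)/2) time."""
    t, k = 0.0, n
    while k > 1:
        t += rng.expovariate(k * (k - 1) / 2)
        k -= 1
    return t

rng = random.Random(42)
times = [time_to_mrca(10, rng) for _ in range(2000)]
print(sum(times) / len(times))  # close to 2 * (1 - 1/10) = 1.8
```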

  18. Progressive sample processing of band selection for hyperspectral imagery

    Science.gov (United States)

    Liu, Keng-Hao; Chien, Hung-Chang; Chen, Shih-Yu

    2017-10-01

    Band selection (BS) is one of the most important topics in hyperspectral image (HSI) processing. The objective of BS is to find a set of representative bands that can represent the whole image with lower inter-band redundancy. Many types of BS algorithms have been proposed in the past. However, most of them operate in an off-line manner, meaning they can only be applied to pre-collected data. Such off-line methods are unsuitable for time-critical applications, particularly disaster prevention and target detection. To tackle this issue, a new concept, called progressive sample processing (PSP), was proposed recently. PSP is an "on-line" framework in which an algorithm processes the data as it is collected, during transmission under the band-interleaved-by-sample/pixel (BIS/BIP) protocol. This paper proposes an online BS method that integrates sparse-based BS into the PSP framework, called PSP-BS. In PSP-BS, BS is carried out by updating the BS result recursively, pixel by pixel, in the same way that a Kalman filter updates data information in a recursive fashion. The sparse regression is solved by the orthogonal matching pursuit (OMP) algorithm, and the recursive equations of PSP-BS are derived using matrix decomposition. Experiments conducted on a real hyperspectral image show that PSP-BS can progressively output the BS status with very low computing time. Convergence of the BS results during transmission can be achieved quickly by using a rearranged pixel transmission sequence. This significant advantage allows BS to be implemented in real time as the HSI data is transmitted pixel by pixel.
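
    The sparse-regression core of such BS methods greedily picks the band (atom) most correlated with the current reconstruction residual, then deflates the residual. A stripped-down sketch of that greedy loop, using plain matching pursuit rather than full OMP (no recursive least-squares update) and made-up vectors:

```python
def matching_pursuit(bands, target, n_select):
    """Greedy selection: pick the band whose normalized correlation with the
    residual is largest, subtract its projection, repeat. A simplified,
    non-orthogonal stand-in for the OMP step used in PSP-BS."""
    residual = list(target)
    chosen = []
    for _ in range(n_select):
        best, best_score = None, 0.0
        for i, band in enumerate(bands):
            if i in chosen:
                continue
            norm = sum(v * v for v in band) ** 0.5
            score = abs(sum(r * v for r, v in zip(residual, band))) / (norm + 1e-12)
            if score > best_score:
                best, best_score = i, score
        if best is None:          # residual already fully explained
            break
        chosen.append(best)
        atom = bands[best]
        coeff = sum(r * v for r, v in zip(residual, atom)) / sum(v * v for v in atom)
        residual = [r - coeff * v for r, v in zip(residual, atom)]
    return chosen

print(matching_pursuit([[1, 0, 0], [0, 1, 0], [1, 1, 1]], [2, 1, 0], 2))
```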

  19. Sampling point selection for energy estimation in the quasicontinuum method

    NARCIS (Netherlands)

    Beex, L.A.A.; Peerlings, R.H.J.; Geers, M.G.D.

    2010-01-01

    The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of

  20. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box, and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.
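
    The NS core that the hybrid scheme builds on can be sketched compactly: keep a population of "live" points, repeatedly replace the lowest-likelihood point with a new prior draw above that likelihood floor, and accumulate the evidence from the shrinking prior volume. The sketch below uses plain rejection sampling for the constrained draw (where the paper uses HMC with SEM-estimated gradients) and ignores the final live-point contribution, so it is only a structural illustration:

```python
import math, random

def nested_sampling(log_likelihood, prior_sample, n_live=50, n_iter=300):
    """Minimal nested-sampling estimate of log-evidence log Z."""
    live = [prior_sample() for _ in range(n_live)]
    log_l = [log_likelihood(x) for x in live]
    log_z = -math.inf
    log_width = math.log(1.0 - math.exp(-1.0 / n_live))
    for _ in range(n_iter):
        worst = min(range(n_live), key=lambda k: log_l[k])
        v = log_width + log_l[worst]            # this shell's evidence mass
        log_z = max(log_z, v) + math.log1p(math.exp(-abs(log_z - v)))
        floor = log_l[worst]
        while True:                             # constrained draw: rejection
            x = prior_sample()                  # sampling above the floor
            if log_likelihood(x) > floor:
                break
        live[worst], log_l[worst] = x, log_likelihood(x)
        log_width -= 1.0 / n_live               # prior volume shrinks ~e^(-1/N)
    return log_z

# 1-D check: L(x) = exp(-x^2/2), uniform prior on [-5, 5],
# so Z = sqrt(2*pi)/10 and log Z is about -1.38
random.seed(3)
print(nested_sampling(lambda x: -0.5 * x * x, lambda: random.uniform(-5, 5)))
```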

  2. Autonomous site selection and instrument positioning for sample acquisition

    Science.gov (United States)

    Shaw, A.; Barnes, D.; Pugh, S.

    The European Space Agency Aurora Exploration Program aims to establish a European long-term programme for the exploration of Space, culminating in a human mission to space in the 2030 timeframe. Two flagship missions, namely Mars Sample Return and ExoMars, have been proposed as recognised steps along the way. The ExoMars Rover is the first of these flagship missions and includes a rover carrying the Pasteur Payload, a mobile exobiology instrumentation package, and the Beagle 2 arm. The primary objective is the search for evidence of past or present life on Mars, but the payload will also study the evolution of the planet and the atmosphere, look for evidence of seismological activity and survey the environment in preparation for future missions. The operation of rovers in unknown environments is complicated, and requires large resources not only on the planet but also in ground-based operations. Currently, this can be very labour-intensive and costly if large teams of scientists and engineers are required to assess mission progress, plan mission scenarios, and construct a sequence of events or goals for uplink. Furthermore, the constraints in communication imposed by the time delay involved over such large distances, and the line-of-sight required, make autonomy paramount to mission success, affording the ability to operate in the event of communications outages and be opportunistic with respect to scientific discovery. As part of this drive to reduce mission costs and increase autonomy the Space Robotics group at the University of Wales, Aberystwyth is researching methods of autonomous site selection and instrument positioning, directly applicable to the ExoMars mission. The site selection technique used builds on the geometric reasoning algorithms used previously for localisation and navigation [Shaw 03]. It is proposed that a digital elevation model (DEM) of the local surface, generated during traverse and without interaction from ground-based operators, can be

  3. Response rates and selection problems, with emphasis on mental health variables and DNA sampling, in large population-based, cross-sectional and longitudinal studies of adolescents in Norway

    Directory of Open Access Journals (Sweden)

    Lien Lars

    2010-10-01

    Abstract Background Selection bias is a threat to the internal validity of epidemiological studies. In light of a growing number of studies which aim to provide DNA, as well as a considerable number of invitees who declined to participate, we discuss response rates, predictors of lost to follow-up and failure to provide DNA, and the presence of possible selection bias, based on five samples of adolescents. Methods We included nearly 7,000 adolescents from two longitudinal studies of 18/19 year olds with two corresponding cross-sectional baseline studies at age 15/16 (10th graders), and one cross-sectional study of 13th graders (18/19 years old). DNA was sampled from the cheek mucosa of 18/19 year olds. Predictors of lost to follow-up and failure to provide DNA were studied by Poisson regression. Selection bias in the follow-up at age 18/19 was estimated through investigation of prevalence ratios (PRs) between selected exposures (physical activity, smoking) and outcome variables (general health, mental distress, externalizing problems) measured at baseline. Results Out of 5,750 who participated at age 15/16, we lost 42% at follow-up at age 18/19. The percentage of participants who gave their consent to DNA provision was as high as the percentage that consented to a linkage of data with other health registers and surveys, approximately 90%. Significant predictors of lost to follow-up and failure to provide DNA samples in the present genetic epidemiological study were: male gender; non-western ethnicity; postal survey compared with school-based; low educational plans; low education and income of father; low perceived family economy; unmarried parents; poor self-reported health; externalized symptoms and smoking, with some differences in subgroups of ethnicity and gender. The association measures (PRs) were quite similar among participants and all invitees, with some minor discrepancies in subgroups of non-western boys and girls. Conclusions Lost to

  5. Evaluation of Stress Loaded Steel Samples Using Selected Electromagnetic Methods

    International Nuclear Information System (INIS)

    Chady, T.

    2004-01-01

    In this paper the magnetic leakage flux and eddy current methods were used to evaluate changes in material properties caused by stress. Seven samples made of ferromagnetic material with different levels of applied stress were prepared. First, the leakage magnetic fields were measured by scanning the surface of the specimens with a GMR gradiometer. Next, the same samples were evaluated using an eddy current sensor. A comparison between the results obtained from both methods was carried out. Finally, selected parameters of the measured signal were calculated and utilized to evaluate the level of applied stress. A strong correspondence between the amount of applied stress and the maximum amplitude of the derivative was confirmed.

  6. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    Science.gov (United States)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight representing its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed the model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, a more efficient and elaborate sampling algorithm, DREAMzs, can be integrated into the local sampling step. In addition, to overcome the computational burden of the large number of repeated model executions required for marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
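
    Once each model's marginal likelihood is in hand (however estimated), the posterior model weights follow directly from Bayes' rule as prior weight times evidence, normalized. A short sketch with illustrative log-evidences, computed stably in log space:

```python
import math

def posterior_model_weights(prior_weights, log_evidences):
    """Posterior model weight proportional to prior weight * marginal
    likelihood; the log-sum-exp shift avoids underflow of exp(log Z)."""
    logs = [math.log(p) + le for p, le in zip(prior_weights, log_evidences)]
    m = max(logs)
    unnorm = [math.exp(v - m) for v in logs]
    s = sum(unnorm)
    return [u / s for u in unnorm]

# Three rival conceptual models with equal priors (illustrative numbers)
print(posterior_model_weights([1 / 3, 1 / 3, 1 / 3], [-120.0, -118.0, -125.0]))
```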

  7. Measurement of radioactivity in the environment - Soil - Part 2: Guidance for the selection of the sampling strategy, sampling and pre-treatment of samples

    International Nuclear Information System (INIS)

    2007-01-01

    This part of ISO 18589 specifies the general requirements, based on ISO 11074 and ISO/IEC 17025, for all steps in the planning (desk study and area reconnaissance) of the sampling and the preparation of samples for testing. It includes the selection of the sampling strategy, the outline of the sampling plan, the presentation of general sampling methods and equipment, as well as the methodology of the pre-treatment of samples adapted to the measurements of the activity of radionuclides in soil. This part of ISO 18589 is addressed to the people responsible for determining the radioactivity present in soil for the purpose of radiation protection. It is applicable to soil from gardens, farmland, urban or industrial sites, as well as soil not affected by human activities. This part of ISO 18589 is applicable to all laboratories regardless of the number of personnel or the range of the testing performed. When a laboratory does not undertake one or more of the activities covered by this part of ISO 18589, such as planning, sampling or testing, the corresponding requirements do not apply. Information is provided on scope, normative references, terms and definitions and symbols, principle, sampling strategy, sampling plan, sampling process, pre-treatment of samples and recorded information. Five annexes inform about selection of the sampling strategy according to the objectives and the radiological characterization of the site and sampling areas, diagram of the evolution of the sample characteristics from the sampling site to the laboratory, example of sampling plan for a site divided in three sampling areas, example of a sampling record for a single/composite sample and example for a sample record for a soil profile with soil description. A bibliography is provided

  8. Trace level and highly selective determination of urea in various real samples based upon voltammetric analysis of diacetylmonoxime-urea reaction product on the carbon nanotube/carbon paste electrode.

    Science.gov (United States)

    Alizadeh, Taher; Ganjali, Mohammad Reza; Rafiei, Faride

    2017-06-29

    In this study an innovative method was introduced for the selective and precise determination of urea in various real samples including urine, blood serum, soil and water. The method was based on square wave voltammetric determination of an electroactive product generated during the diacetylmonoxime reaction with urea. A carbon paste electrode modified with multi-walled carbon nanotubes (MWCNTs) was found to be an appropriate electrochemical transducer for recording the electrochemical signal. It was found that the chemical reaction conditions directly influenced the analytical signal. The calibration graph of the method was linear in the range of 1 × 10⁻⁷ to 1 × 10⁻² mol L⁻¹. The detection limit was calculated to be 52 nmol L⁻¹. The relative standard error of the method was also calculated to be 3.9% (n = 3). The developed procedure was applied for urea determination in various real samples including soil, urine, plasma and water samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Failure Probability Estimation Using Asymptotic Sampling and Its Dependence upon the Selected Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Martinásková Magdalena

    2017-12-01

    Full Text Available The article examines the use of Asymptotic Sampling (AS) for the estimation of failure probability. The AS algorithm requires samples of multidimensional Gaussian random vectors, which may be obtained by many alternative means that influence the performance of the AS method. Several reliability problems (test functions) have been selected in order to test AS with various sampling schemes: (i) Monte Carlo designs; (ii) LHS designs optimized using the Periodic Audze-Eglājs (PAE) criterion; (iii) designs prepared using Sobol' sequences. All results are compared with the exact failure probability value.
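
A minimal sketch of the AS idea (not the article's implementation): draw standard-normal vectors with inflated standard deviation 1/f so failures become observable, then extrapolate the scaled reliability index back to f = 1 via Bucher's scaling law β(f)/f = A + B/f². The limit-state function, support points and sample sizes are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

def asymptotic_sampling(g, dim, fs=(0.4, 0.5, 0.6), n=20000, seed=None):
    """Estimate P_f = P[g(X) < 0] for X ~ N(0, I) with Asymptotic Sampling:
    sample at inflated standard deviation 1/f, then extrapolate
    beta(f)/f = A + B/f**2 to f = 1 (one common form of the scaling law;
    implementations vary in the literature)."""
    rng = np.random.default_rng(seed)
    F, y = [], []
    for f in fs:
        x = rng.standard_normal((n, dim)) / f        # std = 1/f
        pf = (g(x) < 0.0).mean()                     # observable failure rate
        F.append([1.0, 1.0 / f**2])
        y.append(-norm.ppf(pf) / f)                  # beta(f) / f
    A, B = np.linalg.lstsq(np.array(F), np.array(y), rcond=None)[0]
    return norm.cdf(-(A + B))                        # extrapolated P_f at f = 1

# Linear limit state with reliability index 3: exact P_f = Phi(-3) ~ 1.35e-3
g = lambda x: 3.0 - x.sum(axis=1) / np.sqrt(x.shape[1])
pf = asymptotic_sampling(g, dim=4, seed=1)
```

Swapping the `standard_normal` call for an LHS or Sobol' design is exactly the "alternative means" whose influence the article studies.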

  10. Enhanced Sampling and Analysis, Selection of Technology for Testing

    Energy Technology Data Exchange (ETDEWEB)

    Svoboda, John; Meikrantz, David

    2010-02-01

    The focus of this study includes the investigation of sampling technologies used in industry and their potential application to nuclear fuel processing. The goal is to identify innovative sampling methods using state-of-the-art techniques that could evolve into the next-generation sampling and analysis system for metallic elements. This report details the progress made in the first half of FY 2010 and includes a further consideration of the research focus and goals for this year. Our sampling options and focus for the next-generation sampling method are presented along with the criteria used for choosing our path forward. We have decided to pursue the option of evaluating the feasibility of microcapillary-based chips to remotely collect, transfer, track and supply microliters of sample solutions to analytical equipment in support of aqueous processes for used nuclear fuel cycles. Microchip vendors have been screened and a choice made for the development of a suitable microchip design, followed by production of samples for evaluation by ANL, LANL, and INL on an independent basis.

  11. Application of the Sampling Selection Technique in Approaching Financial Audit

    Directory of Open Access Journals (Sweden)

    Victor Munteanu

    2018-03-01

    Full Text Available In his professional approach, the financial auditor has a wide range of working techniques at his disposal, including selection techniques. They are applied depending on the nature of the information available to the financial auditor, the manner in which it is presented (paper or electronic format) and, last but not least, the time available. Several techniques are applied, successively or in parallel, to increase the reliability of the expressed opinion and to provide the audit report with a solid basis of information. Sampling is used in the phase of control or clarification of the identified error. Its main purpose is to corroborate or measure the degree of risk detected following a pertinent analysis. Since the auditor has neither the time nor the means to rebuild the information thoroughly, the sampling technique can provide an effective response to the need for valorization.

  12. A new electrochemical sensor for highly sensitive and selective detection of nitrite in food samples based on sonochemical synthesized Calcium Ferrite (CaFe2O4) clusters modified screen printed carbon electrode.

    Science.gov (United States)

    Balasubramanian, Paramasivam; Settu, Ramki; Chen, Shen-Ming; Chen, Tse-Wei; Sharmila, Ganapathi

    2018-08-15

    Herein, we report a novel, disposable electrochemical sensor for the detection of nitrite ions in food samples based on a screen printed carbon electrode modified with sonochemically synthesized orthorhombic CaFe2O4 (CFO) clusters. The as-synthesized CFO clusters were characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), thermogravimetric analysis (TGA), X-ray photoelectron spectroscopy (XPS), electrochemical impedance spectroscopy (EIS), cyclic voltammetry (CV) and amperometry (i-t). Under optimal conditions, the CFO-modified electrode displayed a rapid current response to nitrite, a linear response range from 0.016 to 1921 µM and a low detection limit of 6.6 nM. The sensor also showed an excellent sensitivity of 3.712 μA μM⁻¹ cm⁻². Furthermore, good reproducibility, long-term stability and excellent selectivity were attained with the proposed sensor. In addition, the practical applicability of the sensor was investigated with meat samples, tap water and drinking water, and showed desirable recovery rates, representing its possibilities for practical application. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. MO-FG-CAMPUS-JeP2-01: 4D-MRI with 3D Radial Sampling and Self-Gating-Based K-Space Sorting: Image Quality Improvement by Slab-Selective Excitation

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Z; Pang, J; Tuli, R; Fraass, B; Fan, Z [Cedars Sinai Medical Center, Los Angeles, CA (United States); Yang, W [Cedars-Sinai Medical Center, Los Angeles, CA (United States); Bi, X [Siemens Healthcare, Los Angeles, CA (United States); Hakimian, B [Cedars Sinai Medical Center, Los Angeles CA (United States); Li, D [Cedars Sinai Medical Center, Los Angeles, California (United States)

    2016-06-15

    Purpose: A recent 4D MRI technique based on 3D radial sampling and self-gating-based k-space sorting has shown promising results in characterizing respiratory motion. However, due to continuous acquisition and potentially drastic k-space undersampling, the resultant images can suffer from low blood-to-tissue contrast and streaking artifacts. In this study, 3D radial sampling with slab-selective excitation (SS) was proposed in an attempt to enhance blood-to-tissue contrast by exploiting the in-flow effect and to suppress the excess signal from peripheral structures, particularly in the superior-inferior direction. The feasibility of improving image quality with this approach was investigated through a comparison with the previously developed non-selective excitation (NS) approach. Methods: The two excitation approaches, SS and NS, were compared in 5 cancer patients (1 lung, 1 liver, 2 pancreas and 1 esophagus) at 3 Tesla. Image artifact was assessed in all patients on a 4-point scale (0: poor; 3: excellent). Signal-to-noise ratio (SNR) of the blood vessel (aorta) at the center of the field-of-view and of its nearby tissue were measured in 3 of the 5 patients (1 liver, 2 pancreas), and the blood-to-tissue contrast-to-noise ratio (CNR) was then determined. Results: Compared with NS, the image quality of SS was visually improved, with overall higher signal in all patients (2.6±0.55 vs. 3.4±0.55). SS showed an approximately 2-fold increase of SNR in the blood (aorta: 16.39±1.95 vs. 32.19±7.93) and a slight increase in the surrounding tissue (liver/pancreas: 16.91±1.82 vs. 22.31±3.03). As a result, the blood-to-tissue CNR was dramatically higher with the SS method (1.20±1.20 vs. 9.87±6.67). Conclusion: The proposed 3D radial sampling with slab-selective excitation allows for reduced image artifact and improved blood SNR and blood-to-tissue CNR. The success of this technique could potentially benefit patients with cancerous tumors that have invaded the surrounding blood vessels where radiation

  14. MO-FG-CAMPUS-JeP2-01: 4D-MRI with 3D Radial Sampling and Self-Gating-Based K-Space Sorting: Image Quality Improvement by Slab-Selective Excitation

    International Nuclear Information System (INIS)

    Deng, Z; Pang, J; Tuli, R; Fraass, B; Fan, Z; Yang, W; Bi, X; Hakimian, B; Li, D

    2016-01-01

    Purpose: A recent 4D MRI technique based on 3D radial sampling and self-gating-based k-space sorting has shown promising results in characterizing respiratory motion. However, due to continuous acquisition and potentially drastic k-space undersampling, the resultant images can suffer from low blood-to-tissue contrast and streaking artifacts. In this study, 3D radial sampling with slab-selective excitation (SS) was proposed in an attempt to enhance blood-to-tissue contrast by exploiting the in-flow effect and to suppress the excess signal from peripheral structures, particularly in the superior-inferior direction. The feasibility of improving image quality with this approach was investigated through a comparison with the previously developed non-selective excitation (NS) approach. Methods: The two excitation approaches, SS and NS, were compared in 5 cancer patients (1 lung, 1 liver, 2 pancreas and 1 esophagus) at 3 Tesla. Image artifact was assessed in all patients on a 4-point scale (0: poor; 3: excellent). Signal-to-noise ratio (SNR) of the blood vessel (aorta) at the center of the field-of-view and of its nearby tissue were measured in 3 of the 5 patients (1 liver, 2 pancreas), and the blood-to-tissue contrast-to-noise ratio (CNR) was then determined. Results: Compared with NS, the image quality of SS was visually improved, with overall higher signal in all patients (2.6±0.55 vs. 3.4±0.55). SS showed an approximately 2-fold increase of SNR in the blood (aorta: 16.39±1.95 vs. 32.19±7.93) and a slight increase in the surrounding tissue (liver/pancreas: 16.91±1.82 vs. 22.31±3.03). As a result, the blood-to-tissue CNR was dramatically higher with the SS method (1.20±1.20 vs. 9.87±6.67). Conclusion: The proposed 3D radial sampling with slab-selective excitation allows for reduced image artifact and improved blood SNR and blood-to-tissue CNR. The success of this technique could potentially benefit patients with cancerous tumors that have invaded the surrounding blood vessels where radiation

  15. Electromembrane extraction as a rapid and selective miniaturized sample preparation technique for biological fluids

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Pedersen-Bjergaard, Stig; Seip, Knut Fredrik

    2015-01-01

    This special report discusses the sample preparation method electromembrane extraction, which was introduced in 2006 as a rapid and selective miniaturized extraction method. The extraction principle is based on isolation of charged analytes extracted from an aqueous sample, across a thin film... Technical aspects of electromembrane extraction, important extraction parameters, as well as a handful of examples of applications from different biological samples and bioanalytical areas are discussed in the paper.

  16. A novel heterogeneous training sample selection method on space-time adaptive processing

    Science.gov (United States)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The ground target detection performance of space-time adaptive processing (STAP) degrades when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. In order to solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance so as to reject the contaminated training samples. Thirdly, the cell under test (CUT) and the residual training samples are projected into the orthogonal subspace of the target in the CUT, and the mean-Hausdorff distances between the projected CUT and training samples are calculated. Fourthly, the distances are sorted by value, and the training samples with the largest values are preferentially selected to realize the dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
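
One plausible reading of the similarity-based rejection step (the second step above) can be sketched as follows; this is a simplified real-valued illustration with invented data, not the paper's complex STAP snapshots, and the "contaminated" sample is simulated as a shifted distribution:

```python
import numpy as np

def mean_hausdorff(a, b):
    """Mean (average) Hausdorff distance between two 1-D sample vectors,
    treating each vector as a set of scalar values."""
    d = np.abs(a[:, None] - b[None, :])
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def reject_contaminated(samples, keep):
    """Rank samples by total mean-Hausdorff distance to all others and
    keep the `keep` mutually most similar ones; an outlier that looks
    target-contaminated accumulates large distances and is dropped."""
    n = len(samples)
    total = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                total[i] += mean_hausdorff(samples[i], samples[j])
    return np.argsort(total)[:keep]

rng = np.random.default_rng(0)
clean = [rng.normal(0.0, 1.0, 32) for _ in range(9)]   # homogeneous clutter
outlier = rng.normal(6.0, 1.0, 32)                     # target-like contamination
idx = reject_contaminated(clean + [outlier], keep=9)   # outlier (index 9) dropped
```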

  17. Random selection of items. Selection of n1 samples among N items composing a stratum

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1987-02-01

    STR-224 provides generalized procedures to determine required sample sizes, for instance in the course of a Physical Inventory Verification at Bulk Handling Facilities. The present report describes procedures to generate random numbers and select groups of items to be verified in a given stratum through each of the measurement methods involved in the verification. (author). 3 refs
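
The random selection of n1 items from a stratum of N can be sketched with a standard pseudorandom generator; the item count, sample size, and seed below are arbitrary examples, not figures from the report:

```python
import random

def select_items(n_items, sample_size, seed=None):
    """Draw `sample_size` distinct item indices (1..n_items) uniformly at
    random, e.g. the items of a stratum to be verified with one
    measurement method."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(1, n_items + 1), sample_size))

# Hypothetical stratum of 200 items, 12 of which are to be verified:
chosen = select_items(n_items=200, sample_size=12, seed=42)
```

Repeating the call with a fresh seed for each measurement method yields the independent groups of items the report describes.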

  18. Silicon based ultrafast optical waveform sampling

    DEFF Research Database (Denmark)

    Ji, Hua; Galili, Michael; Pu, Minhao

    2010-01-01

    A 300 nm × 450 nm × 5 mm silicon nanowire is designed and fabricated for a four-wave-mixing-based non-linear optical gate. Based on this silicon nanowire, an ultra-fast optical sampling system is successfully demonstrated using a free-running fiber laser with a carbon nanotube-based mode-locker as the sampling source. A clear eye-diagram of a 320 Gbit/s data signal is obtained. The temporal resolution of the sampling system is estimated to be 360 fs.

  19. Sampling dynamics: an alternative to payoff-monotone selection dynamics

    DEFF Research Database (Denmark)

    Berkemer, Rainer

    ... payoff-monotone nor payoff-positive, which has interesting consequences. This can be demonstrated by application to the travelers dilemma, a deliberately constructed social dilemma. The game has just one symmetric Nash equilibrium, which is Pareto inefficient. Especially when the travelers have many ... of the standard game theory result. Both analytical tools and agent-based simulation are used to investigate the dynamic stability of sampling equilibria in a generalized travelers dilemma. Two parameters are of interest: the number of strategy options (m) available to each traveler and an experience parameter (k), which indicates the number of samples an agent would evaluate before fixing his decision. The special case (k=1) can be treated analytically. The stationary points of the dynamics must be sampling equilibria, and one can calculate that for m>3 there will be an interior solution in addition ...

  20. A redshift survey of IRAS galaxies. I. Sample selection

    International Nuclear Information System (INIS)

    Strauss, M.A.; Davis, M.; Yahil, A.; Huchra, J.P.

    1990-01-01

    A complete all-sky sample of objects, flux-limited at 60 microns, has been extracted from the IRAS database. The sample consists of 5014 objects, of which 2649 are galaxies and 13 are not yet identified. In order to study large-scale structure with this sample, it must be free of systematic biases. Corrections are applied for a major systematic effect in the flux densities listed in the IRAS Point Source Catalog: sources resolved by the IRAS beam have systematically underestimated flux densities. In addition, accurate flux densities are obtained for sources flagged as variable, or of moderate flux quality, at 60 microns. The IRAS detectors suffered radiation-induced responsivity enhancement (hysteresis) due to crossings of the satellite scans across the Galactic plane; this effect is measured and is shown to be negligible. 53 refs

  1. Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling

    Directory of Open Access Journals (Sweden)

    Saeed Mian Qaisar

    2009-01-01

    Full Text Available Recent advances in mobile systems and sensor networks demand more and more processing resources. In order to maintain system autonomy, energy saving has become one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal focus on improving embedded systems design and battery technology, but very few studies exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. This is done by completely rethinking the processing chain, adopting a non-conventional sampling scheme and adaptive rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. Indeed, the principle is to intelligently exploit the local characteristics of the signal, which are usually never considered, to filter only the relevant signal parts with filters of the relevant order. This idea leads to a drastic gain in computational efficiency and hence in processing power when compared to classical techniques.
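
The level-crossing idea underlying the LCSS can be sketched as below (an illustrative implementation, not the authors' system): a sample is recorded only when the signal crosses one of a fixed set of levels, so the sampling activity automatically follows the signal's local variations:

```python
import numpy as np

def level_crossing_sample(signal, t, levels):
    """Record a (time, value) pair whenever the signal crosses one of the
    predefined levels; the crossing instant is located by linear
    interpolation between the two surrounding uniform samples."""
    out_t, out_v = [], []
    for i in range(len(signal) - 1):
        lo, hi = sorted((signal[i], signal[i + 1]))
        for L in levels:
            if lo <= L < hi or lo < L <= hi:
                frac = (L - signal[i]) / (signal[i + 1] - signal[i])
                out_t.append(t[i] + frac * (t[i + 1] - t[i]))
                out_v.append(L)
    return np.array(out_t), np.array(out_v)

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2.0 * np.pi * 3.0 * t)              # 3 Hz test tone
levels = np.linspace(-0.9, 0.9, 7)             # 7 quantization levels
ts, vs = level_crossing_sample(x, t, levels)   # samples only at crossings
```

A slowly varying stretch of signal produces few crossings and therefore few samples, which is exactly the activity-dependent behavior the paper exploits for power savings.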

  2. The Impact of Selection, Gene Conversion, and Biased Sampling on the Assessment of Microbial Demography.

    Science.gov (United States)

    Lapierre, Marguerite; Blin, Camille; Lambert, Amaury; Achaz, Guillaume; Rocha, Eduardo P C

    2016-07-01

    Recent studies have linked demographic changes and epidemiological patterns in bacterial populations using coalescent-based approaches. We identified 26 studies using skyline plots and found that 21 inferred overall population expansion. This surprising result led us to analyze the impact of natural selection, recombination (gene conversion), and sampling biases on demographic inference using skyline plots and site frequency spectra (SFS). Forward simulations based on biologically relevant parameters from Escherichia coli populations showed that theoretical arguments on the detrimental impact of recombination, and especially natural selection, on the reconstructed genealogies cannot be ignored in practice. In fact, both processes systematically lead to spurious interpretations of population expansion in skyline plots (and in SFS for selection). Weak purifying selection, and especially positive selection, had important effects on skyline plots, showing patterns akin to those of population expansions. State-of-the-art techniques to remove recombination further amplified these biases. We simulated three common sampling biases in microbiological research: uniform, clustered, and mixed sampling. Alone, or together with recombination and selection, they further mislead demographic inferences, producing almost any possible skyline shape or SFS. Interestingly, sampling sub-populations also affected skyline plots and SFS, because the coalescent rates of populations and their sub-populations had different distributions. This study suggests that extreme caution is needed to infer demographic changes solely based on reconstructed genealogies. We suggest that the development of novel sampling strategies and the joint analysis of diverse population genetic methods are strictly necessary to estimate demographic changes in populations where selection, recombination, and biased sampling are present. © The Author 2016. Published by Oxford University Press on behalf of the Society for

  3. Evaluation of pump pulsation in respirable size-selective sampling: part II. Changes in sampling efficiency.

    Science.gov (United States)

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin

    2014-01-01

    This second, and concluding, part of this study evaluated changes in the sampling efficiency of respirable size-selective samplers due to air pulsations generated by the personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps: the Apex IS, HFS513, GilAir5, Elite5, and Basic5, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape: the HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone by the intensity obtained from the filter used with a sharp-edged reference sampler. Sampling efficiency curves were then generated using a sigmoid function with three parameters, and each curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the
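
The three-parameter sigmoid fit used to build sampling-efficiency curves can be sketched as follows; the functional form, the reference-curve parameters, and the noise level are assumptions for illustration, not the study's actual data:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, a, d50, s):
    """Assumed three-parameter sigmoid versus aerodynamic diameter d:
    plateau a, 50% cut-point d50, steepness s."""
    return a / (1.0 + (d / d50) ** s)

d = np.arange(1.0, 10.0)                   # 1-9 um MMAD, as in the study design
ref = sigmoid(d, 1.0, 4.0, 3.0)            # hypothetical reference-cyclone curve
rng = np.random.default_rng(0)
meas = ref + rng.normal(0.0, 0.01, d.size) # noisy efficiencies of a test pump
popt, _ = curve_fit(sigmoid, d, meas, p0=(1.0, 4.0, 2.0))
bias = sigmoid(d, *popt) / ref - 1.0       # bias-map entry: relative deviation
```

Tabulating `bias` over particle size for each pump is, in spirit, how a bias map flags deviations beyond the ±10% band mentioned above.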

  4. Adult health study reference papers. Selection of the sample. Characteristics of the sample

    Energy Technology Data Exchange (ETDEWEB)

    Beebe, G W; Fujisawa, Hideo; Yamasaki, Mitsuru

    1960-12-14

    The characteristics and selection of the clinical sample have been described in some detail to provide information on the comparability of the exposure groups with respect to factors excluded from the matching criteria and to provide basic descriptive information potentially relevant to individual studies that may be done within the framework of the Adult Health Study. The characteristics under review here are age, sex, many different aspects of residence, marital status, occupation and industry, details of location and shielding ATB, acute radiation signs and symptoms, and prior ABCC medical or pathology examinations. 5 references, 57 tables.

  5. Approaches to sampling and case selection in qualitative research: examples in the geography of health.

    Science.gov (United States)

    Curtis, S; Gesler, W; Smith, G; Washburn, S

    2000-04-01

    This paper focuses on the question of sampling (or selection of cases) in qualitative research. Although the literature includes some very useful discussions of qualitative sampling strategies, the question of sampling often seems to receive less attention in methodological discussion than questions of how data are collected or analysed. Decisions about sampling are likely to be important in many qualitative studies (although it may not be an issue in some research). There are varying accounts of the principles applicable to sampling or case selection. Those who espouse 'theoretical sampling', based on a 'grounded theory' approach, are in some ways opposed to those who promote forms of 'purposive sampling' suitable for research informed by an existing body of social theory. Diversity also results from the many different methods for drawing purposive samples which are applicable to qualitative research. We explore the value of a framework suggested by Miles and Huberman [Miles, M., Huberman, A., 1994. Qualitative Data Analysis. Sage, London], to evaluate the sampling strategies employed in three examples of research by the authors. Our examples comprise three studies which respectively involve selection of: 'healing places'; rural places which incorporated national anti-malarial policies; and young male interviewees, identified as either chronically ill or disabled. The examples are used to show how in these three studies the (sometimes conflicting) requirements of the different criteria were resolved, as well as the potential and constraints placed on the research by the selection decisions which were made. We also consider how far the criteria Miles and Huberman suggest seem helpful for planning 'sample' selection in qualitative research.

  6. Selection Component Analysis of Natural Polymorphisms using Population Samples Including Mother-Offspring Combinations, II

    DEFF Research Database (Denmark)

    Jarmer, Hanne Østergaard; Christiansen, Freddy Bugge

    1981-01-01

    Population samples including mother-offspring combinations provide information on the selection components zygotic selection, sexual selection, gametic selection and fecundity selection, on the mating pattern, and on the deviation from linkage equilibrium among the loci studied. The theory

  7. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  8. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klauw, B.; Koning, R.H.

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  9. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection of several nonlinear subsurface flow problems. © 2013 Elsevier Inc.
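
A minimal nested-sampling evidence estimator in the spirit of the NS core is sketched below. The constrained-sampling step here is plain rejection from the prior, standing in for the HMC/SEM machinery of the paper, and the toy likelihood and prior are purely illustrative:

```python
import numpy as np

def nested_sampling(loglike, prior_draw, n_live=100, n_iter=600, seed=None):
    """Minimal Nested Sampling evidence estimator (Skilling's scheme).
    The constrained draw is simple rejection from the prior; the paper
    replaces it with Hybrid Monte Carlo plus SEM gradient estimates."""
    rng = np.random.default_rng(seed)
    live = [prior_draw(rng) for _ in range(n_live)]
    ll = np.array([loglike(x) for x in live])
    logZ, logX = -np.inf, 0.0
    for _ in range(n_iter):
        i = int(np.argmin(ll))                    # worst live point
        logX_new = logX - 1.0 / n_live            # deterministic shrinkage
        logw = np.log(np.exp(logX) - np.exp(logX_new))
        logZ = np.logaddexp(logZ, logw + ll[i])   # accumulate evidence
        while True:                               # constrained prior draw
            x = prior_draw(rng)
            if loglike(x) > ll[i]:
                break
        live[i], ll[i] = x, loglike(x)
        logX = logX_new
    # add the mass still covered by the remaining live points
    return np.logaddexp(logZ, logX + np.log(np.mean(np.exp(ll))))

# Toy calibration problem: uniform prior on [-5, 5], Gaussian log-likelihood.
# Exact evidence Z = (1/10) * integral of N(x; 0, 1) ~ 0.1, i.e. logZ ~ -2.30.
loglike = lambda x: -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)
prior_draw = lambda rng: rng.uniform(-5.0, 5.0)
logZ = nested_sampling(loglike, prior_draw, seed=3)
```

Rejection sampling becomes hopeless as the likelihood constraint tightens, which is precisely why the paper substitutes HMC for this step.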

  10. Analysis of Selected Legacy 85Kr Samples

    Energy Technology Data Exchange (ETDEWEB)

    Jubin, Robert Thomas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bruffey, Stephanie H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-02

    Legacy samples composed of 85Kr encapsulated in solid zeolite 5A material and five small metal tubes containing a mixture of the zeolite combined with a glass matrix resulting from hot isostatic pressing have been preserved. The samples resulted from krypton R&D encapsulation efforts in the late 1970s performed at the Idaho Chemical Processing Plant. These samples were shipped to Oak Ridge National Laboratory (ORNL) in mid-FY 2014. Upon receipt, the outer shipping package was opened, and the inner package was removed and placed in a radiological hood. The individual capsules were double bagged as they were removed from the inner shipping pig and placed into individual glass sample bottles for further analysis. The five capsules were then X-ray imaged. Capsules 1 and 4 appear intact and contain an amorphous mass. Capsule 2 clearly shows the saw marks on the capsule and a quantity of loose pellet or bead-like material remaining in the capsule. Capsule 3 shows similar bead-like material within the intact capsule. Capsule 5 had been opened at an undetermined time in the past. The end of this capsule appears to have been cut off, and there are additional saw marks on the side of the capsule. X-ray tomography allowed the capsules to be viewed along the three axes. Of most interest was determining whether there was any residual material in the closed end of Capsule 5. The images confirmed the presence of residual material within this capsule; the material appears to be compacted but still retains some of the bead-like morphology. Based on the nondestructive analysis (NDA) results, a path forward was proposed to advance this effort toward the original goals of understanding the effects of extended storage on the waste form and package. Based on the initial NDA, and given that there are at least two breached samples, it was proposed that exploratory tests be conducted with the breached specimens before opening the three intact

  11. Does self-selection affect samples' representativeness in online surveys? An investigation in online video game research.

    Science.gov (United States)

    Khazaal, Yasser; van Singer, Mathias; Chatton, Anne; Achab, Sophia; Zullino, Daniele; Rothen, Stephane; Khan, Riaz; Billieux, Joel; Thorens, Gabriel

    2014-07-07

    The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (e.g., sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Our objective was to explore the representativeness of a self-selected sample of online gamers using online players' virtual characters (avatars). All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatar characteristics were defined using various game scores reported on WoW's official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Our results suggest that more proficient players, or players more involved in the game, may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of samples in online surveys is warranted.
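
The comparison of self-selected versus randomly drawn avatar scores can be illustrated with a rank-based test on synthetic data; the score distributions below are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
# Invented achievement scores: volunteers (self-selected) shifted upward
# relative to a randomly drawn sample of avatars.
self_selected = rng.normal(620.0, 80.0, 300)
random_sample = rng.normal(560.0, 80.0, 300)

stat, p = mannwhitneyu(self_selected, random_sample, alternative="greater")
overrepresented = p < 0.05    # volunteers score significantly higher
```

A significant one-sided result of this kind is the statistical signature of the self-selection bias the authors report.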

  12. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    2000-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR, and requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision of similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference is that the dose conversion factors (DCFs) for converting μCi/g or μCi/L to Sv/L (sieverts per liter) have changed: there are now two DCFs, one based on ICRP-68 and one based on ICRP-71 (Brevick 2000)
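
The unit-liter-dose arithmetic (activity concentration times dose conversion factor, summed over nuclides) can be sketched as follows; the nuclides, concentrations, and DCF values are placeholders for illustration, not the ICRP-68/71 figures used in the report:

```python
def unit_liter_dose(conc_uci_per_l, dcf_sv_per_uci):
    """Per-nuclide unit liter dose: activity concentration (uCi/L)
    times a dose conversion factor (Sv/uCi)."""
    return {n: conc_uci_per_l[n] * dcf_sv_per_uci[n] for n in conc_uci_per_l}

# Placeholder inventory and DCFs (NOT the report's values):
conc = {"Cs-137": 1.2e3, "Sr-90": 4.0e2}    # uCi/L
dcf = {"Cs-137": 2.5e-4, "Sr-90": 1.1e-3}   # Sv/uCi
uld = unit_liter_dose(conc, dcf)
total_sv_per_l = sum(uld.values())          # Sv per liter of waste
```

Swapping in a second DCF table is all that is needed to produce the two parallel source terms the report describes.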

  13. Sample selection via angular distance in the space of the arguments of an artificial neural network

    Science.gov (United States)

    Fernández Jaramillo, J. M.; Mayerle, R.

    2018-05-01

    In the construction of an artificial neural network (ANN), a proper split of the available samples plays a major role in the training process. The selection of subsets for training, testing, and validation affects the generalization ability of the neural network, and the number of samples has an impact on the time required to design and train the ANN. This paper introduces an efficient and simple method for reducing the set of samples used for training a neural network. The method reduces the time required to calculate the network coefficients while preserving diversity and avoiding overtraining of the ANN due to the presence of similar samples. The proposed method is based on the calculation of the angle between two vectors, each one representing one input of the neural network. When the angle formed between two samples is smaller than a defined threshold, only one input is accepted for the training. The accepted inputs are scattered throughout the sample space. Tidal records are used to demonstrate the proposed method. The results of a cross-validation show that with few inputs the quality of the outputs is inaccurate and depends on the selection of the first sample, but as the number of inputs increases the accuracy improves and the differences among scenarios with different starting samples are greatly reduced. A comparison with the K-means clustering algorithm shows that, for this application, the proposed method produces a more accurate network with a smaller number of samples.
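
    The angular-distance rule lends itself to a short greedy sketch: accept a sample only if its angle to every previously accepted sample exceeds a threshold. The function below is an illustrative reconstruction of that idea (names and threshold value are assumptions), and, as the abstract notes, its output depends on which sample is taken first:

```python
import math

def angular_filter(samples, threshold_deg):
    """Greedily keep samples whose angle to every already-accepted
    sample exceeds threshold_deg; near-parallel inputs are dropped."""
    accepted = []
    for x in samples:
        keep = True
        for y in accepted:
            dot = sum(a * b for a, b in zip(x, y))
            norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
            cos = max(-1.0, min(1.0, dot / norm))   # clamp rounding error
            if math.degrees(math.acos(cos)) < threshold_deg:
                keep = False
                break
        if keep:
            accepted.append(x)
    return accepted

# (0.999, 0.01) is nearly parallel to (1.0, 0.0), so it is dropped.
pool = [(1.0, 0.0), (0.999, 0.01), (0.0, 1.0), (0.7, 0.7)]
kept = angular_filter(pool, threshold_deg=10.0)
```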

  14. 40 CFR 86.607-84 - Sample selection.

    Science.gov (United States)

    2010-07-01

    ... Auditing of New Light-Duty Vehicles, Light-Duty Trucks, and Heavy-Duty Vehicles § 86.607-84 Sample..., once a manufacturer ships any vehicle from the test sample, it relinquishes the prerogative to conduct...

  15. The lack of selection bias in a snowball sampled case-control study on drug abuse.

    Science.gov (United States)

    Lopes, C S; Rodrigues, L C; Sichieri, R

    1996-12-01

    Friend controls in matched case-control studies can be a potential source of bias, based on the assumption that friends are more likely to share exposure factors. This study evaluates the role of selection bias in a case-control study that used the snowball sampling method, based on friendship, for the selection of cases and controls. The cases selected for the study were drug abusers located in the community. Exposure was defined by the presence of at least one psychiatric diagnosis. Psychiatric and drug abuse/dependence diagnoses were made according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R) criteria. Cases and controls were matched on sex, age, and friendship. Selection bias was measured by comparing the proportion of exposed controls selected by exposed cases (p1) with the proportion of exposed controls selected by unexposed cases (p2). If p1 = p2, then selection bias should not occur. The observed distribution of the 185 matched pairs having at least one psychiatric disorder showed a p1 value of 0.52 and a p2 value of 0.51, indicating no selection bias in this study. Our findings support the idea that the use of friend controls can produce a valid basis for a case-control study.
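
    The p1/p2 comparison is easy to compute from the matched pairs; the pair counts below are chosen to reproduce the abstract's 0.52 and 0.51, not taken from the study's data:

```python
def selection_bias_check(pairs):
    """pairs: (case_exposed, control_exposed) booleans, one per matched pair.
    Returns (p1, p2): the proportion of exposed controls among controls
    recruited by exposed cases vs. by unexposed cases."""
    recruited = {True: [], False: []}
    for case_exposed, control_exposed in pairs:
        recruited[case_exposed].append(control_exposed)
    p1 = sum(recruited[True]) / len(recruited[True])
    p2 = sum(recruited[False]) / len(recruited[False])
    return p1, p2

# Illustrative pair counts engineered to give p1 = 0.52, p2 = 0.51.
pairs = ([(True, True)] * 52 + [(True, False)] * 48
         + [(False, True)] * 51 + [(False, False)] * 49)
p1, p2 = selection_bias_check(pairs)
```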

  16. ARTIFICIAL NEURAL NETWORKS BASED GEARS MATERIAL SELECTION HYBRID INTELLIGENT SYSTEM

    Institute of Scientific and Technical Information of China (English)

    X.C. Li; W.X. Zhu; G. Chen; D.S. Mei; J. Zhang; K.M. Chen

    2003-01-01

    An artificial neural network (ANN) based hybrid intelligent system for gear material selection is established by analyzing the individual advantages and weaknesses of expert systems (ES) and ANNs and their applications in materials selection. The system mainly consists of two parts: an ES and an ANN. By being trained with many data samples, the back-propagation (BP) ANN acquires the knowledge of gear material selection and is able to make inferences from user input. The system thus realizes the complementarity of ANNs and ES. Using this system, engineers without materials selection experience can conveniently carry out gear material selection.

  17. Passive sampling of selected endocrine disrupting compounds using polar organic chemical integrative samplers

    International Nuclear Information System (INIS)

    Arditsoglou, Anastasia; Voutsa, Dimitra

    2008-01-01

    Two types of polar organic chemical integrative samplers (pharmaceutical POCIS and pesticide POCIS) were examined for their sampling efficiency for selected endocrine disrupting compounds (EDCs). Laboratory-based calibration of the POCISs was conducted by exposing them to high and low concentrations of 14 EDCs (4-alkyl-phenols, their ethoxylate oligomers, bisphenol A, selected estrogens, and synthetic steroids) for different time periods. The kinetic studies showed an integrative uptake for up to 28 days. Sampling rates for the individual compounds were obtained. The use of POCISs could provide an integrative approach to assessing the quality status of aquatic systems, especially where water concentrations of EDCs vary widely. The sampling efficiency of POCISs under various field conditions was assessed after their deployment in different aquatic environments. - Calibration and field performance of polar organic integrative samplers for monitoring EDCs in aquatic environments

  18. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    1999-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999) and the Final Safety Analysis Report (FSAR) (FDH 1999) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in developing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks.

  19. 40 CFR 91.506 - Engine sample selection.

    Science.gov (United States)

    2010-07-01

    ... paragraph (b)(2) of this section. It defines one-tail, 95 percent confidence intervals. σ=actual test sample... individual engine x=mean of emission test results of the actual sample FEL=Family Emission Limit n=The actual... carry-over engine families: After one engine is tested, the manufacturer will combine the test with the...

  20. 40 CFR 761.353 - Second level of sample selection.

    Science.gov (United States)

    2010-07-01

    .... At the chemical extraction and analysis laboratory, pour the 19-liter subsample onto a plastic sheet..., AND USE PROHIBITIONS Sampling Non-Liquid, Non-Metal PCB Bulk Product Waste for Purposes of Characterization for PCB Disposal in Accordance With § 761.62, and Sampling PCB Remediation Waste Destined for Off...

  1. Polystyrene Based Silver Selective Electrodes

    Directory of Open Access Journals (Sweden)

    Shiva Agarwal

    2002-06-01

    Silver(I) selective sensors have been fabricated from polystyrene matrix membranes containing the macrocycle Me6(14)diene·2HClO4 as ionophore. The best performance was exhibited by the membrane having a macrocycle:polystyrene composition ratio of 15:1. This membrane worked well over a wide concentration range, 5.0×10-6 to 1.0×10-1 M of Ag+, with a near-Nernstian slope of 53.0 ± 1.0 mV per decade of Ag+ activity. The response time of the sensor is <15 s, and the membrane can be used over a period of four months with good reproducibility. The proposed electrode works well in a wide pH range (2.5-9.0) and demonstrates good discriminating power over a number of mono-, di-, and trivalent cations. The sensor has also been used as an indicator electrode in the potentiometric titration of silver(I) ions against NaCl solution. The sensor can also be used in non-aqueous media, with no significant change in the value of the slope or the working concentration range, for the estimation of Ag+ in solutions having up to 25% (v/v) non-aqueous fraction.

  2. Reproductive performance in a select sample of dairy herds.

    Science.gov (United States)

    Ferguson, James D; Skidmore, Andrew

    2013-02-01

    Sixteen herds were selected from a pool of 64 herds nominated by consultants for participation in a national survey to demonstrate excellence in reproductive performance. For inclusion in the survey, herds had to have comprehensive records in a farm computer database or participate in a Dairy Herd Improvement Association record system, and have superior reproductive performance as judged by the herd advisor. Herd managers were asked to fill out a questionnaire describing their reproductive management practices and to provide herd records for data analysis. Reproductive analysis was based on individual cow records for active and culled dairy cows that calved during the calendar year 2010. Breeding records by cow were used to calculate indices for insemination rate (IR), conception rate (CR), pregnancy rate (PR), and culling. Herds ranged in size from 262 to 6,126 lactating and dry cows, with a mean of 1,654 [standard deviation (SD) 1,494] cows. Mean days to first insemination (DFS) was 71.2 d (SD 4.7 d), and IR for first insemination was 86.9%. Mean days between inseminations was 33.4 d (SD 3.1 d), and 15.4% of insemination intervals were greater than 48 d (range: 7.2 to 21.5%). First-service conception rate was 44.4% (SD 4.8%) across all herds and ranged from 37.5 to 51.8%. Mean PR was 32.0% (SD 3.9%), with a range of 26.5 to 39.4%. Lactation cull rate was 32.2% (SD 12.4%), with a range from 13.6 to 58.1%. Compared with mean data and SD for herds in the Raleigh Dairy Herd Improvement Association system, mean indices ranked these herds in the 99th percentile for IR (using heat detection rate as the comparison), the 99th percentile for PR, the bottom 18.6th percentile for DFS, and around the 50th percentile for CR. This suggests that excellent herd reproductive performance was associated with reproductive management that resulted in high insemination rates combined with average CR. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  3. Alpha Matting with KL-Divergence Based Sparse Sampling.

    Science.gov (United States)

    Karacan, Levent; Erdem, Aykut; Erdem, Erkut

    2017-06-22

    In this paper, we present a new sampling-based alpha matting approach for the accurate estimation of the foreground and background layers of an image. Previous sampling-based methods typically rely on certain heuristics in collecting representative samples from known regions, and thus their performance deteriorates if the underlying assumptions are not satisfied. To alleviate this, we take an entirely new approach and formulate sampling as a sparse subset selection problem, where we propose to pick a small set of candidate samples that best explains the unknown pixels. Moreover, we describe a new dissimilarity measure for comparing two samples, based on the KL-divergence between the distributions of features extracted in the vicinity of the samples. The proposed framework is general and could easily be extended to video matting by additionally taking temporal information into account in the sampling process. Evaluation on standard benchmark datasets for image and video matting demonstrates that our approach provides more accurate results than state-of-the-art methods.
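
    In its simplest discrete form, the dissimilarity the abstract describes is a (symmetrized) KL-divergence between normalized feature histograms. The sketch below assumes histogram inputs and a particular symmetrization; the paper's exact formulation may differ:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence D(p || q); eps guards empty histogram bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def symmetric_kl(p, q):
    """Symmetrized KL, usable as a dissimilarity between two samples'
    local feature distributions."""
    return 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))

# Two normalized 4-bin histograms of features around two image samples.
h1 = [0.4, 0.3, 0.2, 0.1]
h2 = [0.1, 0.2, 0.3, 0.4]
d_self = symmetric_kl(h1, h1)    # identical distributions: 0.0
d_cross = symmetric_kl(h1, h2)   # differing distributions: > 0
```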

  4. Trends in major-ion constituents and properties for selected sampling sites in the Tongue and Powder River watersheds, Montana and Wyoming, based on data collected during water years 1980-2010

    Science.gov (United States)

    Sando, Steven K.; Vecchia, Aldo V.; Barnhart, Elliott P.; Sando, Thomas R.; Clark, Melanie L.; Lorenz, David L.

    2014-01-01

    The primary purpose of this report is to present information on flow-adjusted temporal trends in major-ion constituents and properties for 16 sampling sites in the Tongue and Powder River watersheds, based on data collected during 1980–2010. In association with this primary purpose, the report presents background information on the major-ion characteristics (including specific conductance, calcium, magnesium, potassium, sodium adsorption ratio, sodium, alkalinity, chloride, fluoride, dissolved sulfate, and dissolved solids) of the sampling sites and of coal-bed methane (CBM) produced water (groundwater pumped from coal seams) in the site watersheds, the trend-analysis methods, streamflow conditions, and factors that affect the trend results. The Tongue and Powder River watersheds overlie the Powder River structural basin (PRB) in northeastern Wyoming and southeastern Montana. Limited extraction of CBM from the PRB began in the early 1990s and increased dramatically during the late 1990s and early 2000s. CBM-extraction activities produce discharges of water with high concentrations of dissolved solids (particularly sodium and bicarbonate ions) relative to most stream water in the Tongue and Powder River watersheds. The quality of CBM produced water is of concern because of the potential effects of sodium on agricultural soils and the potential effects of bicarbonate on aquatic biota. Two parametric trend-analysis methods were used in this study: the time-series model (TSM) and ordinary least squares (OLS) regression on time, streamflow, and season. The TSM was used to analyze trends for 11 of the 16 study sites. For five sites, the data requirements of the TSM were not met and OLS was used to analyze trends. Two primary 10-year trend-analysis periods were selected. Trend-analysis period 1 (water years 1986–95; hereinafter referred to as period 1) was selected to represent variability in major-ion concentrations in the Tongue and Powder River watersheds.

  5. Sample selection and taste correlation in discrete choice transport modelling

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2008-01-01

    explain counterintuitive results in value of travel time estimation. However, the results also point at the difficulty of finding suitable instruments for the selection mechanism. Taste heterogeneity is another important aspect of discrete choice modelling. Mixed logit models are designed to capture...... the question for a broader class of models. It is shown that the original result may be somewhat generalised. Another question investigated is whether mode choice operates as a self-selection mechanism in the estimation of the value of travel time. The results show that self-selection can at least partly...... of taste correlation in willingness-to-pay estimation are presented. The first contribution addresses how to incorporate taste correlation in the estimation of the value of travel time for public transport. Given a limited dataset the approach taken is to use theory on the value of travel time as guidance...

  6. Experimental breakdown of selected anodized aluminum samples in dilute plasmas

    Science.gov (United States)

    Grier, Norman T.; Domitz, Stanley

    1992-01-01

    Anodized aluminum samples representative of Space Station Freedom structural material were tested for electrical breakdown under space plasma conditions. In space, a potential arises across the insulating anodized coating when the spacecraft structure is driven to a negative bias relative to the external plasma potential by plasma-surface interaction phenomena. For the anodized materials used in the tests, it was found that the breakdown voltage varied from 100 to 2000 volts depending on the sample. The current in the arcs depended on the sample, the capacitor, and the voltage; arc currents varied from 60 to 1000 amperes. The plasma number density varied from 3×10^6 down to 10^3 ions per cubic centimeter, and the time between arcs increased as the number density was lowered. Corona testing of anodized samples revealed that samples with higher corona inception voltages also had higher arcing inception voltages. From this it is concluded that corona testing may provide a method of screening the samples.

  7. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using

  8. Sampling Efficiency and Performance of Selected Thoracic Aerosol Samplers.

    Science.gov (United States)

    Görner, Peter; Simon, Xavier; Boivin, Alexis; Bau, Sébastien

    2017-08-01

    Measurement of worker exposure to a thoracic health-related aerosol fraction is necessary in a number of occupational situations. This is the case of workplaces with atmospheres polluted by fibrous particles, such as cotton dust or asbestos, and by particles inducing irritation or bronchoconstriction such as acid mists or flour dust. Three personal and two static thoracic aerosol samplers were tested under laboratory conditions. Sampling efficiency with respect to particle aerodynamic diameter was measured in a horizontal low wind tunnel and in a vertical calm air chamber. Sampling performance was evaluated against conventional thoracic penetration. Three of the tested samplers performed well, when sampling the thoracic aerosol at nominal flow rate and two others performed well at optimized flow rate. The limit of flow rate optimization was found when using cyclone samplers. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  9. Beta-thalassemia- institution based analysis of ethnic and geographic distribution, effect of consanguinity and safety of chorionic villus sampling as a diagnostic, tool for pre-natal diagnosis in selected patients

    International Nuclear Information System (INIS)

    Abdullah, K.N.; Liaqat, J.; Azim, W.

    2011-01-01

    To study the ethnic and geographic distribution of beta-thalassemia amongst the patients included, and to study the effect of consanguinity in promoting this disease; also, to establish the safety of chorionic villus sampling (CVS) when used as a pre-natal diagnostic tool to aid the early diagnosis of beta-thalassemia in selected patients. Study Design: Descriptive study. Place and Duration of Study: PNS Shifa Karachi, from Jan 2008 to Dec 2008. Patients and Methods: A total of 223 women, out of 240 referred from all over Sindh to PNS Shifa Hospital Karachi for susceptible gene mutations, participated in the study. The standard procedure used in this study was trans-abdominal aspiration of chorionic villi through a suction needle. The samples were then sent for further analysis to the Pathology Department at PNS Shifa Hospital Karachi. Results: In our study population, beta-thalassemia was most prevalent in Sindhi families, 107 (48%), followed by Punjabi, 46 (21%), Pathan, 27 (12%), and Balochi, 43 (19%). Of the 223 women, 95 carried the thalassemia trait, while 85 had fetuses with thalassemia major. Fifty-five percent of thalassemia trait and 56% of thalassemia major fetuses had parents who were first cousins. The rate of pregnancy loss after performing CVS was 2.0%, with no complications reported. Conclusion: The highest percentage of thalassemia is found among first cousins, and families of Sindhi origin are most affected. CVS is a safe and effective tool for prenatal diagnosis and subsequent counselling in selected couples. (author)

  10. Selective solid-phase extraction based on molecularly imprinted technology for the simultaneous determination of 20 triazole pesticides in cucumber samples using high-performance liquid chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Zhao, Fengnian; She, Yongxin; Zhang, Chao; Cao, Xiaolin; Wang, Shanshan; Zheng, Lufei; Jin, Maojun; Shao, Hua; Jin, Fen; Wang, Jing

    2017-10-01

    A selective analytical method for the simultaneous determination of 20 triazole fungicides and plant growth regulators in cucumber samples was developed using solid-phase extraction with specific molecularly imprinted polymers (MIPs) as adsorbents. The MIPs were successfully prepared by precipitation polymerization using triadimefon as the template molecule, methacrylic acid as the functional monomer, trimethylolpropane trimethacrylate as the crosslinker, and acetonitrile as the porogen. The performance and recognition mechanism of both the MIPs and the non-imprinted polymers were evaluated using adsorption isotherms and adsorption kinetics. Liquid chromatography-tandem quadrupole mass spectrometry was used to identify and quantify the target analytes. The solid-phase extraction using the MIPs was rapid, convenient, and efficient for extraction and enrichment of the 20 triazole pesticides from cucumber samples. The recoveries obtained at three concentration levels (1, 2, and 10 μg L-1) ranged from 82.3% to 117.6%, with relative standard deviations of less than 11.8% (n=5) for all analytes. The limits of detection for the 20 triazole pesticides were all less than 0.4 μg L-1, sufficient to meet international standards. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. A highly selective and sensitive ultrasonic assisted dispersive liquid phase microextraction based on deep eutectic solvent for determination of cadmium in food and water samples prior to electrothermal atomic absorption spectrometry.

    Science.gov (United States)

    Zounr, Rizwan Ali; Tuzen, Mustafa; Deligonul, Nihal; Khuhawar, Muhammad Yar

    2018-07-01

    A simple, fast, green, sensitive, and selective ultrasonic-assisted deep eutectic solvent liquid-phase microextraction technique was used for the preconcentration and extraction of cadmium (Cd) in water and food samples prior to electrothermal atomic absorption spectrometry (ETAAS). In this technique, a synthesized reagent, (Z)-N-(3,5-diphenyl-1H-pyrrol-2-yl)-3,5-diphenyl-2H-pyrrol-2-imine (Azo), was used as a complexing agent for Cd. The main factors affecting the preconcentration and extraction of Cd, such as pH, type and composition of the deep eutectic solvent (DES), volume of DES, volume of complexing agent, volume of tetrahydrofuran (THF), and ultrasonication time, were examined in detail. At optimum conditions, the pH and the molar ratio of the DES were found to be 6.0 and 1:4 (ChCl:Ph), respectively. The limit of detection (LOD), limit of quantification (LOQ), relative standard deviation (RSD), and preconcentration factor (PF) were 0.023 ng L-1, 0.161 ng L-1, 3.1%, and 100, respectively. The developed technique was validated by extraction of Cd from certified reference materials (CRMs), and the observed results compared well with the certified values. The developed procedure was applied to various food, beverage, and water samples. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Plasma metanephrine for assessing the selectivity of adrenal venous sampling

    NARCIS (Netherlands)

    Dekkers, T.; Deinum, J.; Schultze Kool, L.J.; Blondin, D.; Vonend, O.; Hermus, A.R.M.M.; Peitzsch, M.; Rump, L.C.; Antoch, G.; Sweep, F.C.; Bornstein, S.R.; Lenders, J.W.M.; Willenberg, H.S.; Eisenhofer, G.

    2013-01-01

    Adrenal vein sampling is used to establish the origins of excess production of adrenal hormones in primary aldosteronism. Correct catheter positioning is confirmed using adrenal vein measurements of cortisol, but this parameter is not always reliable. Plasma metanephrine represents an alternative

  13. Systematic sampling of discrete and continuous populations: sample selection and the choice of estimator

    Science.gov (United States)

    Harry T. Valentine; David L. R. Affleck; Timothy G. Gregoire

    2009-01-01

    Systematic sampling is easy, efficient, and widely used, though it is not generally recognized that a systematic sample may be drawn from the population of interest with or without restrictions on randomization. The restrictions or the lack of them determine which estimators are unbiased, when using the sampling design as the basis for inference. We describe the...

  14. Theory of sampling and its application in tissue based diagnosis

    Directory of Open Access Journals (Sweden)

    Kayser Gian

    2009-02-01

    Abstract Background: A general theory of sampling and its application in tissue-based diagnosis is presented. Sampling is defined as the extraction of information from certain limited spaces and its transformation into a statement or measure that is valid for the entire (reference) space. The procedure should be reproducible in time and space, i.e., give the same results when applied under similar circumstances. Sampling includes two different aspects: the procedure of sample selection and the efficiency of its performance. The practical performance of sample selection focuses on the search for the localization of specific compartments within the basic space, and the search for the presence of specific compartments. Methods: When a sampling procedure is applied in diagnostic processes, two different procedures can be distinguished: (I) the evaluation of the diagnostic significance of a certain object, which is the probability that the object can be grouped into a certain diagnosis, and (II) the probability of detecting these basic units. Sampling can be performed without or with external knowledge, such as the size of the searched objects, neighbourhood conditions, spatial distribution of objects, etc. If the sample size is much larger than the object size, the application of a translation-invariant transformation results in Krige's formula, which is widely used in the search for ores. Usually, sampling is performed in a series of area (space) selections of identical size. The size can be defined in relation to the reference space or according to interspatial relationships. The first method is called random sampling, the second stratified sampling. Results: Random sampling does not require knowledge about the reference space, and is used to estimate the number and size of objects. Estimated features include area (volume) fraction and numerical, boundary, and surface densities. Stratified sampling requires knowledge of the objects (and their features) and evaluates spatial features in relation to
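
    The random-sampling estimate of an area (volume) fraction mentioned in the Results can be illustrated with a simple point count on a binary image; the grid, point count, and seed below are illustrative assumptions:

```python
import random

def area_fraction_random(image, n_points, seed=0):
    """Estimate the area fraction of a compartment (pixels equal to 1)
    by sampling random points, without knowledge of the reference space."""
    rng = random.Random(seed)
    rows, cols = len(image), len(image[0])
    hits = sum(image[rng.randrange(rows)][rng.randrange(cols)]
               for _ in range(n_points))
    return hits / n_points

# Toy "tissue section": the left half is the compartment (true fraction 0.5).
img = [[1, 1, 0, 0] for _ in range(4)]
est = area_fraction_random(img, n_points=10_000)
```

    With 10,000 points the estimate settles close to the true fraction of 0.5, which is the reproducibility property the abstract demands of a sampling procedure.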

  15. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    International Nuclear Information System (INIS)

    Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2015-01-01

    Sample size and computational uncertainty were varied in order to investigate the sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate an LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. The mean range, standard deviation range, and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 replicates of n samples was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor k_eff was accomplished by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)
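
    The convergence criterion (a target spread of the propagated uncertainty across replicates) can be illustrated with a cheap surrogate in place of the MCNPX transport run; all function names and numeric values here are assumptions for the sketch:

```python
import random
import statistics

def propagated_uncertainty(sample_size, seed):
    """One replicate of sampling-based propagation: draw random inputs, run a
    stand-in 'model' (a cheap surrogate, not MCNPX), and return the standard
    deviation of the outputs as the propagated uncertainty."""
    rng = random.Random(seed)
    outputs = [1.0 + 0.01 * rng.gauss(0.0, 1.0) for _ in range(sample_size)]
    return statistics.stdev(outputs)

def replicate_spread(sample_size, n_replicates=10):
    """Spread of the propagated uncertainty across replicates; the paper
    adopts a fixed spread (5 pcm) as its convergence criterion."""
    estimates = [propagated_uncertainty(sample_size, seed)
                 for seed in range(n_replicates)]
    return statistics.stdev(estimates)

spread_small = replicate_spread(25)    # few samples: replicates disagree more
spread_large = replicate_spread(400)   # more samples: estimates stabilize
```

    Increasing the sample size shrinks the replicate-to-replicate spread, which is why, at fixed cost, more samples can beat more particles per sample.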

  16. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.

    2013-12-01

    Calibration of subsurface flow models is an essential step for managing groundwater aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to models of different levels of complexity. In this work, we report the first successful application of nested sampling to the calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
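
    The nested sampling loop itself is compact enough to sketch: repeatedly retire the lowest-likelihood live point, credit it with the shrinking prior volume, and refill above the likelihood floor. This is a textbook one-dimensional sketch with a Uniform(0,1) prior and rejection-sampling refills, not the paper's subsurface-flow implementation:

```python
import math
import random

def nested_sampling(likelihood, n_live=200, n_iter=1200, seed=1):
    """Minimal nested sampling over a Uniform(0,1) prior.  Each iteration
    discards the worst live point, weights it by the expected prior-volume
    shrinkage, and replaces it with a point above the likelihood floor."""
    rng = random.Random(seed)
    live = [rng.random() for _ in range(n_live)]
    lvals = [likelihood(x) for x in live]
    evidence, x_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=lambda j: lvals[j])
        floor = lvals[worst]
        x_i = math.exp(-i / n_live)          # expected prior-volume shrinkage
        evidence += floor * (x_prev - x_i)   # weight of the discarded point
        x_prev = x_i
        while True:                          # refill by rejection above the floor
            cand = rng.random()
            if likelihood(cand) > floor:
                live[worst], lvals[worst] = cand, likelihood(cand)
                break
    evidence += x_prev * sum(lvals) / n_live  # remaining live-point contribution
    return evidence

# Evidence of a narrow Gaussian likelihood under the flat prior;
# the analytic value is 0.05 * sqrt(2*pi), about 0.125.
gauss = lambda x: math.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)
z = nested_sampling(gauss)
```

    Running the same loop for two competing likelihoods and comparing the returned evidences is the model-weighting step the abstract describes.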

  17. Forecasting Urban Air Quality via a Back-Propagation Neural Network and a Selection Sample Rule

    Directory of Open Access Journals (Sweden)

    Yonghong Liu

    2015-07-01

    In this paper, based on a sample selection rule and a back-propagation (BP) neural network, a new model for forecasting daily SO2, NO2, and PM10 concentrations at seven sites in Guangzhou was developed using data from January 2006 to April 2012. A meteorological similarity principle was applied in the development of the sample selection rule. The key meteorological factors influencing daily SO2, NO2, and PM10 concentrations, as well as the weight and threshold matrices, were determined. A basic model was then developed based on the improved BP neural network. To improve the basic model, a check of factor variation consistency was added to the rule, and seven sets of sensitivity experiments at one of the seven sites were conducted to obtain the selected model. A comparison with the basic model from May 2011 to April 2012 at one site showed that the selected model for PM10 displayed better forecasting performance, with Mean Absolute Percentage Error (MAPE) values decreasing by 4% and R2 values increasing from 0.53 to 0.68. Evaluations conducted at the six other sites revealed similar performance. On the whole, the analysis showed that the models presented here could provide local authorities with reliable and precise predictions and alarms about air quality if used at an operational scale.
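
    The two ingredients the abstract names, a meteorological-similarity sample selection rule and the MAPE skill score, can be sketched directly; the factor names, tolerances, and numbers below are illustrative assumptions, not values from the paper:

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, the skill score used for evaluation."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def select_similar_days(history, today, keys, tol):
    """Keep only past days whose key meteorological factors lie within a
    tolerance of today's values (a hypothetical form of the similarity rule)."""
    return [day for day in history
            if all(abs(day[k] - today[k]) <= tol[k] for k in keys)]

history = [
    {"wind": 2.1, "temp": 18.0, "pm10": 80.0},
    {"wind": 6.5, "temp": 30.0, "pm10": 35.0},
    {"wind": 2.4, "temp": 19.5, "pm10": 75.0},
]
today = {"wind": 2.2, "temp": 18.5}
train = select_similar_days(history, today, keys=("wind", "temp"),
                            tol={"wind": 1.0, "temp": 3.0})
```

    Only the meteorologically similar days survive to train the BP network for today's forecast, which is how the rule keeps dissimilar regimes out of the training set.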

  18. The Gender Wage Gap and Sample Selection via Risk Attitudes

    OpenAIRE

    Jung , Seeun

    2014-01-01

    This paper investigates a new way to estimate the gender wage gap by introducing individual risk attitudes, using representative Korean data. We estimate the wage gap with a correction for selection bias, which otherwise results in an overestimation of the wage gap. Female workers are more risk averse, and hence prefer working in the public sector, where wages are generally lower than in the private sector. The paper goes on to explain the reduced gender wage gap by developing an appr...

  19. Stratified sampling design based on data mining.

    Science.gov (United States)

    Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung

    2013-09-01

To explore classification rules based on data mining methodologies which are to be used in defining strata in stratified sampling of healthcare providers with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics, then constructed decision trees on cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011 in this study. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and population density of provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively. This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
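The evaluation step above, the share of variance in specialist productivity explained by a stratification, can be sketched as a between/within variance decomposition: one minus the pooled within-stratum variance divided by the overall variance. The productivity changes and stratum labels below are hypothetical, not the study's data.

```python
from statistics import pvariance

def variance_explained(values, strata):
    """Share of total variance accounted for by stratum membership:
    1 - (pooled within-stratum variance / overall variance)."""
    overall = pvariance(values)
    groups = {}
    for v, s in zip(values, strata):
        groups.setdefault(s, []).append(v)
    within = sum(len(g) * pvariance(g) for g in groups.values()) / len(values)
    return 1 - within / overall

# Hypothetical annual productivity changes for six providers.
change = [1.0, 1.2, 0.9, 3.0, 3.2, 2.8]
mined = ["A", "A", "A", "B", "B", "B"]         # strata from mined rules
conventional = ["A", "B", "A", "B", "A", "B"]  # a poorly matched split
print(variance_explained(change, mined))        # close to 1
print(variance_explained(change, conventional)) # much lower
```

A stratification whose strata are internally homogeneous (the "mined" labels) explains far more variance than one whose strata mix dissimilar providers, mirroring the 22% vs 2% comparison reported in the abstract.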

  20. Risk Attitudes, Sample Selection and Attrition in a Longitudinal Field Experiment

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Lau, Morten Igel

with respect to risk attitudes. Our design builds in explicit randomization on the incentives for participation. We show that there are significant sample selection effects on inferences about the extent of risk aversion, but that the effects of subsequent sample attrition are minimal. Ignoring sample selection leads to inferences that subjects in the population are more risk averse than they actually are. Correcting for sample selection and attrition affects utility curvature, but does not affect inferences about probability weighting. Properly accounting for sample selection and attrition effects leads to findings of temporal stability in overall risk aversion. However, that stability is around different levels of risk aversion than one might naively infer without the controls for sample selection and attrition we are able to implement. This evidence of "randomization bias" from sample selection…

  1. Learning spectrum's selection in OLAM network for analysis cement samples

    International Nuclear Information System (INIS)

    Huang Ning; Wang Peng; Tang Daiquan; Hu Renlan

    2010-01-01

An OLAM artificial neural network is used to analyze samples of cement raw material. Two kinds of spectra are used for network learning: pure-element spectra and mixed-element spectra. The output of the pure-element method can be used to construct a simulated spectrum, which can be compared with the original spectrum to judge spectral shifts; the mixed-element method can store more information and correct for the matrix effect, but multicollinearity among the spectra can adversely affect the results. (authors)

  2. Mineralogical, chemical, and petrographic analysis of selected rock samples

    International Nuclear Information System (INIS)

    Roy, D.M.

    1976-01-01

I. The majority of rocks examined from the NTS were found to be siltstones, varying from coarse to very fine siltstone, and containing > 60% quartz, usually much more. Samples of the UEIL series of cores, in contrast, had a large clay mineral fraction, as well as some carbonate present. A few were intermediate silty claystones or argillites. Microphotographs are included to illustrate the variations in texture observed, while most of the data obtained are summarized in tabular form. II. Seven Michigan Salina evaporite specimens were analyzed

  3. Selection of bone samples for 239Pu analyses in man

    International Nuclear Information System (INIS)

    Jee, W.S.S.; Wronski, T.J.; Smith, J.M.; Kimmel, D.B.; Miller, S.C.; Stover, B.J.

    1981-01-01

    Studies on the skeletal macrodistribution, microdistribution, and toxicity of 239 Pu and studies on bone turnover rates show that trabecular bone sites with high turnover rates have the greatest affinity for 239 Pu. In the adult beagle, these high-turnover, trabecular bone sites also show a higher occurrence of osteosarcomas. Correspondingly, high-turnover bone sites in the human would include the ilium (pelvis) and lumbar vertebrae (LVB), sites that are readily obtainable at autopsy. We recommend that the trabecular bone of the ilium and of the LVB be sampled to determine the skeletal radionuclide content of humans

  4. 40 CFR 205.160-2 - Test sample selection and preparation.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Test sample selection and preparation... sample selection and preparation. (a) Vehicles comprising the sample which are required to be tested... maintained in any manner unless such preparation, tests, modifications, adjustments or maintenance are part...

  5. Analysis of selected phytotoxins and mycotoxins in environmental samples.

    Science.gov (United States)

    Hoerger, Corinne C; Schenzel, Judith; Strobel, Bjarne W; Bucheli, Thomas D

    2009-11-01

    Natural toxins such as phytotoxins and mycotoxins have been studied in food and feed for decades, but little attention has yet been paid to their occurrence in the environment. Because of increasing awareness of the presence and potential relevance of micropollutants in the environment, phytotoxins and mycotoxins should be considered and investigated as part of the chemical cocktail in natural samples. Here, we compile chemical analytical methods to determine important phytotoxins (i.e. phenolic acids, quinones, benzoxazinones, terpenoids, glycoalkaloids, glucosinolates, isothiocyanates, phytosterols, flavonoids, coumestans, lignans, and chalcones) and mycotoxins (i.e. resorcyclic acid lactones, trichothecenes, fumonisins, and aflatoxins) in environmentally relevant matrices such as surface water, waste water-treatment plant influent and effluent, soil, sediment, manure, and sewage sludge. The main problems encountered in many of the reviewed methods were the frequent unavailability of suitable internal standards (especially isotope-labelled analogues) and often absent or fragmentary method optimization and validation.

  6. Diversification Strategies and Firm Performance: A Sample Selection Approach

    OpenAIRE

    Santarelli, Enrico; Tran, Hien Thu

    2013-01-01

This paper is based upon the assumption that firm profitability is determined by its degree of diversification, which in turn is strongly related to the antecedent decision to carry out diversification activities. This calls for an empirical approach that permits the joint analysis of the three interrelated and consecutive stages of the overall diversification process: diversification decision, degree of diversification, and outcome of diversification. We apply parametric and semiparametric ap...

  7. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    International Nuclear Information System (INIS)

    Cong Khanh Huynh; Trinh Vu Duc

    2009-01-01

As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a programme of scientific collaboration to develop one or more prototypes of European personal samplers for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use polyurethane plastic foam (PUF) whose porosity supports both sampling and size-selective separation of the particles. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.

  8. THE zCOSMOS-SINFONI PROJECT. I. SAMPLE SELECTION AND NATURAL-SEEING OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mancini, C.; Renzini, A. [INAF-OAPD, Osservatorio Astronomico di Padova, Vicolo Osservatorio 5, I-35122 Padova (Italy); Foerster Schreiber, N. M.; Hicks, E. K. S.; Genzel, R.; Tacconi, L.; Davies, R. [Max-Planck-Institut fuer Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Cresci, G. [Osservatorio Astrofisico di Arcetri (OAF), INAF-Firenze, Largo E. Fermi 5, I-50125 Firenze (Italy); Peng, Y.; Lilly, S.; Carollo, M.; Oesch, P. [Institute of Astronomy, Department of Physics, Eidgenossische Technische Hochschule, ETH Zurich CH-8093 (Switzerland); Vergani, D.; Pozzetti, L.; Zamorani, G. [INAF-Bologna, Via Ranzani, I-40127 Bologna (Italy); Daddi, E. [CEA-Saclay, DSM/DAPNIA/Service d' Astrophysique, F-91191 Gif-Sur Yvette Cedex (France); Maraston, C. [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, PO1 3HE Portsmouth (United Kingdom); McCracken, H. J. [IAP, 98bis bd Arago, F-75014 Paris (France); Bouche, N. [Department of Physics, University of California, Santa Barbara, CA 93106 (United States); Shapiro, K. [Aerospace Research Laboratories, Northrop Grumman Aerospace Systems, Redondo Beach, CA 90278 (United States); and others

    2011-12-10

The zCOSMOS-SINFONI project is aimed at studying the physical and kinematical properties of a sample of massive z ≈ 1.4-2.5 star-forming galaxies, through SINFONI near-infrared integral field spectroscopy (IFS), combined with the multiwavelength information from the zCOSMOS (COSMOS) survey. The project is based on one hour of natural-seeing observations per target, and adaptive optics (AO) follow-up for a major part of the sample, which includes 30 galaxies selected from the zCOSMOS/VIMOS spectroscopic survey. This first paper presents the sample selection, and the global physical characterization of the target galaxies from multicolor photometry, i.e., star formation rate (SFR), stellar mass, age, etc. The Hα integrated properties, such as flux, velocity dispersion, and size, are derived from the natural-seeing observations, while the follow-up AO observations will be presented in the next paper of this series. Our sample appears to be well representative of star-forming galaxies at z ≈ 2, covering a wide range in mass and SFR. The Hα integrated properties of the 25 Hα-detected galaxies are similar to those of other IFS samples at the same redshifts. Good agreement is found among the SFRs derived from Hα luminosity and other diagnostic methods, provided the extinction affecting the Hα luminosity is about twice that affecting the continuum. A preliminary kinematic analysis, based on the maximum observed velocity difference across the source and on the integrated velocity dispersion, indicates that the sample splits nearly 50-50 into rotation-dominated and velocity-dispersion-dominated galaxies, in good agreement with previous surveys.

  9. Chemometric classification of casework arson samples based on gasoline content.

    Science.gov (United States)

    Sinkov, Nikolai A; Sandercock, P Mark L; Harynuk, James J

    2014-02-01

    Detection and identification of ignitable liquids (ILs) in arson debris is a critical part of arson investigations. The challenge of this task is due to the complex and unpredictable chemical nature of arson debris, which also contains pyrolysis products from the fire. ILs, most commonly gasoline, are complex chemical mixtures containing hundreds of compounds that will be consumed or otherwise weathered by the fire to varying extents depending on factors such as temperature, air flow, the surface on which IL was placed, etc. While methods such as ASTM E-1618 are effective, data interpretation can be a costly bottleneck in the analytical process for some laboratories. In this study, we address this issue through the application of chemometric tools. Prior to the application of chemometric tools such as PLS-DA and SIMCA, issues of chromatographic alignment and variable selection need to be addressed. Here we use an alignment strategy based on a ladder consisting of perdeuterated n-alkanes. Variable selection and model optimization was automated using a hybrid backward elimination (BE) and forward selection (FS) approach guided by the cluster resolution (CR) metric. In this work, we demonstrate the automated construction, optimization, and application of chemometric tools to casework arson data. The resulting PLS-DA and SIMCA classification models, trained with 165 training set samples, have provided classification of 55 validation set samples based on gasoline content with 100% specificity and sensitivity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
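The alignment against a perdeuterated n-alkane ladder mentioned above can be sketched as Kováts-style retention indexing: each analyte's retention time is converted to an index by linear interpolation between the two bracketing ladder alkanes, making chromatograms comparable across runs. The ladder retention times and function name below are illustrative assumptions, not the authors' implementation.

```python
from bisect import bisect_right

# Hypothetical retention times (min) of a perdeuterated n-alkane ladder,
# keyed by carbon number; the values are invented for illustration.
ladder = {10: 5.2, 11: 6.8, 12: 8.3, 13: 9.7}

def retention_index(rt):
    """Kovats-style linear interpolation between bracketing ladder alkanes."""
    carbons = sorted(ladder)
    times = [ladder[c] for c in carbons]
    i = bisect_right(times, rt)
    if i == 0 or i == len(times):
        raise ValueError("retention time outside ladder range")
    c_lo, c_hi = carbons[i - 1], carbons[i]
    t_lo, t_hi = times[i - 1], times[i]
    return 100 * (c_lo + (rt - t_lo) / (t_hi - t_lo))

print(retention_index(7.55))  # halfway between C11 and C12 -> 1150.0
```

Because the index is anchored to the co-injected ladder rather than to absolute time, run-to-run drift cancels out, which is what makes subsequent chemometric modelling (PLS-DA, SIMCA) feasible on pooled casework data.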

  10. Supplier selection an MCDA-based approach

    CERN Document Server

    Mukherjee, Krishnendu

    2017-01-01

The purpose of this book is to present a comprehensive review of the latest research and development trends at the international level for modeling and optimization of the supplier selection process for different industrial sectors. It is targeted to serve two audiences: the MBA and PhD student interested in procurement, and the practitioner who wishes to gain a deeper understanding of procurement analysis with multi-criteria based decision tools to avoid upstream risks and get better supply chain visibility. The book is expected to serve as a ready reference for supplier selection criteria and various multi-criteria based supplier evaluation methods for forward, reverse and mass-customized supply chains. This book systematically covers criteria and methods for supplier selection, based on an extensive literature review from 1998 to 2012. It provides several case studies and some useful links which can serve as a starting point for interested researchers. In the appendix several computer code wri...

  11. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    Science.gov (United States)

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ES^, ES = ES^L and ES = ES^U. We aimed to provide point and interval estimates on projected sample sizes for future studies, reflecting the uncertainty in our study ES^ values. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
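The sample-size logic above can be sketched with the standard normal approximation n = ((z_{1-α/2} + z_power) / ES)², applied to a point estimate and interval limits of ES. The paper's exact one-sample t-test numbers differ slightly for small n, and the ES values below are illustrative, not the study's estimates.

```python
from math import ceil
from statistics import NormalDist

def n_for_effect(es, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a one-sample, two-sided test
    of H0: ES = 0 versus H1: ES = es."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_b = z(power)          # ~0.84 for 80% power
    return ceil(((z_a + z_b) / es) ** 2)

# A point estimate of ES and its interval limits yield a point estimate
# and interval on n (illustrative values only):
for es in (0.6, 0.3, 1.0):
    print(es, n_for_effect(es))
```

Since n scales as 1/ES², a joint-selection method that roughly doubles the effect size quarters the required sample, which is why the individualized methods cut the projected trial sizes so sharply.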

  12. Norm based Threshold Selection for Fault Detectors

    DEFF Research Database (Denmark)

    Rank, Mike Lind; Niemann, Henrik

    1998-01-01

The design of fault detectors for fault detection and isolation (FDI) in dynamic systems is considered from a norm based point of view. An analysis of norm based threshold selection is given based on different formulations of FDI problems. Both the nominal FDI problem as well as the uncertain FDI problem are considered. Based on this analysis, a performance index based on norms of the involved transfer functions is given. The performance index also allows us to optimize the structure of the fault detection filter directly.
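The basic decision rule behind norm based threshold selection can be illustrated with a toy residual evaluator: a fault is declared when the norm of the residual signal exceeds a threshold derived from a bound on the fault-free residual. The bound, margin, and data below are invented for illustration; the paper derives such thresholds from norms of the involved transfer functions.

```python
import math

def residual_norm(residuals):
    """Windowed 2-norm of the residual signal r = y - y_model."""
    return math.sqrt(sum(r * r for r in residuals))

# Threshold from an assumed bound on the nominal (fault-free) residual
# norm, inflated by a safety margin for model uncertainty (illustrative).
NOMINAL_BOUND, MARGIN = 0.8, 1.25
THRESHOLD = NOMINAL_BOUND * MARGIN

def fault_detected(residuals):
    return residual_norm(residuals) > THRESHOLD

print(fault_detected([0.1, -0.2, 0.15]))  # small residuals
print(fault_detected([1.0, -1.1, 0.9]))   # large residuals
```

The margin encodes the trade-off the abstract analyzes: too small and model uncertainty triggers false alarms; too large and genuine faults go undetected.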

  13. Personnel Selection Based on Fuzzy Methods

    Directory of Open Access Journals (Sweden)

    Lourdes Cañós

    2011-03-01

Full Text Available The decisions of managers regarding the selection of staff strongly determine the success of the company. A correct choice of employees is a source of competitive advantage. We propose a fuzzy method for staff selection, based on competence management and the comparison with the valuation that the company considers the best in each competence (the ideal candidate). Our method is based on the Hamming distance and a Matching Level Index. The algorithms, implemented in the software StaffDesigner, allow us to rank the candidates, even when the competences of the ideal candidate have been evaluated only in part. Our approach is applied in a numerical example.
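The Hamming-distance comparison against an ideal candidate can be sketched as follows. The competence ratings are invented, and the simple "1 minus distance" score is only a stand-in: the paper's actual Matching Level Index may be defined differently.

```python
def hamming(candidate, ideal):
    """Normalized Hamming distance between competence ratings in [0, 1]."""
    return sum(abs(c - i) for c, i in zip(candidate, ideal)) / len(ideal)

def matching_level(candidate, ideal):
    """Toy matching score: 1 = perfect match with the ideal profile."""
    return 1 - hamming(candidate, ideal)

ideal = [0.9, 0.8, 1.0, 0.7]  # company's ideal rating per competence
candidates = {
    "A": [0.8, 0.9, 0.9, 0.6],
    "B": [0.5, 0.6, 0.7, 0.9],
}
ranking = sorted(candidates,
                 key=lambda n: matching_level(candidates[n], ideal),
                 reverse=True)
print(ranking)  # candidate A sits closer to the ideal profile
```

Ranking by distance to an ideal profile, rather than by a raw score sum, rewards balanced profiles: candidate B's single strong competence does not compensate for larger gaps elsewhere.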

  14. Discriminative Projection Selection Based Face Image Hashing

    Science.gov (United States)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.

  15. A large sample of Kohonen-selected SDSS quasars with weak emission lines: selection effects and statistical properties

    Science.gov (United States)

    Meusinger, H.; Balafkan, N.

    2014-08-01

Aims: A tiny fraction of the quasar population shows remarkably weak emission lines. Several hypotheses have been developed, but the weak line quasar (WLQ) phenomenon still remains puzzling. The aim of this study was to create a sizeable sample of WLQs and WLQ-like objects and to evaluate various properties of this sample. Methods: We performed a search for WLQs in the spectroscopic data from the Sloan Digital Sky Survey Data Release 7 based on Kohonen self-organising maps for nearly 10⁵ quasar spectra. The final sample consists of 365 quasars in the redshift range z = 0.6-4.2 (z¯ = 1.50 ± 0.45) and includes in particular a subsample of 46 WLQs with small Mg II equivalent widths. Particular attention was paid to selection effects. Results: The WLQs have, on average, significantly higher luminosities, Eddington ratios, and accretion rates. About half of the excess comes from a selection bias, but an intrinsic excess remains, probably caused primarily by higher accretion rates. The spectral energy distribution shows a bluer continuum at rest-frame wavelengths ≳1500 Å. The variability in the optical and UV is relatively low, even taking the variability-luminosity anti-correlation into account. The percentage of radio-detected quasars and of core-dominant radio sources is significantly higher than for the control sample, whereas the mean radio-loudness is lower. Conclusions: The properties of our WLQ sample can be consistently understood assuming that it consists of a mix of quasars at the beginning of a stage of increased accretion activity and of beamed radio-quiet quasars. The higher luminosities and Eddington ratios in combination with a bluer spectral energy distribution can be explained by hotter continua, i.e. higher accretion rates. If quasar activity consists of subphases with different accretion rates, a change towards a higher rate is probably accompanied by an only slow development of the broad line region. The composite WLQ spectrum can be reasonably matched by the…

  16. 40 CFR 205.171-2 - Test exhaust system sample selection and preparation.

    Science.gov (United States)

    2010-07-01

    ... Systems § 205.171-2 Test exhaust system sample selection and preparation. (a)(1) Exhaust systems... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Test exhaust system sample selection and preparation. 205.171-2 Section 205.171-2 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...

  17. Observed Characteristics and Teacher Quality: Impacts of Sample Selection on a Value Added Model

    Science.gov (United States)

    Winters, Marcus A.; Dixon, Bruce L.; Greene, Jay P.

    2012-01-01

    We measure the impact of observed teacher characteristics on student math and reading proficiency using a rich dataset from Florida. We expand upon prior work by accounting directly for nonrandom attrition of teachers from the classroom in a sample selection framework. We find evidence that sample selection is present in the estimation of the…

  18. The Complete Local Volume Groups Sample - I. Sample selection and X-ray properties of the high-richness subsample

    Science.gov (United States)

    O'Sullivan, Ewan; Ponman, Trevor J.; Kolokythas, Konstantinos; Raychaudhury, Somak; Babul, Arif; Vrtilek, Jan M.; David, Laurence P.; Giacintucci, Simona; Gitti, Myriam; Haines, Chris P.

    2017-12-01

We present the Complete Local-Volume Groups Sample (CLoGS), a statistically complete optically selected sample of 53 groups within 80 Mpc. Our goal is to combine X-ray, radio and optical data to investigate the relationship between member galaxies, their active nuclei and the hot intra-group medium (IGM). We describe sample selection, define a 26-group high-richness subsample of groups containing at least four optically bright (log(LB/LB⊙) ≥ 10.2) galaxies, and report the results of XMM-Newton and Chandra observations of these systems. We find that 14 of the 26 groups are X-ray bright, possessing a group-scale IGM extending at least 65 kpc and with luminosity >10⁴¹ erg s⁻¹, while a further three groups host smaller galaxy-scale gas haloes. The X-ray bright groups have masses in the range M500 ≃ 0.5-5 × 10¹³ M⊙, based on system temperatures of 0.4-1.4 keV, and X-ray luminosities in the range 2-200 × 10⁴¹ erg s⁻¹. We find that ∼53-65 per cent of the X-ray bright groups have cool cores, a somewhat lower fraction than found by previous archival surveys. Approximately 30 per cent of the X-ray bright groups show evidence of recent dynamical interactions (mergers or sloshing), and ∼35 per cent of their dominant early-type galaxies host active galactic nuclei with radio jets. We find no groups with unusually high central entropies, as predicted by some simulations, and confirm that CLoGS is in principle capable of detecting such systems. We identify three previously unrecognized groups, and find that they are either faint (LX,R500 < 10⁴² erg s⁻¹) with no concentrated cool core, or highly disturbed. This leads us to suggest that ∼20 per cent of X-ray bright groups in the local universe may still be unidentified.

  19. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
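The weighted binary matrix sampling (WBMS) idea described above can be illustrated with a stdlib-only toy: sub-models are drawn with per-variable inclusion weights, the best fraction of sub-models updates the weights, and the variable space shrinks as weights saturate toward 0 or 1. The fitness function and all parameters are invented for illustration; VISSA's actual evaluation scores sub-models by cross-validated PLS calibration error.

```python
import random

random.seed(0)

informative = {0, 1, 2}            # ground-truth useful variables (toy)
n_vars, n_models, n_best = 8, 200, 40

def error(subset):
    """Toy evaluation: penalize missing informative and included noise vars."""
    return len(informative - subset) + 0.1 * len(subset - informative)

weights = [0.5] * n_vars           # every variable starts at 50% inclusion
for step in range(10):
    models = []
    for _ in range(n_models):
        subset = {v for v in range(n_vars) if random.random() < weights[v]}
        models.append((error(subset), subset))
    best = sorted(models, key=lambda m: m[0])[:n_best]
    # New weight = frequency of each variable among the best sub-models,
    # so the sampled variable space shrinks around what works.
    weights = [sum(v in s for _, s in best) / n_best for v in range(n_vars)]

selected = [v for v, w in enumerate(weights) if w > 0.5]
print(selected)  # converges on the informative variables
```

The two VISSA rules appear directly here: each iteration the sampled space contracts (weights move toward 0 or 1), and it only contracts toward variable sets that outperformed the previous round's average.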

  20. Composition of Trace Metals in Dust Samples Collected from Selected High Schools in Pretoria, South Africa

    Directory of Open Access Journals (Sweden)

    J. O. Olowoyo

    2016-01-01

Full Text Available Potential health risks associated with trace metal pollution have necessitated the importance of monitoring their levels in the environment. The present study investigated the concentrations and compositions of trace metals in dust samples collected from classrooms and playgrounds of selected high schools in Pretoria. Schools were selected based on factors such as proximity to high-traffic roads, industrial areas, and residential areas. Thirty-two dust samples were collected from inside and outside the classrooms, where learners often stay during recess. The dust samples were analysed for trace metal concentrations using Inductively Coupled Plasma-Mass Spectrometry (ICP-MS). The composition of the elements showed that the concentrations of Zn were higher than those of all other elements except at one of the schools. There were significant differences in the concentrations of trace metals between the schools (p < 0.05). Regular cleaning, proximity to busy roads, and well-maintained gardens seem to have positive effects on the concentrations of trace metals recorded in the classroom dust. The results further revealed a positive correlation for elements such as Pb, Cu, Zn, Mn, and Sb, indicating that the dust might have a common source.

  1. Selective removal of phosphate for analysis of organic acids in complex samples.

    Science.gov (United States)

    Deshmukh, Sandeep; Frolov, Andrej; Marcillo, Andrea; Birkemeyer, Claudia

    2015-04-03

Accurate quantitation of compounds in samples of biological origin is often hampered by matrix interferences, one of which, in GC-MS analysis, arises from the presence of highly abundant phosphate. Consequently, high concentrations of phosphate need to be removed before sample analysis. Within this context, we screened 17 anion exchange solid-phase extraction (SPE) materials for selective phosphate removal using different protocols, to meet the challenge of simultaneous recovery of six common organic acids in aqueous samples prior to derivatization for GC-MS analysis. Up to 75% recovery was achieved for most of the organic acids; only the low-pKa tartaric and citric acids were poorly recovered. Compared to the traditional approach of phosphate removal by precipitation, SPE had a broader compatibility with common detection methods and performed more selectively among the organic acids under investigation. Based on the results of this study, it is recommended that phosphate removal strategies during the analysis of biologically relevant small-molecular-weight organic acids consider the respective pKa of the anticipated analytes and the detection method of choice. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Feature Selection Based on Mutual Correlation

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Somol, Petr; Ververidis, D.; Kotropoulos, C.

    2006-01-01

    Roč. 19, č. 4225 (2006), s. 569-577 ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA AV ČR IAA2075302 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : feature selection Subject RIV: BD - Theory of Information Impact factor: 0.402, year: 2005 http://library.utia.cas.cz/separaty/historie/haindl-feature selection based on mutual correlation.pdf

  3. 40 CFR 761.247 - Sample site selection for pipe segment removal.

    Science.gov (United States)

    2010-07-01

    ... end of the pipe segment. (3) If the pipe segment is cut with a saw or other mechanical device, take..., take samples from a total of seven segments. (A) Sample the first and last segments removed. (B) Select... total length for purposes of disposal, take samples of each segment that is 1/2 mile distant from the...

  4. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...

  5. Data Quality Objectives For Selecting Waste Samples To Test The Fluid Bed Steam Reformer Test

    International Nuclear Information System (INIS)

    Banning, D.L.

    2010-01-01

    This document describes the data quality objectives to select archived samples located at the 222-S Laboratory for Fluid Bed Steam Reformer testing. The type, quantity and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluid bed steam reformer (FBSR). A determination of the adequacy of the FBSR process to treat Hanford tank waste is required. The initial step in determining the adequacy of the FBSR process is to select archived waste samples from the 222-S Laboratory that will be used to test the FBSR process. Analyses of the selected samples will be required to confirm the samples meet the testing criteria.

  6. Cold Spray Deposition of Freestanding Inconel Samples and Comparative Analysis with Selective Laser Melting

    Science.gov (United States)

    Bagherifard, Sara; Roscioli, Gianluca; Zuccoli, Maria Vittoria; Hadi, Mehdi; D'Elia, Gaetano; Demir, Ali Gökhan; Previtali, Barbara; Kondás, Ján; Guagliano, Mario

    2017-10-01

Cold spray offers the possibility of obtaining almost zero-porosity buildups with no theoretical limit to the thickness. Moreover, cold spray can eliminate particle melting, evaporation, crystallization, grain growth, unwanted oxidation, undesirable phases and thermally induced tensile residual stresses. Such characteristics can boost its potential to be used as an additive manufacturing technique. Indeed, deposition via cold spray is recently finding its path toward fabrication of freeform components, since it can address the common challenges of powder-bed additive manufacturing techniques, including major size constraints, deposition rate limitations and high process temperatures. Herein, we prepared nickel-based superalloy Inconel 718 samples with the cold spray technique and compared them with similar samples fabricated by selective laser melting. The samples fabricated using both methods were characterized in terms of mechanical strength, microstructure and porosity, Vickers microhardness and residual stress distribution. Different heat treatment cycles were applied to the cold-sprayed samples in order to enhance their mechanical characteristics. The obtained data confirm that the cold spray technique can be used as a complementary additive manufacturing method for fabrication of high-quality freestanding components where a higher deposition rate, larger final size and lower fabrication temperatures are desired.

  7. The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes

    Science.gov (United States)

    Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark

    2000-01-01

Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log(base 10) (L(sub [OII])/W) approximately > 35 [or radio luminosity log(base 10) (L(sub 151)/ W/Hz.sr) approximately > 26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle theta(sub trans) approximately equal to 53 deg. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in theta(sub trans) and/or a gradual increase in the fraction of lightly-reddened quasars with decreasing quasar luminosity; and (ii) the emergence of a distinct second population of low-luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.

  8. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    Full Text Available In a regression analysis, a sample-selection bias arises when a dependent variable is partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR model as a special case. Then, this special MaxEnt process regression model, i.e., the GPR model, is generalized to obtain a robust sample-selection Gaussian process regression (RSGPR model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used in the paper to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model in terms of the sample-selection bias correction, robustness to non-normality, and prediction, is demonstrated through results in simulations that attest to its good finite-sample performance.

  9. Solution-based targeted genomic enrichment for precious DNA samples

    Directory of Open Access Journals (Sweden)

    Shearer Aiden

    2012-05-01

Full Text Available Abstract Background Solution-based targeted genomic enrichment (TGE) protocols permit selective sequencing of genomic regions of interest on a massively parallel scale. These protocols could be improved by: (1) modifying or eliminating time-consuming steps; (2) increasing yield to reduce input DNA and excessive PCR cycling; and (3) enhancing reproducibility. Results We developed a solution-based TGE method for downstream Illumina sequencing in a non-automated workflow, adding standard Illumina barcode indexes during the post-hybridization amplification to allow for sample pooling prior to sequencing. The method utilizes Agilent SureSelect baits, primers and hybridization reagents for the capture, off-the-shelf reagents for the library preparation steps, and adaptor oligonucleotides for Illumina paired-end sequencing purchased directly from an oligonucleotide manufacturing company. Conclusions This solution-based TGE method for Illumina sequencing is optimized for small- or medium-sized laboratories and addresses the weaknesses of standard protocols by reducing the amount of input DNA required, increasing capture yield, optimizing efficiency, and improving reproducibility.

  10. 6. Label-free selective plane illumination microscopy of tissue samples

    Directory of Open Access Journals (Sweden)

    Muteb Alharbi

    2017-10-01

Conclusion: Overall this method meets the demands of the current needs for 3D imaging of tissue samples in a label-free manner. Label-free selective plane illumination microscopy directly provides excellent information about the structure of tissue samples, and this work has highlighted its superiority over current approaches to label-free 3D imaging of tissue.

  11. Template-Based Sampling of Anisotropic BRDFs

    Czech Academy of Sciences Publication Activity Database

    Filip, Jiří; Vávra, Radomír

    2014-01-01

Roč. 33, č. 7 (2014), s. 91-99 ISSN 0167-7055. [Pacific Graphics 2014. Seoul, 08.10.2014-10.10.2014] R&D Projects: GA ČR(CZ) GA14-02652S; GA ČR(CZ) GA14-10911S; GA ČR GAP103/11/0335 Institutional support: RVO:67985556 Keywords: BRDF database * material appearance * sampling * measurement Subject RIV: BD - Theory of Information Impact factor: 1.642, year: 2014 http://library.utia.cas.cz/separaty/2014/RO/filip-0432894.pdf

  12. Accounting for animal movement in estimation of resource selection functions: sampling and data analysis.

    Science.gov (United States)

    Forester, James D; Im, Hae Kyung; Rathouz, Paul J

    2009-12-01

    Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. 
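
The case-control fit the abstract describes can be sketched compactly: simulated "used" (case) and "available" (control) locations with a resource covariate, plus the recommended step-distance covariate, passed to ordinary logistic regression. Everything below (data, effect sizes, variable names) is invented for illustration and is not the study's code.

```python
# Sketch of fitting a step selection function (SSF) by case-control logistic
# regression, including step distance as a covariate. Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_steps, n_controls = 500, 10          # observed steps; controls per step

# Simulate selection: used locations are richer in the resource and reached
# by shorter steps than the available (control) locations.
case_resource = rng.normal(1.0, 1.0, n_steps)
case_dist = rng.exponential(1.0, n_steps)
ctrl_resource = rng.normal(0.0, 1.0, n_steps * n_controls)
ctrl_dist = rng.exponential(2.0, n_steps * n_controls)

X = np.column_stack([
    np.concatenate([case_resource, ctrl_resource]),
    np.concatenate([case_dist, ctrl_dist]),
])
y = np.concatenate([np.ones(n_steps), np.zeros(n_steps * n_controls)])

b_resource, b_dist = LogisticRegression().fit(X, y).coef_[0]
print(f"resource: {b_resource:+.2f}  distance: {b_dist:+.2f}")
```

With these simulated preferences the fitted resource coefficient comes out positive and the distance coefficient negative; omitting the distance term would fold the animal's step-length behavior into the selection estimate, which is the bias the abstract warns about.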

  13. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
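
The iterative selection idea above (repeatedly pick the most environmentally dissimilar remaining site) can be illustrated with a simple greedy max-min distance rule. The actual study scored dissimilarity with MaxEnt models, so the rule below is only a stand-in, and the environmental data are synthetic.

```python
# Greedy stand-in for iterative selection of environmentally dissimilar
# sampling sites; plain Euclidean distance in standardized environmental
# space replaces the study's MaxEnt scoring. Synthetic data.
import numpy as np

rng = np.random.default_rng(42)
env = rng.normal(size=(200, 4))  # 200 candidate sites x 4 environmental factors

def select_dissimilar_sites(env, k):
    """Pick k sites, each maximizing the distance to the nearest chosen site."""
    chosen = [0]                                        # arbitrary starting site
    for _ in range(k - 1):
        d = np.linalg.norm(env[:, None, :] - env[chosen][None, :, :], axis=2)
        chosen.append(int(d.min(axis=1).argmax()))      # most dissimilar candidate
    return chosen

sites = select_dissimilar_sites(env, 8)                 # eight sites, as above
print(sites)
```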

  14. [Electroencephalogram Feature Selection Based on Correlation Coefficient Analysis].

    Science.gov (United States)

    Zhou, Jinzhi; Tang, Xiaofang

    2015-08-01

In order to improve the accuracy of classification with a small amount of motor imagery training data in the development of brain-computer interface (BCI) systems, we proposed an analyzing method to automatically select the characteristic parameters based on correlation coefficient analysis. Using the five sample data sets of dataset IVa from the 2005 BCI Competition, we utilized short-time Fourier transform (STFT) and correlation coefficient calculation to reduce the dimensionality of the primitive electroencephalogram data, then introduced feature extraction based on common spatial patterns (CSP) and classified by linear discriminant analysis (LDA). Simulation results showed that the average classification accuracy could be improved by using the correlation coefficient feature selection method compared with not using this algorithm. Compared with the support vector machine (SVM) feature optimization algorithm, correlation coefficient analysis can lead to better parameter selection and improve the accuracy of classification.
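
The core of the method, ranking features by their correlation with the class label before classification, can be sketched as below. This is a stand-in for the paper's pipeline: the STFT and CSP stages are omitted and the data are synthetic.

```python
# Correlation-coefficient feature selection followed by LDA classification,
# a simplified stand-in for the paper's STFT + correlation + CSP + LDA chain.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_trials, n_features = 200, 50
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, n_trials)
X[:, :5] += y[:, None] * 1.5          # make the first five features informative

# Rank features by |correlation| with the class label; keep the top 10.
r = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
top = np.argsort(r)[::-1][:10]

lda = LinearDiscriminantAnalysis().fit(X[:150][:, top], y[:150])
acc = lda.score(X[150:][:, top], y[150:])
print(f"held-out accuracy: {acc:.2f}")
```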

  15. Determination of specific activity of americium and plutonium in selected environmental samples

    International Nuclear Information System (INIS)

    Trebunova, T.

    1999-01-01

The aim of this work was the development of a method for determination of americium and plutonium in environmental samples. The developed method was evaluated on soil samples and afterwards applied to selected fish samples (smoked mackerel, herring and fillet from Alaska hake). The method for separation of americium is based on liquid-liquid extraction with Aliquat-336, precipitation with oxalic acid and use of the chromatographic material TRU-Spec TM. The radiochemical yields ranged from 13.0% to 80.9% for plutonium-236 and from 10.5% to 100% for americium-241. Determined specific activities of plutonium-239,240 were from (2.3 ± 1.4) mBq/kg to (82 ± 29) mBq/kg, and the specific activities of plutonium-238 were from (14.2 ± 3.7) mBq/kg to (708 ± 86) mBq/kg. The specific activities of americium-241 were from (1.4 ± 0.9) mBq/kg to (3360 ± 210) mBq/kg. The fish from the Baltic Sea and the North Sea show higher specific activities than freshwater fish from Slovakia. Therefore, monitoring of alpha radionuclides in foods imported from territories with nuclear testing is recommended

  16. The Swift Gamma-Ray Burst Host Galaxy Legacy Survey. I. Sample Selection and Redshift Distribution

    Science.gov (United States)

Perley, D. A.; Kruhler, T.; Schulze, S.; Postigo, A. De Ugarte; Hjorth, J.; Berger, E.; Cenko, S. B.; Chary, R.; Cucchiara, A.; Ellis, R.; et al.

    2016-01-01

    We introduce the Swift Gamma-Ray Burst Host Galaxy Legacy Survey (SHOALS), a multi-observatory high redshift galaxy survey targeting the largest unbiased sample of long-duration gamma-ray burst (GRB) hosts yet assembled (119 in total). We describe the motivations of the survey and the development of our selection criteria, including an assessment of the impact of various observability metrics on the success rate of afterglow-based redshift measurement. We briefly outline our host galaxy observational program, consisting of deep Spitzer/IRAC imaging of every field supplemented by similarly deep, multicolor optical/near-IR photometry, plus spectroscopy of events without preexisting redshifts. Our optimized selection cuts combined with host galaxy follow-up have so far enabled redshift measurements for 110 targets (92%) and placed upper limits on all but one of the remainder. About 20% of GRBs in the sample are heavily dust obscured, and at most 2% originate from z > 5.5. Using this sample, we estimate the redshift-dependent GRB rate density, showing it to peak at z approx. 2.5 and fall by at least an order of magnitude toward low (z = 0) redshift, while declining more gradually toward high (z approx. 7) redshift. This behavior is consistent with a progenitor whose formation efficiency varies modestly over cosmic history. Our survey will permit the most detailed examination to date of the connection between the GRB host population and general star-forming galaxies, directly measure evolution in the host population over cosmic time and discern its causes, and provide new constraints on the fraction of cosmic star formation occurring in undetectable galaxies at all redshifts.

  17. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single degree of- freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  18. Correlations of Sc, rare earths and other elements in selected rock samples from Arrua-i

    Energy Technology Data Exchange (ETDEWEB)

Facetti, J.F.; Prats, M. [Asuncion Nacional Univ. (Paraguay). Inst. de Ciencias]

    1972-01-01

The Sc and Eu contents in selected rock samples from the stock of Arrua-i have been determined and correlations established with other elements and with the relative amounts of some rare earths. These correlations suggest metasomatic phenomena for the formation of the rock samples.

  19. HOT-DUST-POOR QUASARS IN MID-INFRARED AND OPTICALLY SELECTED SAMPLES

    International Nuclear Information System (INIS)

    Hao Heng; Elvis, Martin; Civano, Francesca; Lawrence, Andy

    2011-01-01

    We show that the hot-dust-poor (HDP) quasars, originally found in the X-ray-selected XMM-COSMOS type 1 active galactic nucleus (AGN) sample, are just as common in two samples selected at optical/infrared wavelengths: the Richards et al. Spitzer/SDSS sample (8.7% ± 2.2%) and the Palomar-Green-quasar-dominated sample of Elvis et al. (9.5% ± 5.0%). The properties of the HDP quasars in these two samples are consistent with the XMM-COSMOS sample, except that, at the 99% (∼ 2.5σ) significance, a larger proportion of the HDP quasars in the Spitzer/SDSS sample have weak host galaxy contributions, probably due to the selection criteria used. Either the host dust is destroyed (dynamically or by radiation) or is offset from the central black hole due to recoiling. Alternatively, the universality of HDP quasars in samples with different selection methods and the continuous distribution of dust covering factor in type 1 AGNs suggest that the range of spectral energy distributions could be related to the range of tilts in warped fueling disks, as in the model of Lawrence and Elvis, with HDP quasars having relatively small warps.

  20. Correlations of Sc, rare earths and other elements in selected rock samples from Arrua-i

    International Nuclear Information System (INIS)

    Facetti, J.F.; Prats, M.

    1972-01-01

The Sc and Eu contents in selected rock samples from the stock of Arrua-i have been determined and correlations established with other elements and with the relative amounts of some rare earths. These correlations suggest metasomatic phenomena for the formation of the rock samples

  1. Proposal for selecting an ore sample from mining shaft under Kvanefjeld

    International Nuclear Information System (INIS)

    Lund Clausen, F.

    1979-02-01

    Uranium ore recovered from the tunnel under Kvanefjeld (Greenland) will be processed in a pilot plant. Selection of a fully representative ore sample for both the whole area and single local sites is discussed. A FORTRAN program for ore distribution is presented, in order to enable correct sampling. (EG)

  2. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  3. Calibration model maintenance in melamine resin production: Integrating drift detection, smart sample selection and model adaptation.

    Science.gov (United States)

    Nikzad-Langerodi, Ramin; Lughofer, Edwin; Cernuda, Carlos; Reischer, Thomas; Kantner, Wolfgang; Pawliczek, Marcin; Brandstetter, Markus

    2018-07-12

The physico-chemical properties of Melamine Formaldehyde (MF) based thermosets are largely influenced by the degree of polymerization (DP) in the underlying resin. On-line supervision of the turbidity point by means of vibrational spectroscopy has recently emerged as a promising technique to monitor the DP of MF resins. However, spectroscopic determination of the DP relies on chemometric models, which are usually sensitive to drifts caused by instrumental and/or sample-associated changes occurring over time. In order to detect the time point when drifts start causing prediction bias, we here explore a universal drift detector based on a faded version of the Page-Hinkley (PH) statistic, which we test in three data streams from an industrial MF resin production process. We employ committee disagreement (CD), computed as the variance of model predictions from an ensemble of partial least squares (PLS) models, as a measure for sample-wise prediction uncertainty and use the PH statistic to detect changes in this quantity. We further explore supervised and unsupervised strategies for (semi-)automatic model adaptation upon detection of a drift. For the former, manual reference measurements are requested whenever statistical thresholds on Hotelling's T2 and/or Q-Residuals are violated. Models are subsequently re-calibrated using weighted partial least squares in order to increase the influence of newer samples, which increases the flexibility when adapting to new (drifted) states. Unsupervised model adaptation is carried out exploiting the dual antecedent-consequent structure of a recently developed fuzzy systems variant of PLS termed FLEXFIS-PLS. In particular, antecedent parts are updated while maintaining the internal structure of the local linear predictors (i.e. the consequents). We found improved drift detection capability of the CD compared to Hotelling's T2 and Q-Residuals when used in combination with the proposed PH test. Furthermore, we found that active
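
The Page-Hinkley test at the heart of the drift detector above can be sketched in a few lines. The fading factor, drift allowance `delta` and threshold below are illustrative choices on a synthetic uncertainty stream, not the paper's tuned values.

```python
# Minimal faded Page-Hinkley detector for an upward shift in a stream of
# prediction-uncertainty values (e.g. committee disagreement). Parameters
# and data are illustrative.
import numpy as np

def page_hinkley(stream, delta=0.005, threshold=2.0, fading=0.999):
    """Return the index at which a positive mean shift is flagged, or None."""
    mean = m_t = m_min = 0.0
    for t, x in enumerate(stream, start=1):
        mean += (x - mean) / t                    # running mean of the stream
        m_t = fading * m_t + (x - mean - delta)   # faded cumulative deviation
        m_min = min(m_min, m_t)
        if m_t - m_min > threshold:               # deviation exceeds threshold
            return t
    return None

rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0.0, 0.05, 300),   # stable regime
                         rng.normal(0.5, 0.05, 100)])  # drifted regime
t_detect = page_hinkley(stream)
print(t_detect)
```

On this stream the detector stays quiet through the stable regime and flags the shift shortly after sample 300.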

  4. Data Quality Objectives For Selecting Waste Samples For Bench-Scale Reformer Treatability Studies

    International Nuclear Information System (INIS)

    Banning, D.L.

    2011-01-01

This document describes the data quality objectives to select archived samples located at the 222-S Laboratory for Bench-Scale Reforming testing. The type, quantity, and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluidized bed steam reformer. A determination of the adequacy of the fluidized bed steam reformer process to treat Hanford tank waste is required. The initial step in determining the adequacy of the fluidized bed steam reformer process is to select archived waste samples from the 222-S Laboratory that will be used in bench-scale tests. Analyses of the selected samples will be required to confirm the samples meet the shipping requirements and for comparison to the bench-scale reformer (BSR) test sample selection requirements.

  5. Sample classroom activities based on climate science

    Science.gov (United States)

    Miler, T.

    2009-09-01

We present several activities developed for middle school education based on climate science. The first activity was designed to teach about ocean acidification. A simple experiment can prove that absorption of CO2 in water increases its acidity. A liquid pH indicator is suitable for the demonstration in a classroom. The second activity uses data containing coordinates of a hurricane position. Pupils draw the path of a hurricane eye in a tracking chart (a map of the Atlantic ocean). They calculate the average speed of the hurricane and investigate its direction and intensity development. The third activity uses pictures of the Arctic ocean in September, when ice extent is usually the lowest. Students measure the ice extent for several years using a square grid printed on a plastic foil. Then they plot a graph and discuss the results. All these activities can be used to improve natural science education and increase climate change literacy.
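
The hurricane-tracking calculation above can be done as a small worked example: given eye positions logged every 6 hours, the average speed follows from the haversine great-circle distance. The track coordinates below are made up for illustration.

```python
# Average speed of a hurricane eye from 6-hourly positions, via the
# haversine formula. Invented track coordinates.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

track = [(15.0, -40.0), (15.5, -42.0), (16.2, -44.1), (17.0, -46.0)]  # (lat, lon)
total_km = sum(haversine_km(*track[i], *track[i + 1])
               for i in range(len(track) - 1))
hours = 6 * (len(track) - 1)
avg_speed = total_km / hours
print(f"average speed: {avg_speed:.1f} km/h")
```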

  6. HICOSMO - cosmology with a complete sample of galaxy clusters - I. Data analysis, sample selection and luminosity-mass scaling relation

    Science.gov (United States)

    Schellenberger, G.; Reiprich, T. H.

    2017-08-01

The X-ray regime, where the most massive visible component of galaxy clusters, the intracluster medium, is visible, offers directly measured quantities, like the luminosity, and derived quantities, like the total mass, to characterize these objects. The aim of this project is to analyse a complete sample of galaxy clusters in detail and constrain cosmological parameters, like the matter density, Ωm, or the amplitude of initial density fluctuations, σ8. The purely X-ray flux-limited sample (HIFLUGCS) consists of the 64 X-ray brightest galaxy clusters, which are excellent targets to study the systematic effects that can bias results. We analysed in total 196 Chandra observations of the 64 HIFLUGCS clusters, with a total exposure time of 7.7 Ms. Here, we present our data analysis procedure (including an automated substructure detection and an energy band optimization for surface brightness profile analysis) that gives individually determined, robust total mass estimates. These masses are tested against dynamical and Planck Sunyaev-Zeldovich (SZ) derived masses of the same clusters, where good overall agreement is found with the dynamical masses. The Planck SZ masses seem to show a mass-dependent bias relative to our hydrostatic masses; possible biases in this mass-mass comparison are discussed, including the Planck selection function. Furthermore, we show the results for the (0.1-2.4) keV luminosity versus mass scaling relation. The overall slope of the sample (1.34) is in agreement with expectations and values from the literature. Splitting the sample into galaxy groups and clusters reveals, even after a selection bias correction, that galaxy groups exhibit a significantly steeper slope (1.88) compared to clusters (1.06).
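
A scaling-relation slope like the 1.34 quoted above is essentially a straight-line fit in log-log space. The sketch below generates 64 mock clusters with a true slope of 1.34 and recovers it; none of this is the survey's actual data or fitting machinery (which accounts for measurement errors and selection bias).

```python
# Fitting a luminosity-mass scaling slope in log-log space on mock data
# generated with a true slope of 1.34. Illustration only.
import numpy as np

rng = np.random.default_rng(7)
log_mass = rng.uniform(13.5, 15.5, 64)        # log10 total mass, 64 clusters
log_lum = 1.34 * (log_mass - 14.5) + 44.5 + rng.normal(0, 0.15, 64)

slope, norm = np.polyfit(log_mass - 14.5, log_lum, 1)  # L proportional to M^slope
print(f"fitted slope: {slope:.2f}")
```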

  7. A Uniformly Selected Sample of Low-mass Black Holes in Seyfert 1 Galaxies. II. The SDSS DR7 Sample

    Science.gov (United States)

    Liu, He-Yang; Yuan, Weimin; Dong, Xiao-Bo; Zhou, Hongyan; Liu, Wen-Juan

    2018-04-01

A new sample of 204 low-mass black holes (LMBHs) in active galactic nuclei (AGNs) is presented with black hole masses in the range of (1–20) × 10^5 M⊙. The AGNs are selected through a systematic search among galaxies in the Seventh Data Release (DR7) of the Sloan Digital Sky Survey (SDSS), and careful analyses of their optical spectra and precise measurement of spectral parameters. Combining them with our previous sample selected from SDSS DR4 makes it the largest LMBH sample so far, totaling over 500 objects. Some of the statistical properties of the combined LMBH AGN sample are briefly discussed in the context of exploring the low-mass end of the AGN population. Their X-ray luminosities follow the extension of the previously known correlation with the [O III] luminosity. The effective optical-to-X-ray spectral indices α_OX, albeit with a large scatter, are broadly consistent with the extension of the relation with the near-UV luminosity L_2500 Å. Interestingly, a correlation of α_OX with black hole mass is also found, with α_OX being statistically flatter (stronger X-ray relative to optical) for lower black hole masses. Only 26 objects, mostly radio loud, were detected in radio at 20 cm in the FIRST survey, giving a radio-loud fraction of 4%. The host galaxies of LMBHs have stellar masses in the range of 10^8.8–10^12.4 M⊙ and optical colors typical of Sbc spirals. They are dominated by young stellar populations that seem to have undergone continuous star formation history.

  8. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    Science.gov (United States)

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnosis and classification. However, these data are usually redundant and noisy, and only a subset of them present distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, firstly, the Fisher-Markov selector is used to choose a fixed number of gene data. Secondly, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation ability. Then, discrete biogeography based optimization, which we call DBBO, is proposed by integrating the discrete migration model and the discrete mutation model. Finally, the DBBO method is used for feature selection, with three classifiers evaluated by 10-fold cross-validation. In order to show the effectiveness and efficiency of the algorithm, the proposed algorithm is tested on four breast cancer dataset benchmarks. Compared with the genetic algorithm, particle swarm optimization, the differential evolution algorithm and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better or at least comparable with previous methods from the literature when considering the quality of the solutions obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
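
The migration/mutation mechanics of binary BBO can be sketched very roughly: habitats are bit-masks over features, worse habitats copy bits from fitter ones (migration), and random bit flips provide mutation. The fitness below is a toy stand-in that rewards five designated "informative" features and penalizes subset size; the paper instead scores subsets with classifiers under cross-validation.

```python
# Highly simplified binary biogeography-based optimization (BBO) for
# feature selection. Toy fitness; not the paper's DBBO implementation.
import numpy as np

rng = np.random.default_rng(3)
n_feat, pop_size, gens = 30, 20, 60
informative = set(range(5))

def fitness(mask):
    hits = sum(int(mask[i]) for i in informative)
    return hits - 0.1 * mask.sum()        # prefer small, informative subsets

pop = rng.integers(0, 2, (pop_size, n_feat))
for _ in range(gens):
    fit = np.array([fitness(h) for h in pop])
    pop = pop[np.argsort(-fit)]           # best habitat first (kept as elite)
    immigration = np.arange(pop_size) / (pop_size - 1)  # worse immigrate more
    for i in range(1, pop_size):
        for j in range(n_feat):
            if rng.random() < immigration[i]:
                pop[i, j] = pop[rng.integers(0, i), j]  # migrate from fitter habitat
            if rng.random() < 0.01:
                pop[i, j] ^= 1                          # mutation: flip the bit
best = max(pop, key=fitness)
best_fit = fitness(best)
print(sorted(np.flatnonzero(best)), best_fit)
```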

  9. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    Science.gov (United States)

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
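
The inverse-probability resampling idea underlying the proposed corrections can be illustrated on its own: rows of a case-enriched sample are re-drawn with weights proportional to 1/p(selection), so the resampled data approximate the population class balance. The selection probabilities below are assumed known, and everything is simulated; this is a sketch of the principle, not the paper's sambia implementation.

```python
# Inverse-probability oversampling for a case-enriched two-phase sample.
# Simulated data with known selection probabilities.
import numpy as np

rng = np.random.default_rng(0)
y = np.concatenate([np.ones(500), np.zeros(500)])  # 50/50 in the study sample
p_select = np.where(y == 1, 1.0, 0.1)   # cases always kept, controls 1-in-10

w = 1.0 / p_select                      # inverse-probability weights
idx = rng.choice(len(y), size=5000, replace=True, p=w / w.sum())
frac = y[idx].mean()
print(f"case fraction after correction: {frac:.3f}")  # population rate is 500/5500
```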

  10. Multiwavelength diagnostics of accretion in an X-ray selected sample of CTTSs

    Science.gov (United States)

    Curran, R. L.; Argiroffi, C.; Sacco, G. G.; Orlando, S.; Peres, G.; Reale, F.; Maggio, A.

    2011-02-01

    Context. High resolution X-ray spectroscopy has revealed soft X-rays from high density plasma in classical T Tauri stars (CTTSs), probably arising from the accretion shock region. However, the mass accretion rates derived from the X-ray observations are consistently lower than those derived from UV/optical/NIR studies. Aims: We aim to test the hypothesis that the high density soft X-ray emission originates from accretion by analysing, in a homogeneous manner, optical accretion indicators for an X-ray selected sample of CTTSs. Methods: We analyse optical spectra of the X-ray selected sample of CTTSs and calculate the accretion rates based on measuring the Hα, Hβ, Hγ, He ii 4686 Å, He i 5016 Å, He i 5876 Å, O i 6300 Å, and He i 6678 Å equivalent widths. In addition, we also calculate the accretion rates based on the full width at 10% maximum of the Hα line. The different optical tracers of accretion are compared and discussed. The derived accretion rates are then compared to the accretion rates derived from the X-ray spectroscopy. Results: We find that, for each CTTS in our sample, the different optical tracers predict mass-accretion rates that agree within the errors, albeit with a spread of ≈ 1 order of magnitude. Typically, mass-accretion rates derived from Hα and He i 5876 Å are larger than those derived from Hβ, Hγ, and O i. In addition, the Hα full width at 10%, whilst a good indicator of accretion, may not accurately measure the mass-accretion rate. When the optical mass-accretion rates are compared to the X-ray derived mass-accretion rates, we find that: a) the latter are always lower (but by varying amounts); b) the latter range within a factor of ≈ 2 around 2 × 10-10 M⊙ yr-1, despite the former spanning a range of ≈ 3 orders of magnitude. We suggest that the systematic underestimate of the X-ray derived mass-accretion rates could depend on the density distribution inside the accretion streams, where the densest part of the stream is

  11. Estimating the residential demand function for natural gas in Seoul with correction for sample selection bias

    International Nuclear Information System (INIS)

    Yoo, Seung-Hoon; Lim, Hea-Jin; Kwak, Seung-Jun

    2009-01-01

Over the last twenty years, the consumption of natural gas in Korea has increased dramatically. This increase has mainly resulted from the rise of consumption in the residential sector. The main objective of the study is to estimate households' demand function for natural gas by applying a sample selection model using data from a survey of households in Seoul. The results show that there exists a selection bias in the sample and that failure to correct for sample selection bias distorts the mean estimate of the demand for natural gas downward by 48.1%. In addition, according to the estimation results, the size of the house, the dummy variable for dwelling in an apartment, the dummy variable for having a bed in an inner room, and the household's income all have positive relationships with the demand for natural gas. On the other hand, the size of the family and the price of gas negatively contribute to the demand for natural gas. (author)
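The correction described in this record is in the Heckman sample-selection tradition. As a hedged sketch of the classic two-step version (simulated data and illustrative coefficients, assuming numpy and scipy; not the authors' survey, model, or variables):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000

# Simulated data: selection depends on an instrument z and on x; outcome
# errors correlate with selection errors, so naive OLS on the selected
# subsample is biased.
z = rng.normal(size=n)
x = rng.normal(size=n)
e_sel = rng.normal(size=n)
e_out = 0.7 * e_sel + rng.normal(scale=0.5, size=n)
selected = (0.5 + 1.0 * z + 0.8 * x + e_sel) > 0
y = 2.0 + 1.5 * x + e_out            # true slope on x is 1.5

# Step 1: probit model of selection, fit by maximum likelihood.
def nll(theta):
    idx = theta[0] + theta[1] * z + theta[2] * x
    p = np.clip(norm.cdf(idx), 1e-10, 1 - 1e-10)
    return -np.sum(np.where(selected, np.log(p), np.log(1 - p)))

theta = minimize(nll, x0=np.zeros(3)).x

# Step 2: OLS on the selected subsample, augmented with the inverse Mills
# ratio to absorb the selection effect.
idx = theta[0] + theta[1] * z[selected] + theta[2] * x[selected]
mills = norm.pdf(idx) / norm.cdf(idx)
X = np.column_stack([np.ones(selected.sum()), x[selected], mills])
beta, *_ = np.linalg.lstsq(X, y[selected], rcond=None)
print(beta[1])  # slope estimate after selection correction (true value 1.5)
```

The exclusion restriction matters here: z enters the selection equation but not the outcome equation, which is what identifies the correction term.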

  12. Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay

    International Nuclear Information System (INIS)

    Wang Na; Wu Zhi-Hai; Peng Li

    2014-01-01

In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, algebraic graph theory, the matrix method, the stability theory of linear systems, and some other techniques are employed to derive the necessary and sufficient conditions guaranteeing that the heterogeneous multi-agent systems asymptotically achieve stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results. (interdisciplinary physics and related areas of science and technology)
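A much-simplified illustration of sampled-data consensus (homogeneous first-order agents, no sampling delay, illustrative graph and gains; not the paper's heterogeneous protocol):

```python
# Minimal sketch: zero-order-hold sampled-data consensus on a fixed
# undirected graph. All numbers here are illustrative.
import numpy as np

# Adjacency matrix of a 4-agent path graph: 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian

h = 0.1                               # sampling period (small enough for stability)
x = np.array([1.0, -2.0, 0.5, 3.0])   # initial agent states

# Sampled-data update: x[k+1] = (I - h L) x[k]
for _ in range(500):
    x = x - h * (L @ x)

print(x)  # all states converge to the average of the initial states, 0.625
```

Stability requires h < 2 / λ_max(L); for this path graph λ_max ≈ 3.41, so h = 0.1 is comfortably inside the stable range.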

  13. Risk-based audit selection of dairy farms.

    Science.gov (United States)

    van Asseldonk, M A P M; Velthuis, A G J

    2014-02-01

    Dairy farms are audited in the Netherlands on numerous process standards. Each farm is audited once every 2 years. Increasing demands for cost-effectiveness in farm audits can be met by introducing risk-based principles. This implies targeting subpopulations with a higher risk of poor process standards. To select farms for an audit that present higher risks, a statistical analysis was conducted to test the relationship between the outcome of farm audits and bulk milk laboratory results before the audit. The analysis comprised 28,358 farm audits and all conducted laboratory tests of bulk milk samples 12 mo before the audit. The overall outcome of each farm audit was classified as approved or rejected. Laboratory results included somatic cell count (SCC), total bacterial count (TBC), antimicrobial drug residues (ADR), level of butyric acid spores (BAB), freezing point depression (FPD), level of free fatty acids (FFA), and cleanliness of the milk (CLN). The bulk milk laboratory results were significantly related to audit outcomes. Rejected audits are likely to occur on dairy farms with higher mean levels of SCC, TBC, ADR, and BAB. Moreover, in a multivariable model, maxima for TBC, SCC, and FPD as well as standard deviations for TBC and FPD are risk factors for negative audit outcomes. The efficiency curve of a risk-based selection approach, on the basis of the derived regression results, dominated the current random selection approach. To capture 25, 50, or 75% of the population with poor process standards (i.e., audit outcome of rejected), respectively, only 8, 20, or 47% of the population had to be sampled based on a risk-based selection approach. Milk quality information can thus be used to preselect high-risk farms to be audited more frequently. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
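The efficiency-curve idea in this record, i.e. how much of the population must be audited to capture a given share of the rejections when farms are ranked by risk, can be sketched as follows (simulated risk scores and outcomes, not the study's data or regression model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
risk = rng.normal(size=n)                                  # model-based risk score
rejected = rng.random(n) < 1 / (1 + np.exp(-(risk - 2)))   # rare poor outcomes

order = np.argsort(-risk)                       # audit highest-risk farms first
hits = np.cumsum(rejected[order]) / rejected.sum()

def fraction_sampled(capture):
    """Smallest fraction of farms audited to capture `capture` of rejections."""
    k = np.searchsorted(hits, capture) + 1
    return k / n

print(fraction_sampled(0.5))  # well below 0.5 when the score is informative
```

A random-selection baseline would need to audit 50% of farms to capture 50% of rejections; the gap between that and the risk-based fraction is the efficiency gain the study reports.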

  14. A Story-Based Simulation for Teaching Sampling Distributions

    Science.gov (United States)

    Turner, Stephen; Dabney, Alan R.

    2015-01-01

    Statistical inference relies heavily on the concept of sampling distributions. However, sampling distributions are difficult to teach. We present a series of short animations that are story-based, with associated assessments. We hope that our contribution can be useful as a tool to teach sampling distributions in the introductory statistics…

  15. Control charts for location based on different sampling schemes

    NARCIS (Netherlands)

    Mehmood, R.; Riaz, M.; Does, R.J.M.M.

    2013-01-01

    Control charts are the most important statistical process control tool for monitoring variations in a process. A number of articles are available in the literature for the X̄ control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set

  16. MIS-based sensors with hydrogen selectivity

    Energy Technology Data Exchange (ETDEWEB)

Li, Dongmei [Boulder, CO]; Medlin, J William [Boulder, CO]; McDaniel, Anthony H [Livermore, CA]; Bastasz, Robert J [Livermore, CA]

    2008-03-11

The invention provides hydrogen selective metal-insulator-semiconductor sensors which include a layer of hydrogen selective material. The hydrogen selective material can be a polyimide layer having a thickness between 200 and 800 nm. Suitable polyimide materials include reaction products of benzophenone tetracarboxylic dianhydride, 4,4-oxydianiline, m-phenylene diamine, and other structurally similar materials.

  17. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
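The cutpoint method for generalized sampling of a discrete distribution mentioned above can be sketched as follows (an illustrative implementation, not the paper's code): precomputed cutpoints let CDF inversion start near the right bin instead of scanning sequentially from index 0, which is the source of the reported speedup.

```python
import numpy as np

rng = np.random.default_rng(2)
p = np.array([0.05, 0.2, 0.1, 0.4, 0.25])   # discrete distribution to sample
cdf = np.cumsum(p)

m = 8                                       # number of cutpoints
# I[k] = first index i with cdf[i] > k/m
I = np.searchsorted(cdf, np.arange(m) / m, side='right')

def sample(u):
    """Inversion sampling accelerated by the cutpoint table."""
    i = I[int(u * m)]                       # jump close to the answer...
    while i < len(cdf) - 1 and cdf[i] < u:  # ...then finish with a short scan
        i += 1
    return i

draws = np.array([sample(u) for u in rng.random(100000)])
freqs = np.bincount(draws, minlength=len(p)) / len(draws)
print(freqs)  # empirical frequencies close to p
```

With enough cutpoints, the expected scan length per draw is O(1) regardless of the number of categories, versus O(n) for naive sequential search.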

  18. Climate Change and Agricultural Productivity in Sub-Saharan Africa: A Spatial Sample Selection Model

    NARCIS (Netherlands)

    Ward, P.S.; Florax, R.J.G.M.; Flores-Lagunes, A.

    2014-01-01

    Using spatially explicit data, we estimate a cereal yield response function using a recently developed estimator for spatial error models when endogenous sample selection is of concern. Our results suggest that yields across Sub-Saharan Africa will decline with projected climatic changes, and that

  19. Decomposing the Gender Wage Gap in the Netherlands with Sample Selection Adjustments

    NARCIS (Netherlands)

    Albrecht, James; Vuuren, van Aico; Vroman, Susan

    2004-01-01

In this paper, we use quantile regression decomposition methods to analyze the gender gap between men and women who work full time in the Netherlands. Because the fraction of women working full time in the Netherlands is quite low, sample selection is a serious issue. In addition to shedding light

  20. Phytochemical analysis and biological evaluation of selected African propolis samples from Cameroon and Congo

    NARCIS (Netherlands)

    Papachroni, D.; Graikou, K.; Kosalec, I.; Damianakos, H.; Ingram, V.J.; Chinou, I.

    2015-01-01

    The objective of this study was the chemical analysis of four selected samples of African propolis (Congo and Cameroon) and their biological evaluation. Twenty-one secondary metabolites belonging to four different chemical groups were isolated from the 70% ethanolic extracts of propolis and their

  1. Gender Wage Gap : A Semi-Parametric Approach With Sample Selection Correction

    NARCIS (Netherlands)

    Picchio, M.; Mussida, C.

    2010-01-01

    Sizeable gender differences in employment rates are observed in many countries. Sample selection into the workforce might therefore be a relevant issue when estimating gender wage gaps. This paper proposes a new semi-parametric estimator of densities in the presence of covariates which incorporates

  2. Principal Stratification in sample selection problems with non normal error terms

    DEFF Research Database (Denmark)

    Rocci, Roberto; Mellace, Giovanni

    The aim of the paper is to relax distributional assumptions on the error terms, often imposed in parametric sample selection models to estimate causal effects, when plausible exclusion restrictions are not available. Within the principal stratification framework, we approximate the true distribut...... an application to the Job Corps training program....

  3. New sorbent materials for selective extraction of cocaine and benzoylecgonine from human urine samples.

    Science.gov (United States)

    Bujak, Renata; Gadzała-Kopciuch, Renata; Nowaczyk, Alicja; Raczak-Gutknecht, Joanna; Kordalewska, Marta; Struck-Lewicka, Wiktoria; Waszczuk-Jankowska, Małgorzata; Tomczak, Ewa; Kaliszan, Michał; Buszewski, Bogusław; Markuszewski, Michał J

    2016-02-20

An increase in cocaine consumption has been observed in Europe during the last decade. Benzoylecgonine, as the main urinary metabolite of cocaine in humans, is so far the most reliable marker of cocaine consumption. Determination of cocaine and its metabolite in complex biological samples, such as urine or blood, requires efficient and selective sample pretreatment. In this preliminary study, newly synthesized sorbent materials were proposed for the selective extraction of cocaine and benzoylecgonine from urine samples. Application of these sorbent media allowed cocaine and benzoylecgonine to be determined in urine samples at a concentration level of 100 ng/mL, with good recovery values of 81.7% ± 6.6 and 73.8% ± 4.2, respectively. The newly synthesized materials provided efficient, inexpensive and selective extraction of both cocaine and benzoylecgonine from urine samples, which can consequently lead to an increase in the sensitivity of the currently available screening diagnostic tests. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Selective Distance-Based K+ Quantification on Paper-Based Microfluidics.

    Science.gov (United States)

    Gerold, Chase T; Bakker, Eric; Henry, Charles S

    2018-04-03

In this study, paper-based microfluidic devices (μPADs) capable of K+ quantification in aqueous samples, as well as in human serum, using both colorimetric and distance-based methods are described. A lipophilic phase containing potassium ionophore I (valinomycin) was utilized to achieve highly selective quantification of K+ in the presence of Na+, Li+, and Mg2+ ions. Successful addition of a suspended lipophilic phase to a wax printed paper-based device is described and offers a solution to current approaches that rely on organic solvents, which damage wax barriers. The approach provides an avenue for future alkali/alkaline quantification utilizing μPADs. Colorimetric spot tests allowed for K+ quantification from 0.1-5.0 mM using only 3.00 μL of sample solution. Selective distance-based quantification required small sample volumes (6.00 μL) and gave responses sensitive enough to distinguish between 1.0 and 2.5 mM of sample K+. μPADs using distance-based methods were also capable of differentiating between 4.3 and 6.9 mM K+ in human serum samples. Distance-based methods required no digital analysis, electronic hardware, or pumps; any steps required for quantification could be carried out using the naked eye.

  5. Semiparametric efficient and robust estimation of an unknown symmetric population under arbitrary sample selection bias

    KAUST Repository

    Ma, Yanyuan

    2013-09-01

    We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.

  6. Obscured AGN at z ~ 1 from the zCOSMOS-Bright Survey. I. Selection and optical properties of a [Ne v]-selected sample

    Science.gov (United States)

    Mignoli, M.; Vignali, C.; Gilli, R.; Comastri, A.; Zamorani, G.; Bolzonella, M.; Bongiorno, A.; Lamareille, F.; Nair, P.; Pozzetti, L.; Lilly, S. J.; Carollo, C. M.; Contini, T.; Kneib, J.-P.; Le Fèvre, O.; Mainieri, V.; Renzini, A.; Scodeggio, M.; Bardelli, S.; Caputi, K.; Cucciati, O.; de la Torre, S.; de Ravel, L.; Franzetti, P.; Garilli, B.; Iovino, A.; Kampczyk, P.; Knobel, C.; Kovač, K.; Le Borgne, J.-F.; Le Brun, V.; Maier, C.; Pellò, R.; Peng, Y.; Perez Montero, E.; Presotto, V.; Silverman, J. D.; Tanaka, M.; Tasca, L.; Tresse, L.; Vergani, D.; Zucca, E.; Bordoloi, R.; Cappi, A.; Cimatti, A.; Koekemoer, A. M.; McCracken, H. J.; Moresco, M.; Welikala, N.

    2013-08-01

Aims: The application of multi-wavelength selection techniques is essential for obtaining a complete and unbiased census of active galactic nuclei (AGN). We present here a method for selecting z ~ 1 obscured AGN from optical spectroscopic surveys. Methods: A sample of 94 narrow-line AGN with 0.65 advantage of the large amount of data available in the COSMOS field, the properties of the [Ne v]-selected type 2 AGN were investigated, focusing on their host galaxies, X-ray emission, and optical line-flux ratios. Finally, a previously developed diagnostic, based on the X-ray-to-[Ne v] luminosity ratio, was exploited to search for the more heavily obscured AGN. Results: We found that [Ne v]-selected narrow-line AGN have Seyfert 2-like optical spectra, although their emission line ratios are diluted by a star-forming component. The ACS morphologies and stellar component in the optical spectra indicate a preference for our type 2 AGN to be hosted in early-type spirals with stellar masses greater than 10^9.5-10 M⊙, on average higher than those of the galaxy parent sample. The fraction of galaxies hosting [Ne v]-selected obscured AGN increases with the stellar mass, reaching a maximum of about 3% at ≈2 × 10^11 M⊙. A comparison with other selection techniques at z ~ 1, namely the line-ratio diagnostics and X-ray detections, shows that the detection of the [Ne v] λ3426 line is an effective method for selecting AGN in the optical band, in particular the most heavily obscured ones, but cannot provide a complete census of type 2 AGN by itself. Finally, the high fraction of [Ne v]-selected type 2 AGN not detected in medium-deep (≈100-200 ks) Chandra observations (67%) is suggestive of the inclusion of Compton-thick (i.e., with NH > 10^24 cm^-2) sources in our sample. The presence of a population of heavily obscured AGN is corroborated by the X-ray-to-[Ne v] ratio; we estimated, by means of an X-ray stacking technique and simulations, that the Compton-thick fraction in our

  7. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  8. Frequency-Selective Signal Sensing with Sub-Nyquist Uniform Sampling Scheme

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2015-01-01

In this paper the authors discuss a problem of acquisition and reconstruction of a signal polluted by adjacent-channel interference. The authors propose a method to find a sub-Nyquist uniform sampling pattern which allows for correct reconstruction of selected frequencies. The method is inspired...... by the Restricted Isometry Property, which is known from the field of compressed sensing. Then, compressed sensing is used to successfully reconstruct a wanted signal even if some of the uniform samples were randomly lost, e.g., due to ADC saturation. An experiment which tests the proposed method in practice...

  9. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sample selection by random number... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each for...

  10. Assessment of selected contaminants in streambed- and suspended-sediment samples collected in Bexar County, Texas, 2007-09

    Science.gov (United States)

    Wilson, Jennifer T.

    2011-01-01

Elevated concentrations of sediment-associated contaminants are typically associated with urban areas such as San Antonio, Texas, in Bexar County, the seventh most populous city in the United States. This report describes an assessment of selected sediment-associated contaminants in samples collected in Bexar County from sites on the following streams: Medio Creek, Medina River, Elm Creek, Martinez Creek, Chupaderas Creek, Leon Creek, Salado Creek, and San Antonio River. During 2007-09, the U.S. Geological Survey periodically collected surficial streambed-sediment samples during base flow and suspended-sediment (large-volume suspended-sediment) samples from selected streams during stormwater runoff. All sediment samples were analyzed for major and trace elements and for organic compounds including halogenated organic compounds and polycyclic aromatic hydrocarbons (PAHs). Selected contaminants in streambed and suspended sediments in watersheds of the eight major streams in Bexar County were assessed by using a variety of methods: observations of occurrence and distribution, comparison to sediment-quality guidelines and data from previous studies, statistical analyses, and source indicators. Trace element concentrations were low compared to the consensus-based sediment-quality guidelines threshold effect concentration (TEC) and probable effect concentration (PEC). Trace element concentrations were greater than the TEC in 28 percent of the samples and greater than the PEC in 1.5 percent of the samples. Chromium concentrations exceeded sediment-quality guidelines more frequently than concentrations of any other constituents analyzed in this study (greater than the TEC in 69 percent of samples and greater than the PEC in 8 percent of samples). Mean trace element concentrations generally are lower in Bexar County samples compared to concentrations in samples collected during previous studies in the Austin and Fort Worth, Texas, areas, but considering the relatively

  11. The Toggle Local Planner for sampling-based motion planning

    KAUST Repository

    Denny, Jory; Amato, Nancy M.

    2012-01-01

    Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5

  12. Variable screening and ranking using sampling-based sensitivity measures

    International Nuclear Information System (INIS)

    Wu, Y-T.; Mohanty, Sitakanta

    2006-01-01

This paper presents a methodology for screening insignificant random variables and ranking important random variables using sensitivity measures, including two cumulative distribution function (CDF)-based and two mean-response-based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from hypothesis tests, to classify significant and insignificant random variables. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology appears to be particularly suitable for problems with large, complex models that have large numbers of random variables but relatively few significant random variables
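A minimal sketch of the sampling-based screening idea (using a simple correlation-based mean-response measure and a hypothetical acceptance limit; not the authors' exact CDF-based measures):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 4))            # x0..x3: candidate random variables
y = 5 * X[:, 0] + 0.5 * X[:, 2] ** 3 + rng.normal(scale=0.1, size=n)
# x1 and x3 do not enter the model at all

# Sample correlation of each input with the response, from random samples only;
# no assumption on the form of the performance function is needed.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(4)])

# Screen: flag variables whose |r| exceeds a conservative acceptance limit
# (illustrative ~3-sigma null band for a correlation estimated from n samples).
limit = 3 / np.sqrt(n)
significant = np.abs(r) > limit
print(significant)  # x0 and x2 are flagged as significant
```

Note that the nonlinear x2 term is still detected by this linear measure here; the CDF-based measures in the paper are designed to catch influences that a simple correlation can miss.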

  13. A sampling-based approach to probabilistic pursuit evasion

    KAUST Repository

    Mahadevan, Aditya; Amato, Nancy M.

    2012-01-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented

  14. Population genetics inference for longitudinally-sampled mutants under strong selection.

    Science.gov (United States)

    Lacerda, Miguel; Seoighe, Cathal

    2014-11-01

    Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
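The discrete Wright-Fisher model with selection that the authors build on can be simulated directly; a minimal sketch with illustrative parameters (each generation, selection shifts the expected mutant frequency, then binomial sampling adds drift):

```python
import numpy as np

rng = np.random.default_rng(4)

def wright_fisher(p0, s, N, generations):
    """Track a mutant's frequency under selection coefficient s, population size N."""
    p = p0
    for _ in range(generations):
        # Selection shifts the expected frequency before binomial drift.
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        p = rng.binomial(N, p_sel) / N
    return p

# Strong selection (s = 0.5) drives a rare mutant toward fixation quickly;
# this is the regime where weak-selection diffusion approximations break down.
final = np.mean([wright_fisher(0.05, 0.5, N=1000, generations=50)
                 for _ in range(200)])
print(final)  # mean final frequency close to 1 under strong positive selection
```

The diffusion approximations discussed in the record replace this binomial transition with a continuous process whose accuracy degrades as N·s grows large.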

  15. Magnetically separable polymer (Mag-MIP) for selective analysis of biotin in food samples.

    Science.gov (United States)

    Uzuriaga-Sánchez, Rosario Josefina; Khan, Sabir; Wong, Ademar; Picasso, Gino; Pividori, Maria Isabel; Sotomayor, Maria Del Pilar Taboada

    2016-01-01

This work presents an efficient method for the preparation of magnetic nanoparticles modified with molecularly imprinted polymers (Mag-MIP) through the core-shell method for the determination of biotin in milk food samples. The functional monomer acrylic acid was selected by molecular modeling, EGDMA was used as the cross-linking monomer and AIBN as the radical initiator. The Mag-MIP and Mag-NIP were characterized by FTIR, magnetic hysteresis, XRD, SEM and N2-sorption measurements. The capacity of Mag-MIP for biotin adsorption, its kinetics and selectivity were studied in detail. The adsorption data were well described by the Freundlich isotherm model with an adsorption equilibrium constant (KF) of 1.46 mL g(-1). The selectivity experiments revealed that the prepared Mag-MIP had higher selectivity toward biotin than other molecules with different chemical structures. The material was successfully applied for the determination of biotin in diverse milk samples using HPLC for quantification of the analyte, obtaining a mean recovery of 87.4%. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. On incomplete sampling under birth-death models and connections to the sampling-based coalescent.

    Science.gov (United States)

    Stadler, Tanja

    2009-11-07

The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling. This shows that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on an Hepatitis C virus dataset from Egypt. We show that the transmission times estimates are significantly different: the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.

  17. Development of ion imprinted polymers for the selective extraction of lanthanides from environmental samples

    International Nuclear Information System (INIS)

    Moussa, Manel

    2016-01-01

The analysis of the lanthanide ions present at trace level in complex environmental matrices often requires a purification and preconcentration step. Solid phase extraction (SPE) is the most widely used sample preparation technique. To improve the selectivity of this step, Ion Imprinted Polymers (IIPs) can be used as SPE solid supports. The aim of this work was the development of IIPs for the selective extraction of lanthanide ions from environmental samples. In a first part, IIPs were prepared according to the trapping approach using 5,7-dichloroquinoline-8-ol as a non-vinylated ligand. For the first time, the loss of the trapped ligand during the template ion removal and sedimentation steps was demonstrated by HPLC-UV. Moreover, this loss was not repeatable, which led to a lack of repeatability of the SPE profiles. It was then demonstrated that the trapping approach is not appropriate for IIP synthesis. In a second part, IIPs were synthesized by chemical immobilization of methacrylic acid as a vinylated monomer. The repeatability of the synthesis and the SPE protocol were confirmed. A good selectivity of the IIPs for all the lanthanide ions was obtained. IIPs were successfully used to selectively extract lanthanide ions from tap and river water. Finally, IIPs were synthesized by chemical immobilization of methacrylic acid and 4-vinylpyridine as functional monomers and either a light (Nd3+) or a heavy (Er3+) lanthanide ion as template. Both kinds of IIPs led to a similar selectivity for all lanthanide ions. Nevertheless, this selectivity can be modified by changing the nature and the pH of the washing solution used in the SPE protocol. (author)

  18. Selective parathyroid venous sampling in primary hyperparathyroidism: A systematic review and meta-analysis.

    Science.gov (United States)

    Ibraheem, Kareem; Toraih, Eman A; Haddad, Antoine B; Farag, Mahmoud; Randolph, Gregory W; Kandil, Emad

    2018-05-14

Minimally invasive parathyroidectomy requires accurate preoperative localization techniques. There is considerable controversy about the effectiveness of selective parathyroid venous sampling (sPVS) in primary hyperparathyroidism (PHPT) patients. The aim of this meta-analysis is to examine the diagnostic accuracy of sPVS as a preoperative localization modality in PHPT. Studies evaluating the diagnostic accuracy of sPVS for PHPT were electronically searched in the PubMed, EMBASE, Web of Science, and Cochrane Controlled Trials Register databases. Two independent authors reviewed the studies, and the revised quality assessment of diagnostic accuracy studies tool was used for the quality assessment. Study heterogeneity and pooled estimates were calculated. Two hundred and two unique studies were identified. Of those, 12 studies were included in the meta-analysis. Pooled sensitivity, specificity, and positive likelihood ratio (PLR) of sPVS were 74%, 41%, and 1.55, respectively. The area under the receiver operating characteristic curve was 0.684, indicating an average discriminatory ability of sPVS. On comparison between sPVS and noninvasive imaging modalities, sensitivity, PLR, and positive posttest probability were significantly higher in sPVS compared to noninvasive imaging modalities. Interestingly, super-selective venous sampling had the highest sensitivity, accuracy, and positive posttest probability compared to other parathyroid venous sampling techniques. This is the first meta-analysis to examine the accuracy of sPVS in PHPT. sPVS had higher pooled sensitivity when compared to noninvasive modalities in revision parathyroid surgery. However, the invasiveness of this technique does not favor its routine use for preoperative localization. Super-selective venous sampling was the most accurate among all other parathyroid venous sampling techniques. Laryngoscope, 2018. © 2018 The American Laryngological, Rhinological and Otological Society, Inc.

  19. SADA: Ecological Risk Based Decision Support System for Selective Remediation

    Science.gov (United States)

    Spatial Analysis and Decision Assistance (SADA) is freeware that implements terrestrial ecological risk assessment and yields a selective remediation design using its integral geographical information system, based on ecological and risk assessment inputs. Selective remediation ...

  20. Electron microprobe analyses of selected samples from deep rock disposal experiment No. 1

    International Nuclear Information System (INIS)

    Hlava, P.F.; Chambers, W.F.

    1976-04-01

    Deep Rock Disposal Experiment No. 1 was designed to provide information about the interaction between a molten, glass-based, nuclear waste simulant and rock material. Selected samples from this experiment were examined by optical microscopy and electron probe microanalysis. Analysis of the homogenized material in the convection cell that was created in the central portion of the melt region shows that an amount of rock equal to about one-half of the original amount of waste simulant was incorporated in the melt during the experiment. Stagnant melt at the sides of the cell formed a glass with large compositional gradients. A white band separated the convected and stagnant materials. The color of the band is attributed to light scattering by small crystallites formed during cooling. Four types of crystallites grew from the melt: two oxides, a Mg-Fe borate, and a silicate. Spinel (MgO, Cr2O3, FeO (Fe2O3), and NiO) was the most common crystallite in the glass. The spinel crystallites found within the convection cell displayed skeletal morphology and oscillatory zoning, which indicates growth at varying temperatures as they were carried along by convection. A single cluster of nonskeletal (Fe,Cr)2O3 crystallites was found at the bottom of the melt zone, where convection did not occur. Mg-Fe borate crystallites grew in clusters in the central portion of the convection cell after convection ceased. A silicate similar to Fe-rich diopside (CaMgSi2O6) with unusual amounts of Ce2O3 and other heavy metal oxides formed as larger crystallites in the stagnant melt at the side of the convection cell and as many very small crystallites in the white band.

  1. The diagnostic value of CT scan and selective venous sampling in Cushing's syndrome

    International Nuclear Information System (INIS)

    Negoro, Makoto; Kuwayama, Akio; Yamamoto, Naoto; Nakane, Toshichi; Yokoe, Toshio; Kageyama, Naoki; Ichihara, Kaoru; Ishiguchi, Tsuneo; Sakuma, Sadayuki

    1986-01-01

    We studied 24 patients with Cushing's syndrome in order to find the best way to confirm the pituitary adenoma preoperatively. First, the sellar content was studied by means of a high-resolution CT scan in each patient. Second, by selective catheterization of the bilateral internal jugular veins and the inferior petrosal sinus, venous samples (C) were obtained for ACTH assay. Simultaneously, peripheral blood sampling (P) was performed at the antecubital vein for the same purpose, and the C/P ratio was carefully calculated for each patient. A C/P ratio exceeding 2 was highly suggestive of the presence of a pituitary adenoma. Even with an advanced high-resolution CT scan at a slice thickness of 2 mm, pituitary adenomas were detected in only 32% of the patients studied; the result of image diagnosis in Cushing's disease was discouraging. As for the chemical diagnosis, the results were as follows. At the early stage of this study, the catheterization was terminated in the jugular veins of nine patients. Among these, the presence of a pituitary adenoma was predicted correctly in the preoperative stage in five patients. Later, by means of inferior petrosal sinus sampling, pituitary microadenomas were detected in ten of twelve patients. Selective venous sampling for ACTH in the inferior petrosal sinus or jugular vein proved to be useful for the differential diagnosis of Cushing's syndrome when other diagnostic measures such as CT scan were inconclusive. (author)

  2. Stability of selected volatile breath constituents in Tedlar, Kynar and Flexfilm sampling bags

    Science.gov (United States)

    Mochalski, Paweł; King, Julian; Unterkofler, Karl; Amann, Anton

    2016-01-01

    The stability of 41 selected breath constituents in three types of polymer sampling bags, Tedlar, Kynar, and Flexfilm, was investigated using solid phase microextraction and gas chromatography mass spectrometry. The tested molecular species belong to different chemical classes (hydrocarbons, ketones, aldehydes, aromatics, sulphur compounds, esters, terpenes, etc.) and exhibit close-to-breath low ppb levels (3–12 ppb), with the exception of isoprene, acetone and acetonitrile (106 ppb, 760 ppb, and 42 ppb, respectively). Stability tests comprised the background emission of contaminants, recovery from dry samples, recovery from humid samples (RH 80% at 37 °C), influence of the bag's filling degree, and reusability. Findings yield evidence of the superiority of Tedlar bags over the remaining polymers in terms of background emission, species stability (up to 7 days for dry samples), and reusability. Recoveries of the species under study suffered from the presence of high amounts of water (losses up to 10%); however, only heavier volatiles, with molecular masses higher than 90, exhibited more pronounced losses (20–40%). The sample size (the degree of bag filling) was found to be one of the most important factors affecting sample integrity. To sum up, it is recommended to store breath samples in pre-conditioned Tedlar bags for up to 6 hours at the maximum possible filling volume. Among the remaining films, Kynar can be considered an alternative to Tedlar; however, higher losses of compounds should be expected even within the first hours of storage. Due to its high background emission, Flexfilm is not suitable for sampling and storage of samples for analyses targeting volatiles at low ppb levels. PMID:23323261

  3. Structuring AHP-based maintenance policy selection

    NARCIS (Netherlands)

    Goossens, Adriaan; Basten, Robertus Johannes Ida; Hummel, J. Marjan; van der Wegen, Leonardus L.M.

    2015-01-01

    We aim to structure the maintenance policy selection process for ships, using the Analytic Hierarchy Process (AHP). Maintenance is an important contributor to reaching the intended lifetime of capital technical assets, and it is gaining increasing interest and relevance.

  4. Antibiotic content of selective culture media for isolation of Capnocytophaga species from oral polymicrobial samples.

    Science.gov (United States)

    Ehrmann, E; Jolivet-Gougeon, A; Bonnaure-Mallet, M; Fosse, T

    2013-10-01

    In the oral microbiome, because of the abundance of commensal competitive flora, selective media with antibiotics are necessary for the recovery of fastidious Capnocytophaga species. The performance of six culture media (blood agar, chocolate blood agar, VCAT medium, CAPE medium, bacitracin chocolate blood agar and VK medium) was compared with literature data concerning five other media (FAA, LB, TSBV, CapR and TBBP media). To understand variable growth on selective media, the MICs of each antimicrobial agent contained in these different media (colistin, kanamycin, trimethoprim, trimethoprim-sulfamethoxazole, vancomycin, aztreonam and bacitracin) were determined for all Capnocytophaga species. Overall, VCAT medium (Columbia, 10% cooked horse blood, polyvitaminic supplement, 3.75 mg/l of colistin, 1.5 mg/l of trimethoprim, 1 mg/l of vancomycin and 0.5 mg/l of amphotericin B; Oxoid, France) was the most efficient selective medium with regard to the detection of Capnocytophaga species from oral samples. In pure culture, a simple blood agar allowed the growth of all Capnocytophaga species. Nonetheless, in oral samples, because of the abundance of commensal competitive flora, selective media with antibiotics are necessary for the recovery of Capnocytophaga species. The demonstrated superiority of VCAT medium made its use essential for the optimal detection of this bacterial genus. This work showed that extreme caution should be exercised when reporting the isolation of Capnocytophaga species from oral polymicrobial samples, because the culture medium is a determining factor. © 2013 The Society for Applied Microbiology.

  5. Effect of selective logging on genetic diversity and gene flow in Cariniana legalis sampled from a cacao agroforestry system.

    Science.gov (United States)

    Leal, J B; Santos, R P; Gaiotto, F A

    2014-01-28

    The fragments of the Atlantic Forest of southern Bahia have a long history of intense logging and selective cutting. Some tree species, such as jequitibá rosa (Cariniana legalis), have experienced a reduction in their populations with respect to both area and density. To evaluate the possible effects of selective logging on genetic diversity, gene flow, and spatial genetic structure, 51 C. legalis individuals were sampled, representing the total remaining population from the cacao agroforestry system. A total of 120 alleles were observed at the 11 microsatellite loci analyzed. The average observed heterozygosity (0.486) was less than the expected heterozygosity (0.721), indicating a loss of genetic diversity in this population. A high fixation index (FIS = 0.325) was found, which is possibly due to a reduction in population size, resulting in increased mating among relatives. The maximum (1055 m) and minimum (0.095 m) distances traveled by pollen or seeds were inferred based on paternity tests. We found unique parents for 36.84% of all sampled seedlings. The progenitors of the remaining seedlings (63.16%) were most likely outside the sampled area. Positive and significant spatial genetic structure was identified in this population in distance classes of 10 to 30 m, with an average coancestry coefficient between pairs of individuals of 0.12. These results suggest that the agroforestry system of cacao cultivation is contributing to maintaining levels of diversity and gene flow in the studied population, thus minimizing the effects of selective logging.
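The fixation index reported above can be reproduced directly from the observed and expected heterozygosities via the standard relation F_IS = 1 − Ho/He:

```python
def fixation_index(ho, he):
    """F_IS = 1 - Ho/He (observed vs. expected heterozygosity)."""
    return 1.0 - ho / he

fis = fixation_index(0.486, 0.721)
print(round(fis, 3))  # ~0.326, matching the reported FIS = 0.325 within rounding
```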

  6. Determination of Selected Polycyclic Aromatic Compounds in Particulate Matter Samples with Low Mass Loading: An Approach to Test Method Accuracy

    Directory of Open Access Journals (Sweden)

    Susana García-Alonso

    2017-01-01

    A miniaturized analytical procedure to determine selected polycyclic aromatic compounds (PACs) in low mass loadings (<10 mg) of particulate matter (PM) is evaluated. The proposed method is based on a simple sonication/agitation method using small amounts of solvent for extraction. The use of a reduced sample size of particulate matter often limits the quantification of analytes. This also leads to the need to change analytical procedures and evaluate their performance. The trueness and precision of the proposed method were tested using ambient air samples. Analytical results from the proposed method were compared with those of pressurized liquid and microwave extractions. Selected PACs (polycyclic aromatic hydrocarbons (PAHs) and nitro polycyclic aromatic hydrocarbons (NPAHs)) were determined by liquid chromatography with fluorescence detection (HPLC/FD). Taking results from pressurized liquid extractions as reference values, recovery rates of the sonication/agitation method were over 80% for the most abundant PAHs. Recovery rates of selected NPAHs were lower; enhanced rates were obtained when methanol was used as a modifier. Intermediate precision was estimated by data comparison from two mathematical approaches: normalized difference data and pooled relative deviations. Intermediate precision was in the range of 10–20%. The effectiveness of the proposed method was evaluated in PM aerosol samples collected with very low mass loadings (<0.2 mg) during characterization studies of turbofan engine exhausts.
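The two precision estimates named above can be sketched as follows, under the assumption of duplicate measurements per sample (the exact formulas and the data values here are illustrative, not taken from the study): a normalized difference for each duplicate pair, and a pooled relative standard deviation across pairs.

```python
import math

def normalized_difference(x1, x2):
    """Absolute difference of a duplicate pair, normalized by the pair mean."""
    return abs(x1 - x2) / ((x1 + x2) / 2.0)

def pooled_rsd(pairs):
    """Pooled RSD (%) from duplicate pairs: sqrt(mean((d/mean)^2) / 2) * 100."""
    terms = [((x1 - x2) / ((x1 + x2) / 2.0)) ** 2 for x1, x2 in pairs]
    return math.sqrt(sum(terms) / (2.0 * len(terms))) * 100.0

pairs = [(10.2, 9.6), (5.1, 5.9), (7.7, 7.1)]  # hypothetical duplicate results
print(round(pooled_rsd(pairs), 1))  # pooled RSD in percent
```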

  7. Passive sampling of selected pesticides in aquatic environment using polar organic chemical integrative samplers.

    Science.gov (United States)

    Thomatou, Alphanna-Akrivi; Zacharias, Ierotheos; Hela, Dimitra; Konstantinou, Ioannis

    2011-08-01

    Polar organic chemical integrative samplers (POCIS) were examined for their sampling efficiency for 12 pesticides and one metabolite commonly detected in surface waters. Laboratory-based calibration experiments of POCIS were conducted, and the determined passive sampling rates were applied for the monitoring of pesticide levels in Lake Amvrakia, Western Greece. Spot sampling was also performed for comparison purposes. Calibration experiments were performed on the basis of static renewal exposure of POCIS under stirred conditions for different time periods of up to 28 days. The analytical procedures were based on the coupling of POCIS and solid phase extraction by Oasis HLB cartridges with gas chromatography-mass spectrometry. The recovery of the target pesticides from the POCIS was generally >79%, with low relative standard deviation (RSD). Pesticide levels were determined in a monitoring campaign using both passive and spot sampling, with higher concentrations measured by spot sampling in most cases. Passive sampling by POCIS provides a useful tool for the monitoring of pesticides in aquatic systems, since integrative sampling at rates sufficient for analytical quantitation of ambient levels was observed. Calibration data are in demand for a greater number of compounds in order to extend its use in environmental monitoring.
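Integrative samplers like POCIS are typically evaluated with the standard time-weighted average relation C_w = N / (R_s · t), where N is the mass accumulated on the sorbent, R_s the calibrated sampling rate, and t the exposure time. A minimal sketch with hypothetical values (the sampling rate below is not one reported by this study):

```python
def twa_concentration(n_accumulated_ng, sampling_rate_l_per_day, days):
    """Time-weighted average C_w (ng/L) = accumulated mass / (R_s * t)."""
    return n_accumulated_ng / (sampling_rate_l_per_day * days)

c_w = twa_concentration(n_accumulated_ng=140.0,
                        sampling_rate_l_per_day=0.25,  # hypothetical R_s
                        days=28)
print(round(c_w, 1))  # ng/L over the 28-day deployment
```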

  8. Feasibility of self-sampled dried blood spot and saliva samples sent by mail in a population-based study

    International Nuclear Information System (INIS)

    Sakhi, Amrit Kaur; Bastani, Nasser Ezzatkhah; Ellingjord-Dale, Merete; Gundersen, Thomas Erik; Blomhoff, Rune; Ursin, Giske

    2015-01-01

    In large epidemiological studies it is often challenging to obtain biological samples. Self-sampling by study participants using the dried blood spot (DBS) technique has been suggested to overcome this challenge. DBS is a type of biosampling where blood samples are obtained by a finger-prick lancet, blotted and dried on filter paper. However, the feasibility and efficacy of collecting DBS samples from study participants in large-scale epidemiological studies is not known. The aim of the present study was to test the feasibility and response rate of collecting self-sampled DBS and saliva samples in a population-based study of women above 50 years of age. We determined response proportions, number of phone calls to the study center with questions about sampling, and quality of the DBS. We recruited women through a study conducted within the Norwegian Breast Cancer Screening Program. Invitations, instructions and materials were sent to 4,597 women. The data collection took place over a 3 month period in the spring of 2009. Response proportions for the collection of DBS and saliva samples were 71.0% (3,263) and 70.9% (3,258), respectively. We received 312 phone calls (7% of the 4,597 women) with questions regarding sampling. Of the 3,263 individuals that returned DBS cards, 3,038 (93.1%) had been packaged and shipped according to instructions. A total of 3,032 DBS samples were sufficient for at least one biomarker analysis (i.e. 92.9% of DBS samples received by the laboratory). 2,418 (74.1%) of the DBS cards received by the laboratory were filled with blood according to the instructions (i.e. 10 completely filled spots with up to 7 punches per spot for up to 70 separate analyses). To assess the quality of the samples, we selected and measured two biomarkers (carotenoids and vitamin D). The biomarker levels were consistent with previous reports. Collecting self-sampled DBS and saliva samples through the postal services provides a low cost, effective and feasible approach.

  9. Feasibility of self-sampled dried blood spot and saliva samples sent by mail in a population-based study.

    Science.gov (United States)

    Sakhi, Amrit Kaur; Bastani, Nasser Ezzatkhah; Ellingjord-Dale, Merete; Gundersen, Thomas Erik; Blomhoff, Rune; Ursin, Giske

    2015-04-11

    In large epidemiological studies it is often challenging to obtain biological samples. Self-sampling by study participants using the dried blood spot (DBS) technique has been suggested to overcome this challenge. DBS is a type of biosampling where blood samples are obtained by a finger-prick lancet, blotted and dried on filter paper. However, the feasibility and efficacy of collecting DBS samples from study participants in large-scale epidemiological studies is not known. The aim of the present study was to test the feasibility and response rate of collecting self-sampled DBS and saliva samples in a population-based study of women above 50 years of age. We determined response proportions, number of phone calls to the study center with questions about sampling, and quality of the DBS. We recruited women through a study conducted within the Norwegian Breast Cancer Screening Program. Invitations, instructions and materials were sent to 4,597 women. The data collection took place over a 3 month period in the spring of 2009. Response proportions for the collection of DBS and saliva samples were 71.0% (3,263) and 70.9% (3,258), respectively. We received 312 phone calls (7% of the 4,597 women) with questions regarding sampling. Of the 3,263 individuals that returned DBS cards, 3,038 (93.1%) had been packaged and shipped according to instructions. A total of 3,032 DBS samples were sufficient for at least one biomarker analysis (i.e. 92.9% of DBS samples received by the laboratory). 2,418 (74.1%) of the DBS cards received by the laboratory were filled with blood according to the instructions (i.e. 10 completely filled spots with up to 7 punches per spot for up to 70 separate analyses). To assess the quality of the samples, we selected and measured two biomarkers (carotenoids and vitamin D). The biomarker levels were consistent with previous reports. Collecting self-sampled DBS and saliva samples through the postal services provides a low cost, effective and feasible approach.

  10. Selection of Sampling Pumps Used for Groundwater Monitoring at the Hanford Site

    Energy Technology Data Exchange (ETDEWEB)

    Schalla, Ronald; Webber, William D.; Smith, Ronald M.

    2001-11-05

    The variable frequency drive centrifugal submersible pump, Redi-Flo2, made by Grundfos, was selected for universal application for Hanford Site groundwater monitoring. Specifications for the selected pump and five other pumps were evaluated against current and future Hanford groundwater monitoring performance requirements, and the Redi-Flo2 was selected as the most versatile and applicable for the range of monitoring conditions. The Redi-Flo2 pump distinguished itself from the other pumps considered because of its wide range in output flow rate and its comparatively moderate maintenance and low capital costs. The Redi-Flo2 pump is able to purge a well at a high flow rate and then supply water for sampling at a low flow rate. Groundwater sampling using a low-volume-purging technique (e.g., low flow, minimal purge, no purge, or micropurge) is planned in the future, eliminating the need for the pump to supply a high-output flow rate. Under those conditions, the Well Wizard bladder pump, manufactured by QED Environmental Systems, Inc., may be the preferred pump because of its lower capital cost.

  11. Concurrent and Longitudinal Associations Among Temperament, Parental Feeding Styles, and Selective Eating in a Preschool Sample.

    Science.gov (United States)

    Kidwell, Katherine M; Kozikowski, Chelsea; Roth, Taylor; Lundahl, Alyssa; Nelson, Timothy D

    2018-06-01

    To examine the associations among negative/reactive temperament, feeding styles, and selective eating in a sample of preschoolers because preschool eating behaviors likely have lasting implications for children's health. A community sample of preschoolers aged 3-5 years (M = 4.49 years, 49.5% female, 75.7% European American) in the Midwest of the United States was recruited to participate in the study (N = 297). Parents completed measures of temperament and feeding styles at two time points 6 months apart. A series of regressions indicated that children who had temperaments high in negative affectivity were significantly more likely to experience instrumental and emotional feeding styles. They were also significantly more likely to be selective eaters. These associations were present when examined both concurrently and after 6 months. This study provides a novel investigation of child temperament and eating behaviors, allowing for a better understanding of how negative affectivity is associated with instrumental feeding, emotional feeding, and selective eating. These results inform interventions to improve child health.

  12. Tyrosinase-Based Biosensors for Selective Dopamine Detection

    Directory of Open Access Journals (Sweden)

    Monica Florescu

    2017-06-01

    A novel tyrosinase-based biosensor was developed for the detection of dopamine (DA). For increased selectivity, gold electrodes were first modified with a cobalt(II)-porphyrin (CoP) film with electrocatalytic activity, to act both as an electrochemical mediator and as an enzyme support, upon which the enzyme tyrosinase (Tyr) was cross-linked. Differential pulse voltammetry was used for electrochemical detection, and the reduction current of dopamine-quinone was measured as a function of dopamine concentration. Our experiments demonstrated that the presence of CoP improves the selectivity of the electrode towards dopamine in the presence of ascorbic acid (AA), with a linear concentration dependence in the range of 2–30 µM. By optimizing the conditioning parameters, a separation of 130 mV between the peak potentials for AA and DA was obtained, allowing the selective detection of DA. The biosensor had a sensitivity of 1.22 ± 0.02 µA·cm−2·µM−1 and a detection limit of 0.43 µM. Biosensor performance was tested in the presence of dopamine medication, with satisfactory results in terms of recovery (96%) and relative standard deviation values below 5%. These results confirmed the applicability of the biosensor in real samples such as human urine and blood serum.
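Figures of merit like the sensitivity and detection limit above are typically derived from a calibration line: the sensitivity is the slope of current density versus concentration, and the detection limit is commonly taken as 3·σ(blank)/slope. A sketch with hypothetical calibration points chosen to land near the reported values:

```python
def linear_slope(xs, ys):
    """Least-squares slope of y vs x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def detection_limit(sigma_blank, slope):
    """LOD = 3 * standard deviation of the blank / calibration slope."""
    return 3.0 * sigma_blank / slope

conc = [2, 5, 10, 20, 30]             # µM (hypothetical calibration points)
curr = [2.4, 6.1, 12.2, 24.4, 36.6]   # µA/cm² (hypothetical responses)
s = linear_slope(conc, curr)          # ~1.22 µA·cm−2·µM−1
lod = detection_limit(0.175, s)       # hypothetical blank noise of 0.175 µA/cm²
print(round(s, 2), round(lod, 2))
```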

  13. Correlations Between Life-Detection Techniques and Implications for Sampling Site Selection in Planetary Analog Missions

    Science.gov (United States)

    Gentry, Diana M.; Amador, Elena S.; Cable, Morgan L.; Chaudry, Nosheen; Cullen, Thomas; Jacobsen, Malene B.; Murukesan, Gayathri; Schwieterman, Edward W.; Stevens, Adam H.; Stockton, Amanda; Tan, George; Yin, Chang; Cullen, David C.; Geppert, Wolf

    2017-10-01

    We conducted an analog sampling expedition under simulated mission constraints to areas dominated by basaltic tephra of the Eldfell and Fimmvörðuháls lava fields (Iceland). Sites were selected to be "homogeneous" at a coarse remote sensing resolution (10-100 m) in apparent color, morphology, moisture, and grain size, with best-effort realism in numbers of locations and replicates. Three different biomarker assays (counting of nucleic-acid-stained cells via fluorescent microscopy, a luciferin/luciferase assay for adenosine triphosphate, and quantitative polymerase chain reaction (qPCR) to detect DNA associated with bacteria, archaea, and fungi) were characterized at four nested spatial scales (1 m, 10 m, 100 m, and >1 km) by using five common metrics for sample site representativeness (sample mean variance, group F tests, pairwise t tests, and the distribution-free rank-sum H and U tests). Correlations between all assays were characterized with Spearman's rank test. The bioluminescence assay showed the most variance across the sites, followed by qPCR for bacterial and archaeal DNA; these results could not be considered representative at the finest resolution tested (1 m). Cell concentration and fungal DNA also had significant local variation, but they were homogeneous over scales of >1 km. These results show that the selection of life detection assays and the number, distribution, and location of sampling sites in a low biomass environment with limited a priori characterization can yield both contrasting and complementary results, and that their interdependence must be given due consideration to maximize science return in future biomarker sampling expeditions.
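One of the comparisons named above, Spearman's rank correlation between two assays measured at the same sites, can be sketched in pure Python using the rank-difference formula (the readings below are hypothetical, and this sketch assumes no tied values):

```python
def rank(values):
    """1-based ranks of the values (no tie handling in this sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, idx in enumerate(order):
        r[idx] = pos + 1
    return r

def spearman_rho(xs, ys):
    """Spearman's rho: 1 - 6*sum(d^2) / (n*(n^2 - 1)), assuming no ties."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

atp = [5.1, 7.3, 2.2, 9.0, 4.4]    # hypothetical bioluminescence readings
cells = [3.0, 6.5, 1.1, 8.2, 3.9]  # hypothetical cell counts, same sites
print(round(spearman_rho(atp, cells), 2))
```

A rho near 1 would indicate that the two assays rank the sites almost identically, which is the kind of assay interdependence the study quantifies.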

  14. Effects of soil water saturation on sampling equilibrium and kinetics of selected polycyclic aromatic hydrocarbons.

    Science.gov (United States)

    Kim, Pil-Gon; Roh, Ji-Yeon; Hong, Yongseok; Kwon, Jung-Hwan

    2017-10-01

    Passive sampling can be applied for measuring the freely dissolved concentration of hydrophobic organic chemicals (HOCs) in soil pore water. When using passive samplers under field conditions, however, there are factors that might affect passive sampling equilibrium and kinetics, such as soil water saturation. To determine the effects of soil water saturation on passive sampling, the equilibrium and kinetics of passive sampling were evaluated by observing changes in the distribution coefficient between sampler and soil (K_sampler/soil) and the uptake rate constant (k_u) at various soil water saturations. Polydimethylsiloxane (PDMS) passive samplers were deployed into artificial soils spiked with seven selected polycyclic aromatic hydrocarbons (PAHs). In dry soil (0% water saturation), both K_sampler/soil and k_u values were much lower than those in wet soils, likely due to the contribution of adsorption of PAHs onto soil mineral surfaces and conformational changes in soil organic matter. For high molecular weight PAHs (chrysene, benzo[a]pyrene, and dibenzo[a,h]anthracene), both K_sampler/soil and k_u values increased with increasing soil water saturation, whereas they decreased with increasing soil water saturation for low molecular weight PAHs (phenanthrene, anthracene, fluoranthene, and pyrene). Changes in the sorption capacity of soil organic matter with soil water content would be the main cause of the changes in passive sampling equilibrium. Henry's law constant could explain the different behaviors in uptake kinetics of the selected PAHs. The results of this study would be helpful when passive samplers are deployed under various soil water saturations. Copyright © 2017 Elsevier Ltd. All rights reserved.
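The two quantities tracked above can be related through the first-order uptake model commonly used for passive samplers: C_sampler(t) = K · C_soil · (1 − exp(−(k_u/K) · t)), where k_u/K acts as the elimination rate constant. A sketch with hypothetical parameter values (not taken from this study), showing the usual rule of thumb that the sampler is ~95% equilibrated at t = 3/k_e:

```python
import math

def sampler_concentration(t_days, K, ku, c_soil):
    """First-order uptake: C_sampler(t) = K * C_soil * (1 - exp(-(ku/K)*t))."""
    ke = ku / K  # elimination rate constant
    return K * c_soil * (1.0 - math.exp(-ke * t_days))

K, ku, c_soil = 50.0, 5.0, 1.0   # hypothetical values
t95 = 3.0 / (ku / K)             # time to ~95% of equilibrium
fraction = sampler_concentration(t95, K, ku, c_soil) / (K * c_soil)
print(round(fraction, 2))
```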

  15. Detection of Salmonella spp. in veterinary samples by combining selective enrichment and real-time PCR.

    Science.gov (United States)

    Goodman, Laura B; McDonough, Patrick L; Anderson, Renee R; Franklin-Guild, Rebecca J; Ryan, James R; Perkins, Gillian A; Thachil, Anil J; Glaser, Amy L; Thompson, Belinda S

    2017-11-01

    Rapid screening for enteric bacterial pathogens in clinical environments is essential for biosecurity. Salmonella found in veterinary hospitals, particularly Salmonella enterica serovar Dublin, can pose unique challenges for culture and testing because of its poor growth. Multiple Salmonella serovars including Dublin are emerging threats to public health given increasing prevalence and antimicrobial resistance. We adapted an automated food testing method to veterinary samples and evaluated the performance of the method in a variety of matrices including environmental samples (n = 81), tissues (n = 52), feces (n = 148), and feed (n = 29). A commercial kit was chosen as the basis for this approach in view of extensive performance characterizations published by multiple independent organizations. A workflow was established for efficiently and accurately testing veterinary matrices and environmental samples by use of real-time PCR after selective enrichment in Rappaport-Vassiliadis soya (RVS) medium. Using this method, the detection limit for S. Dublin improved by 100-fold over subculture on selective agars (eosin-methylene blue, brilliant green, and xylose-lysine-deoxycholate). Overall, the procedure was effective in detecting Salmonella spp. and provided next-day results.

  16. Polymer platforms for selective detection of cocaine in street samples adulterated with levamisole.

    Science.gov (United States)

    Florea, Anca; Cowen, Todd; Piletsky, Sergey; De Wael, Karolien

    2018-08-15

    Accurate drug detection is of utmost importance for fighting against drug abuse. With a high number of cutting agents and adulterants being added to cut or mask drugs in street powders, the number of false results is increasing. We demonstrate for the first time the usefulness of employing polymers, readily synthesized by electrodeposition, to selectively detect cocaine in the presence of the commonly used adulterant levamisole. The polymers were selected by computational modelling to exhibit high binding affinity towards cocaine and deposited directly on the surface of graphene-modified electrodes via electropolymerization. The resulting platforms allowed a distinct electrochemical signal for cocaine, which is otherwise suppressed by levamisole. Square wave voltammetry was used to quantify cocaine alone and in the presence of levamisole. The usefulness of the platforms was demonstrated in the screening of real street samples. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Rapid determination of trace level copper in tea infusion samples by solid contact ion selective electrode

    Directory of Open Access Journals (Sweden)

    Aysenur Birinci

    2016-07-01

    A new solid contact copper-selective electrode with a poly(vinyl chloride) (PVC) membrane containing o-xylylenebis(N,N-diisobutyldithiocarbamate) as ionophore has been prepared. The main novelties of the constructed ion selective electrode are its enhanced robustness, low cost, and fast response due to the use of solid contacts. The electrode exhibits a rapid (<10 s) and near-Nernstian response to Cu2+ activity from 10−1 to 10−6 mol/L in the pH range of 4.0–6.0. No serious interference from common ions was found. The electrode is characterized by high potential stability, reproducibility, and full repeatability. The electrode was used as an indicator electrode in potentiometric titration of Cu(II) ions with EDTA and for the direct assay of tea infusion samples by means of the calibration graph technique. The results compared favorably with those obtained by atomic absorption spectroscopy (AAS).
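The "near-Nernstian response" claimed above can be checked against the ideal Nernst slope for a divalent ion, 2.303·R·T/(z·F) per decade of activity, which for Cu2+ at 25 °C is about 29.6 mV/decade:

```python
R = 8.314462   # gas constant, J/(mol*K)
F = 96485.332  # Faraday constant, C/mol
T = 298.15     # 25 °C in kelvin
z = 2          # charge of Cu2+

# Ideal Nernst slope in mV per decade of activity
slope_mV = 2.303 * R * T / (z * F) * 1000.0
print(round(slope_mV, 1))  # ~29.6 mV/decade
```

An experimental slope close to this value over the 10−1 to 10−6 mol/L range is what qualifies the electrode's response as near-Nernstian.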

  18. Soft magnetic properties of bulk amorphous Co-based samples

    International Nuclear Information System (INIS)

    Fuezer, J.; Bednarcik, J.; Kollar, P.

    2006-01-01

    Ball milling of melt-spun ribbons and subsequent compaction of the resulting powders in the supercooled liquid region were used to prepare disc-shaped bulk amorphous Co-based samples. Several bulk samples were prepared by hot compaction with subsequent heat treatment (500–575 °C). The influence of the consolidation temperature and follow-up heat treatment on the magnetic properties of the bulk samples was investigated. The final heat treatment decreases the coercivity to values between 7.5 and 9 A/m. (Authors)

  19. A novel PMT test system based on waveform sampling

    Science.gov (United States)

    Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.

    2018-01-01

    Compared with a traditional test system based on a QDC, TDC, and scaler, a test system based on waveform sampling was constructed for signal sampling of the 8" R5912 and the 20" R12860 Hamamatsu PMTs at signal levels from single to multiple photoelectrons. In order to achieve high throughput and to reduce the dead time in data processing, data acquisition software based on LabVIEW was developed and runs with a parallel mechanism. The analysis algorithm is realized in LabVIEW, and the spectra of charge, amplitude, signal width and rising time are analyzed offline. The results from the Charge-to-Digital Converter, the Time-to-Digital Converter, and waveform sampling are compared in detail.
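The offline charge and amplitude spectra described above come from integrating the sampled waveform of each pulse. A minimal sketch of that step (the waveform, units, and 50 Ω termination here are illustrative assumptions, not values from the paper):

```python
def pulse_charge(samples_mV, baseline_mV, dt_ns, impedance_ohm=50.0):
    """Charge in pC: integrate (baseline - sample) over the window, divide by R.
    For a negative-going PMT pulse, mV*ns/ohm works out to pC."""
    integral_mV_ns = sum(baseline_mV - s for s in samples_mV) * dt_ns
    return integral_mV_ns / impedance_ohm

# Synthetic negative-going pulse sampled at 1 ns intervals (hypothetical)
waveform = [0, 0, -2, -10, -25, -18, -8, -3, -1, 0]  # mV
q = pulse_charge(waveform, baseline_mV=0.0, dt_ns=1.0)
amplitude = -min(waveform)  # pulse height in mV
print(round(q, 1), amplitude)
```

Histogramming `q` and `amplitude` over many pulses yields the charge and amplitude spectra; signal width and rise time are read off the same sampled waveform.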

  20. Antimicrobial and antibiofilm effects of selected food preservatives against Salmonella spp. isolated from chicken samples.

    Science.gov (United States)

    Er, Buket; Demirhan, Burak; Onurdag, Fatma Kaynak; Ozgacar, Selda Özgen; Oktem, Aysel Bayhan

    2014-03-01

    Salmonella spp. are widespread foodborne pathogens that contaminate egg and poultry meats. Attachment, colonization, and the biofilm formation capacity of Salmonella spp. on food and on food contact surfaces may cause continuous contamination. Biofilm may play a crucial role in the survival of salmonellae under unfavorable environmental conditions, such as in animal slaughterhouses and processing plants. This could serve as a reservoir compromising food safety and human health. Addition of antimicrobial preservatives extends the shelf lives of food products, but even when products are supplemented with adequate amounts of preservatives, it is not always possible to inhibit the microorganisms in a biofilm community. In this study, our aims were i) to determine the minimum inhibitory concentrations (MIC) and minimum biofilm inhibitory concentrations (MBIC) of selected preservatives against planktonic and biofilm forms of Salmonella spp. isolated from chicken samples and the Salmonella Typhimurium SL1344 standard strain, ii) to show the differences in the susceptibility patterns of the same strains in their planktonic and biofilm forms to the same preservative agent, and iii) to determine and compare the antimicrobial and antibiofilm effects of selected food preservatives against Salmonella spp. For this purpose, the Salmonella Typhimurium SL1344 standard strain and 4 Salmonella spp. strains isolated from chicken samples were used. Investigation of the antimicrobial and antibiofilm effects of the selected food preservatives against Salmonella spp. was done according to Clinical and Laboratory Standards Institute M100-S18 guidelines and the BioTimer assay, respectively. As preservative agents, pure ciprofloxacin, sodium nitrite, potassium sorbate, sodium benzoate, methyl paraben, and propyl paraben were selected. As a result, it was determined that the MBIC values are greater than the MIC values of the preservatives. This result verified the resistance seen in a biofilm community to food

  1. Automatic Samples Selection Using Histogram of Oriented Gradients (HOG Feature Distance

    Directory of Open Access Journals (Sweden)

    Inzar Salfikar

    2018-01-01

    Full Text Available Finding victims at a disaster site is the primary goal of Search-and-Rescue (SAR) operations. Many technologies for searching for disaster victims through aerial imaging have emerged from research, but most have difficulty detecting victims at tsunami disaster sites, where victims and backgrounds look similar. This research collects post-tsunami aerial imaging data from the internet to build a dataset and a model for detecting tsunami disaster victims. The dataset is built from the distance differences between the features of every sample, computed with the Histogram-of-Oriented-Gradients (HOG) method. Samples are collected from each photo by measuring the HOG feature distance between all samples; the samples separated by the longest distances are taken as candidates for the dataset and then classified manually into victim (positive) and non-victim (negative) samples. The dataset of tsunami disaster victims was then evaluated using Leave-One-Out (LOO) cross-validation with the Support-Vector-Machine (SVM) method. The experimental results on two test photos show 61.70% precision, 77.60% accuracy, 74.36% recall, and a 67.44% f-measure in distinguishing victim (positive) from non-victim (negative) samples.
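The "longest distance" sample selection described above resembles greedy farthest-first selection over HOG feature vectors. A minimal sketch (assuming the HOG descriptors are already extracted, e.g. with skimage.feature.hog; the function name is mine):

```python
import numpy as np

def select_diverse_samples(features, k):
    """Greedy farthest-first selection: seed with the pair of samples
    at the largest pairwise distance, then repeatedly add the sample
    farthest from those already chosen. The selected indices are
    candidates for manual victim/non-victim labeling."""
    # pairwise Euclidean distances between HOG feature vectors
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(dists), dists.shape)
    chosen = [int(i), int(j)]
    while len(chosen) < k:
        # distance of every sample to its nearest already-chosen sample
        d_to_chosen = dists[:, chosen].min(axis=1)
        d_to_chosen[chosen] = -1.0        # exclude already-chosen samples
        chosen.append(int(np.argmax(d_to_chosen)))
    return chosen
```

Maximizing the minimum distance to the chosen set favors mutually dissimilar samples, which is the point of the record's distance-based candidate collection.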

  2. Sol-gel based sensor for selective formaldehyde determination

    Energy Technology Data Exchange (ETDEWEB)

    Bunkoed, Opas [Trace Analysis and Biosensor Research Center, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Department of Chemistry and Center for Innovation in Chemistry, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Davis, Frank [Cranfield Health, Cranfield University, Bedford MK43 0AL (United Kingdom); Kanatharana, Proespichaya, E-mail: proespichaya.K@psu.ac.th [Trace Analysis and Biosensor Research Center, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Department of Chemistry and Center for Innovation in Chemistry, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Thavarungkul, Panote [Trace Analysis and Biosensor Research Center, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Department of Physics, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Higson, Seamus P.J., E-mail: s.p.j.higson@cranfield.ac.uk [Cranfield Health, Cranfield University, Bedford MK43 0AL (United Kingdom)

    2010-02-05

    We report the development of transparent sol-gels with entrapped sensitive and selective reagents for the detection of formaldehyde. The sampling method is based on the adsorption of formaldehyde from the air and reaction with β-diketones (for example acetylacetone) in a sol-gel matrix to produce a yellow product, lutidine, which was detected directly. The proposed method does not require preparation of samples prior to analysis and allows both screening by visual detection and quantitative measurement by simple spectrophotometry. A detection limit of 0.03 ppmv formaldehyde is reported, which is lower than the maximum exposure concentrations recommended by both the World Health Organisation (WHO) and the Occupational Safety and Health Administration (OSHA). This sampling method was found to give good reproducibility, the relative standard deviations at 0.2 and 1 ppmv being 6.3% and 4.6%, respectively. Other carbonyl compounds, i.e. acetaldehyde, benzaldehyde, acetone and butanone, do not interfere with this analytical approach. Results are provided for the determination of formaldehyde in indoor air.

  3. Sol-gel based sensor for selective formaldehyde determination

    International Nuclear Information System (INIS)

    Bunkoed, Opas; Davis, Frank; Kanatharana, Proespichaya; Thavarungkul, Panote; Higson, Seamus P.J.

    2010-01-01

    We report the development of transparent sol-gels with entrapped sensitive and selective reagents for the detection of formaldehyde. The sampling method is based on the adsorption of formaldehyde from the air and reaction with β-diketones (for example acetylacetone) in a sol-gel matrix to produce a yellow product, lutidine, which was detected directly. The proposed method does not require preparation of samples prior to analysis and allows both screening by visual detection and quantitative measurement by simple spectrophotometry. The detection limit of 0.03 ppmv formaldehyde is reported which is lower than the maximum exposure concentrations recommended by both the World Health Organisation (WHO) and the Occupational Safety and Health Administration (OSHA). This sampling method was found to give good reproducibility, the relative standard deviation at 0.2 and 1 ppmv being 6.3% and 4.6%, respectively. Other carbonyl compounds i.e. acetaldehyde, benzaldehyde, acetone and butanone do not interfere with this analytical approach. Results are provided for the determination of formaldehyde in indoor air.

  4. A Lightweight Structure Redesign Method Based on Selective Laser Melting

    Directory of Open Access Journals (Sweden)

    Li Tang

    2016-11-01

    Full Text Available The purpose of this paper is to present a new design method for lightweight parts fabricated by selective laser melting (SLM) based on the “Skin-Frame” and to explore the influence of fabrication defects on SLM parts of different sizes. Some standard lattice parts were designed according to the Chinese GB/T 1452-2005 standard and manufactured by SLM. These samples were then tested in an MTS Insight 30 compression testing machine to study the trends of the yield process with different structure sizes. A set of standard cylinder samples was also designed according to the Chinese GB/T 228-2010 standard. These samples, made of iron-nickel alloy (IN718), were also processed by SLM and then tested in the universal material testing machine INSTRON 1346 to obtain their tensile strength. Furthermore, a lightweight redesign method was developed. Common parts such as a stopper and a connecting plate were then redesigned using this method. These redesigned parts were fabricated, and some application tests have already been performed. The compression testing results show that when the minimum structure size is larger than 1.5 mm, the mechanical characteristics are hardly affected by process defects. The cylinder parts fractured in the universal material testing machine at about 1069.6 MPa. The redesigned parts worked well in application tests, with both the weight and fabrication time of these parts reduced by more than 20%.

  5. Pesticides, selected elements, and other chemicals in adult total diet samples October 1979-September 1980

    International Nuclear Information System (INIS)

    Gartrell, M.J.; Craun, J.C.; Podrebarac, D.S.; Gunderson, E.L.

    1985-01-01

    The US Food and Drug Administration (FDA) conducts Total Diet Studies to determine the dietary intake of selected pesticides, industrial chemicals, and elements (including radionuclides). These studies involve the retail purchase and analysis of foods representative of the diets of infants, toddlers, and adults. The individual food items are separated into a number of food groups, each of which is analyzed as a composite. This report summarizes the results for adult Total Diet samples collected in 20 cities between October 1979 and September 1980. The average concentration, range of concentrations, and calculated average daily intake of each chemical found are presented by food group. The average daily intakes of the chemicals are similar to those found in the several preceding years and are within acceptable limits. The results for samples collected during the same period that represent the diets of infants and toddlers are reported separately

  6. 40 CFR Appendix A to Subpart F of... - Sampling Plans for Selective Enforcement Auditing of Nonroad Engines

    Science.gov (United States)

    2010-07-01

    ... Enforcement Auditing of Nonroad Engines A Appendix A to Subpart F of Part 89 Protection of Environment... NONROAD COMPRESSION-IGNITION ENGINES Selective Enforcement Auditing Pt. 89, Subpt. F, App. A Appendix A to Subpart F of Part 89—Sampling Plans for Selective Enforcement Auditing of Nonroad Engines Table 1—Sampling...

  7. 40 CFR Appendix A to Subpart G of... - Sampling Plans for Selective Enforcement Auditing of Marine Engines

    Science.gov (United States)

    2010-07-01

    ... Enforcement Auditing of Marine Engines A Appendix A to Subpart G of Part 91 Protection of Environment...-IGNITION ENGINES Selective Enforcement Auditing Regulations Pt. 91, Subpt. G, App. A Appendix A to Subpart G of Part 91—Sampling Plans for Selective Enforcement Auditing of Marine Engines Table 1—Sampling...

  8. Improved mesh based photon sampling techniques for neutron activation analysis

    International Nuclear Information System (INIS)

    Relson, E.; Wilson, P. P. H.; Biondo, E. D.

    2013-01-01

    The design of fusion power systems requires analysis of neutron activation of large, complex volumes, and the resulting particles emitted from these volumes. Structured mesh-based discretization of these problems allows for improved modeling in these activation analysis problems. Finer discretization of these problems results in large computational costs, which drives the investigation of more efficient methods. Within an ad hoc subroutine of the Monte Carlo transport code MCNP, we implement sampling of voxels and photon energies for volumetric sources using the alias method. The alias method enables efficient sampling of a discrete probability distribution, and operates in O(1) time, whereas the simpler direct discrete method requires O(log(n)) time. By using the alias method, voxel sampling becomes a viable alternative to sampling space with the O(1) approach of uniformly sampling the problem volume. Additionally, with voxel sampling it is straightforward to introduce biasing of volumetric sources, and we implement this biasing of voxels as an additional variance reduction technique that can be applied. We verify our implementation and compare the alias method, with and without biasing, to direct discrete sampling of voxels, and to uniform sampling. We study the behavior of source biasing in a second set of tests and find trends between improvements and source shape, material, and material density. Overall, however, the magnitude of improvements from source biasing appears to be limited. Future work will benefit from the implementation of efficient voxel sampling - particularly with conformal unstructured meshes where the uniform sampling approach cannot be applied. (authors)
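The alias method referred to above can be sketched as follows (Vose's variant; the function names are mine, and the probability list stands in for per-voxel emission probabilities). Construction is O(n), and each draw costs one random index plus one coin flip, hence O(1):

```python
import random

def build_alias_table(probs):
    """Vose's alias method: O(n) setup for O(1) sampling from a
    discrete distribution (e.g. source voxels weighted by emission
    probability). Returns (prob, alias) tables of length n."""
    n = len(probs)
    scaled = [p * n for p in probs]
    alias, prob = [0] * n, [0.0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]        # donate mass to fill column s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                 # leftovers are 1.0 up to rounding
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng=random):
    """O(1) draw: pick a column uniformly, then keep it or take its alias."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

Voxel biasing then amounts to building the table from modified weights and carrying the ratio of true to biased probability as a particle weight.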

  9. EEG feature selection method based on decision tree.

    Science.gov (United States)

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on a decision tree (DT). During the electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are non-linear, a generalized linear classifier named the support vector machine (SVM) was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
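The described pipeline (PCA feature extraction, decision-tree-driven feature selection, SVM classification) can be sketched with scikit-learn. Everything below is illustrative: the synthetic data stands in for epoched EEG trials, and the component counts and top-3 cut are assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for epoched EEG features: 200 trials x 64 channels,
# with a class-dependent shift on the first four channels.
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 64))
X[y == 1, :4] += 2.0

X_pca = PCA(n_components=10).fit_transform(X)          # feature extraction
tree = DecisionTreeClassifier(random_state=0).fit(X_pca, y)
keep = np.argsort(tree.feature_importances_)[::-1][:3]  # DT-based selection
score = cross_val_score(SVC(kernel="linear"), X_pca[:, keep], y, cv=5).mean()
```

The tree's impurity-based feature importances play the role of the search over the (PCA) feature space; the SVM is then trained only on the selected components.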

  10. Parameter selection for peak alignment in chromatographic sample profiling: Objective quality indicators and use of control samples

    NARCIS (Netherlands)

    Peters, S.; van Velzen, E.; Janssen, H.-G.

    2009-01-01

    In chromatographic profiling applications, peak alignment is often essential as most chromatographic systems exhibit small peak shifts over time. When using currently available alignment algorithms, there are several parameters that determine the outcome of the alignment process. Selecting the

  11. Selective extraction of dimethoate from cucumber samples by use of molecularly imprinted microspheres

    Directory of Open Access Journals (Sweden)

    Jiao-Jiao Du

    2015-06-01

    Full Text Available Molecularly imprinted polymers for dimethoate recognition were synthesized by the precipitation polymerization technique using methyl methacrylate (MMA) as the functional monomer and ethylene glycol dimethacrylate (EGDMA) as the cross-linker. The morphology, adsorption and recognition properties were investigated by scanning electron microscopy (SEM), a static adsorption test, and a competitive adsorption test. To obtain the best selectivity and binding performance, the synthesis and adsorption conditions of the MIPs were optimized through single-factor experiments. Under the optimized conditions, the resultant polymers exhibited uniform size, satisfactory binding capacity and significant selectivity. Furthermore, the imprinted polymers were successfully applied as specific solid-phase extractants combined with high performance liquid chromatography (HPLC) for the determination of dimethoate residues in cucumber samples. The average recoveries of three spiked samples ranged from 78.5% to 87.9% with relative standard deviations (RSDs) less than 4.4%, and the limit of detection (LOD) obtained for dimethoate was as low as 2.3 μg/mL. Keywords: Molecularly imprinted polymer, Precipitation polymerization, Dimethoate, Cucumber, HPLC

  12. Contingency inferences driven by base rates: Valid by sampling

    Directory of Open Access Journals (Sweden)

    Florian Kutzner

    2011-04-01

    Full Text Available Fiedler et al. (2009) reviewed evidence for the utilization of a contingency inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet, the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, this criterion and convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.
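The above-chance performance of the PC strategy is easy to reproduce in a small simulation (a sketch under assumed base rates and a positive population contingency; all names and parameter values are mine):

```python
import random

def pc_infers_positive(sample):
    """Pseudocontingency inference: conclude a positive contingency
    iff the more frequent level of A matches the more frequent level
    of B in the sample (marginal frequencies only, no cell counts)."""
    pa = sum(a for a, _ in sample) / len(sample)
    pb = sum(b for _, b in sample) / len(sample)
    return (pa - 0.5) * (pb - 0.5) > 0

def simulate(joint, n_obs=20, n_runs=2000, rng=random):
    """Fraction of samples for which the PC strategy correctly infers
    the (positive) sign of the population contingency. `joint` gives
    the cell probabilities for (A,B) = (1,1), (1,0), (0,1), (0,0)."""
    cells = [(1, 1), (1, 0), (0, 1), (0, 0)]
    hits = 0
    for _ in range(n_runs):
        sample = rng.choices(cells, weights=joint, k=n_obs)
        hits += pc_infers_positive(sample)
    return hits / n_runs

# Skewed base rates (70% / 70%) aligned with a positive contingency.
accuracy = simulate([0.55, 0.15, 0.15, 0.15])
```

With base rates aligned to the contingency, accuracy well above 0.5 emerges even though the strategy never consults the joint cell frequencies.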

  13. Community-based survey versus sentinel site sampling in ...

    African Journals Online (AJOL)

    rural children. Implications for nutritional surveillance and the development of nutritional programmes. G. c. Solarsh, D. M. Sanders, C. A. Gibson, E. Gouws. A study of the anthropometric status of under-5-year-olds was conducted in the Nqutu district of Kwazulu by means of a representative community-based sample and.

  14. A sampling-based approach to probabilistic pursuit evasion

    KAUST Repository

    Mahadevan, Aditya

    2012-05-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented with useful information to model interesting scenarios related to multi-agent interaction and coordination. © 2012 IEEE.
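A basic PRM of the kind described can be sketched as follows (a toy version in the unit square; proper edge collision checking is elided, and all names are assumptions, not the paper's implementation):

```python
import math
import random

def build_prm(n_nodes, connect_radius, is_valid, rng):
    """Minimal PRM: sample valid configurations in the unit square and
    connect pairs closer than connect_radius. A real planner would also
    collision-check each edge; here any edge between valid nodes is kept."""
    nodes = []
    while len(nodes) < n_nodes:
        p = (rng.random(), rng.random())
        if is_valid(p):                 # rejection-sample free space
            nodes.append(p)
    edges = {i: [] for i in range(n_nodes)}
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if math.dist(nodes[i], nodes[j]) <= connect_radius:
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges
```

Queries then connect start and goal to nearby roadmap nodes and run graph search; the augmentation mentioned in the record attaches extra per-node or per-edge information to this same graph.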

  15. Content-based image retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Broek, E.L. van den; Vuurpijl, L.G.; Kisters, P. M. F.; Schmid, J.C.M. von; Moens, M.F.; Busser, R. de; Hiemstra, D.; Kraaij, W.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged to be non-intuitive and difficult to use. Our interface copes with these problems of usability. It is based on 11

  16. Content-Based Image Retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Moens, Marie-Francine; van den Broek, Egon; Vuurpijl, L.G.; de Brusser, Rik; Kisters, P.M.F.; Hiemstra, Djoerd; Kraaij, Wessel; von Schmid, J.C.M.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged to be non-intuitive and difficult to use. Our interface copes with these problems of usability. It is based on 11

  17. ACTIVE LEARNING TO OVERCOME SAMPLE SELECTION BIAS: APPLICATION TO PHOTOMETRIC VARIABLE STAR CLASSIFICATION

    Energy Technology Data Exchange (ETDEWEB)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Berian James, J. [Astronomy Department, University of California, Berkeley, CA 94720-7450 (United States); Brink, Henrik [Dark Cosmology Centre, Juliane Maries Vej 30, 2100 Copenhagen O (Denmark); Long, James P.; Rice, John, E-mail: jwrichar@stat.berkeley.edu [Statistics Department, University of California, Berkeley, CA 94720-7450 (United States)

    2012-01-10

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  18. ACTIVE LEARNING TO OVERCOME SAMPLE SELECTION BIAS: APPLICATION TO PHOTOMETRIC VARIABLE STAR CLASSIFICATION

    International Nuclear Information System (INIS)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Berian James, J.; Brink, Henrik; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  19. Active Learning to Overcome Sample Selection Bias: Application to Photometric Variable Star Classification

    Science.gov (United States)

    Richards, Joseph W.; Starr, Dan L.; Brink, Henrik; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; James, J. Berian; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  20. Sample Entropy-Based Approach to Evaluate the Stability of Double-Wire Pulsed MIG Welding

    Directory of Open Access Journals (Sweden)

    Ping Yao

    2014-01-01

    Full Text Available Based on sample entropy, this paper presents a quantitative method to evaluate the current stability in double-wire pulsed MIG welding. Firstly, the sample entropy of current signals with different stability but the same parameters is calculated. The results show that the more stable the current, the smaller the value and the standard deviation of the sample entropy. Secondly, four parameters, which are pulse width, peak current, base current, and frequency, are selected for a four-level three-factor orthogonal experiment. The calculation and analysis of the desired signals indicate that sample entropy values are affected by the welding current parameters. A quantitative method based on sample entropy is then proposed. The experiment results show that the method can preferably quantify the welding current stability.
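Sample entropy itself follows directly from its definition, SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates within Chebyshev tolerance r and A counts the corresponding length-(m+1) matches. A minimal sketch (the parameter defaults m=2, r=0.2·SD are common choices, not necessarily the paper's):

```python
import math

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r) of a 1-D signal: -ln(A/B), with r = r_frac * SD.
    Lower values indicate a more regular signal; for welding current,
    a more stable arc."""
    n = len(x)
    mu = sum(x) / n
    sd = (sum((v - mu) ** 2 for v in x) / n) ** 0.5
    r = r_frac * sd

    def count_matches(mm):
        # pairs (i, j), i < j, whose length-mm templates stay within r
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c

    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")
```

Applied to segments of the welding current signal, a smaller mean and standard deviation of this quantity corresponds to the more stable process described in the record.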

  1. Tungsten based catalysts for selective deoxygenation

    NARCIS (Netherlands)

    Gosselink, R.W.|info:eu-repo/dai/nl/326164081; Stellwagen, D.R.; Bitter, J.H.|info:eu-repo/dai/nl/160581435

    2013-01-01

    Over the past decades, impending oil shortages combined with petroleum market instability have prompted a search for a new source of both transportation fuels and bulk chemicals. Renewable bio-based feedstocks such as sugars, grains, and seeds are assumed to be capable of contributing to a

  2. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  3. Selective whole genome amplification for resequencing target microbial species from complex natural samples.

    Science.gov (United States)

    Leichty, Aaron R; Brisson, Dustin

    2014-10-01

    Population genomic analyses have demonstrated power to address major questions in evolutionary and molecular microbiology. Collecting populations of genomes is hindered in many microbial species by the absence of a cost-effective and practical method to collect ample quantities of sufficiently pure genomic DNA for next-generation sequencing. Here we present a simple method to amplify genomes of a target microbial species present in a complex, natural sample. The selective whole genome amplification (SWGA) technique amplifies target genomes using nucleotide sequence motifs that are common in the target microbe genome, but rare in the background genomes, to prime the highly processive phi29 polymerase. SWGA thus selectively amplifies the target genome from samples in which it originally represented a minor fraction of the total DNA. The post-SWGA samples are enriched in target genomic DNA and are ideal for population resequencing. We demonstrate the efficacy of SWGA using both laboratory-prepared mixtures of cultured microbes as well as a natural host-microbe association. Targeted amplification of Borrelia burgdorferi mixed with Escherichia coli at genome ratios of 1:2000 resulted in >10⁵-fold amplification of the target genomes. Amplification from genomic extracts of Wolbachia pipientis-infected Drosophila melanogaster resulted in up to 70% of high-throughput resequencing reads mapping to the W. pipientis genome. By contrast, 2-9% of sequencing reads were derived from W. pipientis without prior amplification. The SWGA technique results in high sequencing coverage at a fraction of the sequencing effort, thus allowing population genomic studies at affordable costs. Copyright © 2014 by the Genetics Society of America.
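The motif-based selectivity at the heart of SWGA can be illustrated by ranking k-mers that are over-represented in the target genome relative to the background (a toy sketch; the ratio heuristic and all names are mine, not the published primer-design pipeline):

```python
from collections import Counter

def motif_rates(seq, k):
    """Per-window frequency of each k-mer in a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(len(seq) - k + 1, 1)
    return {m: c / total for m, c in counts.items()}

def rank_primers(target, background, k=4, pseudo=1e-6):
    """Rank k-mers by how over-represented they are in the target
    genome relative to the background; higher ratio = better SWGA
    primer candidate (pseudo avoids division by zero)."""
    t, b = motif_rates(target, k), motif_rates(background, k)
    motifs = set(t) | set(b)
    return sorted(motifs,
                  key=lambda m: t.get(m, 0.0) / (b.get(m, 0.0) + pseudo),
                  reverse=True)
```

Primers built from the top-ranked motifs bind densely on the target genome and sparsely on the background, which is what biases the phi29 amplification toward the target.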

  4. Cermet based solar selective absorbers : further selectivity improvement and developing new fabrication technique

    OpenAIRE

    Nejati, Mohammadreza

    2008-01-01

    The spectral selectivity of cermet-based selective absorbers was increased by inducing surface roughness on the cermet layer using a roughening technique (deposition on hot substrates) or by micro-structuring the metallic substrates before deposition of the absorber coating using laser and imprint structuring techniques. Cu-Al2O3 cermet absorbers with very rough surfaces and excellent selectivity were obtained by employing a roughness template layer under the infrared reflective l...

  5. Spectral Characterization of H2020/PTAL Mineral Samples: Implications for In Situ Martian Exploration and Mars Sample Selection

    Science.gov (United States)

    Lantz, C.; Pilorget, C.; Poulet, F.; Riu, L.; Dypvik, H.; Hellevang, H.; Rull Perez, F.; Veneranda, M.; Cousin, A.; Viennet, J.-C.; Werner, S. C.

    2018-04-01

    We present combined analysis performed in the framework of the Planetary Terrestrial Analogues Library (H2020 project). XRD, NIR, Raman, and LIBS spectroscopies are used to characterise samples to prepare ExoMars/ESA and Mars2020/NASA observations.

  6. Do Culture-based Segments Predict Selection of Market Strategy?

    Directory of Open Access Journals (Sweden)

    Veronika Jadczaková

    2015-01-01

    Full Text Available Academics and practitioners have already acknowledged the importance of unobservable segmentation bases (such as psychographics), yet they still focus on how well these bases are capable of describing relevant segments (the identifiability criterion) rather than on how precisely these segments can predict (the predictability criterion). Therefore, this paper intends to add to the debate on this topic by exploring whether culture-based segments account for the selection of market strategy. To do so, a set of market strategy variables over a sample of 251 manufacturing firms was first regressed on a set of 19 cultural variables using canonical correlation analysis. Having found a significant relationship in the first canonical function, it was further examined by means of correspondence analysis which cultural segments – if any – are linked to which market strategies. However, as the correspondence analysis failed to find a significant relationship, it may be concluded that business culture might relate to the adoption of market strategy but not to the cultural groupings presented in the paper.

  7. Obscured AGN at z ∼ 1 from the zCOSMOS-Bright Survey: I. Selection and optical properties of a [Ne v]-selected sample

    NARCIS (Netherlands)

    Mignoli, M.; Vignali, C.; Gilli, R.; Comastri, A.; Zamorani, G.; Bolzonella, M.; Bongiorno, A.; Lamareille, F.; Nair, P.; Pozzetti, L.; Lilly, S. J.; Carollo, C. M.; Contini, T.; Kneib, J. -P.; Le Fevre, O.; Mainieri, V.; Renzini, A.; Scodeggio, M.; Bardelli, S.; Caputi, K.; Cucciati, O.; de la Torre, S.; de Ravel, L.; Franzetti, P.; Garilli, B.; Iovino, A.; Kampczyk, P.; Knobel, C.; Kovac, K.; Le Borgne, J. -F.; Le Brun, V.; Maier, C.; Pello, R.; Peng, Y.; Montero, E. Perez; Presotto, V.; Silverman, J. D.; Tanaka, M.; Tasca, L.; Tresse, L.; Vergani, D.; Zucca, E.; Bordoloi, R.; Cappi, A.; Cimatti, A.; Koekemoer, A. M.; McCracken, H. J.; Moresco, M.; Welikala, N.

    Aims. The application of multi-wavelength selection techniques is essential for obtaining a complete and unbiased census of active galactic nuclei (AGN). We present here a method for selecting z ∼ 1 obscured AGN from optical spectroscopic surveys. Methods. A sample of 94 narrow-line AGN

  8. Concentration of ions in selected bottled water samples sold in Malaysia

    Science.gov (United States)

    Aris, Ahmad Zaharin; Kam, Ryan Chuan Yang; Lim, Ai Phing; Praveena, Sarva Mangala

    2013-03-01

    Many consumers around the world, including Malaysians, have turned to bottled water as their main source of drinking water. The aim of this study is to determine the physical and chemical properties of bottled water samples sold in Selangor, Malaysia. A total of 20 bottled water brands consisting of `natural mineral (NM)' and `packaged drinking (PD)' types were randomly collected and analyzed for their physico-chemical characteristics: hydrogen ion concentration (pH), electrical conductivity (EC) and total dissolved solids (TDS); selected major ions: calcium (Ca), potassium (K), magnesium (Mg) and sodium (Na); and minor trace constituents: copper (Cu) and zinc (Zn), to ascertain their suitability for human consumption. The results obtained were compared with guideline values recommended by the World Health Organization (WHO) and the Malaysian Ministry of Health (MMOH). It was found that all bottled water samples were in accordance with the guidelines set by the WHO and MMOH except for one sample (D3), which was below the pH limit of 6.5. Both NM and PD bottled water were dominated by Na + K > Ca > Mg. Low values of EC and TDS in the bottled water samples showed that the water was deficient in essential elements, likely an indication that these were removed by water treatment. Major-ion minerals were present in very low concentrations, which could pose a risk to individuals who consume this water on a regular basis. Generally, the overall quality of the supplied bottled water was in accordance with the standards and guidelines set by the WHO and MMOH and safe for consumption.

  9. A large sample of Kohonen selected E+A (post-starburst) galaxies from the Sloan Digital Sky Survey

    Science.gov (United States)

    Meusinger, H.; Brünecke, J.; Schalldach, P.; in der Au, A.

    2017-01-01

    Context. The galaxy population in the contemporary Universe is characterised by a clear bimodality: blue galaxies with significant ongoing star formation and red galaxies with only a little. The migration between the blue and the red cloud of galaxies is an issue of active research. Post-starburst (PSB) galaxies are thought to be observed in the short-lived transition phase. Aims: We aim to create a large sample of local PSB galaxies from the Sloan Digital Sky Survey (SDSS) to study their characteristic properties, particularly morphological features indicative of gravitational distortions and indications of active galactic nuclei (AGNs). Another aim is to present a tool set for an efficient search in a large database of SDSS spectra based on Kohonen self-organising maps (SOMs). Methods: We computed a huge Kohonen SOM for ∼10^6 spectra from SDSS data release 7. The SOM is made fully available, in combination with an interactive user interface, for the astronomical community. We selected a large sample of PSB galaxies taking advantage of the clustering behaviour of the SOM. The morphologies of both PSB galaxies and randomly selected galaxies from a comparison sample in SDSS Stripe 82 (S82) were inspected on deep co-added SDSS images to search for indications of gravitational distortions. We used the Portsmouth galaxy property computations to study the evolutionary stage of the PSB galaxies and archival multi-wavelength data to search for hidden AGNs. Results: We compiled a catalogue of 2665 PSB galaxies with redshifts z 3 Å and z cloud, in agreement with the idea that PSB galaxies represent the transitioning phase between actively and passively evolving galaxies. The relative frequency of distorted PSB galaxies is at least 57% for EW(Hδ) > 5 Å, significantly higher than in the comparison sample. The search for AGNs based on conventional selection criteria in the radio and MIR results in a low AGN fraction of ∼2-3%. We confirm an MIR excess in the mean SED of

  10. Selection of components based on their importance

    International Nuclear Information System (INIS)

    Stvan, F.

    2004-12-01

    A proposal is presented for sorting components of the Dukovany nuclear power plant with respect to their importance. The classification scheme includes property priority, property criticality and property structure. Each area has its criteria with weight coefficients to calculate the importance of each component by the Risk Priority Number method. The aim of the process is to generate a list of components in order of operating and safety importance, which will help spend funds on operation and safety in an optimal manner. This proposal is linked to a proposal for a simple database which should serve to enter information and perform assessments. The present stage focused on a safety assessment of components categorized in safety classes BT1, BT2 and BT3 pursuant to Decree No. 76. The assessment was based on a PSE study for Level 1. The database includes inputs for entering financial data, represented by the potential damage resulting from a given failure and by the loss of MWh in financial terms. In a further input, the failure incidence intensity and time of correction can be entered. Information regarding the property structure, represented by the degree of backup and reparability of the component, is the last input available.

  11. A novel method of selective removal of human DNA improves PCR sensitivity for detection of Salmonella Typhi in blood samples.

    Science.gov (United States)

    Zhou, Liqing; Pollard, Andrew J

    2012-07-27

    Enteric fever is a major public health problem, causing an estimated 21 million new cases and 216,000 or more deaths every year. Current diagnosis of the disease is inadequate. Blood culture only identifies 45 to 70% of the cases and is time-consuming. Serological tests have very low sensitivity and specificity. Clinical samples obtained for diagnosis of enteric fever in the field are generally blood samples, so that even PCR-based methods, widely used for detection of other infectious diseases, are not a straightforward option in typhoid diagnosis. We developed a novel method to enrich target bacterial DNA by selective removal of human DNA from blood samples, enhancing the sensitivity of PCR tests. This method offers the possibility of improving PCR assays directly using clinical specimens for diagnosis of this globally important infectious disease. Blood samples were mixed with ox bile for selective lysis of human blood cells, and the released human DNA was then digested with the addition of bile-resistant micrococcal nuclease. The intact Salmonella Typhi bacteria were collected from the specimen by centrifugation and the DNA extracted with the QIAamp DNA mini kit. The presence of Salmonella Typhi bacteria in blood samples was detected by PCR with the fliC-d gene of Salmonella Typhi as the target. Micrococcal nuclease retained activity against human blood DNA in the presence of up to 9% ox bile. Background human DNA was dramatically removed from blood samples through the use of ox bile lysis and micrococcal nuclease for removal of mammalian DNA. Consequently, target Salmonella Typhi DNA was enriched in DNA preparations and the PCR sensitivity for detection of Salmonella Typhi in spiked blood samples was enhanced 1,000-fold.
Use of a combination of selective ox-bile blood cell lysis and removal of human DNA with micrococcal nuclease significantly improves PCR sensitivity and offers a better option for improved typhoid PCR assays directly using clinical specimens in diagnosis of

  12. Research on test of product based on spatial sampling criteria and variable step sampling mechanism

    Science.gov (United States)

    Li, Ruihong; Han, Yueping

    2014-09-01

    This paper presents an effective approach for online testing of the assembly structures inside products using a multiple-views technique and an X-ray digital radiography system, based on spatial sampling criteria and a variable step sampling mechanism. For each object inside a product to be tested, there is a maximal rotary step within which the least structural size to be tested remains resolvable. In the offline learning process, the object is rotated by this step and imaged, and so on until a complete cycle is finished, yielding an image sequence that contains the full structural information for recognition. The maximal rotary step is restricted by the least structural size and the inherent resolution of the imaging system. During the online inspection process, the program first finds the optimum solutions for all the different target parts in the standard sequence, i.e., finds their exact angles in one cycle. Since most of the targets in a product are larger than the least structure, the paper adopts a variable step-size sampling mechanism that rotates the product through specific angles with different steps according to the different objects inside the product, and matches each view. Experimental results show that the variable step-size method can greatly save time compared with the traditional fixed-step inspection method while the recognition accuracy is guaranteed.

  13. Dietary trace element intakes of a selected sample of Canadian elderly women

    International Nuclear Information System (INIS)

    Gibson, R.S.; MacDonald, A.C.; Martinez, O.B.

    1984-01-01

    Energy and selected trace element intakes of a sample of 90 noninstitutionalized Canadian women (mean age 66.2 ± 6.2 years) living in a university community and consuming self-selected diets were assessed by chemical analysis of one-day duplicate diets and via 1-day dietary records collected by the subjects. Mean gross energy intake (determined via bomb calorimetry) was 6.0 ± 2.4 MJ (1435 ± 580 kcal), and mean intakes of Cu and Mn (determined via atomic absorption spectrophotometry) were 1.2 ± 0.6 mg and 3.8 ± 2.1 mg/day, respectively. Instrumental neutron activation analysis was used for the remaining elements: Cr, median 77.4 μg/day; Se, median 69.6 μg/day; Zn, mean ± SD 7.7 ± 3.6 mg/day; Ag, median 26.9 μg/day; Cs, median 4.8 μg/day; Rb, median 1.6 mg/day; Sb, median 1.8 μg/day; Sc, median 0.3 μg/day. Dietary intakes of Cr, Mn and Se for the majority of the subjects fell within the US safe and adequate range. In contrast, a high proportion of subjects had apparently low intakes of dietary Cu and Zn in relation to current US dietary recommendations.

  14. Determination of Nd3+ Ions in Solution Samples by a Coated Wire Ion-Selective Sensor

    Directory of Open Access Journals (Sweden)

    Hassan Ali Zamani

    2012-01-01

    Full Text Available A new coated wire electrode (CWE) using 5-(methylsulfanyl)-3-phenyl-1H-1,2,4-triazole (MPT) as an ionophore has been developed as a neodymium ion-selective sensor. The sensor exhibits a Nernstian response for Nd3+ ions in the concentration range of 1.0×10−6–1.0×10−2 M with a detection limit of 3.7×10−7 M. It displays a Nernstian slope of 20.2±0.2 mV/decade in the pH range of 2.7–8.1. The proposed sensor also exhibits a fast response time of ∼5 s. The sensor revealed high selectivity with respect to all common alkali, alkaline earth, transition and heavy metal ions, including members of the lanthanide family other than Nd3+. The electrode was used as an indicator electrode in the potentiometric titration of Nd(III) ions with EDTA. It was also employed for the determination of Nd3+ ion concentrations in water samples.
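
    The near-Nernstian slope reported above matches the textbook expectation for a trivalent cation. As a quick check (standard electrochemistry, not taken from the abstract):

```latex
S = \frac{2.303\,RT}{zF} \approx \frac{59.16}{z}\ \text{mV/decade at } 25\,^{\circ}\mathrm{C},
\qquad z = 3 \;\Rightarrow\; S \approx 19.7\ \text{mV/decade},
```

    which is close to the measured 20.2 ± 0.2 mV/decade, supporting the assignment of the response to the trivalent Nd3+ ion.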

  15. Semi-selective medium for Fusarium graminearum detection in seed samples

    Directory of Open Access Journals (Sweden)

    Marivane Segalin

    2010-12-01

    Full Text Available Fungi of the genus Fusarium cause a variety of difficult-to-control diseases in different crops, including winter cereals and maize. Among the species of this genus, Fusarium graminearum deserves attention. The aim of this work was to develop a semi-selective medium to study this fungus. In several experiments, substrates for fungal growth were tested, including the fungicides and antibiotics iprodione, nystatin and triadimenol, and the antibacterial agents streptomycin and neomycin sulfate. Five seed samples of wheat, barley, oat, black beans and soybeans were tested for F. graminearum detection using the media Nash and Snyder agar (NSA), Segalin & Reis agar (SRA) and one-quarter potato dextrose agar (1/4PDA: potato 50 g; dextrose 5 g; agar 20 g), either unsupplemented or supplemented with various concentrations of the antimicrobial agents cited above. The selected components and concentrations (g L-1) of the proposed medium, Segalin & Reis agar (SRA-FG), were: iprodione 0.05; nystatin 0.025; triadimenol 0.015; neomycin sulfate 0.05; and streptomycin sulfate 0.3, added to one-quarter potato sucrose agar. In the isolation from seeds of the cited plant species, the sensitivity of this medium was similar to that of NSA, but with the advantage of maintaining colony morphological aspects similar to those observed on potato dextrose agar medium.

  16. Information Gain Based Dimensionality Selection for Classifying Text Documents

    Energy Technology Data Exchange (ETDEWEB)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel, genetic algorithm based methodology, for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
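
    As a rough sketch of the idea described above (the paper's exact update rule is not given, so the mutation scheme below is an assumption): information gain is computed once per dimension before the GA runs, then used to bias each gene's mutation probability so the search favours informative dimensions without adding per-generation cost.

```python
import math
import random

def information_gain(xs, ys):
    """IG(Y; X) = H(Y) - H(Y|X) for a binary feature X and class labels Y."""
    def entropy(labels):
        n = len(labels)
        return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                    for c in set(labels))
    gain = entropy(ys)
    for v in (0, 1):
        subset = [y for x, y in zip(xs, ys) if x == v]
        if subset:
            gain -= (len(subset) / len(xs)) * entropy(subset)
    return gain

def mutate(chromosome, gains, p_base=0.05):
    """Hypothetical IG-biased mutation: low-gain dimensions are switched
    off more readily, high-gain dimensions are switched on more readily."""
    g_max = max(gains) or 1.0
    child = []
    for bit, g in zip(chromosome, gains):
        p = p_base * (1 - g / g_max) if bit else p_base * (g / g_max)
        child.append(1 - bit if random.random() < p else bit)
    return child
```

    Because the gains are computed a priori, the per-generation cost of the GA is unchanged, which is consistent with the complexity claim in the abstract.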

  17. Multispectral iris recognition based on group selection and game theory

    Science.gov (United States)

    Ahmad, Foysal; Roy, Kaushik

    2017-05-01

    A commercially available iris recognition system uses only a narrow band of the near infrared spectrum (700-900 nm) while iris images captured in the wide range of 405 nm to 1550 nm offer potential benefits to enhance recognition performance of an iris biometric system. The novelty of this research is that a group selection algorithm based on coalition game theory is explored to select the best patch subsets. In this algorithm, patches are divided into several groups based on their maximum contribution in different groups. Shapley values are used to evaluate the contribution of patches in different groups. Results show that this group selection based iris recognition
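
    The Shapley value referenced here has a standard closed form; the sketch below computes it exactly by enumerating join orders, which is feasible only for the small patch groups the method works with. The value function is a placeholder for whatever recognition-performance measure the system actually uses:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley value: average marginal contribution of each player
    over all join orders. O(n!) -- small coalitions only."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    return {p: total / len(orders) for p, total in phi.items()}

# Toy value function: each patch contributes a fixed accuracy gain
weights = {"patch_a": 0.5, "patch_b": 0.3, "patch_c": 0.2}
contribution = shapley_values(list(weights),
                              lambda s: sum(weights[p] for p in s))
```

    For an additive game like this toy example the Shapley value reduces to each patch's own weight; with a real accuracy measure it also captures interaction effects between patches, which is what makes it useful for grouping them.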

  18. Dietary intakes of pesticides based on community duplicate diet samples.

    Science.gov (United States)

    Melnyk, Lisa Jo; Xue, Jianping; Brown, G Gordon; McCombs, Michelle; Nishioka, Marcia; Michael, Larry C

    2014-01-15

    The calculation of dietary intake of selected pesticides was accomplished using food samples collected from individual representatives of a defined demographic community using a community duplicate diet approach. A community of nine participants was identified in Apopka, FL from which intake assessments of organophosphate (OP) and pyrethroid pesticides were made. From these nine participants, sixty-seven individual samples were collected and subsequently analyzed by gas chromatography/mass spectrometry. Measured concentrations were used to estimate dietary intakes for individuals and for the community. Individual intakes of total OP and pyrethroid pesticides ranged from 6.7 to 996 ng and 1.2 to 16,000 ng, respectively. The community intake was 256 ng for OPs and 3430 ng for pyrethroid pesticides. The most commonly detected pesticide was permethrin, but the highest overall intake was of bifenthrin followed by esfenvalerate. These data indicate that the community in Apopka, FL, as represented by the nine individuals, was potentially exposed to both OP and pyrethroid pesticides at levels consistent with a dietary model and other field studies in which standard duplicate diet samples were collected. Higher levels of pyrethroid pesticides were measured than OPs, which is consistent with decreased usage of OPs. The diversity of pyrethroid pesticides detected in food samples was greater than expected. Continually changing pesticide usage patterns need to be considered when determining analytes of interest for large scale epidemiology studies. The Community Duplicate Diet Methodology is a tool for researchers to meet emerging exposure measurement needs that will lead to more accurate assessments of intake which may enhance decisions for chemical regulation. 
Successfully determining the intake of pesticides through the dietary route will allow for accurate assessments of pesticide exposures to a community of individuals, thereby significantly enhancing the research benefit

  19. Source apportionment and location by selective wind sampling and Positive Matrix Factorization.

    Science.gov (United States)

    Venturini, Elisa; Vassura, Ivano; Raffo, Simona; Ferroni, Laura; Bernardi, Elena; Passarini, Fabrizio

    2014-10-01

    In order to determine the pollution sources in a suburban area and identify the main directions of their origin, PM2.5 was collected with samplers coupled with a wind-select sensor and then subjected to Positive Matrix Factorization (PMF) analysis. In each sample, soluble ions, organic carbon, elemental carbon, levoglucosan, metals, and Polycyclic Aromatic Hydrocarbons (PAHs) were determined. PMF results identified six main sources affecting the area: natural gas home appliances, motor vehicles, regional transport, biomass combustion, manufacturing activities, and secondary aerosol. The connection of factor temporal trends with other parameters (i.e., temperature, PM2.5 concentration, and photochemical processes) confirms the factor attributions. PMF analysis indicated that the main source of PM2.5 in the area is secondary aerosol. This should be mainly due to regional contributions, owing both to the secondary nature of the source itself and to the higher concentration registered in inland air masses. The motor vehicle emission source contribution is also important; this source likely has a prevalent local origin. The most toxic of the determined components, i.e., PAHs, Cd, Pb, and Ni, are mainly due to vehicular traffic. Even if this is not the main source in the study area, it is the one of greatest concern. The application of PMF analysis to PM2.5 collected with this new sampling technique made it possible to obtain more detailed results on the sources affecting the area compared to a classical PMF analysis.
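
    At its core, PMF factorizes the sample-by-species concentration matrix into non-negative source contributions and source profiles. The sketch below uses plain unweighted multiplicative updates (Lee-Seung NMF) on synthetic data; real PMF additionally weights each residual by its measurement uncertainty, which this toy omits:

```python
import numpy as np

def nmf(X, k, iters=2000, seed=0):
    """Unweighted non-negative factorization X ≈ G @ F via multiplicative
    updates. G: samples x sources (contributions); F: sources x species
    (profiles). True PMF also scales residuals by per-point uncertainties."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.uniform(0.1, 1.0, (n, k))
    F = rng.uniform(0.1, 1.0, (k, m))
    eps = 1e-9
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

# Synthetic example: two sources with fixed species profiles,
# mixed in varying amounts across 50 samples
profiles = np.array([[5.0, 1.0, 0.1],
                     [0.2, 2.0, 4.0]])
contrib = np.random.default_rng(1).uniform(0.1, 2.0, (50, 2))
X = contrib @ profiles
G, F = nmf(X, 2)
```

    The rows of F recover the source "fingerprints" (up to scaling and permutation of the factors), and the columns of G give each source's contribution per sample, which is the quantity behind the temporal-trend interpretation in the abstract.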

  20. Characteristic of selected frequency luminescence for samples collected in deserts north to Beijing

    International Nuclear Information System (INIS)

    Li Dongxu; Wei Mingjian; Wang Junping; Pan Baolin; Zhao Shiyuan; Liu Zhaowen

    2009-01-01

    Surface sand samples were collected at eight sites in the Horqin and Otindag deserts, located north of Beijing. A BG2003 luminescence spectrograph was used to analyze the emitted photons, and characteristic spectra of the selected-frequency luminescence were obtained. High intensities of emitted photons were found under thermal stimulation from 85 degree C-135 degree C and 350 degree C-400 degree C, belonging to the traps at 4.13 eV (300 nm), 4.00 eV (310 nm), 3.88 eV (320 nm) and 2.70 eV (460 nm); the emitted photons belonging to the traps at 4.00 eV (310 nm), 3.88 eV (320 nm) and 2.70 eV (460 nm) were stimulated by green laser. Sand samples from all eight sites respond to increases in radiation dose at each wavelength, so the characteristic spectrum provides a radiation dosimetry basis for dating. The characteristic spectra also show distinct regional features. (authors)

  1. Clinical impact of strict criteria for selectivity and lateralization in adrenal vein sampling.

    Science.gov (United States)

    Gasparetto, Alessandro; Angle, John F; Darvishi, Pasha; Freeman, Colbey W; Norby, Ray G; Carey, Robert M

    2015-04-01

    Selectivity index (SI) and lateralization index (LI) thresholds determine the adequacy of adrenal vein sampling (AVS) and the degree of lateralization. The purpose of this study was to investigate the clinical outcome of patients whose adrenal vein sampling was interpreted using "strict criteria" (SC) (SIpre-stimuli≥3, SIpost-stimuli≥5 and LIpre-stimuli≥4, LIpost-stimuli≥4). A retrospective review of 73 consecutive AVS procedures was performed, and 67 were technically successful. Forty-three patients showed lateralization and underwent surgery, while 24 did not lateralize and were managed conservatively. Systolic blood pressure (SBP), diastolic blood pressure (DBP), kalemia (K(+)), and the change in number of blood pressure (BP) medications were recorded for each patient before and after AVS and potential surgery. In the surgery group, BP and K(+) changed respectively from 160±5.3/100±2.0 mmHg to 127±3.3/80±1.9 (p blood pressure medications were six (14.0%) in the lateralized group and 22 (91.7%) in the non-lateralized group (p <0.001). AVS interpretation with SC leads to significant clinical improvement both in patients who underwent surgery and in those managed conservatively.
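
    The strict criteria reduce to threshold checks on two ratios. A minimal sketch follows; the variable names and the ratio definitions reflect standard AVS practice and are assumptions, while the thresholds are those stated in the abstract:

```python
def selectivity_index(adrenal_cortisol, peripheral_cortisol):
    # SI: adrenal-vein to peripheral-vein cortisol ratio; confirms
    # that the catheter actually sampled the adrenal vein
    return adrenal_cortisol / peripheral_cortisol

def lateralization_index(dominant_aldo_cort, contralateral_aldo_cort):
    # LI: ratio of the aldosterone/cortisol ratios of the two adrenal
    # veins, dominant side over contralateral side
    return dominant_aldo_cort / contralateral_aldo_cort

def strict_criteria(si_pre, si_post, li_pre, li_post):
    # Thresholds from the abstract: SI >= 3 pre-stimulation,
    # SI >= 5 post-stimulation, LI >= 4 in both phases
    selective = si_pre >= 3 and si_post >= 5
    lateralized = selective and li_pre >= 4 and li_post >= 4
    return selective, lateralized
```

    Under these thresholds, a study is only read as lateralizing when sampling was adequately selective in both phases, which is what separates the surgically treated from the conservatively managed group above.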

  2. Fine mapping quantitative trait loci under selective phenotyping strategies based on linkage and linkage disequilibrium criteria

    DEFF Research Database (Denmark)

    Ansari-Mahyari, S; Berg, P; Lund, M S

    2009-01-01

    Linkage analysis-based criteria (LAC) and linkage disequilibrium-based sampling criteria (LDC) for selecting individuals to phenotype are compared to random phenotyping in a quantitative trait loci (QTL) verification experiment using stochastic simulation. Several strategies based on LAC and LDC for selecting the most informative 30%, 40% or 50% of individuals for phenotyping, to extract maximum power and precision in a QTL fine mapping experiment, were developed and assessed. Linkage analyses for the mapping were performed for individuals sampled on LAC within families, and combined linkage disequilibrium and linkage analyses were performed for individuals sampled across the whole population based on LDC. The results showed that selecting individuals with haplotypes similar to the paternal haplotypes (minimum recombination criterion) using LAC gave at least the same power to detect a QTL as random phenotyping, but decreased the accuracy of the QTL position. However...

  3. Soil sampling intercomparison exercise by selected laboratories of the ALMERA Network

    International Nuclear Information System (INIS)

    2009-01-01

    The IAEA's Seibersdorf Laboratories in Austria have the programmatic responsibility to provide assistance to Member State laboratories in maintaining and improving the reliability of analytical measurement results, both in radionuclide and trace element determinations. This is accomplished through the provision of reference materials of terrestrial origin, validated analytical procedures, training in the implementation of internal quality control, and through the evaluation of measurement performance by the organization of worldwide and regional interlaboratory comparison exercises. The IAEA is mandated to support global radionuclide measurement systems related to accidental or intentional releases of radioactivity in the environment. To fulfil this obligation and ensure a reliable, worldwide, rapid and consistent response, the IAEA coordinates an international network of analytical laboratories for the measurement of environmental radioactivity (ALMERA). The network was established by the IAEA in 1995 and makes available to Member States a world-wide network of analytical laboratories capable of providing reliable and timely analysis of environmental samples in the event of an accidental or intentional release of radioactivity. A primary requirement for the ALMERA members is participation in the IAEA interlaboratory comparison exercises, which are specifically organized for ALMERA on a regular basis. These exercises are designed to monitor and demonstrate the performance and analytical capabilities of the network members, and to identify gaps and problem areas where further development is needed. In this framework, the IAEA organized a soil sampling intercomparison exercise (IAEA/SIE/01) for selected laboratories of the ALMERA network. The main objective of this exercise was to compare soil sampling procedures used by different participating laboratories. 
The performance evaluation results of the interlaboratory comparison exercises performed in the framework of

  4. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    Science.gov (United States)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

    Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valued information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert's cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model for each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models, and the outside-layer SEN is then constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, realizing a selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
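
    A minimal sketch of the selective-ensemble idea (a stand-in for illustration, not the paper's branch-and-bound/kernel-PLS pipeline): keep only the k sub-models with the lowest validation error and fuse their predictions with weights inversely proportional to that error:

```python
def adaptive_weights(val_errors):
    # Weight each retained sub-model inversely to its validation error
    # (a common selective-ensemble heuristic; an assumption here).
    inv = [1.0 / (e + 1e-12) for e in val_errors]
    total = sum(inv)
    return [v / total for v in inv]

def select_and_fuse(preds, val_errors, k):
    # Selection step: keep the k sub-models with the lowest validation
    # error, then fuse their predictions by weighted average.
    kept = sorted(range(len(preds)), key=lambda i: val_errors[i])[:k]
    w = adaptive_weights([val_errors[i] for i in kept])
    return sum(wi * preds[i] for wi, i in zip(w, kept))
```

    The paper applies this kind of selection and fusion twice: once per spectral feature subset (inside layer) and once across the resulting sub-models (outside layer).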

  5. Volatile compounds in samples of cork and also produced by selected fungi.

    Science.gov (United States)

    Barreto, M C; Vilas Boas, L; Carneiro, L C; San Romão, M V

    2011-06-22

    The production of volatile compounds by microbial communities of cork samples taken during the cork manufacturing process was investigated. The majority of volatiles were found in samples collected at two stages: resting after the first boiling, and nontreated cork disks. The volatile profiles produced by the microbiota at both stages are similar. The releasable volatile compounds and 2,4,6-trichloroanisole (TCA) produced in cork-based culture medium by five isolated fungal species, in pure and mixed cultures, were also analyzed by gas chromatography coupled with mass spectrometry (GC-MS). The results showed that 1-octen-3-ol and esters of fatty acids (medium chain length, C8-C20) were the main volatile compounds produced by either the pure fungal species or their mixture. Apparently, Penicillium glabrum is the main contributor to the overall volatile composition observed in the mixed culture. The production of releasable TCA on cork cannot be attributed to any of the assayed fungal isolates.

  6. New procedure of selected biogenic amines determination in wine samples by HPLC

    Energy Technology Data Exchange (ETDEWEB)

    Piasta, Anna M.; Jastrzębska, Aneta, E-mail: aj@chem.uni.torun.pl; Krzemiński, Marek P.; Muzioł, Tadeusz M.; Szłyk, Edward

    2014-06-27

    Highlights: • We proposed a new procedure for derivatization of biogenic amines. • NMR and XRD analyses confirmed the purity and identity of the derivatives. • Concentrations of biogenic amines in wine samples were analyzed by RP-HPLC. • Sample contamination and derivatization reaction interferences were minimized. - Abstract: A new procedure for determination of biogenic amines (BA): histamine, phenethylamine, tyramine and tryptamine, based on the derivatization reaction with 2-chloro-1,3-dinitro-5-(trifluoromethyl)-benzene (CNBF), is proposed. The amine derivatives with CNBF were isolated and characterized by X-ray crystallography and {sup 1}H, {sup 13}C, {sup 19}F NMR spectroscopy in solution. The novelty of the procedure lies in the pure and well-characterized products of the amine derivatization reaction. The method was applied to the simultaneous analysis of the above-mentioned biogenic amines in wine samples by reversed-phase high-performance liquid chromatography. The procedure gave correlation coefficients (R{sup 2}) between 0.9997 and 0.9999 and linear ranges of 0.10–9.00 mg L{sup −1} (histamine), 0.10–9.36 mg L{sup −1} (tyramine), 0.09–8.64 mg L{sup −1} (tryptamine) and 0.10–8.64 mg L{sup −1} (phenethylamine), whereas accuracy was 97%–102% (recovery test). The detection limit of biogenic amines in wine samples was 0.02–0.03 mg L{sup −1}, whereas the quantification limit ranged from 0.05 to 0.10 mg L{sup −1}. The variation coefficients for the analyzed amines ranged between 0.49% and 3.92%. The obtained BA derivatives enhanced separation of the analytes on chromatograms due to inhibition of the hydrolysis reaction and reduction of by-product formation.

  7. Process observation in fiber laser-based selective laser melting

    Science.gov (United States)

    Thombansen, Ulrich; Gatej, Alexander; Pereira, Milton

    2015-01-01

    Process observation in selective laser melting (SLM) focuses on observing the interaction point where the powder is processed. To provide process-relevant information, signals have to be acquired that are resolved in both time and space. Especially in high-power SLM, where more than 1 kW of laser power is used, processing speeds of several meters per second are required for high-quality processing results. Therefore, an implementation of a suitable process observation system has to acquire a large amount of spatially resolved data at low sampling speeds, or it has to restrict the acquisition to a predefined area at a high sampling speed. In any case, it is vitally important to synchronously record the laser beam position and the acquired signal. This is a prerequisite that allows the recorded data to become information. Today, most SLM systems employ f-theta lenses to focus the processing laser beam onto the powder bed. This report describes the drawbacks that result for process observation and suggests a variable retro-focus system which solves these issues. The beam quality of fiber lasers delivers the processing laser beam to the powder bed at relevant focus diameters, which is a key prerequisite for this solution to be viable. The optical train we present here couples the processing laser beam and the process observation coaxially, ensuring consistent alignment of the interaction zone and the observed area. With respect to signal processing, we have developed a solution that synchronously acquires signals from a pyrometer and the position of the laser beam by sampling the data with a field programmable gate array. The relevance of the acquired signals has been validated by the scanning of a sample filament. Experiments with grooved samples show a correlation between different powder thicknesses and the acquired signals at relevant processing parameters. This basic work takes a first step toward self-optimization of the manufacturing process in SLM. It enables the

  8. Selective detection of Co2+ by fluorescent nano probe: Diagnostic approach for analysis of environmental samples and biological activities

    Science.gov (United States)

    Mahajan, Prasad G.; Dige, Nilam C.; Desai, Netaji K.; Patil, Shivajirao R.; Kondalkar, Vijay V.; Hong, Seong-Karp; Lee, Ki Hwan

    2018-06-01

    Nowadays, scientists all over the world are working to develop improved fluorescence-based methods for detecting metal ions in aqueous media. A simple, selective and sensitive method is proposed for the detection of the Co2+ ion using fluorescent organic nanoparticles. We synthesized a fluorescent small molecule, viz. 4,4‧-{benzene-1,4-diylbis-[(Z)methylylidenenitrilo]}dibenzoic acid (BMBA), to explore its suitability as a sensor for the Co2+ ion and its biocompatibility in nanoparticle form. Fluorescent nanoparticles (BMBANPs) were prepared by a simple reprecipitation method. Aggregation-induced enhanced emission of the BMBANPs reveals a narrow particle size of 68 nm and a spherical morphology. Selective fluorescence quenching was observed on addition of Co2+ and was not affected by the presence of other coexisting ions. The photo-physical properties, viz. UV absorption, fluorescence emission and lifetime measurements, support a ligand-metal interaction followed by static fluorescence quenching of the BMBANP emission. Finally, we developed a simple analytical method for the selective and sensitive determination of the Co2+ ion in environmental samples. Cell cultures of E. coli, Bacillus sps. and the M. tuberculosis H37RV strain in the vicinity of BMBANPs indicate good anti-bacterial and anti-tuberculosis activity, an additional novel application of the prepared nanoparticles.

  9. 44 CFR 321.2 - Selection of the mobilization base.

    Science.gov (United States)

    2010-10-01

    ... base. 321.2 Section 321.2 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY PREPAREDNESS MAINTENANCE OF THE MOBILIZATION BASE (DEPARTMENT OF DEFENSE, DEPARTMENT OF ENERGY, MARITIME ADMINISTRATION) § 321.2 Selection of the mobilization base. (a) The Department...

  10. Using machine learning to accelerate sampling-based inversion

    Science.gov (United States)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally-cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods, such as the Neighbourhood Algorithm, and bridges the gap between prior- and posterior-sampling frameworks.
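    The surrogate idea described above can be illustrated in a few lines. The following is a minimal sketch, assuming a one-dimensional model space and an invented toy forward operator (in practice the expensive step would be, e.g., a synthetic-seismogram calculation), using scikit-learn's Gaussian Process regressor:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Hypothetical scalar "forward operator": expensive in practice,
    # cheap here for illustration.
    def forward(m):
        return np.sin(3.0 * m) + 0.5 * m ** 2

    rng = np.random.default_rng(0)

    # Evaluate the expensive operator for a handful of candidate models.
    m_train = rng.uniform(-2.0, 2.0, size=(20, 1))
    d_train = forward(m_train).ravel()

    # Fit a Gaussian Process surrogate that replaces the forward calculation.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6,
                                  normalize_y=True).fit(m_train, d_train)

    # During sampling, candidates are evaluated with the cheap surrogate; the
    # predictive std marks regions where the true operator should be called
    # and the surrogate refined, as the abstract describes.
    m_cand = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
    d_pred, d_std = gp.predict(m_cand, return_std=True)
    max_err = np.max(np.abs(d_pred - forward(m_cand).ravel()))
    print(f"max surrogate error {max_err:.3f}, max predictive std {d_std.max():.3f}")
    ```

    In a full sampler, candidates with large predictive uncertainty would trigger a call to the true forward operator, whose result is added to the training set before refitting.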

  11. The RBANS Effort Index: base rates in geriatric samples.

    Science.gov (United States)

    Duff, Kevin; Spering, Cynthia C; O'Bryant, Sid E; Beglinger, Leigh J; Moser, David J; Bayless, John D; Culp, Kennith R; Mold, James W; Adams, Russell L; Scott, James G

    2011-01-01

    The Effort Index (EI) of the RBANS was developed to assist clinicians in discriminating patients who demonstrate good effort from those with poor effort. However, there are concerns that older adults might be unfairly penalized by this index, which uses uncorrected raw scores. Using five independent samples of geriatric patients with a broad range of cognitive functioning (e.g., cognitively intact, nursing home residents, probable Alzheimer's disease), base rates of failure on the EI were calculated. In cognitively intact and mildly impaired samples, few older individuals were classified as demonstrating poor effort (e.g., 3% in cognitively intact). However, in the more severely impaired geriatric patients, over one third had EI scores that fell above suggested cutoff scores (e.g., 37% in nursing home residents, 33% in probable Alzheimer's disease). In the cognitively intact sample, older and less educated patients were more likely to have scores suggestive of poor effort. Education effects were observed in three of the four clinical samples. Overall cognitive functioning was significantly correlated with EI scores, with poorer cognition being associated with greater suspicion of low effort. The current results suggest that age, education, and level of cognitive functioning should be taken into consideration when interpreting EI results and that significant caution is warranted when examining EI scores in elders suspected of having dementia.

  12. Ultrasonic-based membrane aided sample preparation of urine proteomes.

    Science.gov (United States)

    Jesus, Jemmyson Romário; Santos, Hugo M; López-Fernández, H; Lodeiro, Carlos; Arruda, Marco Aurélio Zezzi; Capelo, J L

    2018-02-01

    A new ultrafast ultrasonic-based method for shotgun proteomics as well as label-free protein quantification in urine samples is developed. The method first separates the urine proteins using nitrocellulose-based membranes, and the proteins are then digested in-membrane using trypsin. The enzymatic digestion process is accelerated from overnight to four minutes using a sonoreactor ultrasonic device. Overall, the sample treatment pipeline comprising protein separation, digestion and identification is done in just 3 h. The process is assessed using urine of healthy volunteers. The method shows that males can be differentiated from females using the protein content of urine in a fast, easy and straightforward way. 232 and 226 proteins are identified in male and female urine, respectively. Of these, 162 are common to both genders, whilst 70 are unique to males and 64 to females. Of the 162 common proteins, 13 are present at statistically different levels. The pipeline follows the minimalism concept as outlined by Halls, as each stage of this analysis is evaluated to minimize the time, cost, sample requirement, reagent consumption, energy requirements and production of waste products. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Recursive SVM feature selection and sample classification for mass-spectrometry and microarray data

    Directory of Open Access Journals (Sweden)

    Harris Lyndsay N

    2006-04-01

    Full Text Available Abstract Background Like microarray-based investigations, high-throughput proteomics techniques require machine learning algorithms to identify biomarkers that are informative for biological classification problems. Feature selection and classification algorithms need to be robust to noise and outliers in the data. Results We developed a recursive support vector machine (R-SVM) algorithm to select important genes/biomarkers for the classification of noisy data. We compared its performance to a similar, state-of-the-art method (SVM recursive feature elimination, or SVM-RFE), paying special attention to the ability to recover the true informative genes/biomarkers and the robustness to outliers in the data. Simulation experiments show that a 5%-20% improvement over SVM-RFE can be achieved with regard to these properties. The SVM-based methods are also compared with a conventional univariate method, and their respective strengths and weaknesses are discussed. R-SVM was applied to two sets of SELDI-TOF-MS proteomics data, one from a human breast cancer study and the other from a study on rat liver cirrhosis. Important biomarkers found by the algorithm were validated by follow-up biological experiments. Conclusion The proposed R-SVM method is suitable for analyzing noisy high-throughput proteomics and microarray data, and it outperforms SVM-RFE in robustness to noise and in the ability to recover informative features. The multivariate SVM-based method outperforms the univariate method in classification performance, but univariate methods can reveal more of the differentially expressed features, especially when there are correlations between the features.
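    The comparison method, SVM-RFE, is readily sketched with scikit-learn. The toy example below (synthetic data, not the paper's SELDI-TOF-MS sets) recursively trains a linear SVM and eliminates the features with the smallest weight magnitudes:

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.feature_selection import RFE
    from sklearn.datasets import make_classification

    # Synthetic two-class data: 100 features, only the first 5 informative,
    # mimicking a noisy samples-by-biomarkers matrix.
    X, y = make_classification(n_samples=200, n_features=100, n_informative=5,
                               n_redundant=0, shuffle=False, random_state=0)

    # Recursive elimination: refit the linear SVM and drop the lowest-weight
    # half of the remaining features each round (step=0.5).
    svm = LinearSVC(C=1.0, dual=False, max_iter=5000)
    selector = RFE(svm, n_features_to_select=5, step=0.5).fit(X, y)

    selected = np.flatnonzero(selector.support_)
    print("selected features:", selected)
    ```

    With `shuffle=False`, the informative features occupy the first five columns, so the quality of recovery can be read off directly from `selected`.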

  14. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend this to two-stage sampling, in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.
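    The general idea of bootstrapping a confidence interval for a population total by resampling first-stage units can be sketched as follows. The numbers and the simple expansion estimator are illustrative assumptions only, not the paper's model-based estimator:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy setting: a population of 50 clusters of which 10 are sampled;
    # cluster totals are observed only for the sampled clusters.
    cluster_totals = rng.normal(100.0, 20.0, size=10)
    N_CLUSTERS = 50

    # Simple expansion estimator of the population total.
    def total_estimate(t):
        return N_CLUSTERS * np.mean(t)

    # Bootstrap: resample whole clusters (first-stage units) with replacement
    # and recompute the estimate to obtain a percentile confidence interval.
    boot = np.array([total_estimate(rng.choice(cluster_totals, 10, replace=True))
                     for _ in range(2000)])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"95% CI for the population total: [{lo:.0f}, {hi:.0f}]")
    ```

    Resampling at the cluster level preserves the within-cluster dependence that a naive element-level bootstrap would destroy.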

  15. 40 CFR Appendix A to Subpart F of... - Sampling Plans for Selective Enforcement Auditing of Small Nonroad Engines

    Science.gov (United States)

    2010-07-01

    ... Enforcement Auditing of Small Nonroad Engines A Appendix A to Subpart F of Part 90 Protection of Environment...-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Selective Enforcement Auditing Pt. 90, Subpt. F, App. A Appendix A to Subpart F of Part 90—Sampling Plans for Selective Enforcement Auditing of Small Nonroad Engines...

  16. Sample-Based Extreme Learning Machine with Missing Data

    Directory of Open Access Journals (Sweden)

    Hang Gao

    2015-01-01

    Full Text Available Extreme learning machine (ELM) has been extensively studied in the machine learning community during the last few decades due to its high efficiency and its unification of classification, regression, and so forth. Despite these merits, existing ELM algorithms cannot efficiently handle missing data, which is relatively common in practical applications. The problem of missing data is commonly handled by imputation (i.e., replacing missing values with substituted values according to available information). However, imputation methods are not always effective. In this paper, we propose a sample-based learning framework to address this issue. Based on this framework, we develop two sample-based ELM algorithms, for classification and regression, respectively. Comprehensive experiments have been conducted on synthetic data sets, UCI benchmark data sets, and a real-world fingerprint image data set. As indicated, without introducing extra computational complexity, the proposed algorithms learn more accurately and stably than other state-of-the-art ones, especially in the case of higher missing ratios.
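    A basic ELM, which the paper's framework builds on, fits in a few lines: the hidden-layer weights are random and fixed, and only the output weights are solved, in closed form, by least squares. This sketch deliberately omits the missing-data handling that is the paper's contribution:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def elm_fit(X, y, n_hidden=50):
        """Train a basic Extreme Learning Machine: random input weights,
        output weights solved in closed form by least squares."""
        W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
        b = rng.normal(size=n_hidden)                 # random biases
        H = np.tanh(X @ W + b)                        # hidden activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # Toy regression: learn y = sin(x) on [-3, 3].
    X = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = np.sin(X).ravel()
    W, b, beta = elm_fit(X, y)
    err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
    print(f"max training error: {err:.4f}")
    ```

    Because training reduces to one linear solve, ELM retains the efficiency the abstract refers to; the sample-based variants replace this single solve with per-sample handling of incomplete rows.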

  17. Efficient Multi-Label Feature Selection Using Entropy-Based Label Selection

    Directory of Open Access Journals (Sweden)

    Jaesung Lee

    2016-11-01

    Full Text Available Multi-label feature selection is designed to select a subset of features according to their importance to multiple labels. This task can be achieved by ranking the dependencies of features and selecting the features with the highest rankings. In a multi-label feature selection problem, the algorithm may be faced with a dataset containing a large number of labels. Because the computational cost of multi-label feature selection increases according to the number of labels, the algorithm may suffer from a degradation in performance when processing very large datasets. In this study, we propose an efficient multi-label feature selection method based on an information-theoretic label selection strategy. By identifying a subset of labels that significantly influence the importance of features, the proposed method efficiently outputs a feature subset. Experimental results demonstrate that the proposed method can identify a feature subset much faster than conventional multi-label feature selection methods for large multi-label datasets.
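    The flavour of ranking features against a reduced label subset can be sketched with mutual information as the dependency measure. The label subset below is hand-picked for illustration, whereas the paper derives it with an entropy-based, information-theoretic strategy:

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(1)

    # Toy multi-label data: 6 features, 4 binary labels. Feature 0 drives
    # label 0 and feature 1 drives label 1; the rest is noise.
    n = 500
    X = rng.normal(size=(n, 6))
    Y = np.zeros((n, 4), dtype=int)
    Y[:, 0] = (X[:, 0] > 0).astype(int)
    Y[:, 1] = (X[:, 1] > 0).astype(int)
    Y[:, 2] = rng.integers(0, 2, n)
    Y[:, 3] = rng.integers(0, 2, n)

    # Hypothetical label-selection outcome: only labels 0 and 1 are kept as
    # "influential", so feature scoring touches 2 labels instead of all 4.
    sel_labels = [0, 1]

    # Rank features by their summed dependency on the selected labels only.
    score = np.zeros(X.shape[1])
    for j in sel_labels:
        score += mutual_info_classif(X, Y[:, j], random_state=0)

    top2 = sorted(np.argsort(score)[-2:])
    print("top-ranked features:", top2)
    ```

    Scoring against two labels rather than four halves the dependency computations here; the paper's point is that this saving grows with the number of labels in large datasets.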

  18. Gamma radiation measurement in select sand samples from Camburi beach - Vitoria, Espirito Santo, Brazil: preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Barros, Livia F.; Pecequilo, Brigitte R.S.; Aquino, Reginaldo R., E-mail: lfbarros@ipen.b, E-mail: brigitte@ipen.b, E-mail: raquino@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The variation of natural radioactivity along the surface of the beach sands of Camburi, located in Vitoria, capital of Espirito Santo, southeastern Brazil, was determined from the contents of {sup 226}Ra, {sup 232}Th and {sup 40}K. Eleven collecting points were selected along the entire 6 km extension of Camburi beach. Sand samples collected from all established points in January 2011 were dried and sealed in standard 100 mL polyethylene flasks and measured by high resolution gamma spectrometry after a 4 week ingrowth period, in order to allow secular equilibrium in the {sup 238}U and {sup 232}Th series. The {sup 226}Ra concentration was determined from the weighted average concentrations of {sup 214}Pb and {sup 214}Bi. The {sup 232}Th concentration was determined from the weighted average concentrations of {sup 228}Ac, {sup 212}Pb and {sup 212}Bi, and the {sup 40}K concentration from its single gamma transition. Preliminary results show activity concentrations varying from 5 Bq.kg{sup -1} to 222 Bq.kg{sup -1} for {sup 226}Ra and from 14 Bq.kg{sup -1} to 1074 Bq.kg{sup -1} for {sup 232}Th, both with the highest values for Camburi South and Central. For {sup 40}K, the activity concentrations ranged from 14 Bq.kg{sup -1} to 179 Bq.kg{sup -1} and the highest values were obtained for Camburi South. (author)
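    Combining several gamma lines into one concentration, as done above for {sup 226}Ra from the {sup 214}Pb and {sup 214}Bi daughters, is typically an inverse-variance weighted average. The activities below are invented purely for illustration:

    ```python
    # Inverse-variance weighted average of two daughter-line activities
    # (made-up values, in Bq/kg, as (activity, 1-sigma uncertainty)).
    lines = [(18.4, 0.9),   # e.g. a 214Pb line
             (17.6, 1.1)]   # e.g. a 214Bi line

    # Each line is weighted by 1/sigma^2, so more precise lines dominate.
    w = [1.0 / s**2 for _, s in lines]
    a_mean = sum(wi * a for wi, (a, _) in zip(w, lines)) / sum(w)
    sigma = (1.0 / sum(w)) ** 0.5
    print(f"226Ra: {a_mean:.1f} +/- {sigma:.1f} Bq/kg")
    ```

    The combined uncertainty is always smaller than that of the most precise individual line, which is the motivation for averaging over several transitions.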

  19. Crude protein, fibre and phytic acid in vitro digestibility of selected legume and buckwheat samples

    Directory of Open Access Journals (Sweden)

    Petra Vojtíšková

    2013-01-01

    Full Text Available The aim of this study was to determine the crude protein, fibre and phytic acid in vitro digestibility of selected legumes and buckwheat products. All analyses except the phytic acid contents were performed in line with Commission Regulation (EC) No. 152/2009. A modified version of Holt’s method was used for phytic acid (phytate) determination. None of the samples contained more than 11% of moisture. Soybeans are rich in crude protein; they contain nearly 40% of this compound. The content of crude protein in buckwheat flours was about 14%. The highest amount of phytate was found in common beans and soybeans, about 2 g/100 g of dry matter. On the other hand, the lowest phytate content was observed in buckwheat pasta; the phytate content of F. esculentum groats was 1.9 g per 100 g of dry matter. In vitro digestibility was determined using a Daisy incubator with pepsin alone and with the combination of pepsin and pancreatin. The highest coefficients of crude protein digestibility were found for peels and wholemeal flour. The greatest fibre digestibility coefficients were obtained for peels, which contain about 65% of fibre in their dry matter. When pepsin was used, higher phytic acid digestibility coefficients were observed for G. max, Ph. vulgaris, peels, flour, groats and broken groats, while when the combination of pepsin and pancreatin was used, higher phytic acid digestibility coefficients were observed for peas, lentil and wholemeal flour.

  20. Gamma radiation measurement in select sand samples from Camburi beach - Vitoria, Espirito Santo, Brazil: preliminary results

    International Nuclear Information System (INIS)

    Barros, Livia F.; Pecequilo, Brigitte R.S.; Aquino, Reginaldo R.

    2011-01-01

    The variation of natural radioactivity along the surface of the beach sands of Camburi, located in Vitoria, capital of Espirito Santo, southeastern Brazil, was determined from the contents of 226 Ra, 232 Th and 40 K. Eleven collecting points were selected along the entire 6 km extension of Camburi beach. Sand samples collected from all established points in January 2011 were dried and sealed in standard 100 mL polyethylene flasks and measured by high resolution gamma spectrometry after a 4 week ingrowth period, in order to allow secular equilibrium in the 238 U and 232 Th series. The 226 Ra concentration was determined from the weighted average concentrations of 214 Pb and 214 Bi. The 232 Th concentration was determined from the weighted average concentrations of 228 Ac, 212 Pb and 212 Bi, and the 40 K concentration from its single gamma transition. Preliminary results show activity concentrations varying from 5 Bq.kg -1 to 222 Bq.kg -1 for 226 Ra and from 14 Bq.kg -1 to 1074 Bq.kg -1 for 232 Th, both with the highest values for Camburi South and Central. For 40 K, the activity concentrations ranged from 14 Bq.kg -1 to 179 Bq.kg -1 and the highest values were obtained for Camburi South. (author)

  1. Analysis of a selected sample of RR Lyrae stars in the LMC from OGLE-III

    International Nuclear Information System (INIS)

    Chen Bing-Qiu; Jiang Bi-Wei; Yang Ming

    2013-01-01

    A systematic study of RR Lyrae stars is performed using a selected sample of 655 objects in the Large Magellanic Cloud (LMC) with long-term observations and numerous measurements from the Optical Gravitational Lensing Experiment III project. The phase dispersion method and linear superposition of harmonic oscillations are used to derive the pulsation frequency and the properties of the light variation. It is found that the dichotomy between Oosterhoff type I and Oosterhoff type II exists for RR Lyrae stars in the LMC. Due to our strict criteria for identifying a frequency, a lower limit for the incidence rate of Blazhko modulation in the LMC is estimated for various subclasses of RR Lyrae stars. For fundamental-mode RR Lyrae stars, the rate of 7.5% is smaller than the previous result. In the case of the first-overtone RR Lyrae variables, the rate of 9.1% is relatively high. In addition to the Blazhko variables, 15 objects are identified that pulsate in the fundamental/first-overtone double mode. Furthermore, four objects show a period ratio around 0.6, which makes them very likely to be rare pulsators in the fundamental/second-overtone double mode. (research papers)
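    The phase dispersion method mentioned above folds the light curve on trial periods and seeks the period that minimizes the scatter within phase bins. A minimal sketch on a synthetic RR Lyrae-like light curve (all parameters invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic light curve: sinusoid with a 0.57 d period plus noise,
    # sampled at 400 random epochs over a 100 d baseline.
    t = np.sort(rng.uniform(0, 100, 400))
    P_TRUE = 0.57
    mag = 0.4 * np.sin(2 * np.pi * t / P_TRUE) + 0.02 * rng.normal(size=400)

    def pdm_theta(t, y, period, nbins=10):
        """Phase dispersion statistic: pooled in-bin variance over total
        variance; near 1 for wrong periods, small at the true period."""
        phase = (t / period) % 1.0
        idx = np.minimum((phase * nbins).astype(int), nbins - 1)
        s2 = sum(y[idx == b].var() * (idx == b).sum() for b in range(nbins))
        return s2 / (len(y) * y.var())

    # Scan a grid of trial periods and pick the minimum of the statistic.
    periods = np.linspace(0.4, 0.8, 2001)
    theta = [pdm_theta(t, mag, p) for p in periods]
    P_best = periods[int(np.argmin(theta))]
    print(f"recovered period: {P_best:.3f} d")
    ```

    Unlike Fourier methods, the statistic makes no assumption about the pulse shape, which is why it suits the strongly non-sinusoidal light curves of fundamental-mode RR Lyrae stars.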

  2. CdTe detector based PIXE mapping of geological samples

    Energy Technology Data Exchange (ETDEWEB)

    Chaves, P.C., E-mail: cchaves@ctn.ist.utl.pt [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal); Taborda, A. [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal); Oliveira, D.P.S. de [Laboratório Nacional de Energia e Geologia (LNEG), Apartado 7586, 2611-901 Alfragide (Portugal); Reis, M.A. [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal)

    2014-01-01

    A sample collected from a borehole drilled approximately 10 km ESE of Bragança, Trás-os-Montes, was analysed by standard and high energy PIXE at both CTN (previously ITN) PIXE setups. The sample is a fine-grained metapyroxenite grading to coarse-grained at the base, with disseminated sulphides and fine veinlets of pyrrhotite and pyrite. Matrix composition was obtained at the standard PIXE setup using a 1.25 MeV H{sup +} beam at three different spots. Medium and high Z elemental concentrations were then determined using the DT2fit and DT2simul codes (Reis et al., 2008, 2013 [1,2]) on the spectra obtained in the High Resolution and High Energy (HRHE)-PIXE setup (Chaves et al., 2013 [3]) by irradiation of the sample with a 3.8 MeV proton beam provided by the CTN 3 MV Tandetron accelerator. In this paper we present the results and discuss the detection limits of the method and the added value of using the CdTe detector in this context.

  3. 40 CFR Appendix Xi to Part 86 - Sampling Plans for Selective Enforcement Auditing of Light-Duty Vehicles

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Sampling Plans for Selective Enforcement Auditing of Light-Duty Vehicles XI Appendix XI to Part 86 Protection of Environment ENVIRONMENTAL... Enforcement Auditing of Light-Duty Vehicles 40% AQL Table 1—Sampling Plan Code Letter Annual sales of...

  4. Towards Identify Selective Antibacterial Peptides Based on Abstracts Meaning

    Directory of Open Access Journals (Sweden)

    Liliana I. Barbosa-Santillán

    2016-01-01

    Full Text Available We present an Identify Selective Antibacterial Peptides (ISAP) approach based on the meaning of abstracts. Laboratories and researchers have significantly increased the reporting of their discoveries related to antibacterial peptides in primary publications. It is important to find the antibacterial peptides reported in primary publications because they can lead to antibiotics of different generations that attack and destroy bacteria. Unfortunately, researchers use heterogeneous forms of natural language to describe their discoveries (sometimes without the sequence of the peptides). Thus, we propose that by learning the meaning of the words, instead of the antibacterial peptide sequences, it is possible to identify and predict antibacterial peptides reported in the PubMed engine. The ISAP approach consists of two stages: training and discovering. ISAP found that 35% of the abstract sample contained antibacterial peptides, and we tested this against the updated Antimicrobial Peptide Database 2 (APD2). ISAP predicted that 45% of the abstracts had antibacterial peptides. That is, ISAP found 810 antibacterial peptides that were not classified as such, and so are not reported in APD2. As a result, this new search tool would complement the APD2 with a set of peptides that are candidates to be antibacterial. Finally, 20% of the abstracts were not semantically related to APD2.
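    Learning from the meaning of words rather than peptide sequences is, at its core, text classification. A minimal sketch with a TF-IDF representation and a linear classifier on an invented toy corpus (not the ISAP implementation):

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Invented toy "abstracts": the goal is to learn from the words alone
    # whether an abstract reports an antibacterial peptide, without needing
    # the peptide sequence itself.
    abstracts = [
        "novel peptide with potent antibacterial activity against E. coli",
        "the peptide inhibited bacterial growth and destroyed the membrane",
        "crystal structure of a plant photosynthesis protein complex",
        "survey of soil mineral composition in agricultural fields",
    ]
    labels = [1, 1, 0, 0]   # 1 = reports an antibacterial peptide

    # Represent each abstract as a TF-IDF weighted bag of words and train
    # a linear classifier on the labelled examples.
    vec = TfidfVectorizer()
    X = vec.fit_transform(abstracts)
    clf = LogisticRegression().fit(X, labels)

    unseen = ["a new antibacterial peptide destroyed E. coli membranes"]
    print(clf.predict(vec.transform(unseen)))
    ```

    A real system in the spirit of ISAP would train on thousands of labelled PubMed abstracts and use a richer semantic representation, but the pipeline shape is the same.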

  5. Molecularly imprinted membrane extraction combined with high-performance liquid chromatography for selective analysis of cloxacillin from shrimp samples.

    Science.gov (United States)

    Du, Wei; Sun, Min; Guo, Pengqi; Chang, Chun; Fu, Qiang

    2018-09-01

    Nowadays, the abuse of antibiotics in aquaculture has generated considerable problems for food safety. Therefore, it is imperative to develop a simple and selective method for monitoring the illegal use of antibiotics in aquatic products. In this study, a method combining molecularly imprinted membrane (MIM) extraction and liquid chromatography was developed for the selective analysis of cloxacillin from shrimp samples. The MIMs were synthesized by UV photopolymerization and characterized by scanning electron microscopy, Fourier transform infrared spectroscopy, thermo-gravimetric analysis and a swelling test. The results showed that the MIMs exhibited excellent permselectivity, high adsorption capacity and a fast adsorption rate for cloxacillin. Finally, the method was used to determine cloxacillin in shrimp samples, with good accuracies and acceptable relative standard deviation values for precision. The proposed method is a promising alternative for the selective analysis of cloxacillin in shrimp samples, due to its easy operation and excellent selectivity. Copyright © 2018. Published by Elsevier Ltd.

  6. Development of Base Transceiver Station Selection Algorithm for ...

    African Journals Online (AJOL)

    TEMS) equipment was carried out on the existing BTSs, and a linear algorithm optimization program based on the spectral link efficiency of each BTS was developed; the output of this site optimization gives the selected number of base station sites ...

  7. Sample similarity analysis of angles of repose based on experimental results for DEM calibration

    Science.gov (United States)

    Tan, Yuan; Günthner, Willibald A.; Kessler, Stephan; Zhang, Lu

    2017-06-01

    As a fundamental material property, the particle-particle friction coefficient is usually calibrated against the angle of repose, which can be obtained experimentally. In the present study, the bottomless cylinder test was carried out to investigate this friction coefficient for a biomass material, willow chips. Because of their irregular shape and varying particle size distribution, calculation of the angle becomes less applicable and decisive. In previous studies, only one section of the uneven slope is chosen in most cases, although standard methods for defining a representative section are barely found. Hence, we present an efficient and reliable method based on a new technology, 3D scanning, which was used to digitize the surface of the heaps and generate a point cloud. Then, the two tangential lines of any selected section were calculated through linear least-squares regression (LLSR), such that the left and right angles of repose of a pile could be derived. As the next step, a certain number of sections were stochastically selected, and the calculations were repeated correspondingly in order to obtain a sample of angles, which was plotted in Cartesian coordinates as a scatter diagram. Subsequently, different samples were acquired through various selections of sections. By analysing the similarities and differences of these samples, the reliability of the proposed method was verified. These phased results provide a realistic criterion to reduce the deviation between experiment and simulation caused by the random selection of a single angle, which will be compared with simulation results in the future.
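    The LLSR step reduces to fitting a line to the flank of one profile section and taking the arctangent of its slope. A minimal sketch with an invented noisy profile:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical cross-section through a scanned heap: (x, z) points on
    # one flank of the pile, with some measurement scatter [units: m].
    x = np.linspace(0.0, 0.5, 60)
    TRUE_ANGLE_DEG = 35.0                # assumed repose angle of the toy heap
    z = np.tan(np.deg2rad(TRUE_ANGLE_DEG)) * x + 0.005 * rng.normal(size=x.size)

    # Linear least-squares regression (LLSR) of the flank; the angle of
    # repose is the arctangent of the fitted slope.
    slope, intercept = np.polyfit(x, z, 1)
    angle_deg = np.degrees(np.arctan(slope))
    print(f"angle of repose: {angle_deg:.1f} deg")
    ```

    Repeating this fit over many randomly chosen sections of the point cloud yields the sample of angles whose scatter the authors analyse.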

  8. Towards the harmonization between National Forest Inventory and Forest Condition Monitoring. Consistency of plot allocation and effect of tree selection methods on sample statistics in Italy.

    Science.gov (United States)

    Gasparini, Patrizia; Di Cosmo, Lucio; Cenni, Enrico; Pompei, Enrico; Ferretti, Marco

    2013-07-01

    In the frame of a process aiming at harmonizing National Forest Inventory (NFI) and ICP Forests Level I Forest Condition Monitoring (FCM) in Italy, we investigated (a) the long-term consistency between FCM sample points (a subsample of the first NFI, 1985, NFI_1) and recent forest area estimates (after the second NFI, 2005, NFI_2) and (b) the effect of tree selection method (tree-based or plot-based) on sample composition and defoliation statistics. The two investigations were carried out on 261 and 252 FCM sites, respectively. Results show that some individual forest categories (larch and stone pine, Norway spruce, other coniferous, beech, temperate oaks and cork oak forests) are over-represented and others (hornbeam and hophornbeam, other deciduous broadleaved and holm oak forests) are under-represented in the FCM sample. This is probably due to a change in forest cover, which has increased by 1,559,200 ha from 1985 to 2005. In case of a shift from a tree-based to a plot-based selection method, 3,130 (46.7%) of the original 6,703 sample trees will be abandoned, and 1,473 new trees will be selected. The balance between exclusion of former sample trees and inclusion of new ones will be particularly unfavourable for conifers (with only 16.4% of excluded trees replaced by new ones) and less so for deciduous broadleaves (with 63.5% of excluded trees replaced). The total number of tree species surveyed will not be impacted, while the number of trees per species will, and the resulting (plot-based) sample composition will have a much larger frequency of deciduous broadleaved trees. The newly selected trees have, in general, smaller diameter at breast height (DBH) and defoliation scores. Given the larger rate of turnover, the deciduous broadleaved part of the sample will be more impacted. Our results suggest that both a revision of the FCM network to account for forest area change and a plot-based approach to permit statistical inference and avoid bias in the tree sample

  9. Computational fragment-based screening using RosettaLigand: the SAMPL3 challenge

    Science.gov (United States)

    Kumar, Ashutosh; Zhang, Kam Y. J.

    2012-05-01

    SAMPL3 fragment based virtual screening challenge provides a valuable opportunity for researchers to test their programs, methods and screening protocols in a blind testing environment. We participated in SAMPL3 challenge and evaluated our virtual fragment screening protocol, which involves RosettaLigand as the core component by screening a 500 fragments Maybridge library against bovine pancreatic trypsin. Our study reaffirmed that the real test for any virtual screening approach would be in a blind testing environment. The analyses presented in this paper also showed that virtual screening performance can be improved, if a set of known active compounds is available and parameters and methods that yield better enrichment are selected. Our study also highlighted that to achieve accurate orientation and conformation of ligands within a binding site, selecting an appropriate method to calculate partial charges is important. Another finding is that using multiple receptor ensembles in docking does not always yield better enrichment than individual receptors. On the basis of our results and retrospective analyses from SAMPL3 fragment screening challenge we anticipate that chances of success in a fragment screening process could be increased significantly with careful selection of receptor structures, protein flexibility, sufficient conformational sampling within binding pocket and accurate assignment of ligand and protein partial charges.

  10. Supplier selection based on multi-criterial AHP method

    Directory of Open Access Journals (Sweden)

    Jana Pócsová

    2010-03-01

    Full Text Available This paper describes a case study of supplier selection based on the multi-criteria Analytic Hierarchy Process (AHP) method. It is demonstrated that using an adequate mathematical method can bring an "unprejudiced" conclusion, even if the alternatives (supplier companies) are very similar in the given selection criteria. The result is the best possible supplier company from the viewpoint of the chosen criteria and the price of the product.
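    The core AHP computation is the priority vector of a pairwise comparison matrix (its principal eigenvector) plus a consistency check. The matrix below is an invented example on Saaty's 1-9 scale, not data from the case study:

    ```python
    import numpy as np

    # Hypothetical pairwise comparisons of three suppliers on one criterion:
    # A is moderately preferred to B (3) and strongly preferred to C (5);
    # reciprocals fill the lower triangle.
    A = np.array([[1.0,   3.0, 5.0],
                  [1/3.0, 1.0, 3.0],
                  [1/5.0, 1/3.0, 1.0]])

    # Priority vector = normalized principal eigenvector of A.
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()

    # Consistency check: CI = (lambda_max - n) / (n - 1); random index
    # RI = 0.58 for n = 3; CR below 0.1 is conventionally acceptable.
    lam = vals[k].real
    CI = (lam - 3.0) / 2.0
    CR = CI / 0.58
    print("priorities:", np.round(w, 3), f"CR = {CR:.3f}")
    ```

    In a full AHP, one such priority vector is computed per criterion and the vectors are combined with criterion weights obtained the same way.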

  11. Personnel Selection Method Based on Personnel-Job Matching

    OpenAIRE

    Li Wang; Xilin Hou; Lili Zhang

    2013-01-01

    The existing personnel selection decisions in practice are based on the evaluation of the job seeker's human capital, which may make it difficult to achieve personnel-job matching that satisfies each party. Therefore, this paper puts forward a new personnel selection method that takes bilateral matching into consideration. Starting from the employment notion of "satisfaction", satisfaction evaluation indicator systems for each party are constructed. The multi-objective optimization model is given according to ...
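    Once both parties' satisfaction scores are available, one simple bilateral matching formulation is an assignment problem maximizing joint satisfaction. The scores below are invented, and this is only one possible reading of the paper's multi-objective model:

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical satisfaction scores on a 0-1 scale: rows are positions,
    # columns are candidates; one matrix per side of the match.
    employer = np.array([[0.9, 0.4, 0.6],
                         [0.3, 0.8, 0.5],
                         [0.6, 0.5, 0.7]])
    candidate = np.array([[0.8, 0.5, 0.4],
                          [0.4, 0.9, 0.6],
                          [0.5, 0.6, 0.8]])

    # Bilateral matching: maximize the summed joint satisfaction of both
    # sides (negated because linear_sum_assignment minimizes).
    joint = employer + candidate
    row, col = linear_sum_assignment(-joint)
    print(list(zip(row, col)), "total satisfaction =", joint[row, col].sum())
    ```

    A genuinely multi-objective treatment would keep the two sides' satisfactions as separate objectives rather than summing them, but the assignment structure is the same.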

  12. Construction Tender Subcontract Selection using Case-based Reasoning

    Directory of Open Access Journals (Sweden)

    Due Luu

    2012-11-01

    Full Text Available Obtaining competitive quotations from suitably qualified subcontractors at tender time can significantly increase the chance of winning a construction project. Amidst an increasingly growing trend to subcontracting in Australia, selecting appropriate subcontractors for a construction project can be a daunting task requiring the analysis of complex and dynamic criteria such as past performance, suitable experience, track record of competitive pricing, financial stability and so on. Subcontractor selection is plagued with uncertainty and vagueness, and these conditions are difficult to represent in generalised sets of rules. Decisions pertaining to the selection of subcontractors at tender time are usually based on the intuition and past experience of construction estimators. Case-based reasoning (CBR) may be an appropriate method of addressing the challenges of selecting subcontractors because CBR is able to harness the experiential knowledge of practitioners. This paper reviews the practicality and suitability of a CBR approach for subcontractor tender selection through the development of a prototype CBR procurement advisory system. In this system, subcontractor selection cases are represented by a set of attributes elicited from experienced construction estimators. The results indicate that CBR can enhance the appropriateness of the selection of subcontractors for construction projects.

  13. Cooperative Technique Based on Sensor Selection in Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    ISLAM, M. R.

    2009-02-01

Full Text Available An energy-efficient cooperative technique is proposed for IEEE 1451-based Wireless Sensor Networks. Selected numbers of Wireless Transducer Interface Modules (WTIMs) are used to form a Multiple Input Single Output (MISO) structure wirelessly connected with a Network Capable Application Processor (NCAP). Energy efficiency and delay of the proposed architecture are derived for different combinations of cluster size and selected number of WTIMs. Optimized constellation parameters are used for evaluating the derived parameters. The results show that the selected MISO structure outperforms the unselected MISO structure, and that it is more energy efficient than a SISO structure beyond a certain distance.

  14. Chemical and X-ray diffraction analysis on selected samples from the TMI-2 reactor core

    International Nuclear Information System (INIS)

    Kleykamp, H.; Pejsa, R.

    1991-05-01

Selected samples from different positions of the damaged TMI-2 reactor core were investigated by X-ray microanalysis and X-ray diffraction. The measurements yield the following resolidified phases after cooling: Cd- and In-depleted Ag absorber material, intermetallic Zr-steel compounds, fully oxidized Zircaloy, UO2-ZrO2 solid solutions and their decomposed phases, and Fe-Al-Cr-Zr spinels. The composition of the phases and their lattice parameters, as well as the eutectic and monotectic character, can serve as indicators of local temperatures in the core. The reaction sequences are estimated from the heterogeneous equilibria of these phases. The main conclusions are: (1) Liquefaction onset is locally possible through Inconel-Zircaloy and steel-Zircaloy reactions of spacers and absorber guide tubes at 930 °C; however, increased rates of dissolution occur above 1200 °C. (2) UO2 dissolution in the Inconel-steel-Zircaloy melt starts at 1300 °C, with increased rates above 1900 °C. (3) Fuel temperatures in the core centre increased above 2550 °C, and liquid (U,Zr)O2 was generated. (4) Square UO2 particles are reprecipitated from the Incoloy-steel-Zircaloy-UO2 melt during cooling; the remaining metallic melt is oxygen-poor, and two types of intermetallic phases are formed. (5) Oxidized Fe and Zr and Al2O3 from the burnable absorber react to form spinels, which form a low-melting eutectic with the fuel at 1500 °C. The spinel acts as a lubricant for fuel transport to the lower reactor plenum above 1500 °C. (6) Ruthenium (Ru-106) is dissolved in the steel phase, and antimony (Sb-125) in the α-Ag absorber, during liquefaction. (7) Oxidation of the Zircaloy-steel phases takes place mainly in stage 3 (reflood) of the accident scenario. (orig.)

  15. Investigation of selected trace elements in hair samples of eczema patients

    International Nuclear Information System (INIS)

    Osman, N. O.

    2010-12-01

The aim of this case-control study was to investigate the relationship between selected trace elements and skin diseases, namely eczema. Fifty-five patients affected by the most frequent eczema types were recruited at the onset of disease at the hospital of dermatology in Khartoum, together with thirty healthy controls. Fe, Zn, Cu, and Ni were measured in hair samples obtained from both patients and the control group using Atomic Absorption Spectrometry (AAS). Data analysis was performed using the t-test, and partial correlation was used to study the relationships between the elemental concentrations. A certified reference material (IAEA-85, Hair Powder) produced by the International Atomic Energy Agency (IAEA) was used as a quality control to check the accuracy and precision of the analytical technique; good agreement was achieved for all elements under investigation. Significant variations (p<0.05) were found in the concentrations of Fe, Zn, Cu, and Ni in the hair of the patients compared to the control group: iron, zinc and copper were decreased, suggesting that patients should receive doses of these elements, while nickel was increased and should therefore not be included in the treatment. These associations between the levels of the trace elements could be used as an indication of the disease as well as to monitor treatment. Comparisons of the results obtained in the present study with those reported for other populations in the literature showed very close agreement, with the exception of Fe, which was found to be very high in the Sudanese population. (Author)

  16. Investigation of selected trace elements in hair samples of eczema patients

    Energy Technology Data Exchange (ETDEWEB)

    Osman, N O [Atomic Energy Council, Sudan Academy of Sciences (SAS), Khartoum (Sudan)

    2010-12-15

The aim of this case-control study was to investigate the relationship between selected trace elements and skin diseases, namely eczema. Fifty-five patients affected by the most frequent eczema types were recruited at the onset of disease at the hospital of dermatology in Khartoum, together with thirty healthy controls. Fe, Zn, Cu, and Ni were measured in hair samples obtained from both patients and the control group using Atomic Absorption Spectrometry (AAS). Data analysis was performed using the t-test, and partial correlation was used to study the relationships between the elemental concentrations. A certified reference material (IAEA-85, Hair Powder) produced by the International Atomic Energy Agency (IAEA) was used as a quality control to check the accuracy and precision of the analytical technique; good agreement was achieved for all elements under investigation. Significant variations (p<0.05) were found in the concentrations of Fe, Zn, Cu, and Ni in the hair of the patients compared to the control group: iron, zinc and copper were decreased, suggesting that patients should receive doses of these elements, while nickel was increased and should therefore not be included in the treatment. These associations between the levels of the trace elements could be used as an indication of the disease as well as to monitor treatment. Comparisons of the results obtained in the present study with those reported for other populations in the literature showed very close agreement, with the exception of Fe, which was found to be very high in the Sudanese population. (Author)

  17. Active sites in the alkylation of toluene with methanol : a study by selective acid-base poisoning

    NARCIS (Netherlands)

    Borgna, A.; Sepulveda, J.; Magni, S.I.; Apesteguia, C.R.

    2004-01-01

Selective acid–base poisoning of the alkylation of toluene with methanol was studied over alkali and alkaline-earth exchanged Y zeolites. Surface acid–base properties of the samples were determined by infrared spectroscopy using carbon dioxide and pyridine as probe molecules. Selective poisoning ...

  18. Generalized Selectivity Description for Polymeric Ion-Selective Electrodes Based on the Phase Boundary Potential Model.

    Science.gov (United States)

    Bakker, Eric

    2010-02-15

A generalized description of the response behavior of potentiometric polymer membrane ion-selective electrodes is presented on the basis of ion-exchange equilibrium considerations at the sample-membrane interface. This paper includes and extends previously reported theoretical advances in a more compact yet more comprehensive form. Specifically, the phase boundary potential model is used to derive the origin of the Nernstian response behavior in a single expression, which is valid for a membrane containing any charge type and complex stoichiometry of ionophore and ion-exchanger. This forms the basis for a generalized expression of the selectivity coefficient, which may be used for the selectivity optimization of ion-selective membranes containing electrically charged and neutral ionophores of any desired stoichiometry. It is shown to reduce to expressions published previously for specialized cases, and may be effectively applied to problems relevant in modern potentiometry. The treatment is extended to mixed ion solutions, offering a comprehensive yet formally compact derivation of the response behavior of ion-selective electrodes to a mixture of ions of any desired charge. It is compared to predictions by the less accurate Nicolsky-Eisenman equation. The influence of ion fluxes or any form of electrochemical excitation is not considered here, but may be readily incorporated if an ion-exchange equilibrium at the interface may be assumed in these cases.
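For reference, the classical Nicolsky-Eisenman description that the paper benchmarks against can be evaluated directly, E = E0 + (RT/z_iF)·ln(a_i + Σ_j K_ij·a_j^(z_i/z_j)); the activities, charges, and selectivity coefficient in the sketch below are illustrative values, not from the paper:

```python
import math

R, F = 8.314, 96485.0  # gas constant (J mol^-1 K^-1), Faraday constant (C mol^-1)

def nicolsky_eisenman(a_i, z_i, interferents, E0=0.0, T=298.15):
    """Nicolsky-Eisenman electrode potential (V) for a primary ion of
    activity a_i and charge z_i; interferents is a list of
    (K_ij, a_j, z_j) tuples. Shown only as the classical benchmark the
    paper compares its phase-boundary treatment against."""
    s = a_i + sum(K * a ** (z_i / z_j) for K, a, z_j in interferents)
    return E0 + (R * T) / (z_i * F) * math.log(s)

# Monovalent primary ion with one well-discriminated interferent (log K = -3).
E1 = nicolsky_eisenman(1e-3, 1, [(1e-3, 1e-2, 1)])
E2 = nicolsky_eisenman(1e-4, 1, [(1e-3, 1e-2, 1)])
# With negligible interference this difference approaches the ideal
# Nernstian ~59.2 mV per decade at 25 °C; the interferent shrinks it.
print(round((E1 - E2) * 1000, 1), "mV for a tenfold activity change")
```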

  19. Knowledge based expert system approach to instrumentation selection (INSEL)

    Directory of Open Access Journals (Sweden)

    S. Barai

    2004-08-01

Full Text Available The selection of appropriate instrumentation for any structural measurement of a civil engineering structure is a complex task. Recent developments in Artificial Intelligence (AI) can help in an organized use of the experiential knowledge available on instrumentation for laboratory and in-situ measurement. Usually, the instrumentation decision is based on the experience and judgment of experimentalists. The heuristic knowledge available for different types of measurement is domain dependent, and the information is scattered across varied knowledge sources. Knowledge engineering techniques can help in capturing this experiential knowledge. This paper demonstrates a prototype knowledge based system for INstrument SELection (INSEL) assistant, where the experiential knowledge for various structural domains can be captured and utilized for making instrumentation decisions. In particular, this Knowledge Based Expert System (KBES) encodes the heuristics on measurement and demonstrates the instrument selection process with reference to steel bridges. INSEL runs on a microcomputer and uses the INSIGHT 2+ environment.

  20. Diversified models for portfolio selection based on uncertain semivariance

    Science.gov (United States)

    Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini

    2017-02-01

Since financial markets are complex, future security returns are sometimes represented mainly by experts' estimations due to a lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given by experts' estimations and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve them. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. Finally, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.
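The semivariance idea itself (penalizing only downside deviations) can be illustrated with a plain scenario-based computation. This is a deliberate simplification: the paper works with uncertain variables and the 99-method rather than sample scenarios, and the asset returns below are made-up numbers:

```python
def semivariance(returns, target=None):
    """Downside semivariance: mean squared shortfall below the target
    (here the sample mean, a common default choice)."""
    if target is None:
        target = sum(returns) / len(returns)
    shortfalls = [(target - r) ** 2 for r in returns if r < target]
    return sum(shortfalls) / len(returns)

def portfolio_returns(weights, scenarios):
    """Portfolio return in each scenario (rows = scenarios, cols = assets)."""
    return [sum(w * r for w, r in zip(weights, row)) for row in scenarios]

# Two assets, four equally likely return scenarios (illustrative numbers).
scenarios = [[0.10, 0.02], [0.04, 0.03], [-0.02, 0.01], [0.08, 0.00]]
w = [0.5, 0.5]
rets = portfolio_returns(w, scenarios)
mean = sum(rets) / len(rets)
print(round(mean, 4), round(semivariance(rets), 6))
```

A mean-semivariance model then searches over the weights w (the paper uses a genetic algorithm for this) to trade off the mean against the downside risk.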

  1. Preview-based sampling for controlling gaseous simulations

    KAUST Repository

    Huang, Ruoguan

    2011-01-01

    In this work, we describe an automated method for directing the control of a high resolution gaseous fluid simulation based on the results of a lower resolution preview simulation. Small variations in accuracy between low and high resolution grids can lead to divergent simulations, which is problematic for those wanting to achieve a desired behavior. Our goal is to provide a simple method for ensuring that the high resolution simulation matches key properties from the lower resolution simulation. We first let a user specify a fast, coarse simulation that will be used for guidance. Our automated method samples the data to be matched at various positions and scales in the simulation, or allows the user to identify key portions of the simulation to maintain. During the high resolution simulation, a matching process ensures that the properties sampled from the low resolution simulation are maintained. This matching process keeps the different resolution simulations aligned even for complex systems, and can ensure consistency of not only the velocity field, but also advected scalar values. Because the final simulation is naturally similar to the preview simulation, only minor controlling adjustments are needed, allowing a simpler control method than that used in prior keyframing approaches. Copyright © 2011 by the Association for Computing Machinery, Inc.

  2. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    Science.gov (United States)

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  3. Total arsenic in selected food samples from Argentina: Estimation of their contribution to inorganic arsenic dietary intake.

    Science.gov (United States)

    Sigrist, Mirna; Hilbe, Nandi; Brusa, Lucila; Campagnoli, Darío; Beldoménico, Horacio

    2016-11-01

An optimized flow injection hydride generation atomic absorption spectroscopy (FI-HGAAS) method was used to determine total arsenic in selected food samples (beef, chicken, fish, milk, cheese, egg, rice, rice-based products, wheat flour, corn flour, oats, breakfast cereals, legumes and potatoes) and to estimate their contributions to inorganic arsenic dietary intake. The limit of detection (LOD) and limit of quantification (LOQ) values obtained were 6 μg kg⁻¹ and 18 μg kg⁻¹, respectively. The mean recovery range obtained for all food at a fortification level of 200 μg kg⁻¹ was 85-110%. Accuracy was evaluated using dogfish liver certified reference material (DOLT-3 NRC) for trace metals. The highest total arsenic concentrations (in μg kg⁻¹) were found in fish (152-439), rice (87-316) and rice-based products (52-201). The contribution to inorganic arsenic (i-As) intake was calculated from the mean i-As content of each food (calculated by applying conversion factors to total arsenic data) and the mean consumption per day. The primary contributors to inorganic arsenic intake were wheat flour, including its proportion in wheat flour-based products (breads, pasta and cookies), followed by rice; these foods account for close to 53% and 17% of the intake, respectively. The i-As dietary intake, estimated as 10.7 μg day⁻¹, was significantly lower than that from drinking water in vast regions of Argentina. Copyright © 2016 Elsevier Ltd. All rights reserved.
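The intake estimation described in the abstract is a simple sum over foods of (mean total As) × (i-As conversion factor) × (mean daily consumption). A sketch with placeholder numbers (the factors and consumption figures below are illustrative, not the paper's data):

```python
def inorganic_as_intake(foods):
    """Daily inorganic-As intake (μg/day) from mean total-As content
    (μg/kg), an i-As conversion factor, and mean consumption (kg/day),
    following the estimation scheme described in the abstract."""
    return sum(total_as * factor * consumption
               for total_as, factor, consumption in foods)

# (total As μg/kg, assumed i-As fraction, assumed consumption kg/day)
foods = [
    (150.0, 0.7, 0.05),   # a rice-type food
    (20.0, 1.0, 0.25),    # wheat-flour products
]
print(round(inorganic_as_intake(foods), 2), "μg/day")
```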

  4. Re-Emergence of Under-Selected Stimuli, after the Extinction of Over-Selected Stimuli in an Automated Match to Samples Procedure

    Science.gov (United States)

    Broomfield, Laura; McHugh, Louise; Reed, Phil

    2008-01-01

Stimulus over-selectivity occurs when one of potentially many aspects of the environment comes to control behaviour. In two experiments, adults with no developmental disabilities were trained and tested in an automated match-to-samples (MTS) paradigm. In Experiment 1, participants completed two conditions, in one of which the over-selected…

  5. Selective extraction of emerging contaminants from water samples by dispersive liquid-liquid microextraction using functionalized ionic liquids.

    Science.gov (United States)

    Yao, Cong; Li, Tianhao; Twu, Pamela; Pitner, William R; Anderson, Jared L

    2011-03-25

Functionalized ionic liquids containing the tris(pentafluoroethyl)trifluorophosphate (FAP) anion were used as extraction solvents in dispersive liquid-liquid microextraction (DLLME) for the extraction of 14 emerging contaminants from water samples. The extraction efficiencies and selectivities were compared to those of an in situ IL DLLME method, which uses an in situ metathesis reaction to exchange 1-butyl-3-methylimidazolium chloride (BMIM-Cl) to 1-butyl-3-methylimidazolium bis[(trifluoromethyl)sulfonyl]imide (BMIM-NTf2). Compounds containing tertiary amine functionality were extracted with high selectivity and sensitivity by the 1-(6-aminohexyl)-1-methylpyrrolidinium tris(pentafluoroethyl)trifluorophosphate (HNH2MPL-FAP) IL compared to other FAP-based ILs and the BMIM-NTf2 IL. On the other hand, polar or acidic compounds without amine groups exhibited higher enrichment factors using the BMIM-NTf2 IL. The detection limits for the studied analytes varied from 0.1 to 55.1 μg/L using the traditional IL DLLME method with the HNH2MPL-FAP IL as extraction solvent, and from 0.1 to 55.8 μg/L using the in situ IL DLLME method with BMIM-Cl + LiNTf2 as extraction solvent. A 93-fold decrease in the detection limit of caffeine was observed when using the HNH2MPL-FAP IL compared to that obtained using the in situ IL DLLME method. Real water samples, including tap water and creek water, were analyzed with both IL DLLME methods and yielded recoveries ranging from 91% to 110%. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Selective inferior petrosal sinus sampling without venous outflow diversion in the detection of a pituitary adenoma in Cushing's syndrome

    International Nuclear Information System (INIS)

    Andereggen, Lukas; Schroth, Gerhard; Gralla, Jan; Ozdoba, Christoph; Seiler, Rolf; Mariani, Luigi; Beck, Juergen; Widmer, Hans-Rudolf; Andres, Robert H.; Christ, Emanuel

    2012-01-01

Conventional MRI may still be an inaccurate method for the non-invasive detection of a microadenoma in adrenocorticotropin (ACTH)-dependent Cushing's syndrome (CS). Bilateral inferior petrosal sinus sampling (BIPSS) with ovine corticotropin-releasing hormone (oCRH) stimulation is an invasive but accurate intervention in the diagnostic armamentarium surrounding CS. There is, however, a continuing controversial debate regarding lateralization data in detecting a microadenoma. Using BIPSS, we evaluated whether a highly selective placement of microcatheters without diversion of venous outflow might improve detection of pituitary microadenoma. We performed BIPSS in 23 patients who met clinical and biochemical criteria of CS and had equivocal MRI findings. For BIPSS, the femoral veins were catheterized bilaterally with a 6-F catheter and the inferior petrosal sinus bilaterally with a 2.7-F microcatheter. A third catheter was placed in the right femoral vein. Blood samples were collected from each catheter to determine the ACTH blood concentration before and after oCRH stimulation. In 21 patients, a central-to-peripheral ACTH gradient was found and the affected side determined. In 18 of 20 patients in whom transsphenoidal partial hypophysectomy was performed based on BIPSS findings, a microadenoma was histologically confirmed. BIPSS had a sensitivity of 94% and a specificity of 67% after oCRH stimulation in detecting a microadenoma. Correct localization of the adenoma was achieved in all Cushing's disease patients. BIPSS remains the gold standard in the detection of a microadenoma in CS. Our findings show that selective placement of microcatheters without venous outflow diversion may further improve localization of the pituitary tumor. (orig.)

  7. Soil classification based on the spectral characteristics of topsoil samples

    Science.gov (United States)

    Liu, Huanjun; Zhang, Xiaokang; Zhang, Xinle

    2016-04-01

Soil taxonomy plays an important role in soil utilization and management, but China has only a coarse soil map, created from 1980s data. New technology, e.g. spectroscopy, could simplify soil classification. This study attempts to classify soils based on the spectral characteristics of topsoil samples. 148 topsoil samples of typical soils, including Black soil, Chernozem, Blown soil and Meadow soil, were collected from the Songnen plain, Northeast China, and their laboratory spectral reflectance in the visible and near-infrared region (400-2500 nm) was processed with weighted moving average, resampling, and continuum removal. Spectral indices were extracted from the soil spectral characteristics, including the second absorption position of the spectral curve, the area of the first absorption valley, and the slope of the spectral curve at 500-600 nm and 1340-1360 nm. Then K-means clustering and a decision tree were used respectively to build soil classification models. The results indicated that 1) the second absorption positions of Black soil and Chernozem were located at 610 nm and 650 nm respectively; 2) the spectral curve of Meadow soil is similar to that of its adjacent soil, which could be due to soil erosion; 3) the decision tree model showed higher classification accuracy: the accuracies for Black soil, Chernozem, Blown soil and Meadow soil were 100%, 88%, 97% and 50% respectively, and the accuracy for Blown soil could be increased to 100% by adding one more spectral index (the area of the first two valleys) to the model. This shows that the model could be used for soil classification and soil mapping in the near future.
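A decision tree over such spectral indices amounts to a few threshold rules. In the sketch below, only the 610 nm / 650 nm second-absorption positions come from the abstract; the cutoffs between them and the valley-area threshold for the remaining classes are hypothetical placeholders:

```python
def classify_soil(second_abs_nm, valley_area):
    """Toy threshold rules in the spirit of the paper's decision-tree
    model. The 610/650 nm positions for Black soil and Chernozem are
    reported in the abstract; the 630/700 nm cutoffs and the 0.5
    valley-area split are illustrative, not fitted values."""
    if second_abs_nm < 630:
        return "Black soil"
    if second_abs_nm < 700:
        return "Chernozem"
    # remaining classes separated by a hypothetical index threshold
    return "Blown soil" if valley_area > 0.5 else "Meadow soil"

print(classify_soil(610, 0.3))
print(classify_soil(650, 0.3))
```

A real tree would be induced from the 148 labelled samples (e.g. by recursive impurity minimization) rather than written by hand, but the resulting classifier has exactly this nested-threshold form.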

  8. Evaluation of pump pulsation in respirable size-selective sampling: Part III. Investigation of European standard methods.

    Science.gov (United States)

    Soo, Jhy-Charm; Lee, Eun Gyung; Lee, Larry A; Kashon, Michael L; Harper, Martin

    2014-10-01

Lee et al. (Evaluation of pump pulsation in respirable size-selective sampling: part I. Pulsation measurements. Ann Occup Hyg 2014a;58:60-73) introduced an approach to measure pump pulsation (PP) using a real-world sampling train, while the European Standards (EN) (EN 1232-1997 and EN 12919-1999) suggest measuring PP using a resistor in place of the sampler. The goal of this study is to characterize PP according to both EN methods and to determine the relationship of PP between the published method (Lee et al., 2014a) and the EN methods. Additional test parameters were investigated to determine whether the test conditions suggested by the EN methods were appropriate for measuring pulsations. Experiments were conducted using a factorial combination of personal sampling pumps (six medium- and two high-volumetric flow rate pumps), back pressures (six medium- and seven high-flow rate pumps), resistors (two types), tubing lengths between a pump and resistor (60 and 90 cm), and different flow rates (2 and 2.5 l min⁻¹ for the medium- and 4.4, 10, and 11.2 l min⁻¹ for the high-flow rate pumps). The selection of sampling pumps and the ranges of back pressure were based on measurements obtained in the previous study (Lee et al., 2014a). Among six medium-flow rate pumps, only the Gilian5000 and the Apex IS conformed to the 10% criterion specified in EN 1232-1997. Although the AirChek XR5000 exceeded the 10% limit, the average PP (10.9%) was close to the criterion. One high-flow rate pump, the Legacy (PP=8.1%), conformed to the 10% criterion in EN 12919-1999, while the Elite12 did not (PP=18.3%). Conducting supplemental tests with additional test parameters beyond those used in the two subject EN standards did not strengthen the characterization of PPs. For the selected test conditions, a linear regression model [PP_EN = 0.014 + 0.375 × PP_NIOSH (adjusted R² = 0.871)] was developed to determine the PP relationship between the published method (Lee et al., 2014a) and the EN methods.
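The fitted model is a one-line conversion between the two measurement conventions. A sketch, assuming the pulsation values enter the model as fractions (e.g. 0.109 for 10.9%), which is an interpretation on our part rather than something the abstract states:

```python
def pp_en_from_niosh(pp_niosh):
    """Convert a pulsation measured with the real-world sampling-train
    method (Lee et al., 2014a) to its EN-method estimate using the fitted
    regression PP_EN = 0.014 + 0.375 * PP_NIOSH (adjusted R^2 = 0.871).
    Assumes PP is expressed as a fraction, not a percentage."""
    return 0.014 + 0.375 * pp_niosh

# e.g. the AirChek XR5000's 10.9% average pulsation quoted in the abstract
print(round(pp_en_from_niosh(0.109), 4))
```

The slope well below 1 indicates the resistor-based EN setup registers substantially smaller pulsations than the real-world sampling train for the same pump.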

  9. Performance-Based Technology Selection Filter description report

    International Nuclear Information System (INIS)

    O'Brien, M.C.; Morrison, J.L.; Morneau, R.A.; Rudin, M.J.; Richardson, J.G.

    1992-05-01

    A formal methodology has been developed for identifying technology gaps and assessing innovative or postulated technologies for inclusion in proposed Buried Waste Integrated Demonstration (BWID) remediation systems. Called the Performance-Based Technology Selection Filter, the methodology provides a formalized selection process where technologies and systems are rated and assessments made based on performance measures, and regulatory and technical requirements. The results are auditable, and can be validated with field data. This analysis methodology will be applied to the remedial action of transuranic contaminated waste pits and trenches buried at the Idaho National Engineering Laboratory (INEL)

  10. Performance-Based Technology Selection Filter description report

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, M.C.; Morrison, J.L.; Morneau, R.A.; Rudin, M.J.; Richardson, J.G.

    1992-05-01

    A formal methodology has been developed for identifying technology gaps and assessing innovative or postulated technologies for inclusion in proposed Buried Waste Integrated Demonstration (BWID) remediation systems. Called the Performance-Based Technology Selection Filter, the methodology provides a formalized selection process where technologies and systems are rated and assessments made based on performance measures, and regulatory and technical requirements. The results are auditable, and can be validated with field data. This analysis methodology will be applied to the remedial action of transuranic contaminated waste pits and trenches buried at the Idaho National Engineering Laboratory (INEL).

  11. Comparative studies of praseodymium(III) selective sensors based on newly synthesized Schiff's bases

    International Nuclear Information System (INIS)

    Gupta, Vinod K.; Goyal, Rajendra N.; Pal, Manoj K.; Sharma, Ram A.

    2009-01-01

Praseodymium ion selective polyvinyl chloride (PVC) membrane sensors, based on two new Schiff's bases, 1,3-diphenylpropane-1,3-diylidenebis(azan-1-ylidene)diphenol (M1) and N,N'-bis(pyridoxylideneiminato)ethylene (M2), have been developed and studied. The sensor with a membrane composition of PVC : o-NPOE : ionophore (M1) : NaTPB (w/w; mg) of 150 : 300 : 8 : 5 showed the best performance in comparison to the M2-based membranes. The sensor based on M1 exhibits a working concentration range of 1.0 × 10⁻⁸ to 1.0 × 10⁻² M, with a detection limit of 5.0 × 10⁻⁹ M and a Nernstian slope of 20.0 ± 0.3 mV decade⁻¹ of activity. It exhibited a quick response time of <8 s, and its potential responses were pH independent across the range of 3.5-8.5. The influence of the membrane composition and of possible interfering ions on the response properties of the electrode has also been investigated. The sensor has been found to work satisfactorily in partially non-aqueous media up to 15% (v/v) content of methanol, ethanol or acetonitrile, and could be used for a period of 3 months. The selectivity coefficients determined using the fixed interference method (FIM) indicate high selectivity for praseodymium(III) ions over a wide variety of other cations. To assess its analytical applicability, the prepared sensor was successfully applied to the determination of praseodymium(III) in spiked water samples.

  12. Mineralogy, petrology and whole-rock chemistry data compilation for selected samples of Yucca Mountain tuffs

    International Nuclear Information System (INIS)

    Connolly, J.R.

    1991-12-01

Petrologic, bulk chemical, and mineralogic data are presented for 49 samples of tuffaceous rocks from core holes USW G-1 and UE-25a #1 at Yucca Mountain, Nevada. Included, in descending stratigraphic order, are 11 samples from the Topopah Spring Member of the Paintbrush Tuff, 12 samples from the Tuffaceous Beds of Calico Hills, 3 samples from the Prow Pass Member of the Crater Flat Tuff, 20 samples from the Bullfrog Member of the Crater Flat Tuff and 3 samples from the Tram Member of the Crater Flat Tuff. The suite of samples contains a wide variety of petrologic types, including zeolitized, glassy, and devitrified tuffs. Data vary considerably between groups of samples, and include thin section descriptions (some with modal analyses for which uncertainties are estimated), electron microprobe analyses of mineral phases and matrix, mineral identifications by X-ray diffraction, and major element analyses with uncertainty estimates.

  13. Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.

    Science.gov (United States)

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…

  14. Prototype selection based on FCM and its application in discrimination between nuclear explosion and earthquake

    International Nuclear Information System (INIS)

    Han Shaoqing; Li Xihai; Song Zibiao; Liu Daizhi

    2007-01-01

Synergetic pattern recognition is a new approach to pattern recognition with many excellent features, such as noise resistance and deformity resistance. However, when it is applied to discrimination between nuclear explosions and earthquakes using existing methods of prototype selection, the results are not satisfactory. A new method of prototype selection based on FCM is proposed in this paper. First, each group of training samples is clustered into c groups using FCM; then the c barycenters or centers are chosen as prototypes. Experimental results show that, compared with existing methods of prototype selection, this new method is effective and greatly increases the recognition ratio. (authors)
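The prototype-selection step (cluster each class's training samples with fuzzy c-means, keep the c centers as prototypes) can be sketched with a minimal 1-D FCM. This is an illustrative toy on made-up data, not the seismic feature vectors of the paper:

```python
def fcm_1d(xs, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means for 1-D data; returns the c cluster centers,
    which serve as the class prototypes in the selection scheme."""
    lo, hi = min(xs), max(xs)
    # deterministic init: spread centers across the data range
    centers = [lo + (hi - lo) * i / (c - 1) for i in range(c)]
    n = len(xs)
    for _ in range(iters):
        # membership of each sample in each cluster: u_ik = 1 / sum_j (d_i/d_j)^(2/(m-1))
        u = []
        for x in xs:
            d = [abs(x - v) + 1e-12 for v in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
                      for i in range(c)])
        # update centers as membership-weighted means
        centers = [sum((u[k][i] ** m) * xs[k] for k in range(n)) /
                   sum(u[k][i] ** m for k in range(n))
                   for i in range(c)]
    return sorted(centers)

# two well-separated groups of 1-D "features"; centers land near 1.0 and 5.0
data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
prototypes = fcm_1d(data, c=2)
print([round(p, 2) for p in prototypes])
```

In the full method this is run per class (explosion vs. earthquake) on multidimensional features, and the resulting centers replace the raw training set as prototypes for the synergetic recognizer.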

  15. A quantitative method to detect explosives and selected semivolatiles in soil samples by Fourier transform infrared spectroscopy

    International Nuclear Information System (INIS)

    Clapper-Gowdy, M.; Dermirgian, J.; Robitaille, G.

    1995-01-01

    This paper describes a novel Fourier transform infrared (FTIR) spectroscopic method that can be used to rapidly screen soil samples from potentially hazardous waste sites. Samples are heated in a thermal desorption unit and the resultant vapors are collected and analyzed in a long-path gas cell mounted in a FTIR. Laboratory analysis of a soil sample by FTIR takes approximately 10 minutes. This method has been developed to identify and quantify microgram concentrations of explosives in soil samples and is directly applicable to the detection of selected volatile organics, semivolatile organics, and pesticides

  16. Determination of selected metals in coal samples from Lafia-Obi and ...

    African Journals Online (AJOL)

coal samples were determined using atomic absorption spectroscopy (AAS). All the samples have comparable chromium and copper contents, while the iron, aluminum, magnesium and potassium contents vary to some extent. Metal concentrations in both the Lafia-Obi and Chikila coal samples are within the limits allowed by the ...

  17. A Table-Based Random Sampling Simulation for Bioluminescence Tomography

    Directory of Open Access Journals (Sweden)

    Xiaomeng Zhang

    2006-01-01

    As a popular simulation of photon propagation in turbid media, the Monte Carlo (MC) method suffers mainly from its cumbersome computation. In this work, a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to collapse multiple scattering steps into a single-step process through random table querying, greatly reducing the computational complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast counterpart of the conventional MC simulation of photon propagation. It retains the flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. We also present a reconstruction approach that estimates the position of the fluorescent source by trial and error, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
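The table-querying idea can be illustrated with a minimal sketch: tabulate the inverse CDF of the exponential free-path distribution once, then replace the per-step transcendental evaluation with a random table lookup. The attenuation coefficient and table size are made-up values, and this one-step illustration shows only the mechanism, not the authors' TBRS code.

```python
import numpy as np

# Precompute a lookup table for the inverse CDF of the exponential
# free-path distribution p(s) = mu_t * exp(-mu_t * s); table querying
# then replaces a per-step logarithm evaluation.
mu_t = 10.0                      # total attenuation coefficient, 1/cm (assumed)
N = 100_000                      # table resolution (assumed)
xi = (np.arange(N) + 0.5) / N    # uniform grid of quantiles on (0, 1)
step_table = -np.log(1.0 - xi) / mu_t   # inverse CDF, tabulated once

def sample_steps(rng, n):
    """Draw n photon path lengths by random table querying."""
    idx = rng.integers(0, N, size=n)
    return step_table[idx]

rng = np.random.default_rng(0)
steps = sample_steps(rng, 200_000)
# the mean free path should approach 1/mu_t = 0.1 cm
```

The same trick generalizes to tabulating the outcome of several combined scattering steps, which is what lets TBRS replace a multistep random walk with a single lookup.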

  18. The sequence relay selection strategy based on stochastic dynamic programming

    Science.gov (United States)

    Zhu, Rui; Chen, Xihao; Huang, Yangchao

    2017-07-01

    Relay-assisted (RA) networks with relay node selection are an effective means of improving channel capacity and convergence performance. However, most existing research on relay selection considers neither statistical channel state information nor the selection cost, a shortcoming that limits the performance and applicability of RA networks in practical scenarios. To overcome this drawback, a sequence relay selection strategy (SRSS) is proposed, and its performance upper bound is analyzed in this paper. Furthermore, to make SRSS more practical, a novel threshold-determination algorithm based on stochastic dynamic programming (SDP) is given to work with SRSS. Numerical results are presented to exhibit the performance of SRSS with SDP.
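A hedged sketch of how such thresholds can be computed by backward induction: relays are probed one at a time, each probe costs a fixed amount, and the acceptance threshold at each stage is the value of continuing. The rate distribution, horizon, and cost below are invented for illustration; the paper's actual SDP model may differ.

```python
import numpy as np

rates = np.linspace(0.0, 1.0, 101)           # discretized relay-rate support (assumed)
probs = np.full(rates.size, 1 / rates.size)  # uniform pmf (assumed)
K, cost = 10, 0.02                           # relays available, probing cost (assumed)

V = rates @ probs          # stage K: must accept the last relay, so V_K = E[X]
thresholds = []
for _ in range(K - 1):
    t = V - cost                        # accept the current relay iff its rate >= t
    thresholds.append(t)
    V = np.where(rates >= t, rates, t) @ probs   # V_k = E[max(X, V_{k+1} - c)]
thresholds = thresholds[::-1]           # thresholds[0] applies to the first probe
```

As expected for an optimal-stopping rule, the thresholds decrease as fewer relays remain: the strategy becomes less picky toward the end of the sequence.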

  19. DNA barcoding of selected UAE medicinal plant species: a comparative assessment of herbarium and fresh samples.

    Science.gov (United States)

    Enan, Mohamed Rizk; Palakkott, Abdul Rasheed; Ksiksi, Taoufik Saleh

    2017-01-01

    It is commonly difficult to extract and amplify DNA from herbarium samples, as they are old and preserved using different compounds. In addition, such samples are subject to the accumulation of intrinsically produced plant substances over long periods (up to hundreds of years). DNA extraction from desert flora may pose added difficulties, as many species contain high levels of secondary metabolites. Herbarium samples from the Biology Department (UAE University) plant collection and fresh plant samples, collected from around Al-Ain (UAE), were used in this study. The three barcode loci for the coding genes matK, rbcL and rpoC1 were amplified. Our results showed that T. terrestris , H. robustum , T. pentandrus and Z. qatarense were amplified using all three primers for both fresh and herbarium samples. Both fresh and herbarium samples of C. comosum , however, were not amplified at all using the three primers. Herbarium samples from A. javanica , C. imbricatum , T. aucherana and Z. simplex were not amplified with any of the three primers. For fresh samples, 90, 90 and 80% of the samples were amplified using matK, rbcL and rpoC1, respectively. In short, fresh samples were amplified significantly better than those from herbarium sources using the three primers. Both fresh and herbarium samples from one species ( C. comosum ), however, were not successfully amplified. It is also concluded that the rbcL regions showed real potential to assign the UAE species under investigation to the appropriate family and genus.

  20. Estimation of plant sampling uncertainty: an example based on chemical analysis of moss samples.

    Science.gov (United States)

    Dołęgowska, Sabina

    2016-11-01

    In order to estimate the level of uncertainty arising from sampling, 54 samples (primary and duplicate) of the moss species Pleurozium schreberi (Brid.) Mitt. were collected within three forested areas (Wierna Rzeka, Piaski, Posłowice Range) in the Holy Cross Mountains (south-central Poland). During the fieldwork, each primary sample, composed of 8 to 10 increments (subsamples), was taken over an area of 10 m², whereas duplicate samples were collected in the same way at a distance of 1-2 m. Subsequently, all samples were triple-rinsed with deionized water, dried, milled, and digested (8 mL HNO₃ (1:1) + 1 mL 30% H₂O₂) in a closed microwave system Multiwave 3000. The prepared solutions were analyzed twice for Cu, Fe, Mn, and Zn using FAAS and GFAAS techniques. All datasets were checked for normality, and for the normally distributed elements (Cu from Piaski; Zn from Posłowice; Fe and Zn from Wierna Rzeka) the sampling uncertainty was computed with (i) classical ANOVA, (ii) classical RANOVA, (iii) modified RANOVA, and (iv) range statistics. For the remaining elements, the sampling uncertainty was calculated with traditional and/or modified RANOVA (if the amount of outliers did not exceed 10%) or classical ANOVA after Box-Cox transformation (if the amount of outliers exceeded 10%). The highest concentrations of all elements were found in moss samples from Piaski, whereas the sampling uncertainty calculated with the different statistical methods ranged from 4.1 to 22%.
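The duplicate-design estimate of sampling uncertainty can be sketched with classical ANOVA on primary/duplicate pairs: with one measurement per sample, the within-location mean square estimates the combined sampling and analytical variance. The concentrations below are invented for illustration and are not the paper's data.

```python
import numpy as np

# Primary/duplicate pairs taken 1-2 m apart at five locations
# (made-up concentrations, e.g. Zn in mg/kg).
primary   = np.array([10.2, 11.5,  9.8, 12.0, 10.9])
duplicate = np.array([10.6, 11.1, 10.3, 11.4, 11.2])

diffs = primary - duplicate
# within-pair mean square: MS_within = sum(d_i^2) / (2 * n_pairs)
s2_sampling = (diffs ** 2).sum() / (2 * diffs.size)
s_sampling = np.sqrt(s2_sampling)
mean_conc = np.concatenate([primary, duplicate]).mean()
# expanded relative uncertainty (coverage factor k = 2), in percent
u_rel = 100 * 2 * s_sampling / mean_conc
```

The robust (RANOVA) variants used in the paper differ in how they downweight outlying pairs, which is why the study compares several estimators rather than relying on classical ANOVA alone.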

  1. Alumina physically loaded by thiosemicarbazide for selective preconcentration of mercury(II) ion from natural water samples

    International Nuclear Information System (INIS)

    Ahmed, Salwa A.

    2008-01-01

    The multifunctional ligand thiosemicarbazide was physically loaded on neutral alumina. The produced alumina-modified solid-phase (SP) extractor, named alumina-modified thiosemicarbazide (AM-TSC), showed high thermal and medium stability. The surface coverage of this new phase, determined by the thermal desorption method, was 0.437 ± 0.1 mmol g⁻¹. The selectivity of the AM-TSC phase towards the uptake of nine different metal ions was checked using a simple, fast and direct batch equilibration technique. AM-TSC was found to have the highest capacity for the selective extraction of Hg(II) from aqueous solutions over the whole pH range used (1.0-7.0), compared to the other eight tested metal ions. Hg(II) uptake was 1.82 mmol g⁻¹ (distribution coefficient log K_d = 5.658) at pH 1.0 or 2.0 and 1.78, 1.73, 1.48, 1.28 and 1.28 mmol g⁻¹ (log K_d = 4.607, 4.265, 3.634, 3.372 and 3.372) at pH 3.0, 4.0, 5.0, 6.0 and 7.0, respectively. On the other hand, the metal ions Ca(II), Fe(III), Co(II), Ni(II), Cu(II), Zn(II), Cd(II) and Pb(II) showed low uptake values in the range 0.009-0.720 mmol g⁻¹ (log K_d < 3.0) at their optimum pH values. A mechanism was suggested to explain the unique uptake of Hg(II) ions based on their binding as the neutral and chloroanionic species that predominate at pH values ≤ 3.0 in a medium rich in chloride ions. Application of the new phase to the preconcentration of ultratrace amounts of Hg(II) ions spiked into natural water samples: doubly distilled water (DDW), drinking tap water (DTW) and Nile river water (NRW), using cold vapor atomic absorption spectroscopy (CV-AAS), was studied. The high recovery values obtained using AM-TSC (98.5 ± 0.5, 98.0 ± 0.5 and 103.0 ± 1.0) for DDW, DTW and NRW samples, respectively, based on an excellent enrichment factor of 1000, along with good precision (R.S.D. 0.51-0.97%, n = 3), demonstrate the accuracy and validity of the new modified alumina sorbent for preconcentrating ultratrace amounts of Hg(II) with no

  2. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
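As a concrete illustration of the simplest of the listed strategies, a minimal min-max vector selection might look like the following. The data are synthetic, and the other four traditional methods and the robust variant are not shown.

```python
import numpy as np

# Min-max vector selection for similarity-based models: retain every
# training vector that contains the minimum or maximum observed value of
# at least one signal, so the prototype set spans the operating range.
def min_max_select(X):
    """Return sorted indices of vectors holding a per-column min or max."""
    keep = set()
    for j in range(X.shape[1]):
        keep.add(int(X[:, j].argmin()))
        keep.add(int(X[:, j].argmax()))
    return sorted(keep)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))    # 500 training vectors, 4 signals
idx = min_max_select(X)          # at most 2 * 4 = 8 prototype vectors
```

Min-max alone guarantees coverage of the extremes but not of the interior of the operating region, which is why it is often combined with vector ordering or clustering, as the paper describes.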

  3. Geochemistry of Selected Coal Samples from Sumatra, Kalimantan, Sulawesi, and Papua, Indonesia

    Science.gov (United States)

    Belkin, Harvey E.; Tewalt, Susan J.

    2007-01-01

    and ash (generally Indonesia although, at present, there are concerns about the strong need for a major revision in mining laws and foreign investment policies (Wahju, 2004; United States Embassy Jakarta, 2004). The World Coal Quality Inventory (WoCQI) program of the U.S. Geological Survey (Tewalt and others, 2005) is a cooperative project with about 50 countries (out of 70 coal-producing countries worldwide). The WoCQI initiative has collected and published extensive coal quality data from the world's largest coal producers and consumers. The important aspects of the WoCQI program are: (1) samples from active mines are collected, (2) the data have a high degree of internal consistency with a broad array of coal quality parameters, and (3) the data are linked to GIS and available through the World Wide Web. The coal quality parameters include proximate and ultimate analysis, sulfur forms, major-, minor-, and trace-element concentrations and various technological tests. This report contains geochemical data from a selected group of Indonesian coal samples from a range of coal types, localities, and ages collected for the WoCQI program.

  4. A Principal Component Analysis of Galaxy Properties from a Large, Gas-Selected Sample

    Directory of Open Access Journals (Sweden)

    Yu-Yen Chang

    2012-01-01

    concluded that this is in conflict with the CDM model. Considering the importance of the issue, we reinvestigate the problem using principal component analysis on a fivefold larger sample and additional near-infrared data. We use databases from the Arecibo Legacy Fast Arecibo L-band Feed Array Survey for the gas properties, the Sloan Digital Sky Survey for the optical properties, and the Two Micron All Sky Survey for the near-infrared properties. We confirm that the parameters are indeed correlated, with a single physical parameter explaining 83% of the variations. When the color (g-i) is included, the first component still dominates, but a second principal component develops. In addition, the near-infrared color (i-J) shows an obvious second principal component that might provide evidence of complex old star formation. Based on our data, we suggest that it is premature to pronounce the failure of the CDM model, which motivates more theoretical work.
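The "single physical parameter explains 83% of the variations" statement corresponds to the first principal component's share of the total variance. A minimal sketch on synthetic one-factor data (not the survey catalogs) shows how that share is computed:

```python
import numpy as np

# Six observed "galaxy properties" driven by one latent parameter plus noise;
# the data are synthetic and built so a single factor dominates.
rng = np.random.default_rng(2)
latent = rng.normal(size=(300, 1))          # one underlying physical parameter
loadings = rng.normal(size=(1, 6))          # how each property tracks it
X = latent @ loadings + 0.3 * rng.normal(size=(300, 6))

Xc = X - X.mean(axis=0)
Xs = Xc / Xc.std(axis=0)                    # standardize: PCA on the correlation matrix
_, s, _ = np.linalg.svd(Xs, full_matrices=False)
explained = s**2 / (s**2).sum()             # variance fraction per principal component
```

In the paper's analysis, `explained[0]` is the quantity reported as 83%, and the emergence of a meaningful `explained[1]` when colors are added is what signals a second parameter.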

  5. Attribute based selection of thermoplastic resin for vacuum infusion process

    DEFF Research Database (Denmark)

    Prabhakaran, R.T. Durai; Lystrup, Aage; Løgstrup Andersen, Tom

    2011-01-01

    The composite industry looks toward new material systems (resins) based on thermoplastic polymers for the vacuum infusion process, similar to the infusion process using thermosetting polymers. A large number of thermoplastics are available in the market with a variety of properties suitable...... for different engineering applications, and few of those are available in a not-yet-polymerised form suitable for resin infusion. The proper selection of a new resin system among these thermoplastic polymers is a concern for manufacturers in the current scenario, and a special mathematical tool would...... be beneficial. In this paper, the authors introduce a new decision-making tool for resin selection based on significant attributes. This article provides a broad overview of suitable thermoplastic material systems for the vacuum infusion process available in today's market. An illustrative example—resin selection...

  6. An opinion formation based binary optimization approach for feature selection

    Science.gov (United States)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics human-human interaction mechanisms based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact through an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets show that the proposed algorithm outperforms the others.

  7. Dose selection based on physiologically based pharmacokinetic (PBPK) approaches.

    Science.gov (United States)

    Jones, Hannah M; Mayawala, Kapil; Poulin, Patrick

    2013-04-01

    Physiologically based pharmacokinetic (PBPK) models are built using differential equations to describe the physiology/anatomy of different biological systems. Readily available in vitro and in vivo preclinical data can be incorporated into these models to not only estimate pharmacokinetic (PK) parameters and plasma concentration-time profiles, but also to gain mechanistic insight into compound properties. They provide a mechanistic framework to understand and extrapolate PK and dose across in vitro and in vivo systems and across different species, populations and disease states. Using small molecule and large molecule examples from the literature and our own company, we have shown how PBPK techniques can be utilised for human PK and dose prediction. Such approaches have the potential to increase efficiency, reduce the need for animal studies, replace clinical trials and increase PK understanding. Given the mechanistic nature of these models, the future use of PBPK modelling in drug discovery and development is promising, however some limitations need to be addressed to realise its application and utility more broadly.

  8. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013
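The cutoff rule itself is simple to sketch: sort the candidate p-values, find the largest index i with p(i) ≤ iq/n, and keep that many peaks. How peak volumes or intensities are converted into p-values is method-specific; the standard normal null below is assumed purely for illustration.

```python
import numpy as np
from math import erfc, sqrt

def bh_select(pvals, q=0.05):
    """Benjamini-Hochberg: return how many hypotheses to accept at FDR level q."""
    p = np.sort(np.asarray(pvals))
    n = p.size
    crit = q * (np.arange(1, n + 1) / n)     # B-H critical line i*q/n
    below = np.nonzero(p <= crit)[0]
    return 0 if below.size == 0 else int(below[-1]) + 1

# candidate peaks: 20 true signals (high scores) among 200 noise peaks
rng = np.random.default_rng(3)
scores = np.concatenate([rng.normal(5, 1, 20), rng.normal(0, 1, 200)])
# upper-tail p-value of each score under the assumed N(0, 1) null
pvals = [0.5 * erfc(s / sqrt(2)) for s in scores]
k = bh_select(pvals, q=0.05)    # number of peaks to keep
```

Replacing a fixed top-k rule with this data-driven cutoff is what lets the method adapt the number of selected peaks to each spectrum.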

  9. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed

    2013-01-07

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013

  10. Automatic learning-based beam angle selection for thoracic IMRT

    International Nuclear Information System (INIS)

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G.; Jaffray, David A.; Levinshtein, Alex; Hope, Andrew J.; Lindsay, Patricia; Pekar, Vladimir

    2015-01-01

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume

  11. Classification and Target Group Selection Based Upon Frequent Patterns

    NARCIS (Netherlands)

    W.H.L.M. Pijls (Wim); R. Potharst (Rob)

    2000-01-01

    In this technical report, two new algorithms based upon frequent patterns are proposed. One algorithm is a classification method; the other is an algorithm for target group selection. In both algorithms, first of all, the collection of frequent patterns in the training set is

  12. Robot soccer action selection based on Q learning

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper researches robot soccer action selection based on Q-learning. The robots learn to activate particular behaviors given their current situation and reward signal. We adopt neural networks to implement Q-learning, for their generalization properties and limited computer memory requirements
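A tabular sketch of the Q-learning action-selection loop is shown below. The paper approximates the Q-function with a neural network for a soccer domain; here a toy 1-D field with a lookup table illustrates the same epsilon-greedy selection and update rule.

```python
import numpy as np

n_states, n_actions = 8, 2      # positions 0..7; actions: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = np.random.default_rng(4)
Q = np.zeros((n_states, n_actions))

for _ in range(500):            # training episodes; reward only at the goal cell
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection (ties broken at random)
        if rng.random() < eps or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)       # learned greedy policy: move right everywhere
```

Replacing the table `Q` with a neural network that maps (situation, behavior) to a value gives the function-approximation variant the paper uses when the state space is too large to enumerate.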

  13. Evaporation rate-based selection of supramolecular chirality.

    Science.gov (United States)

    Hattori, Shingo; Vandendriessche, Stefaan; Koeckelberghs, Guy; Verbiest, Thierry; Ishii, Kazuyuki

    2017-03-09

    We demonstrate the evaporation rate-based selection of supramolecular chirality for the first time. P-type aggregates prepared by fast evaporation and M-type aggregates prepared by slow evaporation are kinetic and thermodynamic products under dynamic reaction conditions, respectively. These findings provide novel solution reaction chemistry under dynamic reaction conditions.

  14. Validity of selected cardiovascular field-based test among Malaysian ...

    African Journals Online (AJOL)

    Based on the emerging obesity problem among Malaysians, this research was formulated to validate published tests among healthy female adults. Selected tests, namely the 20-meter multi-stage shuttle run, the 2.4 km run test, the 1-mile walk test and the Harvard step test, were correlated with a laboratory test (Bruce protocol) to find the criterion validity ...

  15. Solar Thermal AIR Collector Based on New Type Selective Coating

    Directory of Open Access Journals (Sweden)

    Musiy, R.Y.

    2014-01-01

    A solar thermal air collector based on a selective coating with the best optical performance, which operates on solar power on the principle of simultaneous ventilation and heating of facilities, is designed. It can be used for vacation homes, museums, wooden churches, warehouses, garages, houses, greenhouses, etc.

  16. Exposure level from selected base station tower around Kuala Nerus

    African Journals Online (AJOL)

    Health risks due to RF radiation exposure from base station towers (BSTs) have been debated for years, leading to public concern. Thus, this preliminary study aims to measure, evaluate and analyze the exposure levels at three selected BSTs around Kuala Nerus. The measurement of exposure level in terms of voltage ...

  17. Selection of representative calibration sample sets for near-infrared reflectance spectroscopy to predict nitrogen concentration in grasses

    DEFF Research Database (Denmark)

    Shetty, Nisha; Rinnan, Åsmund; Gislum, René

    2012-01-01

    ) algorithm were used and compared. Both the Puchwein and CADEX methods provide a calibration set equally distributed in space, and both methods require a minimum of prior knowledge. The samples were also selected randomly using complete random, cultivar random (year fixed), year random (cultivar fixed......) and interaction (cultivar × year fixed) random procedures to see the influence of different factors on sample selection. Puchwein's method performed best, with the lowest RMSEP, followed by CADEX, interaction random, year random, cultivar random and complete random. Out of 118 samples of the complete calibration set...... effectively enhance the cost-effectiveness of NIR spectral analysis by reducing the number of analyzed samples in the calibration set by more than 80%, which substantially reduces the effort of laboratory analyses with no significant loss in prediction accuracy....
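The CADEX (Kennard-Stone) selection referred to above can be sketched as a max-min farthest-point procedure: start from the two most distant samples, then repeatedly add the sample farthest from the already-selected set. The data dimensions loosely mimic the 118-sample setting but are synthetic, and Puchwein's variant is not shown.

```python
import numpy as np

def kennard_stone(X, k):
    """Return indices of k samples chosen by the CADEX max-min rule."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    i, j = np.unravel_index(D.argmax(), D.shape)
    selected = [int(i), int(j)]                 # seed with the two extremes
    while len(selected) < k:
        remaining = [r for r in range(len(X)) if r not in selected]
        # distance of each remaining sample to its nearest selected sample
        d_min = D[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(d_min.argmax())])
    return selected

rng = np.random.default_rng(5)
X = rng.normal(size=(118, 10))    # e.g. 118 spectra scored on 10 components
cal = kennard_stone(X, 20)        # a calibration subset spread over the space
```

Because each new sample maximizes its distance to the current set, the resulting calibration set is evenly spread over spectral space, which is the property the paper credits for maintaining prediction accuracy with far fewer analyzed samples.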

  18. Validation of the ANSR(®) Listeria monocytogenes Method for Detection of Listeria monocytogenes in Selected Food and Environmental Samples.

    Science.gov (United States)

    Caballero, Oscar; Alles, Susan; Le, Quynh-Nhi; Gray, R Lucas; Hosking, Edan; Pinkava, Lisa; Norton, Paul; Tolan, Jerry; Mozola, Mark; Rice, Jennifer; Chen, Yi; Ryser, Elliot; Odumeru, Joseph

    2016-01-01

    Work was conducted to validate performance of the ANSR® Listeria monocytogenes method in selected food and environmental matrixes. This DNA-based assay involves amplification of nucleic acid via an isothermal reaction based on nicking enzyme amplification technology. Following single-step sample enrichment for 16-24 h for most matrixes, the assay is completed in 40 min using only simple instrumentation. When 50 distinct strains of L. monocytogenes were tested for inclusivity, 48 produced positive results, the exceptions being two strains confirmed by PCR to lack the assay target gene. Forty-seven nontarget strains (30 species), including multiple non-monocytogenes Listeria species as well as non-Listeria, Gram-positive bacteria, were tested, and all generated negative ANSR assay results. Performance of the ANSR method was compared with that of the U.S. Department of Agriculture, Food Safety and Inspection Service Microbiology Laboratory Guidebook reference culture procedure for detection of L. monocytogenes in hot dogs, pasteurized liquid egg, and sponge samples taken from an inoculated stainless steel surface. In addition, ANSR performance was measured against the U.S. Food and Drug Administration Bacteriological Analytical Manual reference method for detection of L. monocytogenes in Mexican-style cheese, cantaloupe, sprout irrigation water, and guacamole. With the single exception of pasteurized liquid egg at 16 h, ANSR method performance as quantified by the number of positives obtained was not statistically different from that of the reference methods. Robustness trials demonstrated that deliberate introduction of small deviations to the normal assay parameters did not affect ANSR method performance. Results of accelerated stability testing conducted using two manufactured lots of reagents predict stability at the specified storage temperature of 4°C of more than 1 year.

  19. Tomographic imaging of 12 fracture samples selected from Olkiluoto deep drillholes

    International Nuclear Information System (INIS)

    Kuva, J.; Voutilainen, M.; Timonen, J.; Aaltonen, I.

    2010-06-01

    Rock samples from Olkiluoto were imaged with X-ray tomography to analyze the distributions of mineral components and the alteration of rock around different fracture types. Twelve samples containing three types of fractures were analyzed, and each sample was scanned at two different resolutions. Three-dimensional reconstructions of the samples with four or five distinct mineral components displayed changes in the mineral distribution around previously water-conducting fractures, which extended to a depth of several millimeters from the fracture surfaces. In addition, the structure of fracture-filling minerals is depicted. (orig.)

  20. Polymer platforms for selective detection of cocaine in street samples adulterated with levamisole

    OpenAIRE

    Florea, Anca; Cowen, Todd; Piletsky, Sergey; De Wael, Karolien

    2018-01-01

    Accurate drug detection is of utmost importance for fighting against drug abuse. With a high number of cutting agents and adulterants being added to cut or mask drugs in street powders, the number of false results is increasing. We demonstrate for the first time the usefulness of employing polymers readily synthesized by electrodeposition to selectively detect cocaine in the presence of the commonly used adulterant levamisole. The polymers were selected by computational modelling to ...

  1. Selective solid-phase extraction of Ni(II) by an ion-imprinted polymer from water samples

    International Nuclear Information System (INIS)

    Saraji, Mohammad; Yousefi, Hamideh

    2009-01-01

    A new ion-imprinted polymer (IIP) material was synthesized by copolymerization of 4-vinylpyridine as monomer, ethylene glycol dimethacrylate as crosslinking agent and 2,2'-azobisisobutyronitrile as initiator in the presence of a Ni-dithizone complex. The IIP was used as sorbent in a solid-phase extraction column. The effects of sampling volume, elution conditions, sample pH and sample flow rate on the extraction of Ni ions from water samples were studied. The maximum adsorption capacity and the relative selectivity coefficients of the imprinted polymer for Ni(II)/Co(II), Ni(II)/Cu(II) and Ni(II)/Cd(II) were calculated. Compared with non-imprinted polymer particles, the IIP had higher selectivity for Ni(II). The relative selectivity factor (α_r) values of Ni(II)/Co(II), Ni(II)/Cu(II) and Ni(II)/Cd(II) were 21.6, 54.3, and 22.7, respectively, all much greater than 1. The relative standard deviation of five replicate determinations of Ni(II) was 3.4%. The detection limit for 150 mL of sample was 1.6 μg L⁻¹ using flame atomic absorption spectrometry. The developed method was successfully applied to the determination of trace nickel in water samples with satisfactory results.

  2. Salicylimine-Based Colorimetric and Fluorescent Chemosensor for Selective Detection of Cyanide in Aqueous Buffer

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Jin Young; Hwang, In Hong; Kim, Hyun; Song, Eun Joo; Kim, Kyung Beom; Kim, Cheal [Seoul National Univ., Seoul (Korea, Republic of)]

    2013-07-15

    A simple colorimetric and fluorescent anion sensor 1 based on salicylimine showed high selectivity and sensitivity for the detection of cyanide in aqueous solution. The receptor 1 showed high selectivity toward CN⁻ ions in a 1:1 stoichiometric manner, which induces a fast color change from colorless to orange and a dramatic enhancement in fluorescence intensity selectively for cyanide anions over other anions. Such selectivity resulted from the nucleophilic addition of CN⁻ to the carbon atom of an electron-deficient imine group. The sensitivity of the fluorescence-based assay (0.06 μM) is below the 1.9 μM suggested by the World Health Organization (WHO) as the maximum allowable cyanide concentration in drinking water, making it a practical system for the monitoring of CN⁻ concentrations in aqueous samples.

  3. The Toggle Local Planner for sampling-based motion planning

    KAUST Repository

    Denny, Jory

    2012-05-01

    Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5. An important primitive of these methods is the local planner, which is used to validate simple paths between two configurations. The most common is the straight-line local planner, which interpolates along the straight line between the two configurations. In this paper, we introduce a new local planner, Toggle Local Planner (Toggle LP), which extends local planning to a two-dimensional subspace of the overall planning space. If no path exists between the two configurations in the subspace, then Toggle LP is guaranteed to correctly return false. Intuitively, more connections can be found by Toggle LP than by the straight-line planner, resulting in better-connected roadmaps. As shown in our results, this is the case, and the extra cost of Toggle LP, in terms of time or storage, is minimal. Finally, our experimental analysis of the planner shows the benefit for a wide array of robots, with DOF as high as 70. © 2012 IEEE.
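
    The straight-line local planner that Toggle LP generalizes can be sketched in a few lines: interpolate between two configurations at a fixed resolution and reject the edge at the first invalid intermediate configuration. A minimal sketch; the `is_free` validity test and the step size are illustrative assumptions, not from the paper:

```python
import numpy as np

def straight_line_local_planner(q1, q2, is_valid, step=0.05):
    """Validate the straight segment between configurations q1 and q2 by
    collision-checking evenly spaced intermediate configurations."""
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    n_steps = max(1, int(np.ceil(np.linalg.norm(q2 - q1) / step)))
    for t in np.linspace(0.0, 1.0, n_steps + 1):
        if not is_valid((1.0 - t) * q1 + t * q2):
            return False   # an intermediate configuration is invalid: reject edge
    return True

# toy validity test: configurations must stay outside a unit disc at the origin
is_free = lambda q: np.linalg.norm(q) > 1.0
print(straight_line_local_planner([-2, 0.01], [2, 0.01], is_free))  # → False
print(straight_line_local_planner([-2, 2], [2, 2], is_free))        # → True
```

    Toggle LP extends exactly this primitive: when the straight segment fails, it searches a two-dimensional subspace for an alternative connection instead of immediately rejecting the edge.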

  4. Application of In-Segment Multiple Sampling in Object-Based Classification

    Directory of Open Access Journals (Sweden)

    Nataša Đurić

    2014-12-01

    When object-based analysis is applied to very high-resolution imagery, pixels within the segments reveal large spectral inhomogeneity; their distribution can be considered complex rather than normal. When normality is violated, classification methods that rely on the assumption of normally distributed data are less successful and less accurate. Normality violations are hard to detect in small samples, and the segmentation process produces segments that vary greatly in size, so samples can be very large or very small. This paper investigates whether the complexity within a segment can be addressed using multiple random sampling of segment pixels and multiple calculations of similarity measures. In order to analyze the effect sampling has on classification results, the statistics and probability values of the non-parametric two-sample Kolmogorov-Smirnov test and the parametric Student's t-test are selected as similarity measures in the classification process. The performance of both classifiers was assessed on a WorldView-2 image for four land cover classes (roads, buildings, grass and trees) and compared to two commonly used object-based classifiers: k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). Both proposed classifiers showed a slight improvement in overall classification accuracy and produced more accurate classification maps when compared to the ground truth image.
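
    The in-segment multiple-sampling idea can be illustrated with SciPy: draw repeated random subsamples of a segment's pixels, compare each draw against a class reference sample with the two-sample Kolmogorov-Smirnov test, and use the average p-value as the similarity score. A toy sketch with synthetic one-band "spectra"; the class names, distributions and sample sizes are invented for illustration:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

def mean_similarity(segment_pixels, class_sample, n_rounds=20, n_draw=30):
    """Average two-sample KS p-value over repeated random subsamples of a segment."""
    pvals = []
    for _ in range(n_rounds):
        draw = rng.choice(segment_pixels, size=n_draw, replace=False)
        pvals.append(ks_2samp(draw, class_sample).pvalue)
    return float(np.mean(pvals))

# toy one-band "spectra": class reference samples and one segment to classify
grass = rng.normal(0.30, 0.03, 200)
roads = rng.normal(0.60, 0.05, 200)
segment = rng.normal(0.31, 0.04, 500)   # a segment that is actually grass

scores = {"grass": mean_similarity(segment, grass),
          "roads": mean_similarity(segment, roads)}
print(max(scores, key=scores.get))  # → grass
```

    The parametric variant simply swaps `ks_2samp` for `scipy.stats.ttest_ind`; averaging over many random draws is what damps the effect of within-segment inhomogeneity.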

  5. Selecting a sampling method to aid in vegetation management decisions in loblolly pine plantations

    Science.gov (United States)

    David R. Weise; Glenn R. Glover

    1993-01-01

    Objective methods to evaluate hardwood competition in young loblolly pine (Pinus taeda L.) plantations are not widely used in the southeastern United States. The ability of common sampling rules to accurately estimate hardwood rootstock attributes at low sampling intensities and across varying rootstock spatial distributions is unknown. Fixed area plot...

  6. A novel knot selection method for the error-bounded B-spline curve fitting of sampling points in the measuring process

    International Nuclear Information System (INIS)

    Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng

    2017-01-01

    The B-spline curve has been widely used in the reconstruction of measurement data. Error-bounded reconstruction of sampling points can be achieved by knot addition method (KAM) based B-spline curve fitting. In KAM, the selection pattern of the initial knot vector determines the number of knots ultimately required. This paper provides a novel initial knot selection method to condense the knot vector required for error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features, namely the chord length (arc length) and bending degree (curvature), contained in the discrete sampling points. Firstly, the sampling points are fitted into an approximate B-spline curve Gs with a dense uniform knot vector to substitute for the description of the features of the sampling points. The feature integral of Gs is built as a monotonically increasing function in analytic form. Then, the initial knots are selected according to constant increments of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and a measurement experiment are provided, and the results indicate that the proposed knot selection method reduces the number of knots ultimately needed. (paper)
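
    The knot-placement idea, choosing initial knots at constant increments of a combined chord-length/curvature feature integral, can be sketched with NumPy and SciPy's least-squares spline. The test curve, the curvature weighting and the knot count below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# sampling points: a smooth curve with one sharp local feature
x = np.linspace(0.0, 1.0, 400)
y = np.sin(2 * np.pi * x) + 0.3 * np.exp(-200.0 * (x - 0.6) ** 2)

# feature integral: cumulative chord length plus a bending (curvature) term
dy = np.gradient(y, x)
d2y = np.gradient(dy, x)
feature = np.cumsum(np.hypot(np.gradient(x), np.gradient(y)) + 0.005 * np.abs(d2y))
feature /= feature[-1]   # normalize to a monotone map onto [0, 1]

# place interior knots at constant increments of the feature integral
n_knots = 12
targets = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]
knots = np.interp(targets, feature, x)   # more knots where the curve bends more

spl = LSQUnivariateSpline(x, y, knots, k=3)
rms = float(np.sqrt(np.mean((spl(x) - y) ** 2)))
print(rms < 0.1)
```

    Because the feature integral rises fastest where curvature is high, knots automatically concentrate near the sharp bump, which is the same intuition as the paper's feature-driven initial knot selection.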

  7. Transmit antenna selection based on shadowing side information

    KAUST Repository

    Yilmaz, Ferkan

    2011-05-01

    In this paper, we propose a new transmit antenna selection scheme based on shadowing side information. In the proposed scheme, single transmit antenna which has the highest shadowing coefficient is selected. By the proposed technique, usage of the feedback channel and channel estimation complexity at the receiver can be reduced. We consider independent but not identically distributed Generalized-K composite fading model, which is a general composite fading & shadowing channel model for wireless environments. Exact closed-form outage probability, moment generating function and symbol error probability expressions are derived. In addition, theoretical performance results are validated by Monte Carlo simulations. © 2011 IEEE.
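
    The scheme can be simulated directly: model each transmit branch's power gain as a Generalized-K variate (the product of two Gamma variates, one for multipath fading and one for shadowing), select the antenna whose shadowing coefficient is largest, and compare the empirical outage probability against a fixed-antenna baseline. The shape parameters and threshold are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_tx = 100_000, 4
m_fading, m_shadow = 2.0, 1.5   # assumed fading/shadowing shape parameters

# Generalized-K power gain: product of two unit-mean Gamma variates per branch
shadow = rng.gamma(m_shadow, 1.0 / m_shadow, size=(n_trials, n_tx))
fading = rng.gamma(m_fading, 1.0 / m_fading, size=(n_trials, n_tx))
gain = shadow * fading

# proposed scheme: pick the branch with the largest shadowing coefficient only
sel = np.argmax(shadow, axis=1)
snr_sel = gain[np.arange(n_trials), sel]
snr_fixed = gain[:, 0]                  # no-selection baseline

gamma_th = 0.5                          # normalized outage threshold (assumed)
out_sel = float(np.mean(snr_sel < gamma_th))
out_fixed = float(np.mean(snr_fixed < gamma_th))
print(out_sel, out_fixed)
```

    Selecting on the slowly varying shadowing term alone needs far less feedback than full channel-state selection, yet still shifts the composite gain distribution upward and lowers the outage probability relative to the fixed branch.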

  9. Predicting Drug-Target Interactions Based on Small Positive Samples.

    Science.gov (United States)

    Hu, Pengwei; Chan, Keith C C; Hu, Yanxing

    2018-01-01

    evaluation of ODT shows that it can be potentially useful. It confirms that predicting potential or missing DTIs based on the known interactions is a promising direction to solve problems related to the use of uncertain and unreliable negative samples and those related to the great demand in computational resources.

  10. A Simple Analytical Method Using HPLC with Fluorescence Detection to Determine Selected Polycyclic Aromatic Compounds in Filter Samples

    International Nuclear Information System (INIS)

    Garcia, S.; Perez, R. M.

    2014-01-01

    A study on the comparison and evaluation of a miniaturized extraction method for the determination of selected PACs in filter samples is presented. The main objective was the optimization and development of simple, rapid and low-cost methods, minimizing the volume of extraction solvent used. The work also includes a study of the intermediate precision. (Author)

  11. The size selectivity of the main body of a sampling pelagic pair trawl in freshwater reservoirs during the night

    Czech Academy of Sciences Publication Activity Database

    Říha, Milan; Jůza, Tomáš; Prchalová, Marie; Mrkvička, Tomáš; Čech, Martin; Draštík, Vladislav; Muška, Milan; Kratochvíl, Michal; Peterka, Jiří; Tušer, Michal; Vašek, Mojmír; Kubečka, Jan

    2012-01-01

    Roč. 127, September (2012), s. 56-60 ISSN 0165-7836 R&D Projects: GA MZe(CZ) QH81046 Institutional support: RVO:60077344 Keywords : quantitative sampling * gear selectivity * trawl * reservoirs Subject RIV: GL - Fishing Impact factor: 1.695, year: 2012

  12. Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

    Empirical likelihood is a very popular method that has been widely used in the fields of artificial intelligence (AI) and data mining, as tablets, mobile applications and social media dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions of which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained with all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite-sample properties of the estimators. A real-life example is also carried out to illustrate the new methodology.

  13. Geochemical and mineralogical study of selected weathered samples from Olkiluoto site

    International Nuclear Information System (INIS)

    Lindberg, A.

    2009-02-01

    Optical microscopy, chemical analyses and the X-ray diffraction method were used to study the influence of weathering in 11 drill core samples from shallow depths (< 25 m). The samples, 4 to 22 cm in length, were drilled from the Olkiluoto study site, Eurajoki, and they represent the common rock types of the local bedrock: mica gneiss and tonalitic and granodioritic gneiss. Two of the samples were macroscopically unweathered and 9 of them were remarkably altered. The alteration was shown by porosity, the abundance of chlorite instead of biotite, and pink, turbid feldspars. Many samples also contained red-brown hematite and fractures, some of them coated with secondary minerals, even clay. Microscopically the most visible feature of weathering was the total alteration of plagioclase and cordierite to sericite. In many samples biotite was also extensively altered to chlorite and opaque minerals. Microfractures were common and were filled by hematite, kaolinite and fine-grained muscovite (sericite). Hematite was, in some cases, also largely replacing the weathered minerals, feldspars and cordierite. Chemical alteration was not evident, because the alteration of the main minerals has produced secondary minerals with almost the same chemical composition, without any appreciable depletion or enrichment of particular elements. X-ray diffraction determination of the samples showed that plagioclase was often replaced by mica and biotite by chlorite. In some cases the samples contained products of chemical weathering, kaolinite and smectite. (orig.)

  14. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
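
    The Monte Carlo uncertainty-analysis step described above can be sketched as follows: draw process inputs (laser power, scan speed) from assumed distributions, push them through a stand-in surrogate for the calibrated finite-volume model, and report an output range plus the fraction of runs inside specification as a reliability estimate. All numbers, and the surrogate relation itself, are illustrative placeholders for the paper's validated model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# assumed input uncertainties on two process parameters (illustrative values)
power = rng.normal(200.0, 5.0, n)     # laser power, W
speed = rng.normal(1000.0, 30.0, n)   # scan speed, mm/s

# hypothetical surrogate: melt-pool depth scales with line energy P/v
depth = 300.0 * power / speed         # depth in µm (stand-in relation)

lo, hi = np.percentile(depth, [2.5, 97.5])
reliability = float(np.mean((depth > 55.0) & (depth < 65.0)))  # fraction in spec
print(round(lo, 1), round(hi, 1), round(reliability, 3))
```

    In the paper's methodology the surrogate line is replaced by the calibrated 3D finite-volume model; the Monte Carlo machinery around it is unchanged.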

  15. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.; Wheeler, M. F.; Hoteit, Ibrahim

    2013-01-01

    Calibration of subsurface flow models is an essential step for managing ground water aquifers, designing of contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known

  16. Woody species diversity in forest plantations in a mountainous region of Beijing, China: effects of sampling scale and species selection.

    Directory of Open Access Journals (Sweden)

    Yuxin Zhang

    The role of forest plantations in biodiversity conservation has gained more attention in recent years. However, most work on evaluating the diversity of forest plantations focuses on only one spatial scale; thus, we examined the effects of sampling scale on diversity in forest plantations. We designed a hierarchical sampling strategy to collect data on woody species diversity in planted pine (Pinus tabuliformis Carr.), planted larch (Larix principis-rupprechtii Mayr.), and natural secondary deciduous broadleaf forests in a mountainous region of Beijing, China. Additive diversity partition analysis showed that, compared to natural forests, the planted pine forests had a different woody species diversity partitioning pattern at multiple scales (except the Simpson diversity in the regeneration layer), while the larch plantations did not show multi-scale diversity partitioning patterns obviously different from those in the natural secondary broadleaf forest. Compared to the natural secondary broadleaf forests, the effects of planted pine forests on woody species diversity depend on the sampling scale and layers selected for analysis. Diversity in the planted larch forest, however, was not significantly different from that in the natural forest for all diversity components at all sampling levels. Our work demonstrated that the species selected for afforestation and the sampling scales selected for data analysis alter conclusions on the levels of diversity supported by plantations. We suggest that a wide range of scales should be considered in evaluating the role of forest plantations in biodiversity conservation.

  17. Improved targeted immunization strategies based on two rounds of selection

    Science.gov (United States)

    Xia, Ling-Ling; Song, Yu-Rong; Li, Chan-Chan; Jiang, Guo-Ping

    2018-04-01

    In high degree targeted immunization with a limited number of vaccine doses, when more than one node with the same degree meets the requirement of high degree centrality, how can we choose a certain number of nodes from those candidates so that the number of immunized nodes does not exceed the limit? In this paper, we introduce a new idea, derived from the selection process of a second-round exam, to solve this problem, and we propose three improved targeted immunization strategies. In these proposed strategies, the immunized nodes are selected through two rounds of selection: we enlarge the quota of the first-round selection according to the degree centrality criterion, and then consider another characteristic parameter of each node, such as its clustering coefficient, betweenness or closeness, to help choose the targeted nodes in the second-round selection. To validate the effectiveness of the proposed strategies, we compare them with the degree immunizations, including high degree targeted and high degree adaptive immunization, using two metrics: the size of the largest connected component of the immunized network and the number of infected nodes. Simulation results demonstrate that the proposed strategies based on two rounds of selection are effective for heterogeneous networks and that their immunization effects are better than those of the degree immunizations.
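
    The two-round selection can be sketched with NetworkX: the first round shortlists more high-degree nodes than the vaccination budget allows, and the second round ranks the shortlist by a second centrality (betweenness here; clustering coefficient or closeness would slot in the same way). The graph, budget and pool factor are illustrative choices, not the paper's settings:

```python
import networkx as nx

def two_round_targets(G, budget, pool_factor=2):
    """Round 1: shortlist high-degree nodes (an enlarged quota); round 2: rank
    the shortlist by a second parameter (betweenness here) and keep `budget`."""
    pool = sorted(G.nodes, key=G.degree, reverse=True)[:budget * pool_factor]
    bc = nx.betweenness_centrality(G)
    pool.sort(key=lambda v: (G.degree(v), bc[v]), reverse=True)
    return pool[:budget]

# heterogeneous (scale-free) test network; sizes are illustrative
G = nx.barabasi_albert_graph(300, 3, seed=7)
targets = two_round_targets(G, budget=10)

# crude effect metric: size of the largest component after immunization
G2 = G.copy()
G2.remove_nodes_from(targets)
giant_frac = len(max(nx.connected_components(G2), key=len)) / G.number_of_nodes()
print(len(targets), round(giant_frac, 3))
```

    Sorting on the tuple `(degree, betweenness)` is what implements the tie-break: degree still dominates, and the second-round parameter only decides among equal-degree candidates.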

  18. Determination of chloride in MOX samples using chloride ion selective electrode

    Energy Technology Data Exchange (ETDEWEB)

    Govindan, R; Das, D K; Mallik, G K; Sumathi, A; Patil, Sangeeta; Raul, Seema; Bhargava, V K; Kamath, H S [Bhabha Atomic Research Centre, Tarapur (India). Advanced Fuel Fabrication Facility

    1997-09-01

    The chloride present in the MOX fuel is separated from the matrix by pyrohydrolysis at a temperature of 950 ± 50 °C and is then analyzed with a chloride ion selective electrode (Cl-ISE). The range covered is 0.4-4 ppm with a precision of better than ±5% RSD. (author). 4 refs., 1 tab.

  19. Sample similarity analysis of angles of repose based on experimental results for DEM calibration

    Directory of Open Access Journals (Sweden)

    Tan Yuan

    2017-01-01

    As a fundamental material property, the particle-particle friction coefficient is usually calculated from the angle of repose, which can be obtained experimentally. In the present study, the bottomless cylinder test was carried out to investigate this friction coefficient for a biomass material, willow chips. Because of their irregular shape and varying particle size distribution, calculation of the angle becomes less straightforward and less decisive. In most previous studies only one section of the uneven slope is chosen, although standard methods for defining a representative section are scarce. Hence, we present an efficient and reliable method based on 3D scanning, which was used to digitize the surface of the heaps and generate a point cloud. Two tangent lines of any selected section are then calculated through linear least-squares regression (LLSR), such that the left and right angles of repose of a pile can be derived. As the next step, a number of sections were stochastically selected, and the calculations were repeated correspondingly to obtain a sample of angles, which was plotted in Cartesian coordinates as a scatter diagram. Subsequently, different samples were acquired through various selections of sections. By applying similarity and difference analysis to these samples, the reliability of the proposed method was verified. These results provide a realistic criterion for reducing the deviation between experiment and simulation caused by the random selection of a single angle, which will be compared with simulation results in future work.
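
    The LLSR step can be sketched as follows: for one vertical section through the heap's point cloud, fit the left and right flanks with straight lines by least squares and convert the slopes to left and right angles of repose; repeating this over randomly chosen sections yields the sample of angles. The synthetic 35° heap profile and noise level are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def repose_angles(xs, zs):
    """Fit the left/right flank of one heap section by linear least squares and
    return the left and right angles of repose in degrees."""
    left = xs < 0
    k_left = np.polyfit(xs[left], zs[left], 1)[0]
    k_right = np.polyfit(xs[~left], zs[~left], 1)[0]
    return np.degrees(np.arctan(k_left)), np.degrees(np.arctan(-k_right))

# synthetic section through a conical heap with 35-degree flanks plus scan noise
x = np.linspace(-1.0, 1.0, 201)
z = np.tan(np.radians(35.0)) * (1.0 - np.abs(x)) + rng.normal(0.0, 0.01, x.size)
a_left, a_right = repose_angles(x, z)
print(round(a_left, 1), round(a_right, 1))
```

    Running this over many stochastically chosen section planes produces the scatter of angle estimates whose spread the paper uses to judge how representative any single section is.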

  20. Selective Solid-Phase Extraction of Zinc(II) from Environmental Water Samples Using Ion Imprinted Activated Carbon.

    Science.gov (United States)

    Moniri, Elham; Panahi, Homayon Ahmad; Aghdam, Khaledeh; Sharif, Amir Abdollah Mehrdad

    2015-01-01

    A simple ion imprinted amino-functionalized sorbent was synthesized by coupling activated carbon with iminodiacetic acid, a functional compound for metal chelating, through a cyanuric chloride spacer. The resulting sorbent has been characterized using FTIR spectroscopy, elemental analysis, and thermogravimetric analysis and evaluated for the preconcentration and determination of trace Zn(II) in environmental water samples. The optimum pH for sorption of the metal ion was 6-7.5. The sorption capacity of the functionalized sorbent was 66.6 mg/g. The chelating sorbent can be reused for 10 cycles of sorption-desorption without any significant change in sorption capacity. A recovery of 100% was obtained for the metal ion with 0.5 M nitric acid as the eluent. Compared with non-imprinted polymer particles, the prepared Zn-imprinted sorbent showed high adsorption capacity, significant selectivity, and good site accessibility for Zn(II). Scatchard analysis revealed that homogeneous binding sites were formed in the polymer. The equilibrium sorption data of Zn(II) on the modified resin were analyzed by the Langmuir, Freundlich, Temkin, and Redlich-Peterson models. Based on the equilibrium adsorption data, the Langmuir, Freundlich, and Temkin constants were determined as 0.139, 12.82, and 2.34, respectively, at 25°C.
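
    An isotherm fit of this kind can be reproduced in outline with SciPy's `curve_fit`. The equilibrium data below are synthetic, generated to be roughly consistent with the reported capacity (~66.6 mg/g) and Langmuir constant (~0.139); they are not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# synthetic equilibrium data (Ce in mg/L, qe in mg/g), illustrative only
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([14.6, 27.3, 38.7, 49.0, 56.4, 61.1])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[60.0, 0.1])
print(round(qmax, 1), round(KL, 3))
```

    The Freundlich, Temkin and Redlich-Peterson models mentioned in the abstract are fitted the same way, by swapping in their respective model functions.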

  1. Testing the efficiency of rover science protocols for robotic sample selection: A GeoHeuristic Operational Strategies Test

    Science.gov (United States)

    Yingst, R. A.; Bartley, J. K.; Chidsey, T. C.; Cohen, B. A.; Gilleaudeau, G. J.; Hynek, B. M.; Kah, L. C.; Minitti, M. E.; Williams, R. M. E.; Black, S.; Gemperline, J.; Schaufler, R.; Thomas, R. J.

    2018-05-01

    The GHOST field tests are designed to isolate and test science-driven rover operations protocols to determine best practices. During a recent field test at an analog of a potential Mars 2020 landing site, we tested two Mars Science Laboratory data-acquisition and decision-making methods to assess the resulting science return and sample quality: a linear method, where sites of interest are studied in the order encountered, and a "walkabout-first" method, where sites of interest are examined remotely before down-selecting to a subset of sites that are interrogated with more resource-intensive instruments. The walkabout method cost less time and fewer resources, while increasing confidence in interpretations. Contextual data critical to evaluating site geology were acquired earlier than with the linear method and given a higher priority, which resulted in the development of more mature hypotheses earlier in the analysis process. Combined, this saved time and energy in the collection of data with more limited spatial coverage. Based on these results, we suggest that the walkabout method be used where doing so would provide early context and time for the science team to develop hypotheses and critical tests, and that in gathering context, coverage may be more important than higher resolution.

  2. Selectivity and limitations of carbon sorption tubes for capturing siloxanes in biogas during field sampling.

    Science.gov (United States)

    Tansel, Berrin; Surita, Sharon C

    2016-06-01

    Siloxane levels in biogas can jeopardize the warranties of the engines used at biogas-to-energy facilities. The chemical structure of siloxanes consists of silicon and oxygen atoms, alternating in position, with hydrocarbon groups attached to the silicon side chain. Siloxanes can be either cyclic (D) or linear (L) in configuration and are referred to by a letter corresponding to their structure followed by a number corresponding to the number of silicon atoms present. When siloxanes are burned, the hydrocarbon fraction is lost and silicon is converted to silicates. The purpose of this study was to evaluate the adequacy of activated carbon gas samplers for quantitative analysis of siloxanes in biogas samples. Biogas samples were collected from a landfill and an anaerobic digester using multiple carbon sorbent tubes assembled in series. One set of samples was collected for 30 min (sampling 6 L of gas), and the second set was collected for 60 min (sampling 12 L of gas). Carbon particles were thermally desorbed and analyzed by gas chromatography-mass spectrometry (GC/MS). The results showed that biogas sampling using a single tube would not adequately capture octamethyltrisiloxane (L3), hexamethylcyclotrisiloxane (D3), octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5) and dodecamethylcyclohexasiloxane (D6). Even when 4 tubes were used in series, D5 was not captured effectively. The single sorbent tube sampling method was adequate only for capturing trimethylsilanol (TMS) and hexamethyldisiloxane (L2). The affinity of siloxanes for activated carbon decreased with increasing molecular weight. Using multiple carbon sorbent tubes in series can be an appropriate method for developing a standard procedure for determining levels of low molecular weight siloxanes (up to D3). Appropriate quality assurance and quality control procedures should be developed to adequately quantify the levels of higher molecular weight siloxanes in biogas with sorbent tubes.

  3. The Effect of Using a Proposed Teaching Strategy Based on the Selective Thinking on Students' Acquisition Concepts in Mathematics

    Science.gov (United States)

    Qudah, Ahmad Hassan

    2016-01-01

    This study aimed to identify the effect of using a proposed teaching strategy based on selective thinking on the acquisition of mathematical concepts by classroom teacher students at Al al-Bayt University. The sample of the study consisted of (74) students, equally distributed into a control group and an experimental group. The selective thinking…

  4. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    Support vector machine (SVM) has been proved to be a powerful tool for face recognition. The generalization capacity of an SVM depends on a model with optimal hyperparameters. The computational cost of SVM model selection causes difficulty in applying it to face recognition. In order to overcome this shortcoming, we utilize the advantages of uniform design (space-filling designs and uniform scattering theory) to seek the optimal SVM hyperparameters. We then propose a face recognition scheme based on the SVM with the optimal model, obtained by replacing the grid and gradient-based methods with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
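
    The idea of replacing a full grid search with a uniform design can be sketched with scikit-learn: evaluate only n design points spread evenly over the (log C, log gamma) square rather than an n-by-n grid. The lattice construction, the search ranges and the digits dataset below are illustrative assumptions, not the authors' design table or face data:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# a 9-run lattice-style uniform design over the unit square (illustrative)
n = 9
i = np.arange(n)
u1 = (i + 0.5) / n
u2 = ((i * 7) % n + 0.5) / n  # permuted second coordinate spreads points evenly

logC = -1 + u1 * 5    # C ranges over [1e-1, 1e4]
logG = -5 + u2 * 4    # gamma ranges over [1e-5, 1e-1]

X, y = load_digits(return_X_y=True)
scores = [cross_val_score(SVC(C=10**c, gamma=10**g), X, y, cv=3).mean()
          for c, g in zip(logC, logG)]
best = int(np.argmax(scores))
print(round(10**logC[best], 2), round(10**logG[best], 6), round(scores[best], 3))
```

    Nine model evaluations cover the same hyperparameter square that a 9x9 grid would probe with 81 evaluations, which is the efficiency gain the paper attributes to uniform design.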

  5. NetProt: Complex-based Feature Selection.

    Science.gov (United States)

    Goh, Wilson Wen Bin; Wong, Limsoon

    2017-08-04

    Protein complex-based feature selection (PCBFS) provides unparalleled reproducibility with high phenotypic relevance on proteomics data. Currently, there are five PCBFS paradigms, but not all representative methods have been implemented or made readily available. To allow general users to take advantage of these methods, we developed the R-package NetProt, which provides implementations of representative feature-selection methods. NetProt also provides methods for generating simulated differential data and generating pseudocomplexes for complex-based performance benchmarking. The NetProt open source R package is available for download from https://github.com/gohwils/NetProt/releases/, and online documentation is available at http://rpubs.com/gohwils/204259 .

  6. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

    The paper deals with the problem of improving the maximum sample rate of the analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time base and compressive sampling. In particular, the high-resolution time base is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on the compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time base, thus significantly improving the inherent ADC sample rate. Several tests were carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with the ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
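
    The acquisition-plus-reconstruction chain can be sketched end to end: take a signal that is sparse in the DCT basis, sample it at random instants (the role of the high-resolution time base), and reconstruct it with a greedy sparse solver. Orthogonal matching pursuit is used here as a stand-in for whatever recovery routine the authors implemented; all sizes are illustrative:

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

n, m = 256, 64
rng = np.random.default_rng(0)

# signal that is 2-sparse in the DCT domain
c = np.zeros(n)
c[[10, 37]] = [1.0, -0.5]
x = idct(c, norm='ortho')

# random sampling instants supplied by the high-resolution time base
idx = np.sort(rng.choice(n, size=m, replace=False))
y = x[idx]

# sensing matrix: inverse-DCT basis restricted to the sampled instants
Psi = idct(np.eye(n), axis=0, norm='ortho')
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False)
omp.fit(Psi[idx, :], y)

x_rec = Psi @ omp.coef_
rel_err = np.linalg.norm(x_rec - x) / np.linalg.norm(x)
print(rel_err < 1e-3)
```

    The reconstructed waveform is defined on all n time-base instants even though only m were actually acquired, which is exactly how the effective sample rate can exceed the inherent ADC rate.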

  7. The particle analysis based on FT-TIMS technique for swipe sample under the frame of nuclear safeguard

    International Nuclear Information System (INIS)

    Yang Tianli; Liu Xuemei; Liu Zhao; Tang Lei; Long Kaiming

    2008-06-01

    Under the framework of nuclear safeguards, particle analysis of swipe samples is an advanced means of detecting undeclared uranium enrichment facilities and undeclared uranium enrichment activities. A technique for particle analysis of swipe samples based on fission track-thermal ionization mass spectrometry (FT-TIMS) has been established. The reliability of, and the experimental background for, selecting uranium-bearing particles from swipe samples by the FT method have been verified. In addition, the utilization coefficient of particles on the surface of the swipe sample has also been tested. This work provides technical support for application in the area of nuclear verification. (authors)

  8. Determination of different contaminants in selective drinking water samples collected from Peshawar valley area

    International Nuclear Information System (INIS)

    Ihsanullah; Khan, M.; Khattak, T.N.; Sattar, A.

    1999-01-01

    Among the pollutants carried by sewage, industrial effluents, fertilizers and pesticides, heavy metals and various pathogenic bacteria are directly related to human and animal diseases. Samples of drinking water were collected from different locations in the Peshawar area. Cadmium, lead and copper levels in these samples were determined by potentiometric stripping analysis (PSA). The data indicated wide variation in the concentrations of these heavy metals; the variation is discussed on the basis of some possible sources of contamination. The concentrations of cadmium and lead in all the samples were higher than the values given in the World Health Organization (WHO) guidelines for drinking water. Copper was below the detection limit in the majority of the samples. The values of Cd, Pb and Cu were in the ranges of 0.023-2.75, 0.025-1.88 and 0-0.67 mg/L, respectively. Various physical quality indices (pH, electrical conductivity and total solids) and pathogenic bacteria (E. coli and total coliforms) were also determined in the water samples. Most of the drinking water was found to be contaminated with high levels of Cd, Pb and pathogenic bacteria and is hence considered unfit for drinking purposes. (author)

  9. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within the block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we analysed three representative autosomal regions from a large genome-wide study of haplotypes, with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we found that the sample size and marker set have significant impacts on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that a relatively large sample size and a very dense marker panel are needed in the study of haplotype structures in human populations.

  10. Attention-based Memory Selection Recurrent Network for Language Modeling

    OpenAIRE

    Liu, Da-Rong; Chuang, Shun-Po; Lee, Hung-yi

    2016-01-01

    Recurrent neural networks (RNNs) have achieved great success in language modeling. However, since an RNN has a fixed-size memory, it cannot store all the information about the words it has seen earlier in the sentence, so useful long-term information may be ignored when predicting the next words. In this paper, we propose the Attention-based Memory Selection Recurrent Network (AMSRN), in which the model can review the information stored in the memory at each previous time ...
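
    The memory-review mechanism can be sketched as an attention-weighted readout over stored hidden states. A minimal numpy sketch, assuming toy dimensions, random states, and dot-product scoring (illustrative assumptions only, not the actual AMSRN architecture):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, T = 8, 5                          # hidden size, words seen so far (toy values)
memory = rng.normal(size=(T, d))     # stored hidden states, one per previous word
h_t = rng.normal(size=d)             # current hidden state acts as the query

# attention: score each stored state against the query, normalize, then mix
alpha = softmax(memory @ h_t)        # attention weights over the memory
context = alpha @ memory             # attention-weighted summary of past words
```

    The readout lets information from any earlier word reach the prediction, rather than only what survives in the fixed-size recurrent state.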

  11. Development of a thermodynamic data base for selected heavy metals

    International Nuclear Information System (INIS)

    Hageman, Sven; Scharge, Tina; Willms, Thomas

    2015-07-01

    The report on the development of a thermodynamic data base for selected heavy metals covers the description of experimental methods, the thermodynamic model for chromate, the thermodynamic model for dichromate, the thermodynamic model for manganese (II), the thermodynamic model for cobalt, the thermodynamic model for nickel, the thermodynamic model for copper (I), the thermodynamic model for copper (II), the thermodynamic model for mercury (0) and mercury (I), the thermodynamic model for mercury (II), and the thermodynamic model for arsenate.

  12. Biobanking of fresh frozen tissue from clinical surgical specimens: transport logistics, sample selection, and histologic characterization.

    Science.gov (United States)

    Botling, Johan; Micke, Patrick

    2011-01-01

    Access to high-quality fresh frozen tissue is critical for translational cancer research and molecular diagnostics. Here we describe a workflow for the collection of frozen solid tissue samples derived from fresh human patient specimens after surgery. The routines have been in operation at Uppsala University Hospital since 2001. We have integrated cryosection and histopathologic examination of each biobank sample into the biobank manual. In this way, even small, macroscopically ill-defined lesions can be procured without a diagnostic hazard due to the removal of uncharacterized tissue from a clinical specimen. Also, knowledge of the histomorphology of the frozen tissue sample - tumor cell content, stromal components, and presence of necrosis - is pivotal before entering a biobank case into costly molecular profiling studies.

  13. Studying hardness, workability and minimum bending radius in selectively laser-sintered Ti–6Al–4V alloy samples

    Science.gov (United States)

    Galkina, N. V.; Nosova, Y. A.; Balyakin, A. V.

    2018-03-01

    This research is relevant as it seeks to improve the mechanical and service performance of the Ti–6Al–4V titanium alloy obtained by selective laser sintering. For that purpose, sintered samples were annealed at 750 and 850°C for an hour. Sintered and annealed samples were tested for hardness, workability and microstructure. It was found that incomplete annealing of selectively laser-sintered Ti–6Al–4V samples results in an insignificant reduction in hardness and ductility. Sintered and incompletely annealed samples had a hardness of 32-33 HRC, which is lower than the value specified in standards for annealed parts. Complete annealing at 850°C reduces the hardness to 25 HRC and the ductility by 15-20%. Incomplete annealing lowers the ductility factor from 0.08 to 0.06; complete annealing lowers it to 0.025. Complete annealing probably results in the embrittlement of the sintered samples, perhaps due to their oxidation and hydrogenation in air. Optical metallography showed lateral fractures in both sintered and annealed samples, which might explain the lower hardness and ductility.

  14. Grouped fuzzy SVM with EM-based partition of sample space for clustered microcalcification detection.

    Science.gov (United States)

    Wang, Huiya; Feng, Jun; Wang, Hongyu

    2017-07-20

    Detection of clustered microcalcification (MC) from mammograms plays an essential role in computer-aided diagnosis of early-stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM), called G-FSVM, is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups by the EM algorithm, and a series of fuzzy SVMs is then trained, one per group of samples from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions were selected from 239 mammograms, and the Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR*(1-FPR) were 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results on synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
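
    The two-step idea (an EM-based partition of the sample space, then one classifier per group) can be sketched on toy data. In this minimal numpy sketch the synthetic 2-D blobs, the hard-EM (2-means) partition, and the least-squares linear classifiers standing in for the fuzzy SVMs are all simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic multi-pattern positive class vs. negative class (2-D toy data)
X_pos = np.vstack([rng.normal([-2, 2], 0.3, (100, 2)),
                   rng.normal([2, 2], 0.3, (100, 2))])
X_neg = np.vstack([rng.normal([-2, -2], 0.3, (100, 2)),
                   rng.normal([2, -2], 0.3, (100, 2))])
X = np.vstack([X_pos, X_neg])
y = np.array([1.0] * 200 + [-1.0] * 200)

# Step 1: partition the sample space into groups with hard EM (2-means)
centers = X[rng.choice(len(X), 2, replace=False)]
for _ in range(20):
    groups = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)  # E-step
    centers = np.array([X[groups == g].mean(0) for g in (0, 1)])       # M-step

# Step 2: one linear classifier per group (least squares stands in for fuzzy SVM)
Xb = np.hstack([X, np.ones((len(X), 1))])                  # add bias column
wts = {g: np.linalg.lstsq(Xb[groups == g], y[groups == g], rcond=None)[0]
       for g in (0, 1)}

# classify every point with its group's expert
g_all = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
pred = np.sign(np.array([Xb[i] @ wts[g_all[i]] for i in range(len(X))]))
acc = np.mean(pred == y)
```

    Each expert only has to solve a simple two-class problem inside its region of the sample space, which is the decomposition the abstract describes.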

  15. Assessment of fracture risk: value of random population-based samples--the Geelong Osteoporosis Study.

    Science.gov (United States)

    Henry, M J; Pasco, J A; Seeman, E; Nicholson, G C; Sanders, K M; Kotowicz, M A

    2001-01-01

    Fracture risk is determined by bone mineral density (BMD). The T-score, a measure of fracture risk, is the position of an individual's BMD in relation to a reference range. The aim of this study was to determine the magnitude of change in the T-score when different sampling techniques were used to produce the reference range. Reference ranges were derived from three samples, drawn from the same region: (1) an age-stratified population-based random sample, (2) unselected volunteers, and (3) a selected healthy subset of the population-based sample with no diseases or drugs known to affect bone. T-scores were calculated using the three reference ranges for a cohort of women who had sustained a fracture and as a group had a low mean BMD (ages 35-72 yr; n = 484). For most comparisons, the T-scores for the fracture cohort were more negative using the population reference range. The difference in T-scores reached 1.0 SD. The proportion of the fracture cohort classified as having osteoporosis at the spine was 26, 14, and 23% when the population, volunteer, and healthy reference ranges were applied, respectively. The use of inappropriate reference ranges results in substantial changes to T-scores and may lead to inappropriate management.
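
    The T-score computation at issue is simple: an individual's BMD expressed in standard-deviation units of a chosen reference sample. The reference values below are invented purely to illustrate how swapping the reference range shifts the score:

```python
import numpy as np

def t_score(bmd, ref_bmds):
    """Position of an individual's BMD within a reference range, in SD units."""
    ref = np.asarray(ref_bmds, dtype=float)
    return (bmd - ref.mean()) / ref.std(ddof=1)

# hypothetical reference BMD values (g/cm^2) from two sampling schemes
population_ref = [0.95, 1.00, 1.05, 1.10, 0.90]   # population-based random sample
volunteer_ref  = [1.00, 1.05, 1.10, 1.15, 0.95]   # unselected volunteers

bmd = 0.80                                        # fracture-cohort individual
t_pop = t_score(bmd, population_ref)
t_vol = t_score(bmd, volunteer_ref)
```

    With the same measured BMD, the two reference ranges yield T-scores more than 0.5 SD apart, which is the kind of shift the study quantifies.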

  16. Preview-based sampling for controlling gaseous simulations

    KAUST Repository

    Huang, Ruoguan; Melek, Zeki; Keyser, John

    2011-01-01

    to maintain. During the high resolution simulation, a matching process ensures that the properties sampled from the low resolution simulation are maintained. This matching process keeps the different resolution simulations aligned even for complex systems

  17. Linear feature selection in texture analysis - A PLS based method

    DEFF Research Database (Denmark)

    Marques, Joselene; Igel, Christian; Lillholm, Martin

    2013-01-01

    We present a texture analysis methodology that combines uncommitted machine-learning techniques and partial least squares (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional feature...... and considering all CV groups, the methods selected 36% of the original features available. The diagnosis evaluation reached a generalization area under the ROC curve of 0.92, which was higher than established cartilage-based markers known to relate to OA diagnosis....

  18. A dansyl based fluorescence chemosensor for Hg2+ and its application in the complicated environment samples

    Science.gov (United States)

    Zhou, Shuai; Zhou, Ze-Quan; Zhao, Xuan-Xuan; Xiao, Yu-Hao; Xi, Gang; Liu, Jin-Ting; Zhao, Bao-Xiang

    2015-09-01

    We have developed a novel fluorescent chemosensor (DAM) based on dansyl and morpholine units for the detection of mercury ion with excellent selectivity and sensitivity. In the presence of Hg2+ in a mixed solution of HEPES buffer (pH 7.5, 20 mM) and MeCN (2/8, v/v) at room temperature, the fluorescence of DAM was almost completely quenched, from green to colorless, with a fast response time. Moreover, DAM also showed excellent anti-interference capability even in the presence of large amounts of interfering ions. It is worth noting that DAM could be used to detect Hg2+ specifically in Yellow River samples, which highlights the potential application of DAM in complicated environment samples.

  19. A dansyl based fluorescence chemosensor for Hg(2+) and its application in the complicated environment samples.

    Science.gov (United States)

    Zhou, Shuai; Zhou, Ze-Quan; Zhao, Xuan-Xuan; Xiao, Yu-Hao; Xi, Gang; Liu, Jin-Ting; Zhao, Bao-Xiang

    2015-09-05

    We have developed a novel fluorescent chemosensor (DAM) based on dansyl and morpholine units for the detection of mercury ion with excellent selectivity and sensitivity. In the presence of Hg(2+) in a mixed solution of HEPES buffer (pH 7.5, 20 mM) and MeCN (2/8, v/v) at room temperature, the fluorescence of DAM was almost completely quenched, from green to colorless, with a fast response time. Moreover, DAM also showed excellent anti-interference capability even in the presence of large amounts of interfering ions. It is worth noting that DAM could be used to detect Hg(2+) specifically in Yellow River samples, which highlights the potential application of DAM in complicated environment samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Selective recognition of Pr3+ based on fluorescence enhancement sensor

    International Nuclear Information System (INIS)

    Ganjali, M.R.; Hosseini, M.; Ghafarloo, A.; Khoobi, M.; Faridbod, F.; Shafiee, A.; Norouzi, P.

    2013-01-01

    (E)-2-(1-(4-hydroxy-2-oxo-2H-chromen-3-yl)ethylidene) hydrazinecarbothioamide (L) has been used to detect trace amounts of praseodymium ion in acetonitrile–water solution (MeCN/H₂O) by fluorescence spectroscopy. The fluorescent probe undergoes fluorescence emission intensity enhancement upon binding to Pr³⁺ ions in MeCN/H₂O (9/1, v/v) solution. The fluorescence enhancement of L is attributed to a 1:1 complex formation between L and Pr³⁺, which has been utilized as the basis for selective detection of Pr³⁺. The sensor can be applied to the quantification of praseodymium ion with a linear range of 1.6 × 10⁻⁷ to 1.0 × 10⁻⁵ M. The limit of detection was 8.3 × 10⁻⁸ M. The sensor exhibits high selectivity toward praseodymium ions in comparison with common metal ions. The proposed fluorescent sensor was successfully used for determination of Pr³⁺ in water samples. - Highlights: • A new fluorescent sensor is introduced as a selective probe for Pr³⁺ detection. • Fluorescence intensity of the chemical probe is enhanced upon binding to the Pr³⁺ ion. • The sensor can be used for Pr³⁺ determination in the range 1.6 × 10⁻⁷ to 1.0 × 10⁻⁵ M

  1. Use of space-filling curves to select sample locations in natural resource monitoring studies

    Science.gov (United States)

    Andrew Lister; Charles T. Scott

    2009-01-01

    The establishment of several large area monitoring networks over the past few decades has led to increased research into ways to spatially balance sample locations across the landscape. Many of these methods are well documented and have been used in the past with great success. In this paper, we present a method using geographic information systems (GIS) and fractals...

  2. Chemical and physical characteristics of tar samples from selected Manufactured Gas Plant (MGP) sites

    International Nuclear Information System (INIS)

    Ripp, J.; Taylor, B.; Mauro, D.; Young, M.

    1993-05-01

    A multiyear, multidisciplinary project concerning the toxicity of former Manufactured Gas Plant (MGP) tarry residues was initiated by EPRI under the Environmental Behavior of Organic Substances (EBOS) Program. This report concerns one portion of that work -- the collection and chemical characterization of tar samples from several former MGP sites. META Environmental, Inc. and Atlantic Environmental Services, Inc. were contracted by EPRI to collect several samples of tarry residues from former MGP sites with varied historical gas production processes and from several parts of the country. The eight tars collected during this program were physically very different. Some tars were fluid and easily pumped from existing wells, while other tars were thicker, semi-solid, or solid. Although care was taken to collect only tar, the nature of the residues at several sites made it impossible not to collect other material, such as soil, gravel, and plant matter. After the samples were collected, they were analyzed for 37 organic compounds, 8 metals, and cyanide. In addition, elemental analysis was performed on the tar samples for carbon, hydrogen, oxygen, sulfur and nitrogen content and several physical/chemical properties were determined for each tar. The tars were mixed together in different batches and distributed to researchers for use in animal toxicity studies. The results of this work show that, although the tars were produced from different processes and stored in different manners, they had some chemical similarities. All of the tars, with the exception of one unusual solid tar, contained similar relative abundances of polycyclic aromatic hydrocarbons (PAHs)

  3. Risk Factors for Drug Abuse among Nepalese Samples Selected from a Town of Eastern Nepal

    Science.gov (United States)

    Niraula, Surya Raj; Chhetry, Devendra Bahadur; Singh, Girish Kumar; Nagesh, S.; Shyangwa, Pramod Mohan

    2009-01-01

    The study focuses on the serious issue related to the adolescents' and adults' behavior and health. It aims to identify the risk factors for drug abuse from samples taken from a town of Eastern Nepal. This is a matched case-control study. The conditional logistic regression method was adopted for data analysis. The diagnosis cut off was determined…

  4. Multi-Factor Policy Evaluation and Selection in the One-Sample Situation

    NARCIS (Netherlands)

    C.M. Chen (Chien-Ming)

    2008-01-01

    Firms nowadays need to make decisions under rapidly obsolescing information. In this paper I deal with one class of decision problems in this situation, called the “one-sample” problems: we have finitely many options and a single sample of the multiple criteria used to evaluate those options.

  5. Index Fund Selections with GAs and Classifications Based on Turnover

    Science.gov (United States)

    Orito, Yukiko; Motoyama, Takaaki; Yamazaki, Genji

    It is well known that index fund selections are important for hedging the risk of investment in a stock market. The 'selection' means that for 'stock index futures', n companies are selected from all those in the market. For index fund selections, Orito et al. (6) proposed a method consisting of the following two steps: Step 1 is to select N companies in the market with a heuristic rule based on the coefficient of determination between the return rate of each company in the market and the increasing rate of the stock price index. Step 2 is to construct a group of n companies by applying genetic algorithms to the set of N companies. We note that the rule of Step 1 is not unique, and the accuracy of the results using their method depends on the length of the time-series (price) data in the experiments. The main purpose of this paper is to introduce a more effective rule for Step 1, based on turnover. The method consisting of the turnover-based Step 1 and Step 2 is examined with numerical experiments on the 1st Section of the Tokyo Stock Exchange. The results show that our method constructs a more effective index fund than that of Orito et al. (6), and the accuracy of the results depends little on the length of the time-series (turnover) data. The method works especially well when the increasing rate of the stock price index over a period can be viewed as linear time-series data.
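
    Step 1, ranking companies by the coefficient of determination against the index, can be sketched as follows; the return series and betas below are simulated, not market data:

```python
import numpy as np

rng = np.random.default_rng(1)
T, M, N = 250, 20, 5             # trading days, market size, companies to keep
index = rng.normal(0.0, 1.0, T)  # increasing rate of the stock price index

# synthetic company return rates: the first 5 track the index, the rest barely do
betas = np.array([1.0] * 5 + [0.1] * 15)
returns = index[:, None] * betas + rng.normal(0.0, 0.5, (T, M))

# Step 1: rank companies by the coefficient of determination (R^2) against
# the index and keep the top N as the candidate set for the GA of Step 2
r = np.array([np.corrcoef(returns[:, j], index)[0, 1] for j in range(M)])
selected = np.sort(np.argsort(-r ** 2)[:N])
```

    The turnover-based rule of the paper would replace the price-return series with turnover data, but the ranking mechanics are the same.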

  6. Structure-based prediction of subtype selectivity of histamine H3 receptor selective antagonists in clinical trials.

    Science.gov (United States)

    Kim, Soo-Kyung; Fristrup, Peter; Abrol, Ravinder; Goddard, William A

    2011-12-27

    Histamine receptors (HRs) are excellent drug targets for the treatment of diseases, such as schizophrenia, psychosis, depression, migraine, allergies, asthma, ulcers, and hypertension. Among them, the human H(3) histamine receptor (hH(3)HR) antagonists have been proposed for specific therapeutic applications, including treatment of Alzheimer's disease, attention deficit hyperactivity disorder (ADHD), epilepsy, and obesity. However, many of these drug candidates cause undesired side effects through the cross-reactivity with other histamine receptor subtypes. In order to develop improved selectivity and activity for such treatments, it would be useful to have the three-dimensional structures for all four HRs. We report here the predicted structures of four HR subtypes (H(1), H(2), H(3), and H(4)) using the GEnSeMBLE (GPCR ensemble of structures in membrane bilayer environment) Monte Carlo protocol, sampling ∼35 million combinations of helix packings to predict the 10 most stable packings for each of the four subtypes. Then we used these 10 best protein structures with the DarwinDock Monte Carlo protocol to sample ∼50 000 × 10(20) poses to predict the optimum ligand-protein structures for various agonists and antagonists. We find that E206(5.46) contributes most in binding H(3) selective agonists (5, 6, 7) in agreement with experimental mutation studies. We also find that conserved E5.46/S5.43 in both of hH(3)HR and hH(4)HR are involved in H(3)/ H(4) subtype selectivity. In addition, we find that M378(6.55) in hH(3)HR provides additional hydrophobic interactions different from hH(4)HR (the corresponding amino acid of T323(6.55) in hH(4)HR) to provide additional subtype bias. From these studies, we developed a pharmacophore model based on our predictions for known hH(3)HR selective antagonists in clinical study [ABT-239 1, GSK-189,254 2, PF-3654746 3, and BF2.649 (tiprolisant) 4] that suggests critical selectivity directing elements are: the basic proton

  7. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    Full Text Available In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compared its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and number of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling.
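
    The multinomial-weighting formulation can be sketched for the simplest moment statistic, the sample mean; the data here are simulated, and bootstrapping Pearson's correlation follows the same weighting pattern:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=0.0, scale=1.0, size=100)   # observed sample
n, B = len(x), 5000                            # sample size, bootstrap replications

# multinomial formulation: draw bootstrap *weights* instead of resampled data
W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n   # B x n weight matrix

# entire vector of bootstrap replications of the sample mean in one matmul
boot_means = W @ x

se_boot = boot_means.std(ddof=1)   # bootstrap SE of the mean, about sd(x)/sqrt(n)
```

    The matrix product replaces B separate resample-and-evaluate loops, which is exactly where the vectorized speedup comes from in matrix-oriented languages such as R.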

  8. A fuzzy logic based PROMETHEE method for material selection problems

    Directory of Open Access Journals (Sweden)

    Muhammet Gul

    2018-03-01

    Full Text Available Material selection is a complex problem in the design and development of products for diverse engineering applications. This paper presents a fuzzy PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation method based on trapezoidal fuzzy interval numbers that can be applied to the selection of materials for an automotive instrument panel. Also, it presents uniqueness in making a significant contribution to the literature in terms of the application of fuzzy decision-making approach to material selection problems. The method is illustrated, validated, and compared against three different fuzzy MCDM methods (fuzzy VIKOR, fuzzy TOPSIS, and fuzzy ELECTRE in terms of its ranking performance. Also, the relationships between the compared methods and the proposed scenarios for fuzzy PROMETHEE are evaluated via the Spearman’s correlation coefficient. Styrene Maleic Anhydride and Polypropylene are determined optionally as suitable materials for the automotive instrument panel case. We propose a generic fuzzy MCDM methodology that can be practically implemented to material selection problem. The main advantages of the methodology are consideration of the vagueness, uncertainty, and fuzziness to decision making environment.
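
    The flow computations at the core of PROMETHEE can be sketched in crisp (non-fuzzy) form; the decision matrix, the weights, and the simple 0/1 'usual' preference function are illustrative assumptions, with the trapezoidal fuzzy machinery omitted:

```python
import numpy as np

# invented decision matrix: rows = candidate materials, columns = criteria,
# all criteria to be maximized; weights sum to 1
A = np.array([[7.0, 5.0, 8.0],    # material 0 (e.g. Styrene Maleic Anhydride)
              [6.0, 8.0, 7.0],    # material 1 (e.g. Polypropylene)
              [4.0, 4.0, 5.0]])   # material 2, a weaker alternative
w = np.array([0.5, 0.3, 0.2])

n = len(A)
pref = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        # 'usual' preference function: 1 if i beats j on a criterion, else 0
        pref[i, j] = np.sum(w * (A[i] > A[j]))

phi_plus = pref.sum(axis=1) / (n - 1)    # positive (leaving) flow
phi_minus = pref.sum(axis=0) / (n - 1)   # negative (entering) flow
phi = phi_plus - phi_minus               # net flow: PROMETHEE II complete ranking
ranking = np.argsort(-phi)
```

    The fuzzy variant replaces the crisp scores with trapezoidal fuzzy numbers and defuzzifies before ranking, but the outranking-flow structure is the same.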

  9. Structure and Mechanical Properties of the AlSi10Mg Alloy Samples Manufactured by Selective Laser Melting

    Science.gov (United States)

    Li, Xiaodan; Ni, Jiaqiang; Zhu, Qingfeng; Su, Hang; Cui, Jianzhong; Zhang, Yifei; Li, Jianzhong

    2017-11-01

    The AlSi10Mg alloy samples with dimensions of 14 × 14 × 91 mm were produced by the selective laser melting (SLM) method in different building directions. The structures and the properties at -70°C of the samples in the different directions were investigated. The results show that the structure exhibits different morphologies in the different building directions: fish-scale structures appear on the side along the building direction, and oval structures on the side perpendicular to the building direction. Pores with a maximum size of 100 μm exist in the structure. Build orientation has no major influence on the tensile properties: the tensile strength and elongation of the sample in the building direction are 340 MPa and 11.2%, respectively, and those of the sample perpendicular to the building direction are 350 MPa and 13.4%, respectively.

  10. Finding metastabilities in reversible Markov chains based on incomplete sampling

    Directory of Open Access Journals (Sweden)

    Fackeldey Konstantin

    2017-01-01

    Full Text Available In order to fully characterize the state-transition behaviour of finite Markov chains one needs the corresponding transition matrix P. In many applications, such as molecular simulation and drug design, the entries of the transition matrix P are estimated by generating realizations of the Markov chain and determining the one-step conditional probability Pij for a transition from state i to state j. This sampling can be computationally very demanding. Therefore, it is a good idea to reduce the sampling effort. The main purpose of this paper is to design a sampling strategy which provides a partial sampling of only a subset of the rows of such a matrix P. Our proposed approach fits very well to stochastic processes stemming from simulation of molecular systems or random walks on graphs, and it differs from matrix completion approaches, which try to approximate the transition matrix using a low-rank assumption. It will be shown how Markov chains can be analyzed on the basis of a partial sampling. More precisely: first, we estimate the stationary distribution from a partially given matrix P. Second, we estimate the infinitesimal generator Q of P on the basis of this stationary distribution. Third, from the generator we compute the leading invariant subspace, which should be identical to the leading invariant subspace of P. Fourth, we apply Robust Perron Cluster Analysis (PCCA+ in order to identify metastabilities using this subspace.
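
    The spectral steps can be illustrated on a fully sampled toy chain; a small reversible matrix and a sign-based split of the second eigenvector (standing in for PCCA+) are simplifying assumptions:

```python
import numpy as np

# toy reversible chain: two metastable sets {0,1} and {2,3}, rare jumps between
P = np.array([[0.90, 0.09, 0.01, 0.00],
              [0.09, 0.90, 0.00, 0.01],
              [0.01, 0.00, 0.90, 0.09],
              [0.00, 0.01, 0.09, 0.90]])

evals, evecs = np.linalg.eig(P.T)      # left eigenvectors of P
idx = np.argsort(-np.real(evals))

pi = np.real(evecs[:, idx[0]])         # stationary distribution (eigenvalue 1)
pi = pi / pi.sum()

# eigenvalues close to 1 signal metastability; the sign structure of the
# second eigenvector separates the two metastable sets
second = np.real(evecs[:, idx[1]])
labels = (second > 0).astype(int)
```

    With partial sampling, the paper's point is that pi and this leading invariant subspace can still be recovered without estimating every row of P.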

  11. Eleven-Year Retrospective Report of Super-Selective Venous Sampling for the Evaluation of Recurrent or Persistent Hyperparathyroidism in 32 Patients.

    Science.gov (United States)

    Habibollahi, Peiman; Shin, Benjamin; Shamchi, Sara P; Wachtel, Heather; Fraker, Douglas L; Trerotola, Scott O

    2018-01-01

    Parathyroid venous sampling (PAVS) is usually reserved for patients with persistent or recurrent hyperparathyroidism after parathyroidectomy with inconclusive noninvasive imaging studies. A retrospective study was performed to evaluate the diagnostic efficacy of super-selective PAVS (SSVS) in patients needing revision neck surgery with inconclusive imaging. Patients undergoing PAVS between 2005 and 2016 due to persistent or recurrent hyperparathyroidism following surgery were reviewed. PAVS was performed in all patients using the super-selective technique. Single-value measurements within central neck veins, performed as part of super-selective PAVS, were used to simulate selective venous sampling (SVS) and allow comparison to the data that might be obtained with a non-super-selective approach. 32 patients (mean age 51 ± 15 years; 8 men and 24 women) met the inclusion and exclusion criteria. The sensitivity and positive predictive value (PPV) of SSVS for localizing the source of elevated PTH to a limited area in the neck or chest were 96 and 84%, respectively. Simulated SVS, on the other hand, had a sensitivity of 28% and a PPV of 89% based on the predefined gold standard. SSVS thus had a significantly higher sensitivity than simulated SVS for localizing the source of hyperparathyroidism in patients undergoing revision surgery in whom noninvasive imaging studies are inconclusive.

  12. The relationship between the FFM and personality disorders in a personnel selection sample.

    Science.gov (United States)

    Nederström, Mikael; Furnham, Adrian

    2012-10-01

    The relationships between the Five Factor Model (FFM) personality and personality disorders were investigated. A sample of real-life job applicants completed two personality questionnaires with different theoretical backgrounds in a psychological assessment center. The job applicants provided self-descriptions both on the FFM inventory and on a personality disorder trait inventory. A subsample of these candidates was interviewed by expert psychologists upon entrance to the assessment center. The psychologists assessed the same disorder traits of each target in job interviews. Both self-descriptions were used to predict the expert assessments. The results demonstrated considerable overlap between the FFM measures of normal and measures of abnormal personality in both samples and regardless of assessment method. © 2012 The Authors. Scandinavian Journal of Psychology © 2012 The Scandinavian Psychological Associations.

  13. Reliability assessment based on small samples of normal distribution

    International Nuclear Information System (INIS)

    Ma Zhibo; Zhu Jianshi; Xu Naixin

    2003-01-01

    When the pertinent parameter involved in the reliability definition follows a normal distribution, the conjugate prior of its distribution parameters (μ, h) is a normal-gamma distribution. With the help of the maximum entropy and moments-equivalence principles, the subjective information about the parameter and the sampling data of its independent variables are transformed into a Bayesian prior of (μ, h). The desired estimates are obtained either from the prior or from the posterior, which is formed by combining the prior with the sampling data. Computing methods are described and examples are presented as demonstrations.
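
    The conjugate normal-gamma update that such an approach relies on can be sketched directly; the prior hyperparameters and sampling data below are invented for illustration:

```python
import numpy as np

def normal_gamma_update(mu0, kappa0, alpha0, beta0, x):
    """Conjugate update of a normal-gamma prior on (mu, h) given normal data x."""
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n          # precision-weighted mean
    alpha_n = alpha0 + n / 2.0
    beta_n = (beta0 + 0.5 * np.sum((x - xbar) ** 2)
              + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n))
    return mu_n, kappa_n, alpha_n, beta_n

# invented prior (mu0=10.5, kappa0=2, alpha0=3, beta0=1) and sampling data
x = [9.8, 10.1, 10.3, 9.9, 10.4]
mu_n, kappa_n, alpha_n, beta_n = normal_gamma_update(10.5, 2.0, 3.0, 1.0, x)
```

    The posterior mean lands between the prior mean and the sample mean, weighted by the effective sample sizes, which is the combination of subjective and sampling information the abstract describes.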

  14. Suitability of selected free-gas and dissolved-gas sampling containers for carbon isotopic analysis.

    Science.gov (United States)

    Eby, P; Gibson, J J; Yi, Y

    2015-07-15

    Storage trials were conducted for 2 to 3 months using a hydrocarbon and carbon dioxide gas mixture with known carbon isotopic composition to simulate typical hold times for gas samples prior to isotopic analysis. A range of containers (both pierced and unpierced) was periodically sampled to test for δ(13)C isotopic fractionation. Seventeen containers were tested for free-gas storage (20°C, 1 atm pressure) and 7 containers were tested for dissolved-gas storage, the latter prepared by bubbling free gas through tap water until saturated (20°C, 1 atm) and then preserved to avoid biological activity by acidifying to pH 2 with phosphoric acid and stored in the dark at 5°C. Samples were extracted using valves or by piercing septa, and then introduced into an isotope ratio mass spectrometer for compound-specific δ(13)C measurements. For free gas, stainless steel canisters and crimp-top glass serum bottles with butyl septa were most effective at preventing isotopic fractionation (pierced and unpierced), whereas silicone and PTFE-butyl septa allowed significant isotopic fractionation. FlexFoil and Tedlar bags were found to be effective only for storage of up to 1 month. For dissolved gas, crimp-top glass serum bottles with butyl septa were again effective, whereas silicone and PTFE-butyl were not. FlexFoil bags were reliable for up to 2 months. Our results suggest a range of preferred containers as well as several that did not perform very well for isotopic analysis. Overall, the results help establish better QA/QC procedures to avoid isotopic fractionation when storing environmental gas samples. Recommended containers for air transportation include steel canisters and glass serum bottles with butyl septa (pierced and unpierced). Copyright © 2015 John Wiley & Sons, Ltd.

  15. Radiation hazards evaluation for selected sand samples from Camburi beach, Vitoria, Espirito Santo, Brazil

    International Nuclear Information System (INIS)

    Barros, Livia F.; Pecequilo, Brigitte R.S.

    2013-01-01

    In this work, a single location at Camburi beach, known to be a naturally high background region, was studied. Radiation hazard indexes and the annual effective dose were evaluated from the ²²⁶Ra, ²³²Th and ⁴⁰K activity concentrations in sand. Sand samples were collected monthly during 2011, dried, sealed in standard 100 mL HDPE flasks and measured by high-resolution gamma spectrometry after a 4-week in-growth period. The ²²⁶Ra concentration was determined from the weighted average concentrations of ²¹⁴Pb and ²¹⁴Bi. The ²³²Th concentration was determined from the weighted average concentrations of ²²⁸Ac, ²¹²Pb and ²¹²Bi, and the ⁴⁰K concentration from its single gamma transition. The results, accounting for gamma-ray self-attenuation in the samples, show activity concentrations in the ranges 6 Bq kg⁻¹ to 39 Bq kg⁻¹ for ²²⁶Ra, 13 Bq kg⁻¹ to 161 Bq kg⁻¹ for ²³²Th, and 7 Bq kg⁻¹ to 65 Bq kg⁻¹ for ⁴⁰K. The radium equivalent activity for the studied samples ranged from 26 Bq kg⁻¹ to 274 Bq kg⁻¹. The external and internal hazard indexes varied from 0.07 to 0.74 and from 0.09 to 0.85, respectively. The annual effective dose ranged from 0.07 mSv y⁻¹ to 0.72 mSv y⁻¹. All values obtained in this work are below the recommended radiological protection limits. (author)

  16. Typeability of PowerPlex Y (Promega) profiles in selected tissue samples incubated in various environments.

    Science.gov (United States)

    Niemcunowicz-Janica, Anna; Pepiński, Witold; Janica, Jacek Robert; Janica, Jerzy; Skawrońska, Małgorzata; Koc-Zórawska, Ewa

    2007-01-01

    In cases of decomposed bodies, Y-chromosomal STR markers may be useful in the identification of a male relative. The authors assessed the typeability of PowerPlex Y (Promega) loci in post-mortem tissue material stored in various environments. Kidney, spleen and pancreas specimens were collected during autopsies of five persons aged 20-30 years, whose time of death was known to within 14 hours. Tissue material was incubated at 21 degrees C and 4 degrees C in various environmental conditions. DNA was extracted by the organic method from tissue samples collected at 7-day intervals and subsequently typed using the PowerPlex Y kit and an ABI 310 analyzer. A fast decrease in the typeability rate was seen in specimens incubated in peat soil and in sand. Kidney tissue samples were typeable in all PowerPlex Y loci within 63 days of incubation at 4 degrees C. Faster DNA degradation was recorded in spleen and pancreas specimens. In samples with negative genotyping results, no DNA was found by fluorometric quantitation. Decomposed soft tissues are thus a potential material for DNA typing.

  17. Object width modulates object-based attentional selection.

    Science.gov (United States)

    Nah, Joseph C; Neppi-Modona, Marco; Strother, Lars; Behrmann, Marlene; Shomstein, Sarah

    2018-04-24

    Visual input typically includes a myriad of objects, some of which are selected for further processing. While these objects vary in shape and size, most evidence supporting object-based guidance of attention is drawn from paradigms employing two identical objects. Importantly, object size is a readily perceived stimulus dimension, and whether it modulates the distribution of attention remains an open question. Across four experiments, the size of the objects in the display was manipulated in a modified version of the two-rectangle paradigm. In Experiment 1, two identical parallel rectangles of two sizes (thin or thick) were presented. Experiments 2-4 employed identical trapezoids (each having a thin and thick end), inverted in orientation. In the experiments, one end of an object was cued and participants performed either a T/L discrimination or a simple target-detection task. Combined results show that, in addition to the standard object-based attentional advantage, there was a further attentional benefit for processing information contained in the thick versus thin end of objects. Additionally, eye-tracking measures demonstrated increased saccade precision towards thick object ends, suggesting that Fitts's Law may play a role in object-based attentional shifts. Taken together, these results suggest that object-based attentional selection is modulated by object width.

  18. Minimal gene selection for classification and diagnosis prediction based on gene expression profile

    Directory of Open Access Journals (Sweden)

    Alireza Mehridehnavi

    2013-01-01

    Conclusion: We have shown that using the two most significant genes, ranked by their signal-to-noise (S/N) ratios, together with careful selection of training samples, DLBCL patients can be classified with good accuracy. These methods helped compensate for the limited number of patients, improved classification accuracy, and reduced both the computational complexity and the running time.
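The S/N ranking referred to in the conclusion is typically the Golub-style signal-to-noise statistic, (μ₁ − μ₀)/(σ₁ + σ₀), computed per gene between the two classes. A minimal sketch under that assumption (the data here are synthetic):

```python
import numpy as np

def snr_rank(X, y):
    """Rank genes by |signal-to-noise ratio| between two classes.

    X: samples x genes expression matrix; y: 0/1 class labels.
    Returns gene indices sorted from most to least discriminative.
    """
    X, y = np.asarray(X, float), np.asarray(y)
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    sd0, sd1 = X[y == 0].std(axis=0), X[y == 1].std(axis=0)
    snr = (mu1 - mu0) / (sd0 + sd1 + 1e-12)  # epsilon avoids division by zero
    return np.argsort(-np.abs(snr))

# Toy data: gene 2 separates the classes cleanly, the other genes are noise
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))
y = np.array([0] * 10 + [1] * 10)
X[y == 1, 2] += 5.0
top_two = snr_rank(X, y)[:2]  # the two most significant genes
```

A classifier would then be trained on only these two columns of X.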

  19. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    International Nuclear Information System (INIS)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-01-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect failures as soon as possible. However, it is a major challenge to balance detection efficiency against large-scale data acquisition, given the limitations of classical sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures while using a sampling rate far below the Nyquist rate, which eases the pressure generated by large-scale data. The vibration signal of a faulty roller bearing is first reduced by a down-sampling strategy that preserves the fault features, selecting peaks to represent data segments in the time domain. A problem arises, however, in that the fault features may be weakened: when the noise is stronger than the vibration signal, noise may be mistaken for the peaks, making the fault features impossible to extract by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem; it enhances the signal and further reduces the sample size. Moreover, it is capable of detecting fault features from a small number of samples using an orthogonal matching pursuit approach, which overcomes the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults. (paper)

  20. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    Science.gov (United States)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-02-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect failures as soon as possible. However, it is a major challenge to balance detection efficiency against large-scale data acquisition, given the limitations of classical sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures while using a sampling rate far below the Nyquist rate, which eases the pressure generated by large-scale data. The vibration signal of a faulty roller bearing is first reduced by a down-sampling strategy that preserves the fault features, selecting peaks to represent data segments in the time domain. A problem arises, however, in that the fault features may be weakened: when the noise is stronger than the vibration signal, noise may be mistaken for the peaks, making the fault features impossible to extract by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem; it enhances the signal and further reduces the sample size. Moreover, it is capable of detecting fault features from a small number of samples using an orthogonal matching pursuit approach, which overcomes the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults.
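The recovery step both records rely on, orthogonal matching pursuit (OMP), greedily selects the dictionary atom most correlated with the current residual and re-fits the selected atoms by least squares. A generic textbook sketch, not the authors' exact implementation:

```python
import numpy as np

def omp(A, y, n_iter, tol=1e-10):
    """Greedy sparse recovery of x from y = A @ x by orthogonal matching pursuit."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(A.shape[1])
    coef = np.array([])
    for _ in range(n_iter):
        # Select the atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected atoms jointly by least squares
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat[support] = coef
    return x_hat

# Recover a 3-sparse signal from 32 random linear measurements of a length-64 vector
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x = np.zeros(64)
x[[5, 20, 41]] = [1.5, -2.0, 0.8]
x_hat = omp(A, A @ x, n_iter=5)
```

For an exactly sparse signal and enough measurements, the recovered vector matches the original to machine precision.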

  1. Product-selective blot: a technique for measuring enzyme activities in large numbers of samples and in native electrophoresis gels

    International Nuclear Information System (INIS)

    Thompson, G.A.; Davies, H.M.; McDonald, N.

    1985-01-01

    A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application.

  2. Titanium-based spectrally selective surfaces for solar thermal systems

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, A D; Holmes, J P

    1983-10-01

    A study of spectrally selective surfaces based on anodic oxide films on titanium is presented. Plain anodised surfaces achieve a solar absorptance of only 0.77, owing to the nonideal optical properties of anodic TiO2 as an antireflection layer on titanium. A simple chemical etching process is described which gives a textured surface with dimensions comparable to the wavelengths of solar radiation, leading to spectral selectivity. The performance of this dark-etched surface can be further improved by anodising, and optimum absorbers have been produced with alpha(s) = 0.935 and a hemispherical emittance (400 K) of 0.23. The surface texturing effects a significant improvement in alpha(s) at oblique incidence.

  3. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters; hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of the wind speed. The reliability calculation is based on failure probability analysis. Many different types of wind turbines are commercially available in the market. To obtain optimum reliability in power generation, it is desirable to select the wind turbine generator best suited to a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
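Under a Weibull wind-speed model, the cubic mean cube root speed used above has a closed form: E[v³]^(1/3) = c · Γ(1 + 3/k)^(1/3), where k and c are the Weibull shape and scale parameters. A minimal sketch of how it feeds a monthly energy estimate (the power-curve parameters here are illustrative, not from the paper):

```python
import math

def cubic_mean_cube_root(k, c):
    """Cube root of E[v^3] for Weibull(shape k, scale c) wind speeds (m/s)."""
    return c * math.gamma(1.0 + 3.0 / k) ** (1.0 / 3.0)

def turbine_power(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=2000.0):
    """Idealized power curve (kW): cubic ramp between cut-in and rated speed."""
    if v < v_cut_in or v > v_cut_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3)

v_eff = cubic_mean_cube_root(k=2.0, c=8.0)       # ~8.8 m/s for a typical site
monthly_energy_kwh = turbine_power(v_eff) * 24 * 30
```

The site-matching idea is then to compare this energy estimate across candidate turbines (different cut-in/rated/cut-out values) together with their failure probabilities.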

  4. Selective trace enrichment of chlorotriazine pesticides from natural waters and sediment samples using terbuthylazine molecularly imprinted polymers

    Science.gov (United States)

    Ferrer, I.; Lanza, F.; Tolokan, A.; Horvath, V.; Sellergren, B.; Horvai, G.; Barcelo, D.

    2000-01-01

    Two molecularly imprinted polymers were synthesized using either dichloromethane or toluene as the porogen and terbuthylazine as the template and were used as solid-phase extraction cartridges for the enrichment of six chlorotriazines (deisopropylatrazine, deethylatrazine, simazine, atrazine, propazine, and terbuthylazine) in natural water and sediment samples. The extracted samples were analyzed by liquid chromatography/diode array detection (LC/DAD). Several washing solvents, as well as different volumes, were tested for their ability to remove the matrix components nonspecifically adsorbed on the sorbents. This cleanup step was shown to be of prime importance to the successful extraction of the pesticides from the aqueous samples. The optimal analytical conditions were obtained when the MIP imprinted using dichloromethane was the sorbent, 2 mL of dichloromethane was used in the washing step, and the preconcentrated analytes were eluted with 8 mL of methanol. The recoveries were higher than 80% for all the chlorotriazines except for propazine (53%) when 50- or 100-mL groundwater samples, spiked at the 1 µg/L level, were analyzed. The limits of detection varied from 0.05 to 0.2 µg/L when preconcentrating a 100-mL groundwater sample. Natural sediment samples from the Ebre Delta area (Tarragona, Spain) containing atrazine and deethylatrazine were Soxhlet extracted and analyzed by the methodology developed in this work. No significant interferences from the sample matrix were noticed, thus indicating good selectivity of the MIP sorbents used.

  5. Two sample Bayesian prediction intervals for order statistics based on the inverse exponential-type distributions using right censored sample

    Directory of Open Access Journals (Sweden)

    M.M. Mohie El-Din

    2011-10-01

    In this paper, two-sample Bayesian prediction intervals for order statistics (OS) are obtained. The prediction is based on a certain class of inverse exponential-type distributions, using a right-censored sample. A general class of prior density functions is used, and the predictive cumulative distribution function is obtained for the two-sample case. The class of inverse exponential-type distributions includes several important distributions, such as the inverse Weibull, inverse Burr, log-logistic, inverse Pareto and inverse paralogistic distributions. Special cases of the inverse Weibull model, such as the inverse exponential model and the inverse Rayleigh model, are also considered.

  6. Individual and pen-based oral fluid sampling: A welfare-friendly sampling method for group-housed gestating sows.

    Science.gov (United States)

    Pol, Françoise; Dorenlor, Virginie; Eono, Florent; Eudier, Solveig; Eveno, Eric; Liégard-Vanhecke, Dorine; Rose, Nicolas; Fablet, Christelle

    2017-11-01

    The aims of this study were to assess the feasibility of individual and pen-based oral fluid sampling (OFS) in 35 pig herds with group-housed sows, compare these methods to blood sampling, and assess the factors influencing the success of sampling. Individual samples were collected from at least 30 sows per herd. Pen-based OFS was performed using devices placed in at least three pens for 45 min. Information related to the farm, the sows, and their living conditions were collected. Factors significantly associated with the duration of sampling and the chewing behaviour of sows were identified by logistic regression. Individual OFS took 2 min 42 s on average; the type of floor, swab size, and operator were associated with a sampling time >2 min. Pen-based OFS was obtained from 112 devices (62.2%). The type of floor, parity, pen-level activity, and type of feeding were associated with chewing behaviour. Pen activity was associated with the latency to interact with the device. The type of floor, gestation stage, parity, group size, and latency to interact with the device were associated with a chewing time >10 min. After 15, 30 and 45 min of pen-based OFS, 48%, 60% and 65% of the sows were lying down, respectively. The time spent after the beginning of sampling, genetic type, and time elapsed since the last meal were associated with 50% of the sows lying down at one time point. The mean time to blood sample the sows was 1 min 16 s, or 2 min 52 s if the number of operators required is included in the sampling time estimation. The genetic type, parity, and type of floor were significantly associated with a sampling time higher than 1 min 30 s. This study shows that individual OFS is easy to perform in group-housed sows by a single operator, even though straw-bedded animals take longer to sample than animals housed on slatted floors, and suggests some guidelines to optimise pen-based OFS success. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Frequency selective surfaces based high performance microstrip antenna

    CERN Document Server

    Narayan, Shiv; Jha, Rakesh Mohan

    2016-01-01

    This book focuses on performance enhancement of printed antennas using frequency selective surfaces (FSS) technology. The growing demand of stealth technology in strategic areas requires high-performance low-RCS (radar cross section) antennas. Such requirements may be accomplished by incorporating FSS into the antenna structure either in its ground plane or as the superstrate, due to the filter characteristics of FSS structure. In view of this, a novel approach based on FSS technology is presented in this book to enhance the performance of printed antennas including out-of-band structural RCS reduction. In this endeavor, the EM design of microstrip patch antennas (MPA) loaded with FSS-based (i) high impedance surface (HIS) ground plane, and (ii) the superstrates are discussed in detail. The EM analysis of proposed FSS-based antenna structures have been carried out using transmission line analogy, in combination with the reciprocity theorem. Further, various types of novel FSS structures are considered in desi...

  8. Distance based control system for machine vision-based selective spraying

    NARCIS (Netherlands)

    Steward, B.L.; Tian, L.F.; Tang, L.

    2002-01-01

    For effective operation of a selective sprayer with real-time local weed sensing, herbicides must be delivered accurately to weed targets in the field. With a machine vision-based selective spraying system, acquiring sequential images and switching nozzles on and off at the correct locations are essential.

  9. Geochemistry and petrology of selected coal samples from Sumatra, Kalimantan, Sulawesi, and Papua, Indonesia

    Energy Technology Data Exchange (ETDEWEB)

    Belkin, Harvey E.; Tewalt, Susan J. [U.S. Geological Survey, 956 National Center, Reston, VA 20192 (United States); Hower, James C. [University of Kentucky Center for Applied Energy Research, 2540 Research Park Drive, Lexington, KY 40511 (United States); Stucker, J.D. [University of Kentucky Center for Applied Energy Research, 2540 Research Park Drive, Lexington, KY 40511 (United States)]|[University of Kentucky Department of Earth and Environmental Sciences, Lexington, KY 40506 (United States); O' Keefe, J.M.K. [Morehead State University, Department of Physical Science, Morehead, KY 40351 (United States)

    2009-01-31

    Indonesia has become the world's largest exporter of thermal coal and is a major supplier to the Asian coal market, particularly as the People's Republic of China is now (2007) and perhaps may remain a net importer of coal. Indonesia has had a long history of coal production, mainly in Sumatra and Kalimantan, but only in the last two decades have government and commercial forces resulted in a remarkable coal boom. A recent assessment of Indonesian coal-bed methane (CBM) potential has motivated active CBM exploration. Most of the coal is Paleogene and Neogene, low to moderate rank and has low ash yield and sulfur (generally < 10 and < 1 wt.%, respectively). Active tectonic and igneous activity has resulted in significant rank increase in some coal basins. Eight coal samples are described that represent the major export and/or resource potential of Sumatra, Kalimantan, Sulawesi, and Papua. Detailed geochemistry, including proximate and ultimate analysis, sulfur forms, and major, minor, and trace element determinations are presented. Organic petrology and vitrinite reflectance data reflect various precursor flora assemblages and rank variations, including sample composites from active igneous and tectonic areas. A comparison of Hazardous Air Pollutants (HAPs) elements abundance with world and US averages show that the Indonesian coals have low combustion pollution potential. (author)

  10. Assessment of DDT levels in selected environmental media and biological samples from Mexico and Central America.

    Science.gov (United States)

    Pérez-Maldonado, Iván N; Trejo, Antonio; Ruepert, Clemens; Jovel, Reyna del Carmen; Méndez, Mónica Patricia; Ferrari, Mirtha; Saballos-Sobalvarro, Emilio; Alexander, Carlos; Yáñez-Estrada, Leticia; Lopez, Dania; Henao, Samuel; Pinto, Emilio R; Díaz-Barriga, Fernando

    2010-03-01

    Taking into account the environmental persistence and the toxicity of DDT, the Pan American Health Organization (PAHO) organized a surveillance program in Mesoamerica which included the detection of residual DDT in environmental (soil) and biological samples (fish tissue and children's blood). This program was carried out in communities from Mexico, Guatemala, El Salvador, Honduras, Nicaragua, Costa Rica and Panama. This paper presents the first report of that program. As expected, the results show that the levels of ΣDDT in soil (outdoor or indoor) and fish samples in the majority of the locations studied are below guidelines. However, in some locations, we found children with high concentrations of DDT, as in Mexico (mean level 50.2 ng/mL). Furthermore, in some communities and for some matrices, the DDT/DDE quotient is higher than one, and this may reflect a recent DDT exposure. Therefore, more efforts are needed to avoid exposure and to prevent the reintroduction of DDT into the region. In this regard it is important to know that under the surveillance of PAHO and with the support of UNEP, a regional program in Mesoamerica for the collection and disposal of DDT and other POPs stockpiles is in progress. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  11. Geochemistry and petrology of selected coal samples from Sumatra, Kalimantan, Sulawesi, and Papua, Indonesia

    International Nuclear Information System (INIS)

    Belkin, Harvey E.; Tewalt, Susan J.; Hower, James C.; Stucker, J.D.; O'Keefe, J.M.K.

    2009-01-01

    Indonesia has become the world's largest exporter of thermal coal and is a major supplier to the Asian coal market, particularly as the People's Republic of China is now (2007) and perhaps may remain a net importer of coal. Indonesia has had a long history of coal production, mainly in Sumatra and Kalimantan, but only in the last two decades have government and commercial forces resulted in a remarkable coal boom. A recent assessment of Indonesian coal-bed methane (CBM) potential has motivated active CBM exploration. Most of the coal is Paleogene and Neogene, low to moderate rank and has low ash yield and sulfur (generally < 10 and < 1 wt.%, respectively). Active tectonic and igneous activity has resulted in significant rank increase in some coal basins. Eight coal samples are described that represent the major export and/or resource potential of Sumatra, Kalimantan, Sulawesi, and Papua. Detailed geochemistry, including proximate and ultimate analysis, sulfur forms, and major, minor, and trace element determinations are presented. Organic petrology and vitrinite reflectance data reflect various precursor flora assemblages and rank variations, including sample composites from active igneous and tectonic areas. A comparison of Hazardous Air Pollutants (HAPs) elements abundance with world and US averages show that the Indonesian coals have low combustion pollution potential. (author)

  12. Recent trends in sorption-based sample preparation and liquid chromatography techniques for food analysis.

    Science.gov (United States)

    V Soares Maciel, Edvaldo; de Toffoli, Ana Lúcia; Lanças, Fernando Mauro

    2018-04-20

    The accelerating growth of the world's population has increased food consumption, demanding more rigorous control of residues and contaminants in food products marketed for human consumption. In view of the complexity of most food matrices, including fruits, vegetables, different types of meat, and beverages, among others, a sample preparation step is important to provide more reliable results when combined with HPLC separations. An adequate sample preparation step before chromatographic analysis is mandatory for obtaining higher precision and accuracy and for improving the extraction of the target analytes, one of the priorities in analytical chemistry. The recent development of new materials such as ionic liquids, graphene-derived materials, molecularly imprinted polymers, restricted access media, magnetic nanoparticles, and carbonaceous nanomaterials has provided high sensitivity and selectivity in an extensive variety of applications. These materials, as well as their many possible combinations, have been demonstrated to be highly appropriate for the extraction of different analytes from complex samples such as food products. The main characteristics and applications of these new materials in food analysis are presented and discussed in this paper. Another topic covered in this review is the main advantages and limitations of sample preparation microtechniques, as well as their off-line and on-line combination with HPLC for food analysis. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Estimated ventricle size using Evans index: reference values from a population-based sample.

    Science.gov (United States)

    Jaraj, D; Rabiei, K; Marlow, T; Jensen, C; Skoog, I; Wikkelsø, C

    2017-03-01

    Evans index is an estimate of ventricular size used in the diagnosis of idiopathic normal-pressure hydrocephalus (iNPH). Values >0.3 are considered pathological and are required by guidelines for the diagnosis of iNPH. However, there are no previous epidemiological studies on Evans index, and normal values in adults are thus not precisely known. We examined a representative sample to obtain reference values and descriptive data on Evans index. A population-based sample (n = 1235) of men and women aged ≥70 years was examined. The sample comprised people living in private households and residential care, systematically selected from the Swedish population register. Neuropsychiatric examinations, including head computed tomography, were performed between 1986 and 2000. Evans index ranged from 0.11 to 0.46. The mean value in the total sample was 0.28 (SD, 0.04) and 20.6% (n = 255) had values >0.3. Among men aged ≥80 years, the mean value of Evans index was 0.3 (SD, 0.03). Individuals with dementia had a mean value of Evans index of 0.31 (SD, 0.05) and those with radiological signs of iNPH had a mean value of 0.36 (SD, 0.04). A substantial number of subjects had ventricular enlargement according to current criteria. Clinicians and researchers need to be aware of the range of values among older individuals. © 2017 EAN.

  14. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques and to ensure the accuracy of the optimization. However, earlier approaches have drawbacks: the optimization loop consists of three phases and depends on empirical parameters. We propose a unified sampling criterion that simplifies the algorithm and achieves the global optimum of constrained problems without any empirical parameters. It is able to select points located in a feasible region with high model uncertainty, as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is dominant between the infill sampling criterion and the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model, because the sample points are not clustered within extremely small regions as in super-EGO. The performance of the proposed method, including its ability to solve a problem, its convergence properties, and its efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.

  15. The Performance Analysis Based on SAR Sample Covariance Matrix

    Directory of Open Access Journals (Sweden)

    Esra Erten

    2012-03-01

    Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media generally exhibit zero-mean circular Gaussian statistics, in which case second-order statistics such as the multi-channel covariance matrix fully describe the data. In practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be highly relevant to the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has frequently been used in applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, and moving target indication. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix is presented in simplified form for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well.
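The quantities involved, the sample covariance matrix of zero-mean circular Gaussian channels and its maximum eigenvalue, can be simulated directly. A minimal sketch (channel count and true covariance are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 5000, 3                      # number of looks (samples) and channels

# Zero-mean circular complex Gaussian samples with true covariance diag(4, 1, 1)
z = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
z[:, 0] *= 2.0                      # channel 0 has variance 4

# Sample covariance matrix (complex-Wishart-distributed for Gaussian data)
C = z.conj().T @ z / n

# Maximum eigenvalue: the statistic used for detection and dominant-scatterer estimation
lam_max = np.linalg.eigvalsh(C)[-1]
```

With many looks the maximum sample eigenvalue concentrates near the true value of 4; with few looks it is biased upward, which is why its distribution matters for detection thresholds.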

  16. Gaussian process based intelligent sampling for measuring nano-structure surfaces

    Science.gov (United States)

    Sun, L. J.; Ren, M. J.; Yin, Y. H.

    2016-09-01

    Nanotechnology is the science and engineering of manipulating matter at the nanoscale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to relatively small areas of hundreds of micrometers, with very low efficiency. Intelligent sampling strategies are therefore required to improve the scanning efficiency when measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method uses Gaussian process based Bayesian regression as a mathematical foundation to represent the surface geometry, and the posterior estimate of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then selected adaptively by determining the candidate position most likely to lie outside the required tolerance zone, and is inserted to update the model iteratively. Simulations on both nominal and manufactured nano-structure surfaces have been conducted to verify the validity of the proposed method. The results imply that the proposed method significantly improves measurement efficiency when measuring large structured surfaces.
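The point-selection rule described above, probing where the model is most uncertain, can be sketched with a plain GP posterior-variance computation. The RBF kernel, 1-D grid and length scale here are illustrative assumptions; the paper's method additionally compares the uncertainty against the tolerance zone:

```python
import numpy as np

def rbf(a, b, length_scale=0.2):
    """Squared-exponential kernel between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def next_sample_point(x_train, candidates, noise=1e-8):
    """Pick the candidate with the largest GP posterior variance."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf(candidates, x_train)
    # Posterior variance: k(x,x) - k* K^{-1} k*^T (diagonal terms only)
    var = 1.0 - np.einsum('ij,jk,ik->i', k_star, np.linalg.inv(K), k_star)
    return candidates[np.argmax(var)]

x_train = np.array([0.0, 0.5, 1.0])          # positions measured so far
candidates = np.linspace(0.0, 1.0, 101)      # possible next probe positions
x_next = next_sample_point(x_train, candidates)  # lands midway between samples
```

In the full method the chosen point is measured, appended to the training set, and the selection repeated until the predicted uncertainty everywhere falls within tolerance.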

  17. Core Business Selection Based on Ant Colony Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Yu Lan

    2014-01-01

    The core business is the most important business of an enterprise with diversified operations. In this paper, we first introduce the definition and characteristics of the core business and then describe the ant colony clustering algorithm. To test the effectiveness of the proposed method, Tianjin Port Logistics Development Co., Ltd. is selected as the research object. Based on the current state of the company's development, its core business can be identified by the ant colony clustering algorithm. The results indicate that the proposed method is an effective way to determine the core business of a company.

  18. Multichannel Selective Femtosecond Coherent Control Based on Symmetry Properties

    International Nuclear Information System (INIS)

    Amitay, Zohar; Gandman, Andrey; Chuntonov, Lev; Rybak, Leonid

    2008-01-01

    We present and implement a new scheme for extended multichannel selective femtosecond coherent control based on symmetry properties of the excitation channels. Here, an atomic nonresonant two-photon absorption channel is coherently incorporated in a resonance-mediated (2+1) three-photon absorption channel. By proper pulse shaping, utilizing the invariance of the two-photon absorption to specific phase transformations of the pulse, the three-photon absorption is tuned independently over an order-of-magnitude yield range for any possible two-photon absorption yield. Noticeable is a set of "two-photon dark pulses" inducing widely tunable three-photon absorption.

  19. Sample selection, preparation methods, and the apparent tensile properties of silkworm (B. mori) cocoon silk.

    Science.gov (United States)

    Reed, Emily J; Bianchini, Lindsay L; Viney, Christopher

    2012-06-01

    Reported literature values of the tensile properties of natural silk cover a wide range. While much of this inconsistency is the result of variability that is intrinsic to silk, some is also a consequence of differences in the way that silk is prepared for tensile tests. Here we explore how the measured mechanical properties of Bombyx mori cocoon silk are affected by two intrinsic factors (the location within the cocoon from which the silk is collected, and the color of the silk) and two extrinsic factors (the storage conditions prior to testing, and different styles of reeling the fiber). We find that the extrinsic, and therefore controllable, factors can affect the properties more than the intrinsic ones studied. Our results suggest that enhanced inter-laboratory collaborations that lead to standardized sample collection, handling, and storage protocols prior to mechanical testing would help to decrease unnecessary (and complicating) variation in reported tensile properties. Copyright © 2011 Wiley Periodicals, Inc.

  20. Selected social-psychological characteristics of a sample of Israeli cancer patients: facts and implications.

    Science.gov (United States)

    Baider, L; Sarell, M; Edelstein, E L

    1982-02-01

    This paper presents some sociodemographic, medical and psychological data gathered in an ongoing study aimed at early identification of the psychosocial coping potential of adult, Jewish cancer patients in Israel. We show the distribution of a sample of 86 patients on variables such as age, sex, marital status, place of birth, religiosity, medical diagnosis, treatment modality, and duration of illness. We describe the patients' reported behavioral changes, their perceptions of the nature and causes of their illness, and their views on the supportive resources available to them. We also analyze patients' expectations regarding their future functioning in the areas of work, household, family and social relations, and leisure-time activities. On the basis of these initial analyses, we present some recommendations for the improvement of social-psychological intervention with cancer patients.

  1. Carbon Nanotube-Based Ion Selective Sensors for Wearable Applications.

    Science.gov (United States)

    Roy, Soumyendu; David-Pur, Moshe; Hanein, Yael

    2017-10-11

    Wearable electronics offer new opportunities in a wide range of applications, especially sweat analysis using skin sensors. A fundamental challenge in these applications is the formation of sensitive and stable electrodes. In this article we report the development of a wearable sensor based on carbon nanotube (CNT) electrode arrays for sweat sensing. Solid-state ion selective electrodes (ISEs), sensitive to Na+ ions, were prepared by drop coating plasticized poly(vinyl chloride) (PVC) doped with ionophore and ion exchanger on CNT electrodes. The ion selective membrane (ISM) filled the intertubular spaces of the highly porous CNT film and formed an attachment that was stronger than that achieved with flat Au, Pt, or carbon electrodes. Concentration of the ISM solution used influenced the attachment to the CNT film, the ISM surface morphology, and the overall performance of the sensor. Sensitivity of 56 ± 3 mV/decade to Na+ ions was achieved. Optimized solid-state reference electrodes (REs), suitable for wearable applications, were prepared by coating CNT electrodes with colloidal dispersion of Ag/AgCl, agarose hydrogel with 0.5 M NaCl, and a passivation layer of PVC doped with NaCl. The CNT-based REs had low sensitivity (-1.7 ± 1.2 mV/decade) toward the NaCl solution and high repeatability and were superior to bare Ag/AgCl, metals, carbon, and CNT films, reported previously as REs. CNT-based ISEs were calibrated against CNT-based REs, and the short-term stability of the system was tested. We demonstrate that CNT-based devices implemented on a flexible support are a very attractive platform for future wearable technology devices.
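A sensitivity such as the reported 56 ± 3 mV/decade is obtained by regressing electrode potential against the base-10 logarithm of ion concentration. The calibration values below are hypothetical, chosen only to illustrate a near-Nernstian slope:

```python
import numpy as np

# Hypothetical calibration of a Na+ ion-selective electrode: potentials (mV)
# recorded at four NaCl concentrations (M). Values are illustrative only.
conc = np.array([1e-4, 1e-3, 1e-2, 1e-1])
emf  = np.array([50.0, 106.0, 162.0, 218.0])

# Nernstian behaviour predicts E = E0 + S * log10(a), with S about
# 59.2 mV/decade for a monovalent cation at 25 C. Fit S by least squares.
slope, intercept = np.polyfit(np.log10(conc), emf, 1)
print(f"slope = {slope:.1f} mV/decade")
```

A sub-Nernstian slope like this one (56 rather than 59.2 mV/decade) is a common, acceptable outcome for solid-state ISEs.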

  2. Road Network Selection Based on Road Hierarchical Structure Control

    Directory of Open Access Journals (Sweden)

    HE Haiwei

    2015-04-01

    Full Text Available A new road network selection method based on hierarchical structure is studied. Firstly, the road network is built as strokes, which are then classified into hierarchical collections according to the criterion of betweenness centrality (BC) value. Secondly, the hierarchical structure of the strokes is enhanced using a structural characteristic identification technique. Thirdly, an importance calculation model is established according to the relationships among the hierarchical levels of the strokes. Finally, the importance values of the strokes are obtained through the model's hierarchical calculation, and the road network is selected accordingly. Tests are done to verify the advantage of this method by comparing it with other common stroke-oriented methods on three kinds of typical road network data. Comparison of the results shows that this method needs little semantic data and can well eliminate the negative influence of edge strokes caused by the BC-value criterion. It is therefore better at maintaining the global hierarchical structure of the road network and is suitable for the selection of various kinds of road networks.
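The BC-value classification in the first step rests on betweenness centrality, which can be computed for an unweighted stroke graph with Brandes' algorithm. The four-stroke toy graph and the mean-score threshold below are illustrative assumptions, not the paper's data or classification rule:

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality for an unweighted graph given as
    {node: [neighbors]}; returns {node: score} (unnormalized, counting
    each ordered source-target pair)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Single-source shortest paths (BFS), counting path multiplicities.
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Back-propagation of pair dependencies.
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Toy "stroke" graph: a path A-B-C-D; the middle strokes carry all routes.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
scores = betweenness(adj)
# Strokes above the mean score form the top of the hierarchy.
top = [v for v, s in scores.items() if s > sum(scores.values()) / len(scores)]
```

On real road data the strokes would first be built by concatenating road segments with good continuity, and the hierarchy would use several BC thresholds rather than a single mean cut.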

  3. The AlSi10Mg samples produced by selective laser melting: single track, densification, microstructure and mechanical behavior

    International Nuclear Information System (INIS)

    Wei, Pei; Wei, Zhengying; Chen, Zhen; Du, Jun; He, Yuyang; Li, Junfeng; Zhou, Yatong

    2017-01-01

    Highlights: • The thermal behavior of AlSi10Mg molten pool was analyzed. • The SLM-processed sample with a relatively low surface roughness was obtained. • Effects of parameters on surface topography of scan track were investigated. • Effects of parameters on microstructure of parts were investigated. • Optimum processing parameters for AlSi10Mg SLM was obtained. - Abstract: The densification behavior and attendant microstructural characteristics of the selective laser melting (SLM) processed AlSi10Mg alloy affected by the processing parameters were systematically investigated. The samples with a single track were produced by SLM to study the influences of laser power and scanning speed on the surface morphologies of scan tracks. Additionally, the bulk samples were produced to investigate the influence of the laser power, scanning speed, and hatch spacing on the densification level and the resultant microstructure. The experimental results showed that the level of porosity of the SLM-processed samples was significantly governed by energy density of laser beam and the hatch spacing. The tensile properties of SLM-processed samples and the attendant fracture surface can be enhanced by decreasing the level of porosity. The microstructure of SLM-processed samples consists of supersaturated Al-rich cellular structure along with eutectic Al/Si situated at the cellular boundaries. The Si content in the cellular boundaries increases with increasing the laser power and decreasing the scanning speed. The hardness of SLM-processed samples was significantly improved by this fine microstructure compared with the cast samples. Moreover, the hardness of SLM-processed samples at overlaps was lower than the hardness observed at track cores.
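The "energy density of laser beam" that governs porosity is commonly summarized as the volumetric energy density E = P/(v·h·t). A minimal sketch, with parameter values that are illustrative rather than taken from the paper:

```python
# Volumetric energy density commonly used to compare SLM parameter sets:
# E = P / (v * h * t)  [J/mm^3], with laser power P (W), scan speed v (mm/s),
# hatch spacing h (mm), and layer thickness t (mm).
def energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# Illustrative AlSi10Mg-like parameter set (not the paper's optimum).
e = energy_density(power_w=350.0, speed_mm_s=1150.0, hatch_mm=0.17, layer_mm=0.03)
print(f"{e:.1f} J/mm^3")
```

Too low a value of E leaves lack-of-fusion pores; too high a value causes keyholing, which is why porosity is governed jointly by power, speed, and hatch spacing.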

  4. The AlSi10Mg samples produced by selective laser melting: single track, densification, microstructure and mechanical behavior

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Pei; Wei, Zhengying, E-mail: zywei@mail.xjtu.edu.cn; Chen, Zhen; Du, Jun; He, Yuyang; Li, Junfeng; Zhou, Yatong

    2017-06-30

    Highlights: • The thermal behavior of AlSi10Mg molten pool was analyzed. • The SLM-processed sample with a relatively low surface roughness was obtained. • Effects of parameters on surface topography of scan track were investigated. • Effects of parameters on microstructure of parts were investigated. • Optimum processing parameters for AlSi10Mg SLM was obtained. - Abstract: The densification behavior and attendant microstructural characteristics of the selective laser melting (SLM) processed AlSi10Mg alloy affected by the processing parameters were systematically investigated. The samples with a single track were produced by SLM to study the influences of laser power and scanning speed on the surface morphologies of scan tracks. Additionally, the bulk samples were produced to investigate the influence of the laser power, scanning speed, and hatch spacing on the densification level and the resultant microstructure. The experimental results showed that the level of porosity of the SLM-processed samples was significantly governed by energy density of laser beam and the hatch spacing. The tensile properties of SLM-processed samples and the attendant fracture surface can be enhanced by decreasing the level of porosity. The microstructure of SLM-processed samples consists of supersaturated Al-rich cellular structure along with eutectic Al/Si situated at the cellular boundaries. The Si content in the cellular boundaries increases with increasing the laser power and decreasing the scanning speed. The hardness of SLM-processed samples was significantly improved by this fine microstructure compared with the cast samples. Moreover, the hardness of SLM-processed samples at overlaps was lower than the hardness observed at track cores.

  5. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    Science.gov (United States)

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

    RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
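One widely used between-sample method, the median-of-ratios estimator popularized by DESeq, illustrates exactly the kind of assumption the authors discuss: it presumes that most genes are not differentially expressed. A minimal sketch on toy counts (rows are genes, columns are samples, and the second sample is simulated as sequenced twice as deep):

```python
import numpy as np

# Toy count matrix: 4 genes x 2 samples; sample 2 has double sequencing depth.
counts = np.array([[10, 20],
                   [20, 40],
                   [30, 60],
                   [100, 200]], dtype=float)

# Geometric mean per gene across samples forms a pseudo-reference sample.
log_geo_mean = np.mean(np.log(counts), axis=1)

# Size factor = median ratio of each sample's counts to the reference;
# the median is what encodes the "most genes unchanged" assumption.
size_factors = np.exp(np.median(np.log(counts) - log_geo_mean[:, None], axis=0))

normalized = counts / size_factors   # depth-corrected expression measures
```

When the assumption is violated (say, a global shift in expression), the median ratio no longer reflects depth alone, which is the failure mode the article analyzes.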

  6. Rational approach to solvent system selection for liquid-liquid extraction-assisted sample pretreatment in counter-current chromatography.

    Science.gov (United States)

    Wang, Jiajia; Gu, Dongyu; Wang, Miao; Guo, Xinfeng; Li, Haoquan; Dong, Yue; Guo, Hong; Wang, Yi; Fan, Mengqi; Yang, Yi

    2017-05-15

    A rational liquid-liquid extraction approach was established to pre-treat samples for high-speed counter-current chromatography (HSCCC). n-Hexane-ethyl acetate-methanol-water (4:5:4:5, v/v) and (1:5:1:5, v/v) were selected as solvent systems for liquid-liquid extraction by systematically screening the K values of the target compounds, to remove low- and high-polarity impurities in the sample, respectively. After liquid-liquid extraction was performed, 1.4 g of crude sample II was obtained from 18.5 g of crude sample I, which had been extracted from the flowers of Robinia pseudoacacia L., and was then separated with HSCCC by using a solvent system composed of n-hexane-ethyl acetate-methanol-water (1:2:1:2, v/v). As a result, 31 mg of robinin and 37 mg of kaempferol 7-O-α-l-rhamnopyranoside were isolated from 200 mg of crude sample II in a single run of HSCCC. A scale-up separation was also performed, and 160 mg of robinin with 95% purity and 188 mg of kaempferol 7-O-α-l-rhamnopyranoside with 97% purity were produced from 1.2 g of crude sample II. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Fuzzy Axiomatic Design approach based green supplier selection

    DEFF Research Database (Denmark)

    Kannan, Devika; Govindan, Kannan; Rajendran, Sivakumar

    2015-01-01

    Green Supply Chain Management (GSCM) is a developing concept recently utilized by manufacturing firms of all sizes. All industries, small or large, seek improvements in the purchasing of raw materials, manufacturing, allocation, transportation efficiency, in curbing storage time, and in importing; supply chains must be responsible in addition to being efficiently managed. A significant way to implement responsible GSCM is to reconsider, in innovative ways, the purchase and supply cycle, and a preliminary step would be to ensure that the supplier of goods successfully incorporates green criteria. Therefore, this paper proposes a multi-criteria decision-making (MCDM) approach called Fuzzy Axiomatic Design (FAD) to select the best green supplier for a Singapore-based plastic manufacturing company. At first, the environmental criteria were developed along with the traditional criteria based on the literature review.

  8. Tunable antenna radome based on graphene frequency selective surface

    Science.gov (United States)

    Qu, Meijun; Rao, Menglou; Li, Shufang; Deng, Li

    2017-09-01

    In this paper, a graphene-based frequency selective surface (FSS) is proposed. The proposed FSS exhibits a tunable bandpass filtering characteristic due to the alterable conductivity of the graphene strips, which is controlled by the chemical potential. Based on the reconfigurable bandpass property of the proposed FSS, a cylindrical antenna radome is designed using the FSS unit cells. A conventional omnidirectional dipole can realize a two-beam directional pattern when it is placed into the proposed antenna radome. Forward and backward endfire radiation of the dipole loaded with the radome is realized by properly adjusting the chemical potential. The proposed antenna radome is extremely promising for beam-scanning in terahertz and mid-infrared plasmonic devices and systems when the gain of a conventional antenna needs to be enhanced.

  9. SELECTION OF RECIPIENTS FOR HEART TRANSPLANTATION BASED ON URGENCY STATUS

    Directory of Open Access Journals (Sweden)

    O. A. Sujayeva

    2014-01-01

    Full Text Available The article provides an overview of current international recommendations on the selection of heart transplantation recipients based on urgency status. The authors found that a cardiopulmonary bicycle stress test revealed additional criteria for high risk of death within 1 year. These additional criteria were: maximal oxygen consumption VO2max < 30% of the value expected for the patient's age; an increase of VD/VT (the ratio of physiologic dead space to tidal volume) during the test; and maximal tolerance to physical loading ≤ 50 W and/or < 20% of the value expected for the patient's age. The authors created a mathematical model predicting death within 1 year based on the above data. Special software estimating the probability of death within 1 year was also created.

  10. Polymeric ionic liquid-based portable tip microextraction device for on-site sample preparation of water samples.

    Science.gov (United States)

    Chen, Lei; Pei, Junxian; Huang, Xiaojia; Lu, Min

    2018-06-05

    On-site sample preparation is highly desired because it avoids the transportation of large-volume samples and ensures the accuracy of the analytical results. In this work, a portable prototype tip microextraction device (TMD) was designed and developed for on-site sample pretreatment. The assembly procedure of the TMD is quite simple. Firstly, a polymeric ionic liquid (PIL)-based adsorbent was prepared in situ in a pipette tip. After that, the tip was connected to a syringe driven by a bidirectional motor. The flow rates in the adsorption and desorption steps were controlled accurately by the motor. To evaluate the practicability of the developed device, the TMD was used for on-site preparation of water samples and combined with high-performance liquid chromatography with diode array detection to measure trace estrogens in water samples. Under the most favorable conditions, the limits of detection (LODs, S/N = 3) for the target analytes were in the range of 4.9-22 ng/L, with good coefficients of determination. A confirmatory study well evidences that the extraction performance of the TMD is comparable to that of the traditional laboratory solid-phase extraction process, but the proposed TMD is simpler and more convenient. At the same time, the TMD avoids complicated sampling and transfer steps for large-volume water samples. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. The influence of extrinsic motivation on competition-based selection.

    Science.gov (United States)

    Sänger, Jessica; Wascher, Edmund

    2011-10-10

    The biased competition approach to visuo-spatial attention proposes that the selection of competing information is effected by the saliency of the stimulus as well as by an intention-based bias of attention towards behavioural goals. Wascher and Beste (2010) [32] showed that the detection of relevant information depends on its relative saliency compared to irrelevant conflicting stimuli. Furthermore the N1pc, N2pc and N2 of the EEG varied with the strength of the conflict. However, this system could also be modulated by rather global mechanisms like attentional effort. The present study investigates such modulations by testing the influence of extrinsic motivation on the selection of competing stimuli. Participants had to detect a luminance change in various conditions among others against an irrelevant orientation change. Half of the participants were motivated to maximize their performance by the announcement of a monetary reward for correct responses. Participants who were motivated had lower error rates than participants who were not motivated. The event-related lateralizations of the EEG showed no motivation-related effect on the N1pc, which reflects the initial saliency driven orientation of attention towards the more salient stimulus. The subsequent N2pc was enhanced in the motivation condition. Extrinsic motivation was also accompanied by enhanced fronto-central negativities. Thus, the data provide evidence that the improvement of selection performance when participants were extrinsically motivated by announcing a reward was not due to changes in the initial saliency based processing of information but was foremost mediated by improved higher-level mechanisms. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Pilot studies for the North American Soil Geochemical Landscapes Project - Site selection, sampling protocols, analytical methods, and quality control protocols

    Science.gov (United States)

    Smith, D.B.; Woodruff, L.G.; O'Leary, R. M.; Cannon, W.F.; Garrett, R.G.; Kilburn, J.E.; Goldhaber, M.B.

    2009-01-01

    In 2004, the US Geological Survey (USGS) and the Geological Survey of Canada sampled and chemically analyzed soils along two transects across Canada and the USA in preparation for a planned soil geochemical survey of North America. This effort was a pilot study to test and refine sampling protocols, analytical methods, quality control protocols, and field logistics for the continental survey. A total of 220 sample sites were selected at approximately 40-km intervals along the two transects. The ideal sampling protocol at each site called for a sample from a depth of 0-5 cm and a composite of each of the O, A, and C horizons. Samples were analyzed for Ca, Fe, K, Mg, Na, S, Ti, Ag, As, Ba, Be, Bi, Cd, Ce, Co, Cr, Cs, Cu, Ga, In, La, Li, Mn, Mo, Nb, Ni, P, Pb, Rb, Sb, Sc, Sn, Sr, Te, Th, Tl, U, V, W, Y, and Zn by inductively coupled plasma-mass spectrometry and inductively coupled plasma-atomic emission spectrometry following a near-total digestion in a mixture of HCl, HNO3, HClO4, and HF. Separate methods were used for Hg, Se, total C, and carbonate-C on this same size fraction. Only Ag, In, and Te had a large percentage of concentrations below the detection limit. Quality control (QC) of the analyses was monitored at three levels: the laboratory performing the analysis, the USGS QC officer, and the principal investigator for the study. This level of review resulted in an average of one QC sample for every 20 field samples, which proved to be minimally adequate for such a large-scale survey. Additional QC samples should be added to monitor within-batch quality so that no more than 10 samples are analyzed between QC samples. Only Cr (77%), Y (82%), and Sb (80%) fell outside the acceptable limits of accuracy (% recovery between 85 and 115%) because of likely residence in mineral phases resistant to the acid digestion. A separate sample of 0-5-cm material was collected at each site for determination of organic compounds.
A subset of 73 of these samples was analyzed for a suite of

  13. Green Synthesis of Fluorescent Carbon Dots for Selective Detection of Tartrazine in Food Samples.

    Science.gov (United States)

    Xu, Hua; Yang, Xiupei; Li, Gu; Zhao, Chuan; Liao, Xiangjun

    2015-08-05

    A simple, economical, and green method for the preparation of water-soluble, highly fluorescent carbon quantum dots (C-dots) has been developed via a hydrothermal process using aloe as a carbon source. The synthesized C-dots were characterized by atomic force microscopy (AFM), transmission electron microscopy (TEM), fluorescence spectrophotometry, and UV-vis absorption spectra, as well as Fourier transform infrared spectroscopy (FTIR). The results reveal that the as-prepared C-dots were spherical in shape with an average diameter of 5 nm and emitted bright yellow photoluminescence (PL) with a quantum yield of approximately 10.37%. The surface of the C-dots was rich in hydroxyl groups, and the C-dots presented various merits including high fluorescence quantum yield, excellent photostability, low toxicity, and satisfactory solubility. Additionally, we found that one of the widely used synthetic food colorants, tartrazine, could cause strong fluorescence quenching of the C-dots through a static quenching process. The decrease of fluorescence intensity made it possible to determine tartrazine in a linear range extending from 0.25 to 32.50 μM. This observation was further successfully applied to the determination of tartrazine in food samples collected from local markets, suggesting its great potential for routine food analysis. Results from our study may shed light on the production of fluorescent and biocompatible nanocarbons owing to our simple and environmentally benign strategy of synthesizing C-dots with aloe as a carbon source.
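Static quenching of the kind described is routinely quantified with the Stern-Volmer relation F0/F = 1 + KSV[Q]. The intensities and KSV value below are simulated for illustration, not taken from the paper:

```python
import numpy as np

# Stern-Volmer analysis of fluorescence quenching: F0/F = 1 + Ksv * [Q].
# Intensities are simulated assuming Ksv = 0.05 per uM (hypothetical value).
q_um = np.array([0.0, 5.0, 10.0, 20.0, 32.5])   # tartrazine concentration, uM
f0 = 1000.0                                     # intensity without quencher
f = f0 / (1.0 + 0.05 * q_um)                    # quenched intensities

# Linear fit of (F0/F - 1) against [Q] recovers the Stern-Volmer constant.
ksv, intercept = np.polyfit(q_um, f0 / f - 1.0, 1)
```

A linear Stern-Volmer plot over the working range is what supports a single quenching mechanism and, together with temperature dependence, the static (ground-state complex) interpretation.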

  14. Selection of Vendor Based on Intuitionistic Fuzzy Analytical Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Prabjot Kaur

    2014-01-01

    Full Text Available The business environment is characterized by greater domestic and international competition in the global market. Vendors play a key role in achieving corporate competitiveness. It is not easy, however, to identify good vendors because evaluation is based on multiple criteria. In practice, for the vendor selection problem (VSP), most of the input information about the criteria is not known precisely. The intuitionistic fuzzy set is an extension of classical fuzzy set theory (FST) and a suitable way to deal with imprecision. In other words, the application of intuitionistic fuzzy sets instead of fuzzy sets introduces another degree of freedom, called the nonmembership function, into the set description. In this paper, we propose a triangular intuitionistic fuzzy number based approach for the vendor selection problem using the analytical hierarchy process. The crisp data of the vendors are represented in the form of triangular intuitionistic fuzzy numbers. By applying AHP, which involves decomposition, pairwise comparison, and deriving priorities for the various levels of the hierarchy, an overall crisp priority is obtained for ranking the best vendor. A numerical example illustrates our method. Lastly, a sensitivity analysis is performed to find the most critical criterion on the basis of which the vendor is selected.
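The AHP core of the method, deriving priorities from a reciprocal pairwise comparison matrix, can be sketched in crisp form with the geometric-mean approximation. The 3x3 vendor matrix below is illustrative, and the intuitionistic-fuzzy extension is omitted:

```python
import numpy as np

# Reciprocal pairwise comparison matrix for three hypothetical vendors:
# A[i, j] says how strongly vendor i is preferred to vendor j (Saaty scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Geometric-mean (logarithmic least squares) approximation of the
# principal eigenvector gives the priority weights.
geo = np.prod(A, axis=1) ** (1.0 / A.shape[0])
weights = geo / geo.sum()          # priorities, summing to 1
best = int(np.argmax(weights))     # index of the best vendor
```

In the paper's approach the matrix entries would instead be triangular intuitionistic fuzzy numbers, with membership and nonmembership degrees aggregated before a crisp ranking is extracted.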

  15. Selectively Encrypted Pull-Up Based Watermarking of Biometric data

    Science.gov (United States)

    Shinde, S. A.; Patel, Kushal S.

    2012-10-01

    Biometric authentication systems are becoming increasingly popular due to their potential usage in information security. However, digital biometric data (e.g. a thumb impression) are themselves vulnerable to security attacks. Various methods are available to secure biometric data. In biometric watermarking the data are embedded in an image container and are only retrieved if the secret key is available. This container image is encrypted to give more security against attack. As wireless devices are equipped with batteries as their power supply, they have limited computational capabilities; therefore, to reduce energy consumption we use the method of selective encryption of the container image. The bit pull-up-based biometric watermarking scheme is based on amplitude modulation and bit priority, which reduces the retrieval error rate to a great extent. By using the selective encryption mechanism we expect greater time efficiency in both encryption and decryption. A significant reduction in error rate is expected to be achieved by the bit pull-up method.

  16. Selecting Feature Subsets Based on SVM-RFE and the Overlapping Ratio with Applications in Bioinformatics.

    Science.gov (United States)

    Lin, Xiaohui; Li, Chao; Zhang, Yanhui; Su, Benzhe; Fan, Meng; Wei, Hai

    2017-12-26

    Feature selection is an important topic in bioinformatics. Defining informative features from complex high dimensional biological data is critical in disease study, drug development, etc. Support vector machine-recursive feature elimination (SVM-RFE) is an efficient feature selection technique that has shown its power in many applications. It ranks the features according to the recursive feature deletion sequence based on SVM. In this study, we propose a method, SVM-RFE-OA, which combines the classification accuracy rate and the average overlapping ratio of the samples to determine the number of features to be selected from the feature rank of SVM-RFE. Meanwhile, to measure the feature weights more accurately, we propose a modified SVM-RFE-OA (M-SVM-RFE-OA) algorithm that temporarily screens out the samples lying in a heavy overlapping area in each iteration. The experiments on the eight public biological datasets show that the discriminative ability of the feature subset could be measured more accurately by combining the classification accuracy rate with the average overlapping degree of the samples compared with using the classification accuracy rate alone, and shielding the samples in the overlapping area made the calculation of the feature weights more stable and accurate. The methods proposed in this study can also be used with other RFE techniques to define potential biomarkers from big biological data.
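The recursive elimination loop at the heart of SVM-RFE (train a linear model, drop the feature with the smallest absolute weight, repeat) can be sketched with a plain logistic classifier standing in for the SVM; that substitution and the toy data are assumptions made to keep the example dependency-free, and the OA sample-screening step is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y, epochs=300, lr=0.1):
    """Logistic regression via gradient descent; returns the weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def rfe_ranking(X, y):
    """RFE: repeatedly refit and delete the lowest-|weight| feature.
    Returns feature indices ordered from least to most important."""
    remaining = list(range(X.shape[1]))
    eliminated = []
    while len(remaining) > 1:
        w = fit_linear(X[:, remaining], y)
        worst = remaining[int(np.argmin(np.abs(w)))]
        remaining.remove(worst)
        eliminated.append(worst)
    return eliminated + remaining   # last entry = top-ranked feature

# Toy data: only feature 0 carries the class signal; features 1-3 are noise.
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(float)
ranking = rfe_ranking(X, y)
```

The weights are refitted after every deletion rather than ranked once, which is exactly what distinguishes RFE from a single-pass filter on the initial weights.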

  17. Selecting Feature Subsets Based on SVM-RFE and the Overlapping Ratio with Applications in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Xiaohui Lin

    2017-12-01

    Full Text Available Feature selection is an important topic in bioinformatics. Defining informative features from complex high dimensional biological data is critical in disease study, drug development, etc. Support vector machine-recursive feature elimination (SVM-RFE) is an efficient feature selection technique that has shown its power in many applications. It ranks the features according to the recursive feature deletion sequence based on SVM. In this study, we propose a method, SVM-RFE-OA, which combines the classification accuracy rate and the average overlapping ratio of the samples to determine the number of features to be selected from the feature rank of SVM-RFE. Meanwhile, to measure the feature weights more accurately, we propose a modified SVM-RFE-OA (M-SVM-RFE-OA) algorithm that temporarily screens out the samples lying in a heavy overlapping area in each iteration. The experiments on the eight public biological datasets show that the discriminative ability of the feature subset could be measured more accurately by combining the classification accuracy rate with the average overlapping degree of the samples compared with using the classification accuracy rate alone, and shielding the samples in the overlapping area made the calculation of the feature weights more stable and accurate. The methods proposed in this study can also be used with other RFE techniques to define potential biomarkers from big biological data.

  18. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    Science.gov (United States)

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  19. GMDH-Based Semi-Supervised Feature Selection for Electricity Load Classification Forecasting

    Directory of Open Access Journals (Sweden)

    Lintao Yang

    2018-01-01

    Full Text Available With the development of smart power grids, communication network technology, and sensor technology, there has been an exponential growth in complex electricity load data. Irregular electricity load fluctuations caused by weather and holiday factors disrupt the daily operation of the power companies. To deal with these challenges, this paper investigates a day-ahead electricity peak load interval forecasting problem. It transforms the conventional continuous forecasting problem into a novel interval forecasting problem, and then further converts the interval forecasting problem into a classification forecasting problem. In addition, an indicator system influencing the electricity load is established from three dimensions, namely the load series, calendar data, and weather data. A semi-supervised feature selection algorithm is proposed to address the electricity load classification forecasting issue based on group method of data handling (GMDH) technology. The proposed algorithm consists of three main stages: (1) training the basic classifier; (2) selectively marking the most suitable samples from the unlabeled data and adding them to the initial training set; and (3) training the classification models on the final training set and classifying the test samples. An empirical analysis of electricity load datasets from four Chinese cities is conducted. Results show that the proposed model can address the electricity load classification forecasting problem more efficiently and effectively than the FW-Semi FS (forward semi-supervised feature selection) and GMDH-U (GMDH-based semi-supervised feature selection for customer classification) models.
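Stages (1)-(3) describe a classic self-training loop. The sketch below follows that loop with a nearest-centroid classifier standing in for the GMDH base model (an assumption made for brevity) on toy two-class data; the most confidently labelled unlabelled samples are moved into the training set each round:

```python
import numpy as np

rng = np.random.default_rng(1)

def centroids(X, y):
    """Class centroids for a two-class problem."""
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict_with_margin(X, cents):
    """Predicted labels plus a confidence proxy (distance margin)."""
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    margin = np.abs(d[:, 0] - d[:, 1])
    return labels, margin

# Toy 2-class load data: two Gaussian blobs (e.g. "peak" vs "off-peak" days).
X_lab = np.vstack([rng.normal(0, 0.5, (10, 2)), rng.normal(3, 0.5, (10, 2))])
y_lab = np.array([0] * 10 + [1] * 10)
X_unl = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(3, 0.5, (30, 2))])

for _ in range(3):                          # a few self-training rounds
    cents = centroids(X_lab, y_lab)         # stage (1): train base classifier
    labels, margin = predict_with_margin(X_unl, cents)
    take = margin.argsort()[-10:]           # stage (2): 10 most confident samples
    X_lab = np.vstack([X_lab, X_unl[take]])
    y_lab = np.concatenate([y_lab, labels[take]])
    X_unl = np.delete(X_unl, take, axis=0)
# Stage (3) would retrain on the enlarged set and classify the test samples.
```

The confidence threshold (here, a fixed top-10 margin cut per round) is the main design choice: too permissive a rule propagates early labelling errors into the training set.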

  20. Soft X-Ray Observations of a Complete Sample of X-Ray--selected BL Lacertae Objects

    Science.gov (United States)

    Perlman, Eric S.; Stocke, John T.; Wang, Q. Daniel; Morris, Simon L.

    1996-01-01

    We present the results of ROSAT PSPC observations of the X-ray-selected BL Lacertae objects (XBLs) in the complete Einstein Extended Medium Sensitivity Survey (EMSS) sample. None of the objects is resolved in its respective PSPC image, but all are easily detected. All BL Lac objects in this sample are well fitted by single power laws. Their X-ray spectra exhibit a variety of spectral slopes, with best-fit energy power-law spectral indices between α = 0.5 and 2.3. The PSPC spectra of this sample are slightly steeper than those typical of flat radio-spectrum quasars. Because almost all of the individual PSPC spectral indices are equal to or slightly steeper than the overall optical-to-X-ray spectral indices for these same objects, we infer that BL Lac soft X-ray continua are dominated by steep-spectrum synchrotron radiation from a broad X-ray jet, rather than flat-spectrum inverse Compton radiation linked to the narrower radio/millimeter jet. The softness of the X-ray spectra of these XBLs revives the possibility proposed by Guilbert, Fabian, & McCray (1983) that BL Lac objects are lineless because the circumnuclear gas cannot be heated sufficiently to permit two stable gas phases, the cooler of which would comprise the broad emission-line clouds. Because unified schemes predict that hard self-Compton radiation is beamed only into a small solid angle in BL Lac objects, the steep-spectrum synchrotron tail controls the temperature of the circumnuclear gas at r ≤ 10^18 cm and prevents broad-line cloud formation. We use these new ROSAT data to recalculate the X-ray luminosity function and cosmological evolution of the complete EMSS sample by determining accurate K-corrections for the sample and estimating the effects of variability and the possibility of incompleteness in the sample. Our analysis confirms that XBLs are evolving "negatively," opposite in sense to quasars, with Ve/Va = 0.331 ± 0.060. The statistically significant difference between the values for X
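A best-fit energy power-law index like those quoted above can be illustrated with a two-point estimate for a spectrum S(E) ∝ E^(−α). This is a simplified stand-in for a full PSPC spectral fit, and the function name is mine:

```python
import math

def energy_spectral_index(f1, e1, f2, e2):
    """Two-point estimate of the energy index alpha for a power-law
    spectrum S(E) proportional to E**(-alpha), given fluxes f1, f2
    measured at energies e1, e2."""
    return -math.log(f2 / f1) / math.log(e2 / e1)
```

For example, a source whose flux drops by a factor of four when the energy doubles has α = 2.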

  1. Efficacy of liquid-based cytology versus conventional smears in FNA samples

    Directory of Open Access Journals (Sweden)

    Kalpalata Tripathy

    2015-01-01

    Conclusion: LBC performed on FNA samples can be a simple and valuable technique. Only in a few selected cases, where the background factor is an essential diagnostic clue, is a combination of both CP and TP necessary.

  2. Adaptive Sampling for Nonlinear Dimensionality Reduction Based on Manifold Learning

    DEFF Research Database (Denmark)

    Franz, Thomas; Zimmermann, Ralf; Goertz, Stefan

    2017-01-01

    We make use of the non-intrusive dimensionality reduction method Isomap in order to emulate nonlinear parametric flow problems that are governed by the Reynolds-averaged Navier-Stokes equations. Isomap is a manifold learning approach that provides a low-dimensional embedding space that is approxi… …to detect and fill up gaps in the sampling in the embedding space. The performance of the proposed manifold filling method will be illustrated by numerical experiments, where we consider nonlinear parameter-dependent steady-state Navier-Stokes flows in the transonic regime.

  3. SELECTION OF FISÁLIS POPULATIONS FOR HYBRIDIZATIONS, BASED ON FRUIT TRAITS

    Directory of Open Access Journals (Sweden)

    NICOLE TREVISANI

    2016-01-01

    Full Text Available ABSTRACT The objective of this study was to characterize the genetic variability in fisális populations and select promising parents based on fruit traits. The experimental design consisted of randomized blocks with six populations. Five plants per treatment were sampled. The evaluated traits were fruit weight, capsule weight, 1000-seed weight and fruit diameter. The data were subjected to multivariate analysis of variance with error specification between and within plots (p < 0.05). Mahalanobis' distance was used as a measure of genetic dissimilarity. Significant differences between fisális populations were detected for the assessed traits. The ratio of the among- to within-unit error indicated no need for sampling within the experimental unit. Dissimilarity was greatest between Lages and Vacaria. The most discriminating traits were capsule weight, fruit weight and fruit diameter. The multivariate contrasts indicated differences between the population of Vacaria and those from Caçador, Lages and Peru, which were selected for hybridizations.

  4. All-polymer microfluidic systems for droplet based sample analysis

    DEFF Research Database (Denmark)

    Poulsen, Carl Esben

    In this PhD project, I pursued to develop an all-polymer injection moulded microfluidic platform with integrated droplet based single cell interrogation. To allow for a proper ”one device - one experiment” methodology and to ensure a high relevancy to non-academic settings, the systems presented ...

  5. Sampling in image space for vision based SLAM

    NARCIS (Netherlands)

    Booij, O.; Zivkovic, Z.; Kröse, B.

    2008-01-01

    Loop closing in vision based SLAM applications is a difficult task. Comparing new image data with all previous image data acquired for the map is practically impossible because of the high computational costs. This problem is part of the bigger problem to acquire local geometric constraints from

  6. The effect of morphometric atlas selection on multi-atlas-based automatic brachial plexus segmentation

    International Nuclear Information System (INIS)

    Van de Velde, Joris; Wouters, Johan; Vercauteren, Tom; De Gersem, Werner; Achten, Eric; De Neve, Wilfried; Van Hoof, Tom

    2015-01-01

    The present study aimed to measure the effect of a morphometric atlas selection strategy on the accuracy of multi-atlas-based brachial plexus (BP) autosegmentation using the commercially available software package ADMIRE® and to determine the optimal number of selected atlases to use. Autosegmentation accuracy was measured by comparing all generated automatic BP segmentations with anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were included in the study. One atlas was selected as a patient in ADMIRE®, and multi-atlas-based BP autosegmentation was first performed with a group of morphometrically preselected atlases. In this group, the atlases were selected on the basis of similarity in the shoulder protraction position with the patient. The number of selected atlases used started at two and increased up to eight. Subsequently, a group of randomly chosen, non-selected atlases was taken. In this second group, every possible combination of 2 to 8 random atlases was used for multi-atlas-based BP autosegmentation. For both groups, the average Dice similarity coefficient (DSC), Jaccard index (JI) and Inclusion index (INI) were calculated, measuring the similarity of the generated automatic BP segmentations and the gold standard segmentation. Similarity indices of both groups were compared using an independent sample t-test, and the optimal number of selected atlases was investigated using an equivalence trial. For each number of atlases, the average similarity indices of the morphometrically selected atlas group were significantly higher than those of the random group (p < 0.05). In this study, the highest similarity indices were achieved using multi-atlas autosegmentation with 6 selected atlases (average DSC = 0.598; average JI = 0.434; average INI = 0.733). Morphometric atlas selection on the basis of the protraction position of the patient significantly improves multi-atlas-based BP autosegmentation accuracy.
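The three similarity indices can be computed from voxel-label sets as below; the abstract does not spell out the exact definition of the Inclusion index, so the formula used here (coverage of the gold standard) is an assumption:

```python
def overlap_indices(auto_mask, gold_mask):
    """DSC, JI and an inclusion index for two voxel sets.

    Masks are sets of voxel coordinates. The inclusion index is taken
    here as |auto ∩ gold| / |gold| (how much of the gold standard the
    automatic segmentation covers) -- an assumption, since the abstract
    does not give the exact definition.
    """
    inter = len(auto_mask & gold_mask)
    dsc = 2 * inter / (len(auto_mask) + len(gold_mask))
    ji = inter / len(auto_mask | gold_mask)
    ini = inter / len(gold_mask)
    return dsc, ji, ini
```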

  7. Protein expression based multimarker analysis of breast cancer samples

    International Nuclear Information System (INIS)

    Presson, Angela P; Horvath, Steve; Yoon, Nam K; Bagryanova, Lora; Mah, Vei; Alavi, Mohammad; Maresh, Erin L; Rajasekaran, Ayyappan K; Goodglick, Lee; Chia, David

    2011-01-01

    Tissue microarray (TMA) data are commonly used to validate the prognostic accuracy of tumor markers. For example, breast cancer TMA data have led to the identification of several promising prognostic markers of survival time. Several studies have shown that TMA data can also be used to cluster patients into clinically distinct groups. Here we use breast cancer TMA data to cluster patients into distinct prognostic groups. We apply weighted correlation network analysis (WGCNA) to TMA data consisting of 26 putative tumor biomarkers measured on 82 breast cancer patients. Based on this analysis we identify three groups of patients with low (5.4%), moderate (22%) and high (50%) mortality rates, respectively. We then develop a simple threshold rule using a subset of three markers (p53, Na-K-ATPase-β1, and TGF-β receptor II) that can approximately define these mortality groups. We compare the results of this correlation network analysis with results from a standard Cox regression analysis. We find that the rule-based grouping variable (referred to as WGCNA*) is an independent predictor of survival time. While WGCNA* is based on protein measurements (TMA data), it was validated in two independent Affymetrix microarray gene expression data sets (which measure mRNA abundance). We find that the WGCNA patient groups differed by 35% from mortality groups defined by a more conventional stepwise Cox regression analysis approach. We show that correlation network methods, which are primarily used to analyze the relationships between gene products, are also useful for analyzing the relationships between patients and for defining distinct patient groups based on TMA data. We identify a rule based on three tumor markers for predicting breast cancer survival outcomes.
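A threshold rule over the three named markers might look like the following sketch; the cut-off values and the direction of each comparison are hypothetical, not the published rule:

```python
def mortality_group(p53, na_k_atpase_b1, tgfbr2,
                    t_p53=1.5, t_atpase=2.0, t_tgfbr2=1.0):
    """Toy three-marker threshold rule in the spirit of WGCNA*.

    Cut-offs and comparison directions are illustrative assumptions;
    the published rule is not reproduced in the abstract.
    """
    risk_flags = sum([p53 > t_p53,
                      na_k_atpase_b1 < t_atpase,
                      tgfbr2 < t_tgfbr2])
    # 0 flags -> low, 1 flag -> moderate, 2+ flags -> high
    return ("low", "moderate", "high")[min(risk_flags, 2)]
```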

  8. PeptideManager: A Peptide Selection Tool for Targeted Proteomic Studies Involving Mixed Samples from Different Species

    Directory of Open Access Journals (Sweden)

    Kevin Demeure

    2014-09-01

    Full Text Available The search for clinically useful protein biomarkers using advanced mass spectrometry approaches represents a major focus in cancer research. However, the direct analysis of human samples may be challenging due to limited availability, the absence of appropriate control samples, or the large background variability observed in patient material. As an alternative approach, human tumors orthotopically implanted into a different species (xenografts) are clinically relevant models that have proven their utility in pre-clinical research. Patient-derived xenografts for glioblastoma have been extensively characterized in our laboratory and have been shown to retain the characteristics of the parental tumor at the phenotypic and genetic level. Such models were also found to adequately mimic the behavior and treatment response of human tumors. The reproducibility of such xenograft models, the possibility to identify their host background, and the ability to perform tumor-host interaction studies are major advantages over the direct analysis of human samples. At the proteome level, the analysis of xenograft samples is challenged by the presence of proteins from two different species which, depending on tumor size, type or location, often appear at variable ratios. Any proteomics approach aimed at quantifying proteins within such samples must consider the identification of species-specific peptides in order to avoid biases introduced by the host proteome. Here, we present an in-house methodology and tool developed to select peptides used as surrogates for protein candidates from a defined proteome (e.g., human) in a host proteome background (e.g., mouse, rat) suited for a mass spectrometry analysis. The tools presented here are applicable to any species-specific proteome, provided a protein database is available. By linking the information from both proteomes, PeptideManager significantly facilitates and expedites the selection of peptides used as surrogates to analyze
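The core of such a tool, selecting tryptic peptides unique to the target proteome, can be sketched as below. This is not PeptideManager itself, just a minimal illustration of the idea using a standard in-silico trypsin rule (cleave after K/R, not before P); the toy sequences in the test are invented:

```python
import re

def tryptic_peptides(seq, min_len=6):
    """In-silico trypsin digest: cleave after K or R, but not before P.
    Peptides shorter than min_len are discarded, as they are rarely
    useful surrogates in targeted assays."""
    pieces = re.split(r'(?<=[KR])(?!P)', seq)
    return {p for p in pieces if len(p) >= min_len}

def species_specific(target_proteome, host_proteome, min_len=6):
    """For each target protein, keep only peptides that occur in no
    host protein, avoiding bias from the host proteome background."""
    host_peps = set()
    for seq in host_proteome:
        host_peps |= tryptic_peptides(seq, min_len)
    return {name: tryptic_peptides(seq, min_len) - host_peps
            for name, seq in target_proteome.items()}
```

A real tool would digest full FASTA databases for both species; this sketch only shows the set-difference step.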

  9. Selective Sequential Zero-Base Budgeting Procedures Based on Total Factor Productivity Indicators

    OpenAIRE

    A. Ishikawa; E. F. Sudit

    1981-01-01

    The authors' purpose in this paper is to develop productivity-based sequential budgeting procedures designed to expedite identification of major problem areas in budgetary performance, as well as to reduce the costs associated with comprehensive zero-base analyses. The concept of total factor productivity is reviewed and its relations to ordinary and zero-based budgeting are discussed in detail. An outline for a selective sequential analysis based on monitoring of three key indicators of (a) i...

  10. Selection of reference genes for tissue/organ samples on day 3 fifth-instar larvae in silkworm, Bombyx mori.

    Science.gov (United States)

    Wang, Genhong; Chen, Yanfei; Zhang, Xiaoying; Bai, Bingchuan; Yan, Hao; Qin, Daoyuan; Xia, Qingyou

    2018-06-01

    The silkworm, Bombyx mori, is one of the world's most economically important insects. Surveying variations in gene expression among multiple tissue/organ samples will provide clues for gene function assignments and will be helpful for identifying genes related to economic traits or specific cellular processes. To ensure their accuracy, commonly used gene expression quantification methods require a set of stable reference genes for data normalization. In this study, 24 candidate reference genes were assessed in 10 tissue/organ samples of day 3 fifth-instar B. mori larvae using geNorm and NormFinder. The results revealed that the combination of BGIBMGA003186 and BGIBMGA008209 was the optimal choice for normalizing the expression data of the B. mori tissue/organ samples. The most stable gene, BGIBMGA003186, is recommended if just one reference gene is used. Moreover, the commonly used reference gene encoding cytoplasmic actin was the least appropriate reference gene of the samples investigated. The reliability of the selected reference genes was further confirmed by evaluating the expression profiles of two cathepsin genes. Our results may be useful for future studies involving the quantification of relative gene expression levels in different tissue/organ samples of B. mori. © 2018 Wiley Periodicals, Inc.
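The gene-stability ranking produced by geNorm rests on pairwise log-ratio variation; a minimal sketch of that measure (lower M = more stable) under the standard geNorm definition, with invented expression values:

```python
import math
from statistics import stdev

def stability_m(expr):
    """geNorm-style stability measure M for each candidate gene.

    expr maps gene -> list of expression values across samples (linear
    scale). M(g) is the mean, over all other genes h, of the standard
    deviation of log2(g/h) across samples; lower M = more stable.
    """
    genes = list(expr)
    m = {}
    for g in genes:
        variations = []
        for h in genes:
            if h == g:
                continue
            ratios = [math.log2(a / b) for a, b in zip(expr[g], expr[h])]
            variations.append(stdev(ratios))
        m[g] = sum(variations) / len(variations)
    return m
```

A gene that rises and falls in lockstep with the others has constant log-ratios and hence a low M.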

  11. Environmental sampling at remote sites based on radiological screening assessments

    International Nuclear Information System (INIS)

    Ebinger, M.H.; Hansen, W.R.; Wenz, G.; Oxenberg, T.P.

    1996-01-01

    Environmental radiation monitoring (ERM) data from remote sites on the White Sands Missile Range, New Mexico, were used to estimate doses to humans and terrestrial mammals from residual radiation deposited during testing of components containing depleted uranium (DU) and thorium (Th). ERM data were used with the DOE code RESRAD and a simple steady-state pathway code to estimate the potential adverse effects from DU and Th to workers in the contaminated zones, to hunters consuming animals from the contaminated zones, and to terrestrial mammals that inhabit the contaminated zones. Assessments of zones contaminated with DU and Th and with DU alone were conducted. Radiological doses from Th and DU in soils were largest, with a maximum of about 3.5 mrem y⁻¹ in humans and a maximum of about 0.1 mrad d⁻¹ in deer. Dose estimates from DU alone in soils were significantly less, with a maximum of about 1 mrem y⁻¹ in humans and about 0.04 mrad d⁻¹ in deer. The results of the dose estimates suggest strongly that environmental sampling in these affected areas can be infrequent and still provide adequate assessments of radiological doses to workers, hunters, and terrestrial mammals.

  12. Factors in selecting serum samples for use in determining the positive/negative threshold (cut-off) in ELISA

    International Nuclear Information System (INIS)

    Jacobson, R.H.

    1998-01-01

    The threshold (cut-off) that defines whether a test result is seropositive or seronegative is calculated by testing serum samples from a subpopulation of animals that is assumed to represent the target population in all aspects. For this proposition to be true, it is essential to consider the variables in the target population that must be represented in the subpopulation. Without representation of the variables in the subpopulation, it is likely that the cut-off selected for the test will be errant and will misclassify animals as to their infection status. The purpose of this paper is to identify a few of the principal variables that need to be taken into account when selecting a subpopulation of animals for test validation. (author)
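One common rule for computing such a cut-off from a known-negative subpopulation is the mean plus k standard deviations. The sketch below assumes that rule; the paper's point is that any such rule misclassifies animals unless the tested subpopulation represents the target population:

```python
from statistics import mean, stdev

def elisa_cutoff(negative_od, k=3.0):
    """Cut-off = mean + k * SD of optical densities from known-negative
    sera. This mean-plus-k-SD rule is one common convention, not the
    paper's prescription; k trades sensitivity against specificity."""
    return mean(negative_od) + k * stdev(negative_od)

def classify(od, cutoff):
    """Label a test result relative to the chosen cut-off."""
    return "positive" if od >= cutoff else "negative"
```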

  13. APTIMA assay on SurePath liquid-based cervical samples compared to endocervical swab samples facilitated by a real time database

    Directory of Open Access Journals (Sweden)

    Khader Samer

    2010-01-01

    Full Text Available Background: Liquid-based cytology (LBC) cervical samples are increasingly being used to test for pathogens, including HPV, Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (GC), using nucleic acid amplification tests. Several reports have shown the accuracy of such testing on ThinPrep (TP) LBC samples. Fewer studies have evaluated SurePath (SP) LBC samples, which utilize a different specimen preservative. This study was undertaken to assess the performance of the Aptima Combo 2 Assay (AC2) for CT and GC on SP versus endocervical swab samples in our laboratory. Materials and Methods: The live pathology database of Montefiore Medical Center was searched for patients with AC2 endocervical swab specimens and SP Paps taken the same day. SP samples from CT- and/or GC-positive endocervical swab patients and randomly selected negative patients were studied. In each case, 1.5 ml of the residual SP vial sample, which was in SP preservative and stored at room temperature, was transferred within seven days of collection to APTIMA specimen transfer tubes without any sample or patient identifiers. Blind testing with the AC2 assay was performed on the Tigris DTS System (Gen-Probe, San Diego, CA). Finalized SP results were compared with the previously reported endocervical swab results for the entire group and separately for patients 25 years and younger and patients over 25 years. Results: SP specimens from 300 patients were tested. These included 181 swab CT-positive, 12 swab GC-positive, 7 CT- and GC-positive, and 100 randomly selected swab CT- and GC-negative patients. Using the endocervical swab results as the patient's infection status, AC2 assay of the SP samples showed: CT sensitivity 89.3%, CT specificity 100.0%; GC sensitivity and specificity 100.0%. CT sensitivity for patients 25 years or younger was 93.1%, versus 80.7% for patients over 25 years, a statistically significant difference (P = 0.02). Conclusions: Our results show that AC2 assay of 1.5 ml SP
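The sensitivity and specificity figures above follow from a standard confusion-matrix calculation, with the endocervical swab results taken as the true infection status; a minimal sketch (the example counts in the test are invented, not the study's data):

```python
def sens_spec(results, reference):
    """Sensitivity and specificity of a test against a reference
    standard. results/reference are parallel lists of booleans
    (True = positive); the reference is treated as ground truth."""
    tp = sum(r and s for r, s in zip(reference, results))
    fn = sum(r and not s for r, s in zip(reference, results))
    tn = sum((not r) and (not s) for r, s in zip(reference, results))
    fp = sum((not r) and s for r, s in zip(reference, results))
    return tp / (tp + fn), tn / (tn + fp)
```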

  14. Selective carbon monoxide oxidation over Ag-based composite oxides

    Energy Technology Data Exchange (ETDEWEB)

    Guldur, C. [Gazi University, Ankara (Turkey). Chemical Engineering Department; Balikci, F. [Gazi University, Ankara (Turkey). Institute of Science and Technology, Environmental Science Department

    2002-02-01

    We report our results of the synthesis of 1:1 molar ratio silver cobalt and silver manganese composite oxide catalysts to remove carbon monoxide from hydrogen-rich fuels by the catalytic oxidation reaction. Catalysts were synthesized by the co-precipitation method. XRD, BET, TGA, catalytic activity and catalyst deactivation studies were used to identify active catalysts. Both CO oxidation and selective CO oxidation were carried out in a microreactor using a reaction gas mixture of 1 vol% CO in air; another gas mixture was prepared by mixing 1 vol% CO, 2 vol% O₂ and 84 vol% H₂, the balance being He. 15 vol% CO₂ was added to the reactant gas mixture in order to determine the effect of CO₂, and the reaction gases were passed through a humidifier to determine the effect of water vapor on the oxidation reaction. It was demonstrated that the metal oxide base was decomposed to the metallic phase and the surface areas of the catalysts decreased when the calcination temperature increased from 200°C to 500°C. The Ag/Co composite oxide catalyst calcined at 200°C gave good activity at low temperatures, and 90% CO conversion at 180°C was obtained for the selective CO oxidation reaction. The addition of impurities (CO₂ or H₂O) decreased the activity of the catalyst for selective CO oxidation in obtaining highly hydrogen-rich fuels. (author)

  15. A sampling-based Bayesian model for gas saturation estimationusing seismic AVA and marine CSEM data

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Jinsong; Hoversten, Michael; Vasco, Don; Rubin, Yoram; Hou, Zhangshuan

    2006-04-04

    We develop a sampling-based Bayesian model to jointly invert seismic amplitude versus angles (AVA) and marine controlled-source electromagnetic (CSEM) data for layered reservoir models. The porosity and fluid saturation in each layer of the reservoir, the seismic P- and S-wave velocity and density in the layers below and above the reservoir, and the electrical conductivity of the overburden are considered as random variables. Pre-stack seismic AVA data in a selected time window and real and quadrature components of the recorded electrical field are considered as data. We use Markov chain Monte Carlo (MCMC) sampling methods to obtain a large number of samples from the joint posterior distribution function. Using those samples, we obtain not only estimates of each unknown variable, but also its uncertainty information. The developed method is applied to both synthetic and field data to explore the combined use of seismic AVA and EM data for gas saturation estimation. Results show that the developed method is effective for joint inversion, and the incorporation of CSEM data reduces uncertainty in fluid saturation estimation, when compared to results from inversion of AVA data only.
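A minimal random-walk Metropolis sampler conveys the MCMC idea; the toy posterior below (a Gaussian over saturation, truncated to the physical range (0, 1)) is purely illustrative and far simpler than the joint AVA/CSEM likelihood in the paper:

```python
import math
import random

def metropolis(log_post, x0, step=0.1, n=5000, seed=1):
    """Minimal random-walk Metropolis sampler for a scalar unknown."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        a = log_post(prop) - log_post(x)
        if a >= 0 or rng.random() < math.exp(a):  # accept/reject step
            x = prop
        samples.append(x)
    return samples

def log_post(s):
    """Toy posterior: saturation confined to (0, 1), Gaussian likelihood
    centred on 0.6 with sd 0.1 (illustrative numbers only)."""
    if not 0.0 < s < 1.0:
        return float("-inf")
    return -0.5 * ((s - 0.6) / 0.1) ** 2
```

The spread of the returned samples is exactly the per-variable uncertainty information the abstract refers to.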

  16. A novel selection method of seismic attributes based on gray relational degree and support vector machine.

    Directory of Open Access Journals (Sweden)

    Yaping Huang

    Full Text Available The selection of seismic attributes is a key process in reservoir prediction because the prediction accuracy relies on the reliability and credibility of the seismic attributes. However, an effective selection method for useful seismic attributes is still a challenge. This paper presents a novel selection method of seismic attributes for reservoir prediction based on the gray relational degree (GRD) and support vector machine (SVM). The proposed method has a two-hierarchical structure. In the first hierarchy, the primary selection of seismic attributes is achieved by calculating the GRD between seismic attributes and reservoir parameters, and the GRD between the seismic attributes themselves. The principle of the primary selection is that seismic attributes with higher GRD to the reservoir parameters will have smaller GRD between themselves, as compared to those with lower GRD to the reservoir parameters. Then the SVM is employed in the second hierarchy to perform an interactive error verification using training samples for the purpose of determining the final seismic attributes. A real-world case study was conducted to evaluate the proposed GRD-SVM method. Reliable seismic attributes were selected to predict the coalbed methane (CBM) content in the southern Qinshui basin, China. In the analysis, the instantaneous amplitude, instantaneous bandwidth, instantaneous frequency, and minimum negative curvature were selected, and the predicted CBM content was fundamentally consistent with the measured CBM content. This real-world case study demonstrates that the proposed method is able to effectively select seismic attributes and improve the prediction accuracy. Thus, the proposed GRD-SVM method can be used for the selection of seismic attributes in practice.
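The gray relational degree between an attribute series and a reservoir-parameter series can be sketched with the standard grey relational coefficient; min-max normalisation and a distinguishing coefficient ρ = 0.5 are conventional assumptions, not values from the paper:

```python
def gray_relational_degree(reference, candidate, rho=0.5):
    """Grey relational degree between a reference series (e.g. a
    reservoir parameter) and a candidate seismic attribute, after
    min-max normalisation; rho is the distinguishing coefficient."""
    def norm(series):
        lo, hi = min(series), max(series)
        return [(v - lo) / (hi - lo) for v in series]

    r, c = norm(reference), norm(candidate)
    deltas = [abs(a - b) for a, b in zip(r, c)]
    dmin, dmax = min(deltas), max(deltas)
    if dmax == 0.0:                 # identical normalised series
        return 1.0
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)
```

Attributes would be ranked by their GRD to the reservoir parameter, then screened for mutual redundancy.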

  17. Entropy-based gene ranking without selection bias for the predictive classification of microarray data

    Directory of Open Access Journals (Sweden)

    Serafini Maria

    2003-11-01

    Full Text Available Abstract Background We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). Results With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Conclusions Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.

  18. Hyperspectral band selection based on consistency-measure of neighborhood rough set theory

    International Nuclear Information System (INIS)

    Liu, Yao; Xie, Hong; Wang, Liguo; Tan, Kezhu; Chen, Yuehua; Xu, Zhen

    2016-01-01

    Band selection is a well-known approach for reducing dimensionality in hyperspectral imaging. In this paper, a band selection method based on consistency-measure of neighborhood rough set theory (CMNRS) was proposed to select informative bands from hyperspectral images. A decision-making information system was established by the reflection spectrum of soybeans’ hyperspectral data between 400 nm and 1000 nm wavelengths. The neighborhood consistency-measure, which reflects not only the size of the decision positive region, but also the sample distribution in the boundary region, was used as the evaluation function of band significance. The optimal band subset was selected by a forward greedy search algorithm. A post-pruning strategy was employed to overcome the over-fitting problem and find the minimum subset. To assess the effectiveness of the proposed band selection technique, two classification models (extreme learning machine (ELM) and random forests (RF)) were built. The experimental results showed that the proposed algorithm can effectively select key bands and obtain satisfactory classification accuracy. (paper)

  19. Sampling Key Populations for HIV Surveillance: Results From Eight Cross-Sectional Studies Using Respondent-Driven Sampling and Venue-Based Snowball Sampling.

    Science.gov (United States)

    Rao, Amrita; Stahlman, Shauna; Hargreaves, James; Weir, Sharon; Edwards, Jessie; Rice, Brian; Kochelani, Duncan; Mavimbela, Mpumelelo; Baral, Stefan

    2017-10-20

    In using regularly collected or existing surveillance data to characterize engagement in human immunodeficiency virus (HIV) services among marginalized populations, differences in sampling methods may produce different pictures of the target population and may therefore result in different priorities for response. The objective of this study was to use existing data to evaluate the sample distribution of eight studies of female sex workers (FSW) and men who have sex with men (MSM), who were recruited using different sampling approaches in two locations within Sub-Saharan Africa: Manzini, Swaziland and Yaoundé, Cameroon. MSM and FSW participants were recruited using either respondent-driven sampling (RDS) or venue-based snowball sampling. Recruitment took place between 2011 and 2016. Participants at each study site were administered a face-to-face survey to assess sociodemographics, along with the prevalence of self-reported HIV status, frequency of HIV testing, stigma, and other HIV-related characteristics. Crude and RDS-adjusted prevalence estimates were calculated. Crude prevalence estimates from the venue-based snowball samples were compared with the overlap of the RDS-adjusted prevalence estimates, between both FSW and MSM in Cameroon and Swaziland. RDS samples tended to be younger (MSM aged 18-21 years in Swaziland: 47.6% [139/310] in RDS vs 24.3% [42/173] in Snowball, in Cameroon: 47.9% [99/306] in RDS vs 20.1% [52/259] in Snowball; FSW aged 18-21 years in Swaziland 42.5% [82/325] in RDS vs 8.0% [20/249] in Snowball; in Cameroon 15.6% [75/576] in RDS vs 8.1% [25/306] in Snowball). They were less educated (MSM: primary school completed or less in Swaziland 42.6% [109/310] in RDS vs 4.0% [7/173] in Snowball, in Cameroon 46.2% [138/306] in RDS vs 14.3% [37/259] in Snowball; FSW: primary school completed or less in Swaziland 86.6% [281/325] in RDS vs 23.9% [59/247] in Snowball, in Cameroon 87.4% [520/576] in RDS vs 77.5% [238/307] in Snowball) than the snowball
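The RDS-adjusted prevalence estimates mentioned above are typically computed by weighting each respondent by the inverse of their reported network size (the RDS-II, or Volz-Heckathorn, estimator); a minimal sketch with invented data:

```python
def rds_ii_prevalence(outcomes, degrees):
    """RDS-II (Volz-Heckathorn) prevalence estimator: each respondent
    is weighted by the inverse of their reported network size (degree),
    down-weighting well-connected people who are over-recruited."""
    weights = [1.0 / d for d in degrees]
    numerator = sum(w for w, y in zip(weights, outcomes) if y)
    return numerator / sum(weights)
```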

  20. Sampling-based exploration of folded state of a protein under kinematic and geometric constraints

    KAUST Repository

    Yao, Peggy

    2011-10-04

    Flexibility is critical for a folded protein to bind to other molecules (ligands) and achieve its functions. The conformational selection theory suggests that a folded protein deforms continuously and its ligand selects the most favorable conformations to bind to. Therefore, one of the best options to study protein-ligand binding is to sample conformations broadly distributed over the protein-folded state. This article presents a new sampler, called kino-geometric sampler (KGS). This sampler encodes dominant energy terms implicitly by simple kinematic and geometric constraints. Two key technical contributions of KGS are (1) a robotics-inspired Jacobian-based method to simultaneously deform a large number of interdependent kinematic cycles without any significant break-up of the closure constraints, and (2) a diffusive strategy to generate conformation distributions that diffuse quickly throughout the protein folded state. Experiments on four very different test proteins demonstrate that KGS can efficiently compute distributions containing conformations close to target (e.g., functional) conformations. These targets are not given to KGS, hence are not used to bias the sampling process. In particular, for a lysine-binding protein, KGS was able to sample conformations in both the intermediate and functional states without the ligand, while previous work using molecular dynamics simulation had required the ligand to be taken into account in the potential function. Overall, KGS demonstrates that kino-geometric constraints characterize the folded subset of a protein conformation space and that this subset is small enough to be approximated by a relatively small distribution of conformations. © 2011 Wiley Periodicals, Inc.

  1. Greedy Sampling and Incremental Surrogate Model-Based Tailoring of Aeroservoelastic Model Database for Flexible Aircraft

    Science.gov (United States)

    Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.

    2018-01-01

    This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. The greedy sampling algorithm is developed to select as the next sample point the one that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool, with the worst relative error far below 1%. The interpolated ASE model exhibits continuously-varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility for adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide the ASE model development, model order reduction, robust control synthesis and novel vehicle design of flexible aircraft.
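The greedy selection loop can be sketched in one dimension, with a piecewise-linear interpolant standing in for the Kriging surrogate and a scalar function standing in for the frequency-domain model; tolerance and test function are illustrative assumptions:

```python
def greedy_sample(grid, truth, tol=0.2):
    """Greedy database tailoring in 1-D: start from the end points of
    the flight-parameter grid, then repeatedly add the grid point where
    the surrogate's relative error against the benchmark model is
    worst, until the worst error drops below tol. A piecewise-linear
    interpolant stands in for the Kriging surrogate here."""
    def interp(xs, ys, x):
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                t = (x - xs[i]) / (xs[i + 1] - xs[i])
                return ys[i] + t * (ys[i + 1] - ys[i])
        raise ValueError("x outside sampled range")

    chosen = {grid[0], grid[-1]}
    while True:
        xs = sorted(chosen)
        ys = [truth(x) for x in xs]
        err, worst = max(
            ((abs(interp(xs, ys, x) - truth(x)) / max(abs(truth(x)), 1e-12), x)
             for x in grid if x not in chosen),
            default=(0.0, None),
        )
        if worst is None or err <= tol:
            return xs
        chosen.add(worst)
```

On a smooth quadratic, the loop keeps the points where linear interpolation is worst and skips the rest, mirroring the non-uniform grid selection reported above.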

  2. Deducing magnetic resonance neuroimages based on knowledge from samples.

    Science.gov (United States)

    Jiang, Yuwei; Liu, Feng; Fan, Mingxia; Li, Xuzhou; Zhao, Zhiyong; Zeng, Zhaoling; Wang, Yi; Xu, Dongrong

    2017-12-01

Because individual variance always exists, using the same set of predetermined parameters for magnetic resonance imaging (MRI) may not be exactly suitable for each participant. We propose a knowledge-based method that can repair MRI data of undesired contrast as if a new scan were acquired using imaging parameters that had been individually optimized. The method employs a strategy called analogical reasoning to deduce voxel-wise relaxation properties using morphological and biological similarity. The proposed framework involves steps of intensity normalization, tissue segmentation, relaxation time deducing, and image deducing. This approach has been preliminarily validated using conventional MRI data at 3T from several examples, including 5 normal and 9 clinical datasets. It can effectively improve the contrast of real MRI data by deducing imaging data using optimized imaging parameters based on deduced relaxation properties. The statistics of the deduced images show a high correlation with real data that were actually collected using the same set of imaging parameters. The proposed method of deducing MRI data using knowledge of relaxation times thus provides an alternative way of repairing MRI data of suboptimal contrast. The method is also capable of optimizing an MRI protocol for individual participants, thereby realizing personalized MR imaging. Copyright © 2017 Elsevier Ltd. All rights reserved.
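Once voxel-wise relaxation properties are known, re-synthesizing an image under new imaging parameters follows directly from the signal equation. The sketch below uses the ideal spin-echo equation with hypothetical tissue values (the paper's actual deduction pipeline and sequence model are not reproduced here):

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Ideal spin-echo signal: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Deduced voxel-wise properties (illustrative values, times in ms):
pd = np.array([0.8, 1.0])            # [gray matter, CSF] proton density
t1 = np.array([1300.0, 4000.0])      # longitudinal relaxation times
t2 = np.array([110.0, 2000.0])       # transverse relaxation times

# "Re-acquire" with individually optimized parameters instead of rescanning:
t1_weighted = spin_echo_signal(pd, t1, t2, tr=500.0, te=15.0)
t2_weighted = spin_echo_signal(pd, t1, t2, tr=4000.0, te=100.0)
```

With these values the synthesized T1-weighted image renders gray matter brighter than CSF, and the T2-weighted image reverses that contrast, which is the kind of contrast repair the abstract describes.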

  3. A prototype of behavior selection mechanism based on emotion

    Science.gov (United States)

    Zhang, Guofeng; Li, Zushu

    2007-12-01

Following a bionic methodology rather than the more familiar design methodology, and summarizing psychological research on emotion, we propose a biological mechanism of emotion, the role of emotion selection in the evolution of creatures, and an animat framework that incorporates emotion into the classical control structure. Drawing on Prospect Theory, we build Emotion Characteristic Functions (ECFs) that compute emotion; two further emotion hypotheses are added, namely that higher emotion is preferred and that moderate emotion makes the brain run more efficiently, from which an emotional behavior mechanism emerges. A simulation of the proposed mechanism was designed and carried out on the Alife Swarm software platform. In this simulation, a virtual grassland ecosystem is built containing two kinds of artificial animals: herbivores and predators. These artificial animals execute four types of behavior in their lives: wandering, escaping, finding food, and finding a sex partner. According to animal ethology, escaping from a predator takes priority over all other behaviors because it preserves the animal's existence; finding food is the second most important behavior, mating the third, and wandering the last. Keeping this behavior order, the specific emotion-computing functions of the artificial autonomous animals are built on our emotion characteristic function theory. The result of the simulation confirms the behavior selection mechanism.
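The fixed behavior ordering described above (escape > feeding > mating > wandering) can be sketched as a priority-based selector. This is a deliberate simplification: the paper's mechanism additionally modulates the choice through emotion characteristic functions, which are omitted here, and all names are hypothetical.

```python
# Hypothetical priorities enforcing escape > find_food > find_mate > wander.
PRIORITY = {"escape": 4, "find_food": 3, "find_mate": 2, "wander": 1}

def select_behavior(percepts):
    """Pick the highest-priority behavior whose trigger is currently active;
    wandering is the default behavior and is always available."""
    active = [behavior for behavior, on in percepts.items() if on]
    active.append("wander")
    return max(active, key=lambda b: PRIORITY[b])

print(select_behavior({"escape": False, "find_food": True, "find_mate": True}))
# -> find_food
```

When no trigger is active the animal falls back to wandering, matching the lowest rank in the ethological ordering.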

  4. A multifunctional molecularly imprinted polymer-based biosensor for direct detection of doxycycline in food samples

    DEFF Research Database (Denmark)

    Ashley, Jon; Feng, Xiaotong; Sun, Yi

    2018-01-01

In this study, we developed a new type of multifunctional molecularly imprinted polymer (MIP) composite as an all-in-one biosensor for the low-cost, rapid and sensitive detection of doxycycline in pig plasma. The MIP composite consisted of a magnetic core for ease of manipulation, and a shell...... of fluorescent MIPs for selective recognition of doxycycline. By simply incorporating a small amount of fluorescent monomer (fluorescein-O-acrylate), the fluorescent MIP layer was successfully grafted onto the magnetic core via a surface imprinting technique. The resultant MIP composites showed significant....... The multifunctional MIP composites were used to directly extract doxycycline from spiked pig plasma samples and quantify the antibiotics based on the quenched fluorescence signals. Recoveries of doxycycline were found in the range of 88–107%....

  5. Evaluation of gene importance in microarray data based upon probability of selection

    Directory of Open Access Journals (Sweden)

    Fu Li M

    2005-03-01

Full Text Available Abstract Background Microarray devices permit a genome-scale evaluation of gene function. This technology has catalyzed biomedical research and development in recent years. As many important diseases can be traced to the gene level, a long-standing research problem is to identify specific gene expression patterns linking to metabolic characteristics that contribute to disease development and progression. The microarray approach offers an expedited solution to this problem. However, it remains a challenging issue to recognize disease-related gene expression patterns embedded in the microarray data. In selecting a small set of biologically significant genes for classifier design, the high data dimensionality inherent in this problem creates a substantial amount of uncertainty. Results Here we present a model for probability analysis of selected genes in order to determine their importance. Our contribution is that we show how to derive the P value of each selected gene in multiple gene selection trials based on different combinations of data samples and how to conduct a reliability analysis accordingly. The importance of a gene is indicated by its associated P value, in that a smaller value implies higher information content from the standpoint of information theory. On microarray data concerning the subtype classification of small round blue cell tumors, we demonstrate that the method is capable of finding the smallest set of genes (19 genes) with optimal classification performance, compared with results reported in the literature. Conclusion In classifier design based on microarray data, the probability value derived from gene selection based on multiple combinations of data samples enables an effective mechanism for reducing the tendency of fitting local data particularities.
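A simple null model makes the idea concrete: if panels of genes were selected at random across resampling trials, how often would one gene recur by chance? The sketch below uses a binomial tail as the P value; the pool size of 2308 genes and the trial counts are illustrative assumptions, not figures from the paper (only the 19-gene panel size appears in the abstract).

```python
from math import comb

def selection_p_value(times_selected, n_trials, panel_size, n_genes):
    """Binomial tail P value: probability that a gene enters the selected
    panel at least `times_selected` times in `n_trials` purely random
    selections of `panel_size` genes out of `n_genes`."""
    p0 = panel_size / n_genes
    return sum(comb(n_trials, k) * p0 ** k * (1 - p0) ** (n_trials - k)
               for k in range(times_selected, n_trials + 1))

# A gene picked in 18 of 20 resampling trials, with panels of 19 genes drawn
# from a hypothetical pool of 2308, recurs far more often than chance allows:
p = selection_p_value(18, 20, 19, 2308)
```

A vanishingly small P value flags the gene as reliably informative across data-sample combinations, which is the reliability analysis the abstract describes.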

  6. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, referred to as the 'failure probability function (FPF)'. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology.
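The key trick, expressing the failure probability as a weighted sum over one fixed sample set so that it becomes an explicit function of the design variable, can be illustrated on a toy limit state. The importance density, limit state, and sample size below are all assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy limit state g(x, d) = d - x with x ~ N(0, 1); failure when g < 0.
# One sample set is drawn from an importance density h = N(2, 1) near the
# failure region and then reused for EVERY design d, so the failure
# probability function is just a weighted sum over the same samples.
x = rng.normal(2.0, 1.0, 200_000)

def phi(z):
    """Standard normal density."""
    return np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi)

w = phi(x) / phi(x - 2.0)                  # importance weights f(x) / h(x)

def failure_probability(d):
    """FPF approximation as a weighted average over the fixed sample set."""
    return float(np.mean((x > d) * w))
```

Because `failure_probability(d)` re-weights the same samples for any `d`, an outer optimizer can treat it as a cheap deterministic constraint, which is the decoupling the abstract refers to.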

  7. A highly selective and sensitive Tb3+-acetylacetone photo probe for the assessment of acetazolamide in pharmaceutical and serum samples

    Science.gov (United States)

    Youssef, A. O.

    2018-04-01

A novel, simple, sensitive and selective spectrofluorimetric method was developed for the determination of acetazolamide in pharmaceutical tablets and serum samples using the Tb3+-ACAC photo probe. Acetazolamide can remarkably quench the luminescence intensity of the Tb3+-ACAC complex in DMSO at pH 6.8 and λex = 350 nm. The quenching of the luminescence intensity of the Tb3+-ACAC complex, especially the emission band at λem = 545 nm, is used for the assessment of acetazolamide in pharmaceutical tablet and serum samples. The dynamic range found for the determination of acetazolamide concentration is 4.49 × 10-9-1.28 × 10-7 mol L-1, and the limit of detection (LOD) and limit of quantification (LOQ) are 4.0 × 10-9 and 1.21 × 10-8 mol L-1, respectively.
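Figures of merit like the LOD and LOQ quoted above typically come from a linear quenching calibration via the ICH-style formulas LOD = 3.3·σ/slope and LOQ = 10·σ/slope. The calibration points and blank standard deviation below are invented for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical quenching calibration at λem = 545 nm: drop in luminescence
# intensity versus acetazolamide concentration (illustrative values).
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0]) * 1e-8     # mol L-1
dI   = np.array([10.2, 20.5, 40.1, 81.0, 160.3])      # quenched intensity (a.u.)

slope, intercept = np.polyfit(conc, dI, 1)            # linear calibration
sd_blank = 2.4                      # sd of repeated blank readings (assumed)
lod = 3.3 * sd_blank / slope        # limit of detection
loq = 10.0 * sd_blank / slope       # limit of quantification
```

With these assumed numbers the computed LOD lands in the low 10^-9 mol L-1 range, the same order of magnitude reported in the abstract.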

  8. Feature selection gait-based gender classification under different circumstances

    Science.gov (United States)

    Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

This paper proposes gender classification based on human gait features and investigates the problem of two variations, clothing (wearing coats) and carrying a bag, in addition to the normal gait sequence. The feature vectors in the proposed system are constructed after applying the wavelet transform. Three different sets of features are proposed in this method. The first, spatio-temporal distance, deals with the distances between different parts of the human body (such as the feet, knees, hands, shoulders, and overall height) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two sets of features, we divided the human body into two parts, the upper and lower body, based on the golden ratio proportion. In this paper, we adopt a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is reduced based on the Fisher score as a feature selection method to optimize its discriminating significance. Finally, k-Nearest Neighbor is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
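The Fisher-score-then-kNN pipeline can be sketched end to end on synthetic data. This is a generic illustration of the two steps, not the paper's gait features; the data, the number of retained features, and k are all assumed:

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per feature for binary labels y in {0, 1}:
    (mean difference)^2 over the sum of within-class variances."""
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        a, b = X[y == 0, j], X[y == 1, j]
        scores[j] = (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)
    return scores

def knn_predict(Xtr, ytr, Xte, k=3):
    """Plain k-nearest-neighbour majority vote with Euclidean distance."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    votes = ytr[np.argsort(d, axis=1)[:, :k]]
    return (votes.mean(axis=1) > 0.5).astype(int)

# Synthetic stand-in for gait feature vectors: 60 samples, 10 features,
# with feature 2 made strongly discriminative between the two classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
y = (rng.random(60) > 0.5).astype(int)
X[:, 2] += 3.0 * y

top = np.argsort(fisher_score(X, y))[::-1][:2]   # keep top-ranked features
pred = knn_predict(X[:, top], y, X[:, top])
```

Ranking by Fisher score correctly surfaces the discriminative feature before classification, which is exactly the dimensionality-reduction role it plays in the abstract.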

  9. Selective Osmotic Shock (SOS)-Based Islet Isolation for Microencapsulation.

    Science.gov (United States)

    Enck, Kevin; McQuilling, John Patrick; Orlando, Giuseppe; Tamburrini, Riccardo; Sivanandane, Sittadjody; Opara, Emmanuel C

    2017-01-01

Islet transplantation (IT) has recently been shown to be a promising alternative to pancreas transplantation for reversing diabetes. IT requires the isolation of the islets from the pancreas, and these islets can be used to fabricate a bio-artificial pancreas. Enzymatic digestion is the current gold-standard procedure for islet isolation, but lingering concerns remain. One such concern is that it has been shown to damage the islets through nonselective tissue digestion. This chapter provides a detailed description of a nonenzymatic method that we are exploring in our lab as an alternative to current enzymatic digestion procedures for islet isolation from human and nonhuman pancreatic tissues. This method is based on selective destruction and protection of specific cell types and has been shown to leave the extracellular matrix (ECM) of islets intact, which may thus enhance islet viability and functionality. We also show that these SOS-isolated islets can be microencapsulated for transplantation.

  10. Analysis of Trust-Based Approaches for Web Service Selection

    DEFF Research Database (Denmark)

    Dragoni, Nicola; Miotto, Nicola

    2011-01-01

The basic tenet of Service-Oriented Computing (SOC) is the possibility of building distributed applications on the Web by using Web services as fundamental building blocks. The proliferation of such services is considered the second wave of evolution in the Internet age, moving the Web from...... a collection of pages to a collection of services. Consensus is growing that this Web service revolution won't eventuate until we resolve trust-related issues. Indeed, the intrinsic openness of the SOC vision makes it crucial to locate useful services and recognize them as trustworthy. In this paper we review...... the field of trust-based Web service selection, providing a structured classification of current approaches and highlighting the main limitations of each class and of the overall field....

  11. Scleroderma prevalence: demographic variations in a population-based sample.

    Science.gov (United States)

    Bernatsky, S; Joseph, L; Pineau, C A; Belisle, P; Hudson, M; Clarke, A E

    2009-03-15

    To estimate the prevalence of systemic sclerosis (SSc) using population-based administrative data, and to assess the sensitivity of case ascertainment approaches. We ascertained SSc cases from Quebec physician billing and hospitalization databases (covering approximately 7.5 million individuals). Three case definition algorithms were compared, and statistical methods accounting for imperfect case ascertainment were used to estimate SSc prevalence and case ascertainment sensitivity. A hierarchical Bayesian latent class regression model that accounted for possible between-test dependence conditional on disease status estimated the effect of patient characteristics on SSc prevalence and the sensitivity of the 3 ascertainment algorithms. Accounting for error inherent in both the billing and the hospitalization data, we estimated SSc prevalence in 2003 at 74.4 cases per 100,000 women (95% credible interval [95% CrI] 69.3-79.7) and 13.3 cases per 100,000 men (95% CrI 11.1-16.1). Prevalence was higher for older individuals, particularly in urban women (161.2 cases per 100,000, 95% CrI 148.6-175.0). Prevalence was lowest in young men (in rural areas, as low as 2.8 cases per 100,000, 95% CrI 1.4-4.8). In general, no single algorithm was very sensitive, with point estimates for sensitivity ranging from 20-73%. We found marked differences in SSc prevalence according to age, sex, and region. In general, no single case ascertainment approach was very sensitive for SSc. Therefore, using data from multiple sources, with adjustment for the imperfect nature of each, is an important strategy in population-based studies of SSc and similar conditions.

  12. Wave impedance selection for passivity-based bilateral teleoperation

    Science.gov (United States)

    D'Amore, Nicholas John

    When a task must be executed in a remote or dangerous environment, teleoperation systems may be employed to extend the influence of the human operator. In the case of manipulation tasks, haptic feedback of the forces experienced by the remote (slave) system is often highly useful in improving an operator's ability to perform effectively. In many of these cases (especially teleoperation over the internet and ground-to-space teleoperation), substantial communication latency exists in the control loop and has the strong tendency to cause instability of the system. The first viable solution to this problem in the literature was based on a scattering/wave transformation from transmission line theory. This wave transformation requires the designer to select a wave impedance parameter appropriate to the teleoperation system. It is widely recognized that a small value of wave impedance is well suited to free motion and a large value is preferable for contact tasks. Beyond this basic observation, however, very little guidance exists in the literature regarding the selection of an appropriate value. Moreover, prior research on impedance selection generally fails to account for the fact that in any realistic contact task there will simultaneously exist contact considerations (perpendicular to the surface of contact) and quasi-free-motion considerations (parallel to the surface of contact). The primary contribution of the present work is to introduce an approximate linearized optimum for the choice of wave impedance and to apply this quasi-optimal choice to the Cartesian reality of such a contact task, in which it cannot be expected that a given joint will be either perfectly normal to or perfectly parallel to the motion constraint. The proposed scheme selects a wave impedance matrix that is appropriate to the conditions encountered by the manipulator. This choice may be implemented as a static wave impedance value or as a time-varying choice updated according to the
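The scattering/wave transformation at the heart of this approach maps the power variables (velocity, force) into wave variables parameterized by the wave impedance b. A minimal sketch, with arbitrary example values, is below; the identity F·ẋ = (u² − v²)/2 is what makes the delayed channel passive for any choice of b:

```python
import math

def wave_transform(velocity, force, b):
    """Scattering/wave transformation with wave impedance b:
    u is the forward (transmitted) wave, v the backward (returned) wave."""
    u = (b * velocity + force) / math.sqrt(2.0 * b)
    v = (b * velocity - force) / math.sqrt(2.0 * b)
    return u, v

# Power balance check: F * xdot == (u^2 - v^2) / 2 holds for any b,
# so the communication channel cannot generate energy under delay.
u, v = wave_transform(velocity=0.3, force=5.0, b=20.0)
```

The impedance b only redistributes signal content between u and v (small b emphasizes velocity, large b emphasizes force), which is why its selection trades off free-motion versus contact behavior as the abstract discusses.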

  13. Sampling based uncertainty analysis of 10% hot leg break LOCA in large scale test facility

    International Nuclear Information System (INIS)

    Sengupta, Samiran; Kraina, V.; Dubey, S. K.; Rao, R. S.; Gupta, S. K.

    2010-01-01

Sampling-based uncertainty analysis was carried out to quantify the uncertainty in predictions of the best-estimate code RELAP5/MOD3.2 for a thermal hydraulic test (10% hot leg break LOCA) performed in the Large Scale Test Facility (LSTF) as part of an IAEA coordinated research project. The nodalisation of the test facility was qualified at both the steady state and transient level by systematically applying the procedures of the uncertainty methodology based on accuracy extrapolation (UMAE); uncertainty analysis was carried out using the Latin hypercube sampling (LHS) method to evaluate the uncertainty for ten input parameters. Sixteen output parameters were selected for uncertainty evaluation, and the uncertainty band between the 5th and 95th percentiles of each output parameter was evaluated. It was observed that the uncertainty band for the primary pressure during two-phase blowdown is larger than that of the remaining period. Similarly, a larger uncertainty band is observed for the accumulator injection flow during the reflood phase. Importance analysis was also carried out and standardized rank regression coefficients were computed to quantify the effect of each individual input parameter on the output parameters. It was observed that the break discharge coefficient is the most important uncertain parameter for the prediction of all the primary side parameters and that the steam generator (SG) relief pressure setting is the most important parameter for predicting the SG secondary pressure.
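Latin hypercube sampling stratifies each input axis so that every equal-probability slice is sampled exactly once, which is why it covers a 10-parameter space far more evenly than plain random sampling at the same cost. A minimal sketch, with a cheap linear function standing in for the actual RELAP5 runs:

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng):
    """LHS on [0, 1]^d: exactly one point per equal-probability stratum
    along every axis, with strata shuffled independently per axis."""
    u = (rng.random((n_samples, n_params))
         + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):
        u[:, j] = rng.permutation(u[:, j])
    return u

rng = np.random.default_rng(42)
samples = latin_hypercube(100, 10, rng)        # ten uncertain inputs

# Stand-in for the code response (e.g. primary pressure from RELAP5 runs):
output = samples @ np.linspace(0.5, 1.5, 10)
band = np.percentile(output, [5, 95])          # 5th-95th uncertainty band
```

In practice each row of `samples` would be rescaled to the physical ranges of the ten input parameters and fed to one code run, and the percentile band would be computed per output parameter and time point.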

  14. Representativeness-based sampling network design for the State of Alaska

    Science.gov (United States)

    Forrest M. Hoffman; Jitendra Kumar; Richard T. Mills; William W. Hargrove

    2013-01-01

    Resource and logistical constraints limit the frequency and extent of environmental observations, particularly in the Arctic, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent environmental variability at desired scales. A quantitative methodology for stratifying sampling domains, informing site selection,...

  15. The NuSTAR  Extragalactic Surveys: X-Ray Spectroscopic Analysis of the Bright Hard-band Selected Sample

    Science.gov (United States)

    Zappacosta, L.; Comastri, A.; Civano, F.; Puccetti, S.; Fiore, F.; Aird, J.; Del Moro, A.; Lansbury, G. B.; Lanzuisi, G.; Goulding, A.; Mullaney, J. R.; Stern, D.; Ajello, M.; Alexander, D. M.; Ballantyne, D. R.; Bauer, F. E.; Brandt, W. N.; Chen, C.-T. J.; Farrah, D.; Harrison, F. A.; Gandhi, P.; Lanz, L.; Masini, A.; Marchesi, S.; Ricci, C.; Treister, E.

    2018-02-01

We discuss the spectral analysis of a sample of 63 active galactic nuclei (AGN) detected above a limiting flux of S(8–24 keV) = 7 × 10^−14 erg s^−1 cm^−2 in the multi-tiered NuSTAR extragalactic survey program. The sources span a redshift range z = 0–2.1 (median z = 0.58). The spectral analysis is performed over the broad 0.5–24 keV energy range, combining NuSTAR with Chandra and/or XMM-Newton data and employing empirical and physically motivated models. This constitutes the largest sample of AGN selected at >10 keV to be homogeneously spectrally analyzed at these flux levels. We study the distribution of spectral parameters such as photon index, column density (N_H), reflection parameter (R), and 10–40 keV luminosity (L_X). Heavily obscured (log[N_H/cm^−2] ≥ 23) and Compton-thick (CT; log[N_H/cm^−2] ≥ 24) AGN constitute ∼25% (15–17 sources) and ∼2–3% (1–2 sources) of the sample, respectively. The observed N_H distribution agrees fairly well with predictions of cosmic X-ray background population-synthesis models (CXBPSM). We estimate the intrinsic fraction of AGN as a function of N_H, accounting for the bias against obscured AGN in a flux-selected sample. The fraction of CT AGN relative to log[N_H/cm^−2] = 20–24 AGN is poorly constrained, formally in the range 2–56% (90% upper limit of 66%). We derived a fraction (f_abs) of obscured AGN (log[N_H/cm^−2] = 22–24) as a function of L_X in agreement with CXBPSM and previous z values.

  16. Strategy selection in cue-based decision making.

    Science.gov (United States)

    Bryant, David J

    2014-06-01

    People can make use of a range of heuristic and rational, compensatory strategies to perform a multiple-cue judgment task. It has been proposed that people are sensitive to the amount of cognitive effort required to employ decision strategies. Experiment 1 employed a dual-task methodology to investigate whether participants' preference for heuristic versus compensatory decision strategies can be altered by increasing the cognitive demands of the task. As indicated by participants' decision times, a secondary task interfered more with the performance of a heuristic than compensatory decision strategy but did not affect the proportions of participants using either type of strategy. A stimulus set effect suggested that the conjunction of cue salience and cue validity might play a determining role in strategy selection. The results of Experiment 2 indicated that when a perceptually salient cue was also the most valid, the majority of participants preferred a single-cue heuristic strategy. Overall, the results contradict the view that heuristics are more likely to be adopted when a task is made more cognitively demanding. It is argued that people employ 2 learning processes during training, one an associative learning process in which cue-outcome associations are developed by sampling multiple cues, and another that involves the sequential examination of single cues to serve as a basis for a single-cue heuristic.

  17. EFFICIENT SELECTION AND CLASSIFICATION OF INFRARED EXCESS EMISSION STARS BASED ON AKARI AND 2MASS DATA

    Energy Technology Data Exchange (ETDEWEB)

    Huang Yafang; Li Jinzeng [National Astronomical Observatories, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing 100012 (China); Rector, Travis A. [University of Alaska, 3211 Providence Drive, Anchorage, AK 99508 (United States); Mallamaci, Carlos C., E-mail: ljz@nao.cas.cn [Observatorio Astronomico Felix Aguilar, Universidad Nacional de San Juan (Argentina)

    2013-05-15

    The selection of young stellar objects (YSOs) based on excess emission in the infrared is easily contaminated by post-main-sequence stars and various types of emission line stars with similar properties. We define in this paper stringent criteria for an efficient selection and classification of stellar sources with infrared excess emission based on combined Two Micron All Sky Survey (2MASS) and AKARI colors. First of all, bright dwarfs and giants with known spectral types were selected from the Hipparcos Catalogue and cross-identified with the 2MASS and AKARI Point Source Catalogues to produce the main-sequence and the post-main-sequence tracks, which appear as expected as tight tracks with very small dispersion. However, several of the main-sequence stars indicate excess emission in the color space. Further investigations based on the SIMBAD data help to clarify their nature as classical Be stars, which are found to be located in a well isolated region on each of the color-color (C-C) diagrams. Several kinds of contaminants were then removed based on their distribution in the C-C diagrams. A test sample of Herbig Ae/Be stars and classical T Tauri stars were cross-identified with the 2MASS and AKARI catalogs to define the loci of YSOs with different masses on the C-C diagrams. Well classified Class I and Class II sources were taken as a second test sample to discriminate between various types of YSOs at possibly different evolutionary stages. This helped to define the loci of different types of YSOs and a set of criteria for selecting YSOs based on their colors in the near- and mid-infrared. Candidate YSOs toward IC 1396 indicating excess emission in the near-infrared were employed to verify the validity of the new source selection criteria defined based on C-C diagrams compiled with the 2MASS and AKARI data. 
Optical spectroscopy and spectral energy distributions of the IC 1396 sample yield a clear identification of the YSOs and further confirm the criteria defined
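Color-color selection criteria of the kind derived above amount to simple cuts in color space. The sketch below applies a hypothetical near-infrared excess cut in the 2MASS (J−H, H−Ks) plane; the slope, intercept, and margin are illustrative placeholders, not the criteria established in the paper:

```python
# Hypothetical near-infrared excess cut in the 2MASS (J-H, H-Ks) plane:
# sources lying redward of an assumed main-sequence/reddening strip by more
# than `margin` in H-Ks are flagged as excess-emission candidates.
def has_nir_excess(j, h, ks, slope=1.7, intercept=0.10, margin=0.05):
    """Return True if the (J-H, H-Ks) colors indicate infrared excess
    under the assumed (illustrative) strip parameters."""
    jh, hks = j - h, h - ks
    return hks > (jh - intercept) / slope + margin

print(has_nir_excess(j=14.2, h=13.1, ks=12.2))   # red H-Ks source -> True
```

Real criteria would be calibrated against the dwarf/giant tracks and the test samples of Herbig Ae/Be and T Tauri stars described in the abstract, and extended to the AKARI mid-infrared colors.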

  18. EFFICIENT SELECTION AND CLASSIFICATION OF INFRARED EXCESS EMISSION STARS BASED ON AKARI AND 2MASS DATA

    International Nuclear Information System (INIS)

    Huang Yafang; Li Jinzeng; Rector, Travis A.; Mallamaci, Carlos C.

    2013-01-01

    The selection of young stellar objects (YSOs) based on excess emission in the infrared is easily contaminated by post-main-sequence stars and various types of emission line stars with similar properties. We define in this paper stringent criteria for an efficient selection and classification of stellar sources with infrared excess emission based on combined Two Micron All Sky Survey (2MASS) and AKARI colors. First of all, bright dwarfs and giants with known spectral types were selected from the Hipparcos Catalogue and cross-identified with the 2MASS and AKARI Point Source Catalogues to produce the main-sequence and the post-main-sequence tracks, which appear as expected as tight tracks with very small dispersion. However, several of the main-sequence stars indicate excess emission in the color space. Further investigations based on the SIMBAD data help to clarify their nature as classical Be stars, which are found to be located in a well isolated region on each of the color-color (C-C) diagrams. Several kinds of contaminants were then removed based on their distribution in the C-C diagrams. A test sample of Herbig Ae/Be stars and classical T Tauri stars were cross-identified with the 2MASS and AKARI catalogs to define the loci of YSOs with different masses on the C-C diagrams. Well classified Class I and Class II sources were taken as a second test sample to discriminate between various types of YSOs at possibly different evolutionary stages. This helped to define the loci of different types of YSOs and a set of criteria for selecting YSOs based on their colors in the near- and mid-infrared. Candidate YSOs toward IC 1396 indicating excess emission in the near-infrared were employed to verify the validity of the new source selection criteria defined based on C-C diagrams compiled with the 2MASS and AKARI data. 
Optical spectroscopy and spectral energy distributions of the IC 1396 sample yield a clear identification of the YSOs and further confirm the criteria defined

  19. 40 CFR Appendix X to Part 86 - Sampling Plans for Selective Enforcement Auditing of Heavy-Duty Engines and Light-Duty Trucks

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Sampling Plans for Selective Enforcement Auditing of Heavy-Duty Engines and Light-Duty Trucks X Appendix X to Part 86 Protection of... Plans for Selective Enforcement Auditing of Heavy-Duty Engines and Light-Duty Trucks Table 1—Sampling...

  20. Fatigue behavior of thin-walled grade 2 titanium samples processed by selective laser melting. Application to life prediction of porous titanium implants.

    Science.gov (United States)

    Lipinski, P; Barbas, A; Bonnet, A-S

    2013-12-01

Because of its biocompatibility and high mechanical properties, commercially pure grade 2 titanium (CPG2Ti) is widely used for the fabrication of patient-specific implants or hard tissue substitutes with complex shapes. To avoid stress shielding and to help their colonization by bone, prostheses with a controlled porosity are designed. Selective laser melting (SLM) is well adapted to manufacturing such geometrically complicated structures, constituted of struts with rough surfaces and relatively small diameters. Few studies have been dedicated to characterizing the fatigue properties of SLM-processed samples and bulk parts, and those followed conventional or standard protocols. The fatigue behavior of standard samples is, however, very different from that of porous raw structures. In this study, SLM-made "as built" (AB) and "heat treated" (HT) tubular samples were tested in fatigue. Wöhler curves were determined in both cases. The obtained endurance limits were σD(AB) = 74.5 MPa and σD(HT) = 65.7 MPa, respectively. The heat treatment worsened the endurance limit through relaxation of the compressive (negative) residual stresses measured on the external surface of the samples. A modified Goodman diagram was established for the raw specimens. Porous samples, based on the pattern developed by Barbas et al. (2012), were manufactured by SLM. Fatigue tests and finite element simulations performed on these samples enabled the determination of a simple fatigue assessment rule. The method based on the stress gradient appeared to be the best approach to take into account the notch influence on the fatigue life of CPG2Ti structures with a controlled porosity. A direction-dependent apparent fatigue strength was found, and a criterion based on the effective, or global, nominal stress was proposed taking into account the anisotropy of the porous structures. Thanks to this criterion, usual calculation methods can be used to design bone substitutes without a precise modelling of their internal fine porosity.
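A modified Goodman diagram, as established above for the raw specimens, screens a (mean, alternating) stress state against the endurance limit and the ultimate tensile strength. The sketch below uses the reported as-built endurance limit of 74.5 MPa; the UTS value and the example stress states are assumptions for illustration, not data from the paper:

```python
def goodman_safe(sigma_a, sigma_m, sigma_endurance, sigma_uts):
    """Modified Goodman criterion: the stress state (mean sigma_m,
    alternating sigma_a) is acceptable for infinite life when
    sigma_a / sigma_e + sigma_m / sigma_uts <= 1."""
    return sigma_a / sigma_endurance + sigma_m / sigma_uts <= 1.0

# As-built tubes: endurance limit 74.5 MPa from the Woehler curve; the UTS
# of grade 2 titanium is taken here as ~450 MPa (assumed, not from the paper).
print(goodman_safe(sigma_a=40.0, sigma_m=120.0,
                   sigma_endurance=74.5, sigma_uts=450.0))   # -> True
```

For the porous structures, the abstract's stress-gradient method would replace the raw nominal stresses in this check with effective values that account for the notch effect of the struts.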