WorldWideScience

Sample records for randomly selected enumeration

  1. Probiotic bacteria: selective enumeration and survival in dairy foods.

    Science.gov (United States)

    Shah, N P

    2000-04-01

    A number of health benefits have been claimed for probiotic bacteria such as Lactobacillus acidophilus, Bifidobacterium spp., and Lactobacillus casei. Because of these potential health benefits, the organisms are increasingly incorporated into dairy foods. However, studies have shown low viability of probiotics in market preparations. Because viability is a prerequisite for delivering the claimed health benefits, a working method for the selective enumeration of probiotic bacteria is essential for assessing it. Viability of probiotic bacteria can be improved by appropriate selection of acid- and bile-resistant strains, use of oxygen-impermeable containers, two-step fermentation, micro-encapsulation, stress adaptation, incorporation of micronutrients such as peptides and amino acids, and sonication of yogurt bacteria. This review covers the selective enumeration and survival of probiotic bacteria in dairy foods.

  2. Evaluation of culture media for selective enumeration of bifidobacteria and lactic acid bacteria

    Directory of Open Access Journals (Sweden)

    Judit Süle

    2014-09-01

    The purpose of this study was to test the suitability of transgalactosylated oligosaccharides-mupirocin lithium salt (TOS-MUP) and MRS-clindamycin-ciprofloxacin (MRS-CC) agars, along with several other culture media, for selectively enumerating bifidobacteria and lactic acid bacteria (LAB) species commonly used to make fermented milks. Pure culture suspensions of a total of 13 dairy bacteria strains, belonging to eight species and five genera, were tested for growth capability under various incubation conditions. TOS-MUP agar was successfully used for the selective enumeration of both Bifidobacterium animalis subsp. lactis BB-12 and B. breve M-16V. MRS-CC agar showed relatively good selectivity for Lactobacillus acidophilus; however, it also promoted the growth of Lb. casei strains. For this reason, MRS-CC agar can only be used as a selective medium for the enumeration of Lb. acidophilus if Lb. casei is not present in a product at levels similar to or exceeding those of Lb. acidophilus. Unlike bifidobacteria and coccus-shaped LAB, all the lactobacilli strains involved in this work were found to grow well on MRS pH 5.4 agar incubated under anaerobiosis at 37 °C for 72 h. Therefore, this method proved to be particularly suitable for the selective enumeration of Lactobacillus spp.

  3. Mupirocin-mucin agar for selective enumeration of Bifidobacterium bifidum

    Czech Academy of Sciences Publication Activity Database

    Pechar, R.; Rada, V.; Parafati, L.; Musilová, S.; Bunešová, V.; Vlková, E.; Killer, Jiří; Mrázek, Jakub; Kmeť, V.; Svejštil, R.

    2014-01-01

    Vol. 191, No. 1 (2014), pp. 32-35. ISSN 0168-1605. R&D Projects: GA ČR GA13-08803S. Institutional support: RVO:67985904. Keywords: probiotics; Bifidobacterium bifidum; selective enumeration. Subject RIV: EE - Microbiology, Virology. Impact factor: 3.082, year: 2014

  4. Survival of probiotic adjunct cultures in cheese and challenges in their enumeration using selective media.

    Science.gov (United States)

    Oberg, C J; Moyes, L V; Domek, M J; Brothersen, C; McMahon, D J

    2011-05-01

    Various selective media for enumerating probiotic and cheese cultures were screened, with 6 media then used to study survival of probiotic bacteria in full-fat and low-fat Cheddar cheese. Commercial strains of Lactobacillus acidophilus, Lactobacillus casei, Lactobacillus paracasei, or Bifidobacterium lactis were added as probiotic adjuncts. The selective media, designed to promote growth of certain lactic acid bacteria (LAB) over others or to differentiate between LAB, were used to detect individual LAB types during cheese storage. Commercial strains of Lactococcus, Lactobacillus, and Bifidobacterium spp. were initially screened on the 6 selective media along with nonstarter LAB (NSLAB) isolates. The microbial flora of the cheeses was analyzed during 9 mo of storage at 6°C. Many NSLAB were able to grow on media presumed selective for Lactococcus, Bifidobacterium spp., or Lb. acidophilus, which became apparent after 90 d of cheese storage. Between 90 and 120 d of storage, bacterial counts changed on media selective for Bifidobacterium spp., suggesting growth of NSLAB. Appearance of NSLAB on Lb. casei selective media [de Man, Rogosa, and Sharpe (MRS) + vancomycin] occurred sooner (30 d) in low-fat cheese than in full-fat control cheeses. Differentiation between NSLAB and Lactococcus was achieved by counting after 18 to 24 h, when the NSLAB colonies were only pinpoint in size. Growth of NSLAB on the various selective media during aging means that probiotic adjunct cultures added during cheesemaking can only be enumerated with confidence on selective media for up to 3 or 4 mo. After this time, growth of NSLAB obfuscates enumeration of probiotic adjuncts. When adjunct Lb. casei or Lb. paracasei cultures are added during cheesemaking, they appear to remain at high numbers for a long time (9 mo) when counted on MRS + vancomycin medium, but a reasonable probability exists that they have been overtaken by NSLAB, which also grow readily on this medium. Enumeration using multiple...

  5. Selective and differential enumerations of Lactobacillus delbrueckii subsp. bulgaricus, Streptococcus thermophilus, Lactobacillus acidophilus, Lactobacillus casei and Bifidobacterium spp. in yoghurt--a review.

    Science.gov (United States)

    Ashraf, Rabia; Shah, Nagendra P

    2011-10-03

    Yoghurt is increasingly being used as a carrier of probiotic bacteria for their potential health benefits. To meet the recommended level of ≥10^6 viable cells/g of product, assessment of the viability of probiotic bacteria in market preparations is crucial. This requires a working method for the selective enumeration of the probiotic and lactic acid bacteria found in yoghurt, such as Streptococcus thermophilus, Lactobacillus delbrueckii subsp. bulgaricus, Lb. acidophilus, Lb. casei and Bifidobacterium. This chapter presents an overview of media that could be used for differential and selective enumerations of yoghurt bacteria. De Man Rogosa Sharpe agar containing fructose (MRSF), MRS agar pH 5.2 (MRS 5.2), reinforced clostridial prussian blue agar at pH 5.0 (RCPB 5.0) or reinforced clostridial agar at pH 5.3 (RCA 5.3) are suitable for enumeration of Lb. delbrueckii subsp. bulgaricus when incubation is carried out at 45°C for 72 h. S. thermophilus (ST) agar and M17 are recommended for selective enumeration of S. thermophilus. Selective enumeration of Lb. acidophilus in mixed culture can be achieved on Rogosa agar supplemented with 5-bromo-4-chloro-3-indolyl-β-d-glucopyranoside (X-Glu) or on MRS containing maltose (MRSM) with incubation in a 20% CO2 atmosphere. Lb. casei can be selectively enumerated on specially formulated Lb. casei (LC) agar from products containing yoghurt starter bacteria (S. thermophilus and Lb. delbrueckii subsp. bulgaricus), Lb. acidophilus, Bifidobacterium spp. and Lb. casei. Bifidobacterium can be enumerated on MRS agar supplemented with nalidixic acid, paromomycin, neomycin sulphate and lithium chloride (MRS-NPNL) under anaerobic incubation at 37°C for 72 h. Copyright © 2011. Published by Elsevier B.V.

  6. Comparison of selected methods for the enumeration of fecal coliforms and Escherichia coli in shellfish.

    Science.gov (United States)

    Grabow, W O; De Villiers, J C; Schildhauer, C I

    1992-09-01

    In a comparison of five selected methods for the enumeration of fecal coliforms and Escherichia coli in naturally contaminated and sewage-seeded mussels (Choromytilus spp.) and oysters (Ostrea spp.), a spread-plate procedure with mFC agar without rosolic acid and preincubation proved to be the method of choice for routine quality assessment.

  7. Top-k Based Adaptive Enumeration in Constraint Programming

    Directory of Open Access Journals (Sweden)

    Ricardo Soto

    2015-01-01

    ...order for variables and values is employed along the search. In this paper, we present a new and more lightweight approach to performing adaptive enumeration. We incorporate a powerful classification technique named Top-k in order to adaptively select strategies during resolution. We report results on a set of well-known benchmarks in which the proposed approach is noticeably competitive with classical and modern adaptive enumeration methods for constraint satisfaction.

  8. A selective medium for the enumeration and differentiation of Lactobacillus delbrueckii ssp. bulgaricus.

    Science.gov (United States)

    Nwamaioha, Nwadiuto O; Ibrahim, Salam A

    2018-06-01

    Modified reinforced clostridial medium (mRCM) was developed and evaluated for the differential enumeration of Lactobacillus delbrueckii ssp. bulgaricus. Lactobacillus bulgaricus, an important species of lactic acid bacteria with health benefits, is used in the production of yogurt and other fermented foods. Our results showed that supplementing reinforced clostridial medium with 0.025% CaCl2, 0.01% uracil, and 0.2% Tween 80 (mRCM) significantly enhanced the growth rate of L. bulgaricus RR and ATCC 11842 strains as measured by the optical densities of these strains after 12 h of incubation at 42°C. The bacterial populations (plate count) of the RR and ATCC 11842 strains were 0.76 and 0.77 log cfu/g higher in mRCM than in de Man, Rogosa, and Sharpe and reinforced clostridial medium media, respectively. Conversely, the population counts for other bacterial species (Bifidobacterium, Lactobacillus rhamnosus, and Lactobacillus reuteri) were significantly inhibited in the mRCM medium. The addition of aniline blue dye to mRCM (mRCM-blue) improved the selectivity of L. bulgaricus in mixed lactic bacterial cultures compared with de Man, Rogosa, and Sharpe medium and lactic agar with regard to colony appearance and morphology. The mRCM-blue performed better than the conventional medium in culturing, enumerating, and differentiating L. bulgaricus. Therefore, mRCM-blue could be used as a selective medium to enhance the growth and differentiation of L. bulgaricus in order to meet the increasing demand for this beneficial species of bacteria. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  9. Multicanonical simulation of the Domb-Joyce model and the Gō model: new enumeration methods for self-avoiding walks

    International Nuclear Information System (INIS)

    Shirai, Nobu C; Kikuchi, Macoto

    2013-01-01

    We develop statistical enumeration methods for self-avoiding walks using a powerful sampling technique called the multicanonical Monte Carlo method. Using these methods, we estimate the numbers of two-dimensional N-step self-avoiding walks up to N = 256 with statistical errors. The developed methods are based on statistical mechanical models of paths which include self-avoiding walks. The criterion for selecting a suitable model for enumerating self-avoiding walks is whether or not the configuration space of the model includes a set whose number of elements can be counted exactly. We call this set a scale fixing set. We selected the following two models which satisfy the criterion: the Gō model for lattice proteins and the Domb-Joyce model for generalized random walks. There is a contrast between these two models in the structure of the configuration space. The configuration space of the Gō model is defined as the universal set of self-avoiding walks, and the set of ground-state conformations provides a scale fixing set. On the other hand, the configuration space of the Domb-Joyce model is defined as the universal set of random walks, which can itself be used as a scale fixing set, and the set of ground-state conformations is the same as the universal set of self-avoiding walks. From the perspective of enumeration performance, we conclude that the Domb-Joyce model is the better of the two. The performance difference is partly explained by the existence of a first-order phase transition in the Gō model.
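
    For small N, the quantity being estimated here, the number c_N of N-step self-avoiding walks on the square lattice, can be counted exactly by backtracking. The sketch below is purely illustrative: exact enumeration is feasible only for small N, which is why the authors turn to multicanonical sampling for N up to 256.

```python
# Exact backtracking count of N-step self-avoiding walks (SAWs) on the
# square lattice -- the quantity c_N that the multicanonical method
# estimates statistically for large N.

def count_saws(n):
    """Count n-step self-avoiding walks starting from the origin."""
    visited = {(0, 0)}

    def walk(x, y, steps_left):
        if steps_left == 0:
            return 1
        total = 0
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if nxt not in visited:       # self-avoidance constraint
                visited.add(nxt)
                total += walk(nxt[0], nxt[1], steps_left - 1)
                visited.remove(nxt)
        return total

    return walk(0, 0, n)

# Known square-lattice values of c_N: 4, 12, 36, 100, 284, ...
print([count_saws(n) for n in range(1, 6)])  # -> [4, 12, 36, 100, 284]
```

    The running time grows roughly like c_N itself (about 2.64^N), which makes the statistical approach of the paper necessary beyond a few dozen steps.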

  10. Analysis and enumeration algorithms for biological graphs

    CERN Document Server

    Marino, Andrea

    2015-01-01

    In this work we revise the main techniques for enumeration algorithms and show four examples of enumeration algorithms that can be applied to efficiently deal with some biological problems modelled using biological networks: enumerating central and peripheral nodes of a network, enumerating stories, enumerating paths or cycles, and enumerating bubbles. Notice that the corresponding computational problems we define are of more general interest and our results hold in the case of arbitrary graphs. Enumerating all the most and least central vertices in a network according to their eccentricity is an example of an enumeration problem whose solutions are polynomially many and can be listed in polynomial time, very often in linear or almost linear time in practice. Enumerating stories, i.e. all maximal directed acyclic subgraphs of a graph G whose sources and targets belong to a predefined subset of the vertices, is on the other hand an example of an enumeration problem with an exponential number of solutions...
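
    The first problem listed, enumerating the most and least central vertices by eccentricity, can be sketched with plain breadth-first search. This is a hedged illustration for small unweighted connected graphs, not the optimized algorithms developed in the book.

```python
# Enumerating the most and least central vertices of a graph by
# eccentricity: ecc(v) = max over u of dist(v, u). The center attains
# the minimum eccentricity (the radius); the periphery attains the
# maximum (the diameter).
from collections import deque

def eccentricities(adj):
    """BFS from every vertex of an unweighted connected graph given
    as {vertex: [neighbours]}; returns {vertex: eccentricity}."""
    ecc = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        ecc[src] = max(dist.values())
    return ecc

# Path graph 0-1-2-3-4: center is {2}, periphery is {0, 4}.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
ecc = eccentricities(path)
center = [v for v in ecc if ecc[v] == min(ecc.values())]
periphery = [v for v in ecc if ecc[v] == max(ecc.values())]
print(center, periphery)  # -> [2] [0, 4]
```

    All-pairs BFS costs O(n·(n+m)), which is why the practical near-linear behaviour mentioned in the abstract relies on pruning rather than this brute force.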

  11. Taking a(c)count of eye movements: Multiple mechanisms underlie fixations during enumeration.

    Science.gov (United States)

    Paul, Jacob M; Reeve, Robert A; Forte, Jason D

    2017-03-01

    We habitually move our eyes when we enumerate sets of objects. It remains unclear whether saccades are directed for numerosity processing as distinct from object-oriented visual processing (e.g., object saliency, scanning heuristics). Here we investigated the extent to which enumeration eye movements are contingent upon the location of objects in an array, and whether fixation patterns vary with enumeration demands. Twenty adults enumerated random dot arrays twice: first to report the set cardinality and second to judge the perceived number of subsets. We manipulated the spatial location of dots by presenting arrays at 0°, 90°, 180°, and 270° orientations. Participants required a similar time to enumerate the set or the perceived number of subsets in the same array. Fixation patterns were systematically shifted in the direction of array rotation, and distributed across similar locations when the same array was shown on multiple occasions. We modeled fixation patterns and dot saliency using a simple filtering model and show participants judged groups of dots in close proximity (2°-2.5° visual angle) as distinct subsets. Modeling results are consistent with the suggestion that enumeration involves visual grouping mechanisms based on object saliency, and specific enumeration demands affect spatial distribution of fixations. Our findings highlight the importance of set computation, rather than object processing per se, for models of numerosity processing.

  12. The association of color memory and the enumeration of multiple spatially overlapping sets.

    Science.gov (United States)

    Poltoratski, Sonia; Xu, Yaoda

    2013-07-09

    Using dot displays, Halberda, Sires, and Feigenson (2006) showed that observers could simultaneously encode the numerosity of two spatially overlapping sets and the superset of all items at a glance. With the brief display and the masking used in Halberda et al., the task required observers to encode the colors of each set in order to select and enumerate all the dots in that set. As such, the observed capacity limit for set enumeration could reflect a limit in visual short-term memory (VSTM) capacity for the set color rather than a limit in set enumeration per se. Here, we largely replicated Halberda et al. and found successful enumeration of approximately two sets (the superset was not probed). We also found that only about two and a half colors could be remembered from the colored dot displays whether or not the enumeration task was performed concurrently with the color VSTM task. Because observers must remember the color of a set prior to enumerating it, the under three-item VSTM capacity for color necessarily dictates that set enumeration capacity in this paradigm could not exceed two sets. Thus, the ability to enumerate multiple spatially overlapping sets is likely limited by VSTM capacity to retain the discriminating feature of these sets. This relationship suggests that the capacity for set enumeration cannot be considered independently from the capacity for the set's defining features.

  13. Media for the isolation and enumeration of bifidobacteria in dairy products.

    Science.gov (United States)

    Roy, D

    2001-09-28

    Bifidobacteria are commonly used for the production of fermented milks, alone or in combination with other lactic acid bacteria. Populations of the bifidobacteria strain added to the product should be over 10^6 bifidobacteria/g at the time of consumption. Hence, rapid and reliable methods are needed to routinely determine the initial inoculum and to estimate the storage period during which bifidobacteria remain viable. Plate count methods are still preferable for quality control measurements in dairy products. It is, therefore, necessary to have a medium that selectively promotes the growth of bifidobacteria while suppressing other bacteria. The present paper is an overview of media and methods, including summaries of published comparisons between different selective media. Culture media for bifidobacteria may be divided into basal, elective, differential and selective media. Non-selective media are useful for routine enumeration of bifidobacteria when present in non-fermented milks. Reinforced Clostridial Agar and De Man Rogosa Sharpe (MRS) agar supplemented with cysteine, both available commercially, are the media of choice for industrial quality control laboratories. Several selective or differential media have been described for the enumeration of bifidobacteria in the presence of other lactic acid bacteria. From the large number of selective media available, it can be concluded that there is no standard medium for the detection of bifidobacteria. However, Columbia agar base supplemented with lithium chloride and sodium propionate, and MRS medium supplemented with neomycin, paromomycin, nalidixic acid and lithium chloride, can be recommended for the selective enumeration of bifidobacteria in dairy products.

  14. Almost computably enumerable families of sets

    International Nuclear Information System (INIS)

    Kalimullin, I Sh

    2008-01-01

    An almost computably enumerable family that is not ∅′-computably enumerable is constructed. Moreover, it is established that for any computably enumerable (c.e.) set A there exists a family that is X-c.e. if and only if the set X is not A-computable. Bibliography: 5 titles.

  15. Enumeration of Enterobacter cloacae after chloramine exposure.

    OpenAIRE

    Watters, S K; Pyle, B H; LeChevallier, M W; McFeters, G A

    1989-01-01

    Growth of Enterobacter cloacae on various media was compared after disinfection. This was done to examine the effects of monochloramine and chlorine on the enumeration of coliforms. The media used were TLY (nonselective; 5.5% tryptic soy broth, 0.3% yeast extract, 1.0% lactose, and 1.5% Bacto-Agar), m-T7 (selective; developed to recover injured coliforms), m-Endo (selective; contains sodium sulfite), TLYS (TLY with sodium sulfite), and m-T7S (m-T7 with sodium sulfite). Sodium sulfite in any medium improved the recovery of chloramine-treated E. cloacae. However, sodium sulfite in TLYS and m-T7S did not significantly improve the detection of chlorine-treated E. cloacae, and m-Endo was the least effective medium for recovering chlorinated bacteria. Differences in recovery of chlorine- and chloramine-treated E. cloacae are consistent with mechanistic differences between the disinfectants.

  16. Impact of enumeration method on diversity of Escherichia coli genotypes isolated from surface water.

    Science.gov (United States)

    Martin, E C; Gentry, T J

    2016-11-01

    There are numerous regulatory-approved Escherichia coli enumeration methods, but it is not known whether differences in media composition and incubation conditions impact the diversity of E. coli populations detected by these methods. A study was conducted to determine if three standard water quality assessments, Colilert®, USEPA Method 1603 (modified mTEC), and USEPA Method 1604 (MI), detect different populations of E. coli. Samples were collected from six watersheds and analysed using the three enumeration approaches followed by E. coli isolation and genotyping. Results indicated that the three methods generally produced similar enumeration data across the sites, although there were some differences on a site-by-site basis. The Colilert® method consistently generated the least diverse collection of E. coli genotypes as compared to modified mTEC and MI, with those two methods being roughly equal to each other. Although the three media assessed in this study were designed to enumerate E. coli, the differences in media composition, incubation temperature, and growth platform appear to have a strong selective influence on the populations of E. coli isolated. This study suggests that standardized methods of enumeration and isolation may be warranted if researchers intend to obtain individual E. coli isolates for further characterization. This study characterized the impact of three USEPA-approved Escherichia coli enumeration methods on observed E. coli population diversity in surface water samples. Results indicated that these methods produced similar E. coli enumeration data but were more variable in the diversity of E. coli genotypes observed. Although the three methods enumerate the same species, differences in media composition, growth platform, and incubation temperature likely contribute to the selection of different cultivable populations of E. coli, and thus caution should be used when implementing these methods interchangeably for...

  17. Perceptual grouping affects haptic enumeration over the fingers

    NARCIS (Netherlands)

    Overvliet, K.E.; Plaisier, M.A.

    2016-01-01

    Spatial arrangement is known to influence enumeration times in vision. In haptic enumeration, it has been shown that dividing the total number of items over the two hands can speed up enumeration. Here we investigated how spatial arrangement of items and non-items presented to the individual fingers...

  18. Blocked Randomization with Randomly Selected Block Sizes

    Directory of Open Access Journals (Sweden)

    Jimmy Efird

    2010-12-01

    When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
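
    The procedure described, balanced blocks whose sizes are drawn at random so that the allocation sequence cannot be predicted, can be sketched as follows. This is a minimal illustration; the two block sizes (2 and 4) and the two-arm design are assumptions made for the example, not a prescription from the paper.

```python
# Blocked randomization with randomly selected block sizes for two
# arms: "T" = treatment, "R" = referent. Within every block the arms
# are balanced; the block size itself is drawn at random so an
# unblinded investigator cannot predict where a block ends.
import random

def blocked_allocation(n_participants, block_sizes=(2, 4), seed=None):
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        size = rng.choice(block_sizes)   # random, even block size
        block = ["T"] * (size // 2) + ["R"] * (size // 2)
        rng.shuffle(block)               # balanced within each block
        sequence.extend(block)
    # Truncation may cut the final block, so overall counts can
    # differ by at most half of the largest block size.
    return sequence[:n_participants]

alloc = blocked_allocation(20, seed=42)
print(alloc.count("T"), alloc.count("R"))
```

    With fixed-size blocks the last assignment in every block is deducible; drawing the size at random removes that predictability while keeping near-perfect balance.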

  19. MacWilliams Identity for M-Spotty Weight Enumerator

    Science.gov (United States)

    Suzuki, Kazuyoshi; Fujiwara, Eiji

    M-spotty byte error control codes are very effective for correcting/detecting errors in semiconductor memory systems that employ recent high-density RAM chips with wide I/O data (e.g., 8, 16, or 32 bits). In this case, the width of the I/O data is one byte. A spotty byte error is defined as random t-bit errors within a byte of length b bits, where 1 ≤ t ≤ b. Then, an error is called an m-spotty byte error if at least one spotty byte error is present in a byte. M-spotty byte error control codes are characterized by the m-spotty distance, which includes the Hamming distance as a special case for t = 1 or t = b. The MacWilliams identity provides the relationship between the weight distribution of a code and that of its dual code. The present paper presents the MacWilliams identity for the m-spotty weight enumerator of m-spotty byte error control codes. In addition, the present paper clarifies that the indicated identity includes the MacWilliams identity for the Hamming weight enumerator as a special case.
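
    The MacWilliams identity for the Hamming weight enumerator, which the paper recovers as the special case t = 1 or t = b, can be stated concretely. For a binary linear code C of length n with dual code C⊥ (a standard statement, included here for reference):

```latex
W_C(x, y) = \sum_{c \in C} x^{\,n - \mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)},
\qquad
W_{C^{\perp}}(x, y) = \frac{1}{|C|}\, W_C(x + y,\; x - y)
```

    Here wt(c) is the Hamming weight of the codeword c, so the identity determines the full weight distribution of the dual code from that of the code itself; the m-spotty version generalizes the transform on the right-hand side.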

  20. Application of hyperplane arrangements to weight enumeration

    NARCIS (Netherlands)

    Jurrius, R.P.M.J.; Pellikaan, G.R.

    2014-01-01

    Much research in coding theory is focused on linear error-correcting codes. Since these codes are subspaces, linear algebra plays a prominent role in studying them. An important polynomial invariant of linear error-correcting codes is the (extended) weight enumerator. The weight enumerator gives...

  1. Sensitive enumeration of Listeria monocytogenes and other Listeria species in various naturally contaminated matrices using a membrane filtration method.

    Science.gov (United States)

    Barre, Léna; Brasseur, Emilie; Doux, Camille; Lombard, Bertrand; Besse, Nathalie Gnanou

    2015-06-01

    A sensitive method for the enumeration of Listeria monocytogenes (L. monocytogenes) in food has recently been developed. This method is based on membrane filtration of the food suspension followed by transfer of the filter onto a selective medium to enumerate L. monocytogenes. An evaluation of this method was performed with several categories of foods naturally contaminated with L. monocytogenes. The results obtained with this technique were compared with those obtained with the modified reference EN ISO 11290-2 method for the enumeration of L. monocytogenes in food, and were found to be more precise. In most cases, the filtration method made it possible to examine a greater quantity of food, thus greatly improving the sensitivity of the enumeration. However, it was hardly applicable to some food categories because of filtration problems and background microbiota interference. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Ensemble Weight Enumerators for Protograph LDPC Codes

    Science.gov (United States)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The results derived here on ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  3. 47 CFR 1.1602 - Designation for random selection.

    Science.gov (United States)

    2010-10-01

    47 CFR § 1.1602 (2010-10-01): Designation for random selection. Title 47, Telecommunication; Federal Communications Commission; General Practice and Procedure; Random Selection Procedures for Mass Media Services; General Procedures...

  4. 47 CFR 1.1603 - Conduct of random selection.

    Science.gov (United States)

    2010-10-01

    47 CFR § 1.1603 (2010-10-01): Conduct of random selection. Title 47, Telecommunication; Federal Communications Commission; General Practice and Procedure; Random Selection Procedures for Mass Media Services; General Procedures...

  5. Enumeration of Enterobacter cloacae after chloramine exposure.

    Science.gov (United States)

    Watters, S K; Pyle, B H; LeChevallier, M W; McFeters, G A

    1989-01-01

    Growth of Enterobacter cloacae on various media was compared after disinfection. This was done to examine the effects of monochloramine and chlorine on the enumeration of coliforms. The media used were TLY (nonselective; 5.5% tryptic soy broth, 0.3% yeast extract, 1.0% lactose, and 1.5% Bacto-Agar), m-T7 (selective; developed to recover injured coliforms), m-Endo (selective; contains sodium sulfite), TLYS (TLY with sodium sulfite), and m-T7S (m-T7 with sodium sulfite). Sodium sulfite in any medium improved the recovery of chloramine-treated E. cloacae. However, sodium sulfite in TLYS and m-T7S did not significantly improve the detection of chlorine-treated E. cloacae, and m-Endo was the least effective medium for recovering chlorinated bacteria. Differences in recovery of chlorine- and chloramine-treated E. cloacae are consistent with mechanistic differences between the disinfectants. PMID:2619309

  6. Enumerating submultisets of multisets

    NARCIS (Netherlands)

    Hage, J.

    2001-01-01

    In this paper we consider the problem of enumerating the submultisets of a multiset in which each element has equal multiplicity. The crucial property is that consecutive submultisets in this listing differ by one in the cardinality of only one of the elements. This is a generalization to k-ary...
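
    For the special case described, n distinct elements each with multiplicity m, such a listing is given by a reflected (m+1)-ary Gray code over multiplicity vectors. The sketch below illustrates the Gray-code property the abstract refers to; it is not the paper's own construction, which generalizes further.

```python
def gray_multiset(n, m):
    """List all submultisets of a multiset with n distinct elements,
    each of multiplicity m, as multiplicity vectors in {0..m}^n,
    ordered so that consecutive vectors differ by +-1 in exactly one
    coordinate (a reflected (m+1)-ary Gray code)."""
    if n == 0:
        return [()]
    rest = gray_multiset(n - 1, m)
    out = []
    for k in range(m + 1):
        # Reverse the sub-listing on every other pass so the joint at
        # each boundary changes only the first coordinate.
        block = rest if k % 2 == 0 else rest[::-1]
        out.extend((k,) + v for v in block)
    return out

codes = gray_multiset(2, 2)   # the 9 submultisets of {a, a, b, b}
print(codes)
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0),
#     (2, 0), (2, 1), (2, 2)]
```

    Reading a vector (i, j) as "i copies of a and j copies of b" shows each step adds or removes a single copy of a single element, the crucial property named in the abstract.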

  7. Enumeration of fungi in barley

    CSIR Research Space (South Africa)

    Rabie, CJ

    1997-04-01

    Estimation of fungal contamination of barley grain is important as certain fungi can proliferate during the malting process. The following factors which may affect the enumeration of fungi were evaluated: dilution versus direct plating, pre...

  8. Template-based combinatorial enumeration of virtual compound libraries for lipids.

    Science.gov (United States)

    Sud, Manish; Fahy, Eoin; Subramaniam, Shankar

    2012-09-25

    A variety of software packages are available for the combinatorial enumeration of virtual libraries for small molecules, starting from specifications of core scaffolds with attachment points and lists of R-groups as SMILES or SD files. Although SD files include atomic coordinates for core scaffolds and R-groups, it is not possible to control the 2-dimensional (2D) layout of the enumerated structures generated for virtual compound libraries because different packages generate different 2D representations for the same structure. We have developed a software package called LipidMapsTools for the template-based combinatorial enumeration of virtual compound libraries for lipids. Virtual libraries are enumerated for the specified lipid abbreviations using matching lists of pre-defined templates and chain abbreviations, instead of core scaffolds and lists of R-groups provided by the user. 2D structures of the enumerated lipids are drawn in a specific and consistent fashion adhering to the framework for representing lipid structures proposed by the LIPID MAPS consortium. LipidMapsTools is lightweight, relatively fast and contains no external dependencies. It is an open source package and freely available under the terms of the modified BSD license.
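
    The enumeration step itself amounts to a Cartesian product of a class template over per-position chain lists. The sketch below is schematic: the class prefix, chain abbreviations, and the enumerate_library helper are illustrative placeholders, not the actual LipidMapsTools interface or template files.

```python
# Template-based combinatorial enumeration, schematically: a lipid
# class template with chain-position slots is expanded against lists
# of chain abbreviations (LIPID MAPS-style "carbons:double bonds").
from itertools import product

def enumerate_library(class_prefix, chain_lists):
    """Yield every combination of chain abbreviations for a template."""
    for chains in product(*chain_lists):
        yield "%s(%s)" % (class_prefix, "/".join(chains))

# Hypothetical sn-1 and sn-2 chain lists for a phosphatidylcholine-like
# template:
sn1 = ["16:0", "18:0", "18:1(9Z)"]
sn2 = ["18:1(9Z)", "20:4(5Z,8Z,11Z,14Z)"]
library = list(enumerate_library("PC", [sn1, sn2]))
print(len(library))   # -> 6  (3 sn-1 choices x 2 sn-2 choices)
print(library[0])     # -> PC(16:0/18:1(9Z))
```

    The library size is simply the product of the chain-list lengths per slot, which is why pre-defined templates keep virtual lipid libraries tractable.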

  9. Enumeration of petroleum hydrocarbon utilizing bacteria

    International Nuclear Information System (INIS)

    Mukherjee, S.; Barot, M.; Levine, A.D.

    1996-01-01

    In-situ biological treatment is one among a number of emerging technologies that may be applied to the remediation of contaminated soils and groundwater. In 1985, a surface spill of 1,500 gallons of dielectric transformer oil at the Sandia National Laboratories (HERMES II facility) resulted in contamination of soil up to depths of 160 feet. The extent of contamination and site characteristics favored the application of in-situ bioremediation as a potential remedial technology. The purpose of this research was to enumerate indigenous microbial populations capable of degrading petroleum hydrocarbons. Microbial enumeration and characterization methods suitably adapted for hydrocarbon-utilizing bacteria were used as an indicator of the presence of viable microbial consortia in excavated soil samples with total petroleum hydrocarbon (TPH) concentrations ranging from 300 to 26,850 ppm. Microbial activity was quantified by direct and streak plating of soil samples on silica gel media. Effects of toxicity and temperature were studied using batch cultures of hydrocarbon-utilizing bacteria (selectively isolated in an enrichment medium) at temperatures of 20 and 35°C. It was concluded from this study that it is possible to isolate native microorganisms from contaminated soils at depths of 60 to 160 feet and with oil concentrations ranging from 300 to 26,850 ppm. About 62% of the microorganisms isolated from the contaminated soil were capable of using the contaminant oil as a substrate for growth and metabolism under aerobic conditions. Growth rates were observed to be 50% higher for the highest contaminant concentration at 20°C. Resistance to toxicity of the contaminant oil was also observed to be greater at 20°C than at 35°C.

  10. Fluorogenic membrane overlays to enumerate total coliforms, Escherichia coli, and total Vibrionaceae in shellfish and seawater

    Science.gov (United States)

    Three assays were developed to enumerate total coliforms, Escherichia coli, and total Vibrionaceae in shellfish and other foods and in seawater and other environmental samples. Assays involve membrane overlays of overnight colonies on non-selective agar plates to detect β-glucuronidase and lysyl am...

  11. Miniaturized most probable number for the enumeration of Salmonella sp in artificially contaminated chicken meat

    Directory of Open Access Journals (Sweden)

    FL Colla

    2014-03-01

    Full Text Available Salmonella is traditionally identified by conventional microbiological tests, but enumeration of this bacterium is not performed on a routine basis. Methods such as the most probable number (MPN), which utilize an array of multiple tubes, are time-consuming and expensive, whereas miniaturized most probable number (mMPN) methods, which use microplates, can be adapted for the enumeration of bacteria, saving time and materials. The aim of the present paper is to assess two mMPN methods for the enumeration of Salmonella sp in artificially contaminated chicken meat samples. Microplates containing 24 wells (method A) and 96 wells (method B), both with peptone water as pre-enrichment medium and modified semi-solid Rappaport-Vassiliadis (MSRV) as selective enrichment medium, were used. The meat matrix consisted of 25 g of autoclaved ground chicken breast contaminated with dilutions of up to 10^-6 of Salmonella Typhimurium (ST) and Escherichia coli (EC). In method A, the 10^-5 dilution of Salmonella Typhimurium corresponded to >57 MPN/mL and the 10^-6 dilution was equal to 30 MPN/mL. There was a correlation between the counts used for the artificial contamination of the samples and those recovered by mMPN, indicating that method A was sensitive for the enumeration of different levels of contamination of the meat matrix. In method B, there was no correlation between the inoculated dilutions and the mMPN results.
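
    Behind any tube- or well-array MPN method sits a maximum-likelihood calculation: the density that makes the observed positive/negative pattern most probable. A stdlib sketch of the generic MPN score equation solved by log-scale bisection (a textbook formulation, not the protocol of this paper):

```python
import math

def mpn(positives, totals, volumes, lo=1e-9, hi=1e9, iters=200):
    """Maximum-likelihood Most Probable Number estimate (organisms per mL).

    positives[i] of totals[i] wells were positive at inoculum volume
    volumes[i] (mL). Under a Poisson model, a well is positive with
    probability 1 - exp(-lambda * v); the ML estimate is the root of the
    score equation below, found here by bisection on a log scale."""
    def score(lam):
        s = 0.0
        for g, n, v in zip(positives, totals, volumes):
            p = 1.0 - math.exp(-lam * v)
            s += g * v * math.exp(-lam * v) / p - (n - g) * v
        return s
    for _ in range(iters):
        mid = math.sqrt(lo * hi)  # geometric midpoint
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Classic 3-dilution, 3-well-per-dilution pattern 3-2-1 at 0.1/0.01/0.001 mL:
est = mpn([3, 2, 1], [3, 3, 3], [0.1, 0.01, 0.001])
```

For this 3-2-1 pattern the estimate lands near 150 organisms/mL, in line with standard MPN tables for those inoculum volumes.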

  12. Enumeration of sugars and sugar alcohols hydroxyl groups by aqueous-based acetylation and MALDI-TOF mass spectrometry

    Science.gov (United States)

    A method is described for enumerating hydroxyl groups on analytes in aqueous media, applied to some common polyalcohols (erythritol, mannitol, and xylitol) and selected carbohydrates. The analytes were derivatized in water with vinyl acetate in the presence of sodium phosphate buffer. ...

  13. Effect of storage time and temperature of equine feces on the subsequent enumeration of lactobacilli and cellulolytic bacteria

    Science.gov (United States)

    Cellulolytic bacteria and lactobacilli are beneficial microbes in the equine hindgut. There are several existing methodologies for the enumeration of these bacteria, which vary based on selective and differential media and sample handling procedures including storage time and temperature. The object...

  14. Circulating Tumor Cells, Enumeration and Beyond

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Jian-Mei [Clinical and Experimental Pharmacology Group, Paterson Institute for Cancer Research, Manchester M20 4BX (United Kingdom); Krebs, Matthew [Clinical and Experimental Pharmacology Group, Paterson Institute for Cancer Research, Manchester M20 4BX (United Kingdom); Christie Hospital Foundation NHS Trust, Manchester M20 4BX (United Kingdom); Ward, Tim; Morris, Karen; Sloane, Robert [Clinical and Experimental Pharmacology Group, Paterson Institute for Cancer Research, Manchester M20 4BX (United Kingdom); Blackhall, Fiona [Clinical and Experimental Pharmacology Group, Paterson Institute for Cancer Research, Manchester M20 4BX (United Kingdom); School of Cancer and Enabling Sciences, University of Manchester, Manchester Cancer Research Centre, Manchester Academic Health Sciences Centre, Manchester M20 4BX (United Kingdom); Christie Hospital Foundation NHS Trust, Manchester M20 4BX (United Kingdom); Dive, Caroline, E-mail: cdive@picr.man.ac.uk [Clinical and Experimental Pharmacology Group, Paterson Institute for Cancer Research, Manchester M20 4BX (United Kingdom); School of Cancer and Enabling Sciences, University of Manchester, Manchester Cancer Research Centre, Manchester Academic Health Sciences Centre, Manchester M20 4BX (United Kingdom)

    2010-06-09

    The detection and enumeration of circulating tumor cells (CTCs) has shown significant clinical utility with respect to prognosis in breast, colorectal and prostate cancers. Emerging studies show that CTCs can provide pharmacodynamic information to aid therapy decision making. CTCs as a ‘virtual and real-time biopsy’ have clear potential to facilitate exploration of tumor biology, and in particular, the process of metastasis. The challenge of profiling CTC molecular characteristics and generating CTC signatures using current technologies is that they enrich rather than purify CTCs from whole blood; we face the problem of looking for the proverbial ‘needle in the haystack’. This review summarizes the current methods for CTC detection and enumeration, focuses on molecular characterization of CTCs, unveils some aspects of CTC heterogeneity, describes attempts to purify CTCs and scans the horizon for approaches leading to comprehensive dissection of CTC biology.

  15. Circulating Tumor Cells, Enumeration and Beyond

    International Nuclear Information System (INIS)

    Hou, Jian-Mei; Krebs, Matthew; Ward, Tim; Morris, Karen; Sloane, Robert; Blackhall, Fiona; Dive, Caroline

    2010-01-01

    The detection and enumeration of circulating tumor cells (CTCs) has shown significant clinical utility with respect to prognosis in breast, colorectal and prostate cancers. Emerging studies show that CTCs can provide pharmacodynamic information to aid therapy decision making. CTCs as a ‘virtual and real-time biopsy’ have clear potential to facilitate exploration of tumor biology, and in particular, the process of metastasis. The challenge of profiling CTC molecular characteristics and generating CTC signatures using current technologies is that they enrich rather than purify CTCs from whole blood; we face the problem of looking for the proverbial ‘needle in the haystack’. This review summarizes the current methods for CTC detection and enumeration, focuses on molecular characterization of CTCs, unveils some aspects of CTC heterogeneity, describes attempts to purify CTCs and scans the horizon for approaches leading to comprehensive dissection of CTC biology.

  16. The Effect of Enumeration of Self-Relevant Words on Self-Focused Attention and Repetitive Negative Thoughts

    Science.gov (United States)

    Muranaka, Seiji; Sasaki, Jun

    2018-01-01

    Self-focused attention refers to awareness of self-referent, internally generated information. It can be categorized into dysfunctional (i.e., self-rumination) and functional (self-reflection) aspects. According to theory on cognitive resource limitations (e.g., Moreno, 2006), there is a difference in cognitive resource allocation between these two aspects of self-focused attention. We propose a new task, self-relevant word (SRW) enumeration, that can aid in behaviorally identifying individuals’ use of self-rumination and self-reflection. The present study has two purposes: to determine the association between self-focus and SRW enumeration, and to examine the effect of dysfunctional SRW enumeration on repetitive negative thinking. One hundred forty-six undergraduate students participated in this study. They completed a measure of state anxiety twice, before and after imagining a social failure situation. They also completed the SRW enumeration task, Repetitive Thinking Questionnaire, Short Fear of Negative Evaluation Scale, and Rumination-Reflection Questionnaire. A correlational analysis indicated a significant positive correlation between self-reflection and the number of SRWs. Furthermore, individuals high in self-reflection had a tendency to pay more attention to problems than did those high in self-rumination. A significant positive correlation was found between self-rumination and the strength of self-relevance of negative SRWs. Through a path analysis, we found a significant positive effect of the self-relevance of negative SRWs on repetitive negative thinking. Notably, however, the model that excluded self-rumination as an explanatory variable showed a better fit to the data than did the model that included it. In summary, SRW enumeration might enable selective and independent detection of the degree of self-reflection and self-rumination, and therefore should be examined in future research in order to design new behavioral procedures. PMID:29896140

  17. The Effect of Enumeration of Self-Relevant Words on Self-Focused Attention and Repetitive Negative Thoughts

    Directory of Open Access Journals (Sweden)

    Seiji Muranaka

    2018-05-01

    Full Text Available Self-focused attention refers to awareness of self-referent, internally generated information. It can be categorized into dysfunctional (i.e., self-rumination) and functional (i.e., self-reflection) aspects. According to theory on cognitive resource limitations (e.g., Moreno, 2006), there is a difference in cognitive resource allocation between these two aspects of self-focused attention. We propose a new task, self-relevant word (SRW) enumeration, that can aid in behaviorally identifying individuals’ use of self-rumination and self-reflection. The present study has two purposes: to determine the association between self-focus and SRW enumeration, and to examine the effect of dysfunctional SRW enumeration on repetitive negative thinking. One hundred forty-six undergraduate students participated in this study. They completed a measure of state anxiety twice, before and after imagining a social failure situation. They also completed the SRW enumeration task, Repetitive Thinking Questionnaire, Short Fear of Negative Evaluation Scale, and Rumination-Reflection Questionnaire. A correlational analysis indicated a significant positive correlation between self-reflection and the number of SRWs. Furthermore, individuals high in self-reflection had a tendency to pay more attention to problems than did those high in self-rumination. A significant positive correlation was found between self-rumination and the strength of self-relevance of negative SRWs. Through a path analysis, we found a significant positive effect of the self-relevance of negative SRWs on repetitive negative thinking. Notably, however, the model that excluded self-rumination as an explanatory variable showed a better fit to the data than did the model that included it. In summary, SRW enumeration might enable selective and independent detection of the degree of self-reflection and self-rumination, and therefore should be examined in future research in order to design new behavioral procedures.

  18. A note on the complexity of finding and enumerating elementary modes.

    NARCIS (Netherlands)

    Acuna, V.; Marchetti-Spaccamela, A.; Sagot, M.-F.; Stougie, L.

    2010-01-01

    In the context of the study into elementary modes of metabolic networks, we prove two complexity results. Enumerating elementary modes containing a specific reaction is hard in an enumeration complexity sense. The decision problem if there exists an elementary mode containing two specific reactions

  19. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera

    OpenAIRE

    Thuy Tuong Nguyen; David C. Slaughter; Bradley D. Hanson; Andrew Barber; Amy Freitas; Daniel Robles; Erin Whelan

    2015-01-01

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a t...

  20. Object grammars and random generation

    Directory of Open Access Journals (Sweden)

    I. Dutour

    1998-12-01

    Full Text Available This paper presents a new systematic approach for the uniform random generation of combinatorial objects. The method is based on the notion of object grammars, which give recursive descriptions of objects and generalize context-free grammars. The application of particular valuations to these grammars leads to enumeration and random generation of objects according to non-algebraic parameters.
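
    The count-then-draw principle underlying such recursive random generation can be illustrated on a simple recursively defined class: binary trees, counted by the Catalan numbers. This is a didactic stand-in for the paper's object grammars, not its method:

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def count(n):
    """Number of binary trees with n internal nodes (Catalan numbers),
    computed from the recursive description: a tree is empty, or a root
    with a left and a right subtree."""
    if n == 0:
        return 1
    return sum(count(k) * count(n - 1 - k) for k in range(n))

def random_tree(n, rng=random):
    """Draw a binary tree with n internal nodes uniformly at random:
    choose the left-subtree size k with probability proportional to
    count(k) * count(n - 1 - k), then recurse on both sides."""
    if n == 0:
        return None
    r = rng.randrange(count(n))
    for k in range(n):
        w = count(k) * count(n - 1 - k)
        if r < w:
            return (random_tree(k, rng), random_tree(n - 1 - k, rng))
        r -= w
    raise AssertionError("unreachable")
```

Because each split size is drawn with weight equal to the number of objects it leads to, every tree of size n is produced with probability exactly 1/count(n).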

  1. Circulating Tumor Cells, Enumeration and Beyond

    Directory of Open Access Journals (Sweden)

    Jian-Mei Hou

    2010-06-01

    Full Text Available The detection and enumeration of circulating tumor cells (CTCs has shown significant clinical utility with respect to prognosis in breast, colorectal and prostate cancers. Emerging studies show that CTCs can provide pharmacodynamic information to aid therapy decision making. CTCs as a ‘virtual and real-time biopsy’ have clear potential to facilitate exploration of tumor biology, and in particular, the process of metastasis. The challenge of profiling CTC molecular characteristics and generating CTC signatures using current technologies is that they enrich rather than purify CTCs from whole blood; we face the problem of looking for the proverbial ‘needle in the haystack’. This review summarizes the current methods for CTC detection and enumeration, focuses on molecular characterization of CTCs, unveils some aspects of CTC heterogeneity, describes attempts to purify CTCs and scans the horizon for approaches leading to comprehensive dissection of CTC biology.

  2. Summary of the Skookumchuck Creek bull trout enumeration project 2001.; TOPICAL

    International Nuclear Information System (INIS)

    Baxter, James S.; Baxter, Jeremy

    2002-01-01

    This report summarizes the second year of a bull trout (Salvelinus confluentus) enumeration project on Skookumchuck Creek in southeastern British Columbia. An enumeration fence and traps were installed on the creek from September 6th to October 12th 2001 to enable the capture of post-spawning bull trout emigrating out of the watershed. During the study period, a total of 273 bull trout were sampled through the enumeration fence. Length and weight were determined for all bull trout captured. In total, 39 fish of undetermined sex, 61 males and 173 females were processed through the fence. An additional 19 bull trout were observed on a snorkel survey prior to the fence being removed on October 12th. Coupled with the fence count, the total number of bull trout enumerated during this project was 292 fish. Several other species of fish were captured at the enumeration fence, including westslope cutthroat trout (Oncorhynchus clarki lewisi), Rocky Mountain whitefish (Prosopium williamsoni), and kokanee (O. nerka). A total of 143 bull trout redds were enumerated on the ground in two different locations (river km 27.5-30.5, and km 24.0-25.5) on October 3rd. The majority of redds (n=132) were observed in the 3.0 km index section (river km 27.5-30.5) that has been surveyed over the past five years. The additional 11 redds were observed in a 1.5 km section (river km 24.0-25.5). Summary plots of water temperature for Bradford Creek, Sandown Creek, Buhl Creek, and Skookumchuck Creek at three locations suggested that water temperatures were within the temperature range preferred by bull trout for spawning, egg incubation, and rearing.

  3. RatBot: anti-enumeration peer-to-peer botnets

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Guanhua [Los Alamos National Laboratory; Eidenbenz, Stephan [Los Alamos National Laboratory; Chen, Songqing [GEORGE MASON UNIV.

    2010-01-01

    Botnets have emerged as one of the most severe cyber threats in recent years. To obtain high resilience against a single point of failure, the new generation of botnets have adopted the peer-to-peer (P2P) structure. One critical question regarding these P2P botnets is: how big are they indeed? To address this question, researchers have proposed both actively crawling and passively monitoring methods to enumerate existing P2P botnets. In this work, we go further to explore the potential strategies that botnets may have to obfuscate their true sizes. Towards this end, this paper introduces RatBot, a P2P botnet that applies statistical techniques to defeat existing P2P botnet enumeration methods. The key ideas of RatBot are two-fold: (1) there exists a fraction of bots that are indistinguishable from their fake identities, the spoofed IP addresses they use to hide themselves; (2) a heavy-tailed distribution is used to generate the number of fake identities for each of these bots, so that the sum of observed fake identities converges only slowly and thus has high variation. We use large-scale high-fidelity simulation to quantify the estimation errors under diverse settings, and the results show that a naive enumeration technique can overestimate the sizes of P2P botnets by one order of magnitude. We believe that our work reveals new challenges in accurately estimating the sizes of P2P botnets, and hope that it will raise security practitioners' awareness of these challenges. We further suggest a few countermeasures that can potentially defeat RatBot's anti-enumeration scheme.
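
    The overestimation mechanism the abstract describes can be seen in a toy model: if each hidden bot advertises a heavy-tailed (here Pareto-distributed) number of spoofed identities, a crawler that simply counts distinct identities inflates the census. All parameter names below are illustrative, not RatBot's actual design:

```python
import random

def naive_census(true_bots, alpha=1.5, seed=1):
    """Toy model of identity-count inflation: each of `true_bots` real bots
    is observed once, plus a heavy-tailed number of spoofed identities
    drawn from a Pareto(alpha) distribution. Returns the naive crawler's
    identity count, which overestimates the true population."""
    rng = random.Random(seed)
    observed = true_bots  # every real bot is seen once
    for _ in range(true_bots):
        fakes = int(rng.paretovariate(alpha))  # heavy-tailed fake-ID count
        observed += fakes
    return observed

est = naive_census(1000)  # always well above the true size of 1000
```

With alpha close to 1 the fake-identity sum has very high variance, which is exactly what makes the crawler's estimate converge slowly in the paper's scheme.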

  4. Selectivity and sparseness in randomly connected balanced networks.

    Directory of Open Access Journals (Sweden)

    Cengiz Pehlevan

    Full Text Available Neurons in sensory cortex show stimulus selectivity and sparse population response, even in cases where no strong functionally specific structure in connectivity can be detected. This raises the question whether selectivity and sparseness can be generated and maintained in randomly connected networks. We consider a recurrent network of excitatory and inhibitory spiking neurons with random connectivity, driven by random projections from an input layer of stimulus selective neurons. In this architecture, the stimulus-to-stimulus and neuron-to-neuron modulation of total synaptic input is weak compared to the mean input. Surprisingly, we show that in the balanced state the network can still support high stimulus selectivity and sparse population response. In the balanced state, strong synapses amplify the variation in synaptic input and recurrent inhibition cancels the mean. Functional specificity in connectivity emerges due to the inhomogeneity caused by the generative statistical rule used to build the network. We further elucidate the mechanism behind and evaluate the effects of model parameters on population sparseness and stimulus selectivity. Network response to mixtures of stimuli is investigated. It is shown that a balanced state with unselective inhibition can be achieved with densely connected input to inhibitory population. Balanced networks exhibit the "paradoxical" effect: an increase in excitatory drive to inhibition leads to decreased inhibitory population firing rate. We compare and contrast selectivity and sparseness generated by the balanced network to randomly connected unbalanced networks. Finally, we discuss our results in light of experiments.

  5. Natural product-like virtual libraries: recursive atom-based enumeration.

    Science.gov (United States)

    Yu, Melvin J

    2011-03-28

    A new molecular enumerator is described that allows chemically and architecturally diverse sets of natural product-like and drug-like structures to be generated from a core structure as simple as a single carbon atom or as complex as a polycyclic ring system. Integrated with a rudimentary machine-learning algorithm, the enumerator has the ability to assemble biased virtual libraries enriched in compounds predicted to meet target criteria. The ability to dynamically generate relatively small focused libraries in a recursive manner could reduce the computational time and infrastructure necessary to construct and manage extremely large static libraries. Depending on enumeration conditions, natural product-like structures can be produced with a wide range of heterocyclic and alicyclic ring assemblies. Because natural products represent a proven source of validated structures for identifying and designing new drug candidates, mimicking the structural and topological diversity found in nature with a dynamic set of virtual natural product-like compounds may facilitate the creation of new ideas for novel, biologically relevant lead structures in areas of uncharted chemical space.

  6. Testing, Selection, and Implementation of Random Number Generators

    National Research Council Canada - National Science Library

    Collins, Joseph C

    2008-01-01

    An exhaustive evaluation of state-of-the-art random number generators with several well-known suites of tests provides the basis for selection of suitable random number generators for use in stochastic simulations...

  7. 21 CFR 866.4700 - Automated fluorescence in situ hybridization (FISH) enumeration systems.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated fluorescence in situ hybridization (FISH... Laboratory Equipment and Reagents § 866.4700 Automated fluorescence in situ hybridization (FISH) enumeration... Hybridization (FISH) Enumeration Systems.” See § 866.1(e) for the availability of this guidance document. [70 FR...

  8. TEMA and Dot Enumeration Profiles Predict Mental Addition Problem Solving Speed Longitudinally.

    Science.gov (United States)

    Major, Clare S; Paul, Jacob M; Reeve, Robert A

    2017-01-01

    Different math indices can be used to assess math potential at school entry. We evaluated whether standardized math achievement (TEMA-2 performance), core number abilities (dot enumeration, symbolic magnitude comparison), non-verbal intelligence (NVIQ) and visuo-spatial working memory (VSWM), in combination or separately, predicted mental addition problem solving speed over time. We assessed 267 children's TEMA-2, magnitude comparison, dot enumeration, and VSWM abilities at school entry (5 years) and NVIQ at 8 years. Mental addition problem solving speed was assessed at 6, 8, and 10 years. Longitudinal path analysis supported a model in which dot enumeration ability profiles and previous mental addition speed predicted future mental addition speed on all occasions, supporting a componential account of math ability. Standardized math achievement and NVIQ predicted mental addition speed at specific time points, while VSWM and symbolic magnitude comparison did not contribute unique variance to the model. The implications of using standardized math achievement and dot enumeration ability to index math learning potential at school entry are discussed.

  9. Enumeration, identification and decontamination of microorganisms on empty fruit bunches (EFB) and palm press fibre (PPF) from selected palm oil mills in the Peninsular Malaysia

    International Nuclear Information System (INIS)

    Foziah Ali; Muhammad Lebai Juri; Mat Rasol Awang

    1998-01-01

    The PPF and EFB temporarily disposed into the environment at the mill are heavily contaminated with microorganisms and therefore require decontamination prior to utilisation. The current methods for decontaminating PPF and EFB have been briefly reviewed (Mat Rasol et al., 1987). They suggested that these by-products can be effectively decontaminated by gamma irradiation, and that the resulting sterilised by-products could subsequently be converted into animal feeds by fermentation with fungi or chemical stock. The primary objectives of the investigation are: (a) to enumerate contaminating microorganisms on PPF and EFB collected from various palm oil mills in Peninsular Malaysia, and (b) to establish the inactivation curves of the PPF and EFB from the selected palm oil mills.

  10. Introducing the fit-criteria assessment plot - A visualisation tool to assist class enumeration in group-based trajectory modelling.

    Science.gov (United States)

    Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria

    2017-10-01

    Background and objective Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods An R-code that accepts universal data input is presented. The programme condenses the relevant model-fit indices from group-based trajectory modelling output into automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate fit-criteria assessment plot's utility. Results Fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. Fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions Fit-criteria assessment plot is an exploratory, visualisation tool that can be employed to assist decisions in the initial and decisive phase of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster group-based trajectory modelling's adequate use.
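
    The fit criteria such a plot displays side by side are typically information criteria computed per candidate class count. A minimal sketch of that tabulation (generic AIC/BIC formulas with hypothetical log-likelihoods, not the authors' R code):

```python
import math

def fit_criteria(loglik, n_params, n_obs):
    """Tabulate AIC and BIC for a sequence of candidate models, e.g.
    group-based trajectory fits with 1..K latent classes. Lower is better
    for both criteria."""
    rows = []
    for ll, p in zip(loglik, n_params):
        aic = 2 * p - 2 * ll
        bic = p * math.log(n_obs) - 2 * ll
        rows.append({"loglik": ll, "AIC": aic, "BIC": bic})
    return rows

# Hypothetical log-likelihoods for 1- to 4-class models fit to 500 subjects:
rows = fit_criteria([-1310.0, -1255.5, -1240.2, -1238.9], [3, 6, 9, 12], 500)
best_bic = min(range(len(rows)), key=lambda i: rows[i]["BIC"])  # 3-class model
```

Plotting several such columns together is the essence of the assessment plot: the criteria need not agree, which is precisely why the authors argue for inspecting them jointly rather than trusting a single index.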

  11. Enumeration of minimal stoichiometric precursor sets in metabolic networks.

    Science.gov (United States)

    Andrade, Ricardo; Wannagat, Martin; Klein, Cecilia C; Acuña, Vicente; Marchetti-Spaccamela, Alberto; Milreu, Paulo V; Stougie, Leen; Sagot, Marie-France

    2016-01-01

    What an organism needs at least from its environment to produce a set of metabolites, e.g. target(s) of interest and/or biomass, has been called a minimal precursor set. Early approaches to enumerate all minimal precursor sets took into account only the topology of the metabolic network (topological precursor sets). Due to cycles and the stoichiometric values of the reactions, it is often not possible to produce the target(s) from a topological precursor set, in the sense that there is no feasible flux. Although considering the stoichiometry makes the problem harder, it enables biologically reasonable precursor sets to be obtained, which we call stoichiometric. Recently a method to enumerate all minimal stoichiometric precursor sets was proposed in the literature. The relationship between topological and stoichiometric precursor sets had, however, not yet been studied; we highlight that relationship here. We also present two algorithms that enumerate all minimal stoichiometric precursor sets. The first is of theoretical interest only and is based on the above-mentioned relationship. The second solves a series of mixed integer linear programming problems. We compared the computed minimal precursor sets to experimentally obtained growth media of several Escherichia coli strains using genome-scale metabolic networks. The results show that the second approach efficiently enumerates minimal precursor sets taking stoichiometry into account, and allows for broad in silico studies of strain or species interactions that may help to understand, e.g., pathotype- and niche-specific metabolic capabilities. sasita is written in Java, uses cplex as its LP solver and can be downloaded together with all networks and input files used in this paper at http://www.sasita.gforge.inria.fr.
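
    The purely topological notion the abstract starts from can be sketched as a fixed-point closure over reactions: a candidate precursor set produces a target if repeatedly firing any reaction whose substrates are available eventually yields it. This didactic sketch ignores stoichiometry (hence the abstract's caveat about infeasible fluxes) and is not the sasita tool:

```python
def producible(targets, sources, reactions):
    """Topological producibility test. `reactions` is a list of
    (substrates, products) pairs of sets; starting from `sources`,
    add the products of every reaction whose substrates are all
    available, until nothing changes, then check the targets."""
    avail = set(sources)
    changed = True
    while changed:
        changed = False
        for subs, prods in reactions:
            if set(subs) <= avail and not set(prods) <= avail:
                avail |= set(prods)
                changed = True
    return set(targets) <= avail

# Toy network: A -> B, then B + C -> D.
rxns = [({"A"}, {"B"}), ({"B", "C"}, {"D"})]
```

Here {A, C} is a topological precursor set for D while {A} alone is not; the paper's contribution is enumerating such sets while additionally requiring a feasible stoichiometric flux.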

  12. Enumeration of RNA complexes via random matrix theory

    DEFF Research Database (Denmark)

    Andersen, Jørgen E; Chekhov, Leonid O.; Penner, Robert C

    2013-01-01

    In the present article, we review a derivation of the numbers of RNA complexes of an arbitrary topology. These numbers are encoded in the free energy of the Hermitian matrix model with potential V(x) = x^2/2 - stx/(1 - tx), where s and t are respective generating parameters for the number of RNA molecules and hydrogen bonds in a given complex. The free energies of this matrix model are computed using the so-called topological recursion, which is a powerful new formalism arising from random matrix theory. These numbers of RNA complexes also have profound meaning in mathematics: they provide...

  13. Pseudo Random Coins Show More Heads Than Tails

    OpenAIRE

    Bauke, Heiko; Mertens, Stephan

    2003-01-01

    Tossing a coin is the most elementary Monte Carlo experiment. In a computer, the coin is replaced by a pseudo random number generator. It can be shown analytically and by exact enumeration that popular random number generators are not capable of imitating a fair coin: pseudo random coins show more heads than tails. This bias explains the empirically observed failure of some random number generators in random walk experiments. It can be traced back to the special role of the value zero in the ...
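
    The "exact enumeration" technique mentioned above is feasible because small generators have short, fully enumerable periods. A toy illustration with a full-period linear congruential generator (toy parameters, not the generators analysed in the paper): over one full period the bit counts come out exactly balanced, with zero fluctuation, which is itself a statistic a genuinely fair coin could almost never produce:

```python
def lcg_coin_counts(a=5, c=1, m=2**10, bit=0):
    """Exactly enumerate one full period of the LCG x -> (a*x + c) % m
    (full period by the Hull-Dobell theorem for these parameters) and
    count coin tosses taken from one output bit."""
    x, heads, tails = 0, 0, 0
    for _ in range(m):
        x = (a * x + c) % m
        if (x >> bit) & 1:
            heads += 1
        else:
            tails += 1
    return heads, tails

heads, tails = lcg_coin_counts()
```

Because a full-period LCG visits every residue exactly once per period, each bit is 1 exactly m/2 times; the subtler walk biases the paper analyses show up in run and sum statistics rather than in these raw counts.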

  14. Application of random effects to the study of resource selection by animals.

    Science.gov (United States)

    Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L

    2006-07-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and that resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, has not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions.

  15. Local randomization in neighbor selection improves PRM roadmap quality

    KAUST Repository

    McMahon, Troy; Jacobs, Sam; Boyd, Bryan; Tapia, Lydia; Amato, Nancy M.

    2012-01-01

    Probabilistic Roadmap Methods (PRMs) are one of the most used classes of motion planning methods. These sampling-based methods generate robot configurations (nodes) and then connect them to form a graph (roadmap) containing representative feasible pathways. A key step in PRM roadmap construction involves identifying a set of candidate neighbors for each node. Traditionally, these candidates are chosen to be the k-closest nodes based on a given distance metric. In this paper, we propose a new neighbor selection policy called LocalRand(k,K'), that first computes the K' closest nodes to a specified node and then selects k of those nodes at random. Intuitively, LocalRand attempts to benefit from random sampling while maintaining the higher levels of local planner success inherent to selecting more local neighbors. We provide a methodology for selecting the parameters k and K'. We perform an experimental comparison which shows that for both rigid and articulated robots, LocalRand results in roadmaps that are better connected than the traditional k-closest policy or a purely random neighbor selection policy. The cost required to achieve these results is shown to be comparable to k-closest. © 2012 IEEE.
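    The LocalRand(k, K') policy described in this record is straightforward to sketch. The following minimal Python version is illustrative only: function and variable names are mine, and real PRMs operate on high-dimensional robot configurations rather than points on a line.

```python
import random

def local_rand_neighbors(nodes, query, k, k_prime, dist, rng=None):
    """Pick k candidate neighbors for `query`: take the K' closest
    nodes (excluding query itself), then choose k of them at random.
    With k == K' this reduces to the classic k-closest policy."""
    rng = rng or random.Random()
    others = [n for n in nodes if n != query]
    closest = sorted(others, key=lambda n: dist(query, n))[:k_prime]
    return rng.sample(closest, min(k, len(closest)))

# Toy 1-D "configurations" with absolute-difference distance.
nodes = [0.0, 1.0, 2.0, 5.0, 9.0, 10.0]
picked = local_rand_neighbors(nodes, 0.0, k=2, k_prime=3,
                              dist=lambda a, b: abs(a - b),
                              rng=random.Random(42))
# Every pick comes from the 3 closest nodes {1.0, 2.0, 5.0}.
```

    The randomization happens only within the local K'-neighborhood, which is the intuition the abstract gives for keeping local-planner success high while still benefiting from random sampling.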

  16. Local randomization in neighbor selection improves PRM roadmap quality

    KAUST Repository

    McMahon, Troy

    2012-10-01

    Probabilistic Roadmap Methods (PRMs) are one of the most used classes of motion planning methods. These sampling-based methods generate robot configurations (nodes) and then connect them to form a graph (roadmap) containing representative feasible pathways. A key step in PRM roadmap construction involves identifying a set of candidate neighbors for each node. Traditionally, these candidates are chosen to be the k-closest nodes based on a given distance metric. In this paper, we propose a new neighbor selection policy called LocalRand(k,K'), that first computes the K' closest nodes to a specified node and then selects k of those nodes at random. Intuitively, LocalRand attempts to benefit from random sampling while maintaining the higher levels of local planner success inherent to selecting more local neighbors. We provide a methodology for selecting the parameters k and K'. We perform an experimental comparison which shows that for both rigid and articulated robots, LocalRand results in roadmaps that are better connected than the traditional k-closest policy or a purely random neighbor selection policy. The cost required to achieve these results is shown to be comparable to k-closest. © 2012 IEEE.

  17. A random spatial sampling method in a rural developing nation

    Science.gov (United States)

    Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas

    2014-01-01

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...
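    The core idea of spatial sampling without a household enumeration list can be sketched by stratifying a study area into a grid and drawing points uniformly within each cell. This is a generic illustration of that idea, not the specific method of the record; all names are mine.

```python
import random

def stratified_spatial_sample(bounds, n_x, n_y, per_cell, rng=None):
    """Divide a rectangular study area into an n_x-by-n_y grid of strata
    and draw `per_cell` uniform random points inside each cell, so the
    sample covers the whole area even when the base population cannot
    be fully enumerated in advance."""
    rng = rng or random.Random()
    x0, y0, x1, y1 = bounds
    dx, dy = (x1 - x0) / n_x, (y1 - y0) / n_y
    points = []
    for i in range(n_x):
        for j in range(n_y):
            for _ in range(per_cell):
                points.append((x0 + (i + rng.random()) * dx,
                               y0 + (j + rng.random()) * dy))
    return points

# 5x5 grid over a 10x10 area, 2 sample points per cell -> 50 points.
pts = stratified_spatial_sample((0, 0, 10, 10), 5, 5, 2, random.Random(1))
```

    Field teams would then visit the nearest household to each sampled point, which is how such designs typically reach hard-to-list rural populations.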

  18. Method to enumerate oocysts of cryptosporidium and cysts of Giardia in water

    International Nuclear Information System (INIS)

    Briancesco, R.; Bonadonna, L.

    2000-01-01

    Cryptosporidium and Giardia have been recognized as etiological agents of gastrointestinal illness in humans, with severe consequences for children and immunocompromised individuals. Water appears to be a vehicle of infection. In recent years, many efforts have been made to develop a method to enumerate oocysts of Cryptosporidium and cysts of Giardia in waters. Through filtration and concentration steps, the two proposed procedures allow enumeration of oocysts and cysts belonging to the two genera of protozoa [it

  19. Enumeration of Rectangles in a Tableau Shape

    Science.gov (United States)

    Mingus, Tabitha T. Y.; Grassl, Richard M.; Diaz, Ricardo; Andrew, Lane; Parker, Frieda

    2010-01-01

    This article analyzes the challenge of counting the number of rectangles of all sizes in the n-tableau and to provide a combinatorial reason for the answer. The authors present a solution on enumerating rectangles in the n-tableau using Grassl and Mingus results. The authors demonstrate their conjecture for the n-tableau and attempt to apply their…

  20. Enumeration, isolation and identification of bacteria and fungi from ...

    African Journals Online (AJOL)

    Enumeration, isolation and identification of bacteria and fungi from soil contaminated with petroleum products ... dropping can be useful in the bioremediation of soil contaminated with petroleum products and possibly other oil polluted sites.

  1. Enumeration of self-avoiding walks on the square lattice

    International Nuclear Information System (INIS)

    Jensen, Iwan

    2004-01-01

    We describe a new algorithm for the enumeration of self-avoiding walks on the square lattice. Using up to 128 processors on a HP Alpha server cluster we have enumerated the number of self-avoiding walks on the square lattice to length 71. Series for the metric properties of mean-square end-to-end distance, mean-square radius of gyration and mean-square distance of monomers from the end points have been derived to length 59. An analysis of the resulting series yields accurate estimates of the critical exponents γ and ν confirming predictions of their exact values. Likewise we obtain accurate amplitude estimates yielding precise values for certain universal amplitude combinations. Finally we report on an analysis giving compelling evidence that the leading non-analytic correction-to-scaling exponent Δ1 = 3/2.
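    The counts themselves can be reproduced for small lengths by direct backtracking. The sketch below is a brute-force check, far slower than the parallel transfer-matrix-style algorithm the record describes (which is how lengths near 71 become reachable).

```python
def count_saws(n):
    """Count self-avoiding walks of length n on the square lattice
    by exhaustive backtracking (OEIS A001411: 4, 12, 36, 100, 284, ...).
    Exponential time, so practical only for small n."""
    def walk(x, y, steps, visited):
        if steps == 0:
            return 1
        total = 0
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) not in visited:
                visited.add((nx, ny))
                total += walk(nx, ny, steps - 1, visited)
                visited.remove((nx, ny))
        return total
    return walk(0, 0, n, {(0, 0)})

assert [count_saws(n) for n in range(1, 6)] == [4, 12, 36, 100, 284]
```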

  2. Enumeration of RNA complexes via random matrix theory.

    Science.gov (United States)

    Andersen, Jørgen E; Chekhov, Leonid O; Penner, Robert C; Reidys, Christian M; Sułkowski, Piotr

    2013-04-01

    In the present article, we review a derivation of the numbers of RNA complexes of an arbitrary topology. These numbers are encoded in the free energy of the Hermitian matrix model with potential V(x) = x^2/2 - stx/(1-tx), where s and t are respective generating parameters for the number of RNA molecules and hydrogen bonds in a given complex. The free energies of this matrix model are computed using the so-called topological recursion, which is a powerful new formalism arising from random matrix theory. These numbers of RNA complexes also have profound meaning in mathematics: they provide the number of chord diagrams of fixed genus with specified numbers of backbones and chords as well as the number of cells in Riemann's moduli spaces for bordered surfaces of fixed topological type.

  3. A combinatorial enumeration problem of RNA secondary structures

    African Journals Online (AJOL)


    2011-12-21

    Dec 21, 2011 ... interesting combinatorial questions (Chen et al., 2005; Liu, 2006; Schmitt and Waterman, 1994; Stein and Waterman, 1978). The research on the enumeration of RNA secondary structures becomes one of the hot topics in Computational Molecular Biology. An RNA molecule is described by its sequences of.

  4. A state enumeration of the foil knot

    OpenAIRE

    Ramaharo, Franck; Rakotondrajao, Fanja

    2017-01-01

    We split the crossings of the foil knot and enumerate the resulting states with a generating polynomial. Unexpectedly, the number of such states which consist of two components are given by the lazy caterer's sequence. This sequence describes the maximum number of planar regions that is obtained with a given number of straight lines. We then establish a bijection between this partition of the plane and the concerned foil splits sequence.
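    The lazy caterer's sequence mentioned in this record has a well-known closed form, L(n) = (n^2 + n + 2)/2, the maximum number of regions n straight lines can cut the plane into. A one-line check:

```python
def lazy_caterer(n):
    """Maximum number of planar regions produced by n straight lines
    (OEIS A000124): L(n) = (n^2 + n + 2) / 2."""
    return (n * n + n + 2) // 2

# First terms of the sequence: 1, 2, 4, 7, 11, 16, ...
assert [lazy_caterer(n) for n in range(6)] == [1, 2, 4, 7, 11, 16]
```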

  5. Enumeration of smallest intervention strategies in genome-scale metabolic networks.

    Directory of Open Access Journals (Sweden)

    Axel von Kamp

    2014-01-01

    Full Text Available One ultimate goal of metabolic network modeling is the rational redesign of biochemical networks to optimize the production of certain compounds by cellular systems. Although several constraint-based optimization techniques have been developed for this purpose, methods for systematic enumeration of intervention strategies in genome-scale metabolic networks are still lacking. In principle, Minimal Cut Sets (MCSs; inclusion-minimal combinations of reaction or gene deletions that lead to the fulfilment of a given intervention goal) provide an exhaustive enumeration approach. However, their disadvantage is the combinatorial explosion in larger networks and the requirement to first compute the elementary modes (EMs), which itself is impractical in genome-scale networks. We present MCSEnumerator, a new method for effective enumeration of the smallest MCSs (with fewest interventions) in genome-scale metabolic network models. For this we combine two approaches, namely (i) the mapping of MCSs to EMs in a dual network, and (ii) a modified algorithm by which shortest EMs can be effectively determined in large networks. In this way, we can identify the smallest MCSs by calculating the shortest EMs in the dual network. Realistic application examples demonstrate that our algorithm is able to list thousands of the most efficient intervention strategies in genome-scale networks for various intervention problems. For instance, for the first time we could enumerate all synthetic lethals in E. coli with combinations of up to 5 reactions. We also applied the new algorithm exemplarily to compute strain designs for growth-coupled synthesis of different products (ethanol, fumarate, serine) by E. coli. We found numerous new engineering strategies partially requiring fewer knockouts and guaranteeing higher product yields (even without the assumption of optimal growth) than reported previously. The strength of the presented approach is that smallest intervention strategies can be
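    The hitting-set view of MCSs can be illustrated on a toy problem: an MCS is an inclusion-minimal set of reaction deletions that hits every elementary mode supporting the undesired function. The brute-force sketch below enumerates cut sets smallest-first; it mirrors the "fewest interventions first" idea but not the paper's dual-network algorithm, and all names are illustrative.

```python
from itertools import combinations

def smallest_cut_sets(target_modes, reactions, max_size):
    """Enumerate minimal cut sets by brute force, smallest first.
    target_modes: elementary modes (sets of reactions) to be hit.
    A candidate is skipped if it contains an already-found (smaller)
    cut set, which enforces inclusion-minimality."""
    found = []
    for size in range(1, max_size + 1):
        for combo in combinations(reactions, size):
            cut = set(combo)
            if any(prev <= cut for prev in found):
                continue  # a smaller MCS is contained: not minimal
            if all(cut & set(mode) for mode in target_modes):
                found.append(cut)
    return found

# Three toy elementary modes over reactions r1..r3.
modes = [{"r1", "r2"}, {"r2", "r3"}, {"r1", "r3"}]
mcs = smallest_cut_sets(modes, ["r1", "r2", "r3"], 3)
# No single deletion hits all modes; every pair does, minimally.
```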

  6. Immunophenotypic enumeration of CD4 + T-lymphocyte values in ...

    African Journals Online (AJOL)

    Background: The enumeration of CD4+ T-lymphocytes in Human Immunodeficiency Virus (HIV)-infected individuals is an essential tool for staging HIV disease, to make decisions for initiation of anti-retroviral therapy (ART), for monitoring response to ART and to initiate chemoprophylaxis against opportunistic infections.

  7. Enumeration of small collections violates Weber's law.

    Science.gov (United States)

    Choo, H; Franconeri, S L

    2014-02-01

    In a phenomenon called subitizing, we can immediately generate exact counts of small collections (one to three objects), in contrast to larger collections, for which we must either create rough estimates or serially count. A parsimonious explanation for this advantage for small collections is that noisy representations of small collections are more tolerable, due to the larger relative differences between consecutive numbers (e.g., 2 vs. 3 is a 50 % increase, but 10 vs. 11 is only a 10 % increase). In contrast, the advantage could stem from the fact that small-collection enumeration is more precise, relying on a unique mechanism. Here, we present two experiments that conclusively showed that the enumeration of small collections is indeed "superprecise." Participants compared numerosity within either small or large visual collections in conditions in which the relative differences were controlled (e.g., performance for 2 vs. 3 was compared with performance for 20 vs. 30). Small-number comparison was still faster and more accurate, across both "more-fewer" judgments (Exp. 1), and "same-different" judgments (Exp. 2). We then reviewed the remaining potential mechanisms that might underlie this superprecision for small collections, including the greater diagnostic value of visual features that correlate with number and a limited capacity for visually individuating objects.
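    The ratio-matching control described in this record is simple arithmetic: pairs are chosen so their relative (Weber-style) differences are equal, which removes the "larger relative difference" explanation for the small-number advantage. For instance:

```python
def relative_difference(a, b):
    """Weber-style relative difference between two counts."""
    return abs(b - a) / min(a, b)

# Ratio-matched pairs used to control relative difference:
assert relative_difference(2, 3) == relative_difference(20, 30) == 0.5
# An unmatched large-number pair is far less discriminable:
assert relative_difference(10, 11) == 0.1
```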

  8. BACTERIOLOGICAL PROPERTIES OF MARINE WATER IN ADRIATIC FISH FARMS: ENUMERATION OF HETEROTROPHIC BACTERIA

    Directory of Open Access Journals (Sweden)

    Emin Teskeredžić

    2012-12-01

    Full Text Available Aquaculture is currently one of the fastest growing food production sectors in the world. Increases in nutrients and organic wastes lead to general deterioration of water quality. The problem of water quality is associated with both physical and chemical factors, as well as microbiological water quality. Heterotrophic bacteria play an important role in the process of decomposition of organic matter in the water environment and indicate the eutrophication process. Here we present our experience and knowledge on bacterial properties of marine water in the Adriatic fish farms with European sea bass (Dicentrarchus labrax L., 1758), with an emphasis on enumeration of heterotrophic bacteria in marine water. We applied two temperatures of incubation, as well as two methods for enumeration of heterotrophic bacteria: the substrate SimPlate® test and the spread plate method on conventional artificial media (Marine agar and Tryptic Soy agar with added NaCl. The results of analysis of bacteriological properties of marine water in the Adriatic fish farms showed that enumeration of heterotrophic bacteria in marine water depends on the applied incubation temperature and media for enumeration. At the same time, the incubation temperature of 22 °C favours more intense growth of marine heterotrophic bacteria, whereas the SimPlate test gives higher values of heterotrophic bacteria. Fluctuating values of heterotrophic bacteria during this research indicate a possible deterioration of microbiological water quality in the Adriatic fish farms and a need for regular monitoring of marine water quality.

  9. Enumeration of connected Catalan objects by type

    OpenAIRE

    Rhoades, Brendon

    2010-01-01

    Noncrossing set partitions, nonnesting set partitions, Dyck paths, and rooted plane trees are four classes of Catalan objects which carry a notion of type. There exists a product formula which enumerates these objects according to type. We define a notion of `connectivity' for these objects and prove an analogous product formula which counts connected objects by type. Our proof of this product formula is combinatorial and bijective. We extend this to a product formula which counts objects wit...

  10. Comparison of media for enumeration of Clostridium perfringens from foods

    NARCIS (Netherlands)

    Jong, A.E.I. de; Eijhusen, G.P.; Brouwer-Post, E.J.F.; Grand, M.; Johansson, T.; Kärkkäinen, T.; Marugg, J.; Veld, P.H. in 't; Warmerdam, F.H.M.; Wörner, G.; Zicavo, A.; Rombouts, F.M.; Beumer, R.R.

    2003-01-01

    Many media have been developed to enumerate Clostridium perfringens from foods. In this study, six media [iron sulfite (IS) agar, tryptose sulfite cycloserine (TSC) agar, Shahidi Ferguson perfringens (SFP) agar, sulfite cycloserine azide (SCA), differential clostridial agar (DCA), and oleandomycin

  11. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.

    Science.gov (United States)

    Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-07-28

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
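    Step (2) of the pipeline, counting from projection histograms, can be illustrated on a toy binary mask: project onto one axis and count contiguous runs above a threshold. This sketch omits the perspective transform and homography steps, and the names are mine.

```python
def count_by_projection(mask, threshold=1):
    """Count objects in a binary image by projection histogram:
    sum each column, then count contiguous runs of columns whose
    sum meets `threshold`. Each run is taken as one plant."""
    cols = [sum(col) for col in zip(*mask)]
    runs, in_run = 0, False
    for c in cols:
        if c >= threshold and not in_run:
            runs, in_run = runs + 1, True
        elif c < threshold:
            in_run = False
    return runs

# Three well-separated "plants" in a tiny 2x10 binary mask.
mask = [
    [0, 1, 0, 0, 1, 1, 0, 0, 0, 1],
    [0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
]
assert count_by_projection(mask) == 3
```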

  12. Accuracy and impact of spatial aids based upon satellite enumeration to improve indoor residual spraying spatial coverage.

    Science.gov (United States)

    Bridges, Daniel J; Pollard, Derek; Winters, Anna M; Winters, Benjamin; Sikaala, Chadwick; Renn, Silvia; Larsen, David A

    2018-02-23

    Indoor residual spraying (IRS) is a key tool in the fight to control, eliminate and ultimately eradicate malaria. IRS protection is based on a communal effect such that an individual's protection primarily relies on the community-level coverage of IRS with limited protection being provided by household-level coverage. To ensure a communal effect is achieved through IRS, achieving high and uniform community-level coverage should be the ultimate priority of an IRS campaign. Ensuring high community-level coverage of IRS in malaria-endemic areas is challenging given the lack of information available about both the location and number of households needing IRS in any given area. A process termed 'mSpray' has been developed and implemented and involves use of satellite imagery for enumeration for planning IRS and a mobile application to guide IRS implementation. This study assessed (1) the accuracy of the satellite enumeration and (2) how various degrees of spatial aid provided through the mSpray process affected community-level IRS coverage during the 2015 spray campaign in Zambia. A 2-stage sampling process was applied to assess accuracy of satellite enumeration to determine number and location of sprayable structures. Results indicated an overall sensitivity of 94% for satellite enumeration compared to finding structures on the ground. After adjusting for structure size, roof, and wall type, households in Nchelenge District where all types of satellite-based spatial aids (paper-based maps plus use of the mobile mSpray application) were used were more likely to have received IRS than Kasama district where maps used were not based on satellite enumeration. The probability of a household being sprayed in Nchelenge district where tablet-based maps were used, did not differ statistically from that of a household in Samfya District, where detailed paper-based spatial aids based on satellite enumeration were provided. IRS coverage from the 2015 spray season benefited from

  13. Optimisation of a direct plating method for the detection and enumeration of Alicyclobacillus acidoterrestris spores.

    Science.gov (United States)

    Henczka, Marek; Djas, Małgorzata; Filipek, Katarzyna

    2013-01-01

    A direct plating method for the detection and enumeration of Alicyclobacillus acidoterrestris spores has been optimised. The results of the application of four types of growth media (BAT agar, YSG agar, K agar and SK agar) regarding the recovery and enumeration of A. acidoterrestris spores were compared. The influence of the type of applied growth medium, heat shock conditions, incubation temperature, incubation time, plating technique and the presence of apple juice in the sample on the accuracy of the detection and enumeration of A. acidoterrestris spores was investigated. Among the investigated media, YSG agar was the most sensitive medium, and its application resulted in the highest recovery of A. acidoterrestris spores, while K agar and BAT agar were the least suitable media. The effect of the heat shock time on the recovery of spores was negligible. When there was a low concentration of spores in a sample, the membrane filtration method was superior to the spread plating method. The obtained results show that heat shock carried out at 80°C for 10 min and plating samples in combination with membrane filtration on YSG agar, followed by incubation at 46°C for 3 days provided the optimal conditions for the detection and enumeration of A. acidoterrestris spores. Application of the presented method allows highly efficient, fast and sensitive identification and enumeration of A. acidoterrestris spores in food products. This methodology will be useful for the fruit juice industry for identifying products contaminated with A. acidoterrestris spores, and its practical application may prevent economic losses for manufacturers. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Dense module enumeration in biological networks

    Science.gov (United States)

    Tsuda, Koji; Georgii, Elisabeth

    2009-12-01

    Analysis of large networks is a central topic in various research fields including biology, sociology, and web mining. Detection of dense modules (a.k.a. clusters) is an important step to analyze the networks. Though numerous methods have been proposed to this aim, they often lack mathematical rigorousness. Namely, there is no guarantee that all dense modules are detected. Here, we present a novel reverse-search-based method for enumerating all dense modules. Furthermore, constraints from additional data sources such as gene expression profiles or customer profiles can be integrated, so that we can systematically detect dense modules with interesting profiles. We report successful applications in human protein interaction network analyses.
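    The completeness guarantee described here, that no dense module is missed, can be demonstrated with an exhaustive toy version: scan every node subset and keep those whose induced subgraph meets a density threshold. This is exponential, unlike the paper's reverse-search method, and the names are illustrative.

```python
from itertools import combinations

def dense_modules(nodes, edges, gamma, min_size=2):
    """Enumerate every node subset whose induced subgraph has edge
    density >= gamma (density = present edges / possible edges)."""
    edge_set = {frozenset(e) for e in edges}
    modules = []
    for size in range(min_size, len(nodes) + 1):
        for subset in combinations(nodes, size):
            possible = size * (size - 1) // 2
            present = sum(1 for pair in combinations(subset, 2)
                          if frozenset(pair) in edge_set)
            if present / possible >= gamma:
                modules.append(set(subset))
    return modules

# A triangle {a, b, c} plus a pendant node d.
mods = dense_modules("abcd", [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")],
                     gamma=1.0)
# At gamma = 1.0 only fully connected subsets survive:
# the four edges and the triangle itself.
```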

  15. Dense module enumeration in biological networks

    International Nuclear Information System (INIS)

    Tsuda, Koji; Georgii, Elisabeth

    2009-01-01

    Analysis of large networks is a central topic in various research fields including biology, sociology, and web mining. Detection of dense modules (a.k.a. clusters) is an important step to analyze the networks. Though numerous methods have been proposed to this aim, they often lack mathematical rigorousness. Namely, there is no guarantee that all dense modules are detected. Here, we present a novel reverse-search-based method for enumerating all dense modules. Furthermore, constraints from additional data sources such as gene expression profiles or customer profiles can be integrated, so that we can systematically detect dense modules with interesting profiles. We report successful applications in human protein interaction network analyses.

  16. The isolation, enumeration, and characterization of Rhizobium bacteria of the soil in Wamena Biological Garden

    Directory of Open Access Journals (Sweden)

    SRI PURWANINGSIH

    2005-04-01

    Full Text Available Eleven soil samples were isolated and characterized. The aims of the study were to obtain pure cultures and data describing the enumeration and, in particular, the characters of the bacteria in relation to acid and base reactions during their growth. The bacteria were isolated on Yeast Extract Mannitol Agar (YEMA) medium, while characterization used YEMA medium mixed with Brom Thymol Blue and Congo Red indicators, respectively. The results showed that eighteen isolates were obtained, consisting of three slow-growing and fifteen fast-growing bacteria. Two isolates were not identified as Rhizobium and sixteen were Rhizobium. The density of Rhizobium varied in relation to soil organic matter content. Counts on YEMA medium were in the range of 0.6 × 10^5 to 11.6 × 10^5 CFU/g soil. The highest population was found in the soil sample from the Wieb vegetation.

  17. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Thuy Tuong Nguyen

    2015-07-01

    Full Text Available This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.

  18. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sample selection by random number... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each for...
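    The sampling scheme this regulation describes, two random numbers per sample point on a square grid (one per axis), is easy to sketch. The version below works in grid-cell indices rather than physical coordinates, a simplification, and assumes the requested sample count fits on the grid.

```python
import random

def select_grid_sample(n_x, n_y, n_samples, rng=None):
    """Select distinct sampling cells on an n_x-by-n_y square grid by
    drawing two random numbers per point, one per axis. Requires
    n_samples <= n_x * n_y."""
    rng = rng or random.Random()
    chosen = set()
    while len(chosen) < n_samples:
        chosen.add((rng.randrange(n_x), rng.randrange(n_y)))
    return sorted(chosen)

cells = select_grid_sample(10, 10, 5, random.Random(7))
```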

  19. Applications of random forest feature selection for fine-scale genetic population assignment.

    Science.gov (United States)

    Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G

    2018-02-01

    Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with FST ranking for selection of single nucleotide polymorphisms (SNP) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90% using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than FST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using FST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.
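    The panel-building loop described here, rank markers by informativeness and grow the panel until a target self-assignment accuracy is reached, can be sketched generically. In this toy, the scores stand in for random-forest importances (or FST values) and `accuracy_fn` stands in for the cross-validated assignment test; both are assumptions, not the paper's pipeline.

```python
def minimal_panel(marker_scores, accuracy_fn, target=0.90):
    """Grow a marker panel in descending score order until
    accuracy_fn(panel) reaches `target`; return None if it never does."""
    ranked = sorted(marker_scores, key=marker_scores.get, reverse=True)
    panel = []
    for marker in ranked:
        panel.append(marker)
        if accuracy_fn(panel) >= target:
            return panel
    return None

scores = {"snp1": 0.9, "snp2": 0.7, "snp3": 0.4, "snp4": 0.1}
# Toy accuracy model: accuracy grows with the summed scores of the panel.
toy_accuracy = lambda panel: min(1.0, sum(scores[m] for m in panel) * 0.5)
assert minimal_panel(scores, toy_accuracy) == ["snp1", "snp2", "snp3"]
```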

  20. Chromogenic media for the detection and/or enumeration of Listeria monocytogenes - results of trials performed by a working group of the International Organization for Standardization - ISO/TC 34/SC 9

    NARCIS (Netherlands)

    Beumer, R.R.; Hazeleger, W.C.

    2007-01-01

    The solid selective media PALCAM and Oxford agar originally described in the ISO (International Organization for Standardization) Standard 11290 part 1 and part 2 "Microbiology of food and animal feeding stuffs - Horizontal method for the detection and enumeration of Listeria monocytogenes", suffer

  1. Planar articulated mechanism design by graph theoretical enumeration

    DEFF Research Database (Denmark)

    Kawamoto, A; Bendsøe, Martin P.; Sigmund, Ole

    2004-01-01

    This paper deals with design of articulated mechanisms using a truss-based ground-structure representation. By applying a graph theoretical enumeration approach we can perform an exhaustive analysis of all possible topologies for a test example for which we seek a symmetric mechanism. This guarantees that one can identify the global optimum solution. The result underlines the importance of mechanism topology and gives insight into the issues specific to articulated mechanism designs compared to compliant mechanism designs.

  2. Interference-aware random beam selection for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed M.

    2012-09-01

    Spectrum sharing systems have been introduced to alleviate the problem of spectrum scarcity by allowing secondary unlicensed networks to share the spectrum with primary licensed networks under acceptable interference levels to the primary users. In this paper, we develop interference-aware random beam selection schemes that provide enhanced throughput for the secondary link under the condition that the interference observed at the primary link is within a predetermined acceptable value. For a secondary transmitter equipped with multiple antennas, our schemes select a random beam, among a set of power- optimized orthogonal random beams, that maximizes the capacity of the secondary link while satisfying the interference constraint at the primary receiver for different levels of feedback information describing the interference level at the primary receiver. For the proposed schemes, we develop a statistical analysis for the signal-to-noise and interference ratio (SINR) statistics as well as the capacity of the secondary link. Finally, we present numerical results that study the effect of system parameters including number of beams and the maximum transmission power on the capacity of the secondary link attained using the proposed schemes. © 2012 IEEE.
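    The selection rule this record describes, pick the beam maximizing the secondary link's performance subject to the primary receiver's interference constraint, reduces to a constrained argmax. A minimal sketch, with illustrative field names and made-up numbers rather than the paper's SINR model:

```python
def pick_beam(beams, interference_limit):
    """Among candidate beams (each with a secondary-link rate and the
    interference it would cause at the primary receiver), keep those
    satisfying the interference constraint and return the highest-rate
    one, or None if no beam is feasible."""
    feasible = [b for b in beams if b["interference"] <= interference_limit]
    return max(feasible, key=lambda b: b["rate"]) if feasible else None

beams = [
    {"rate": 3.1, "interference": 0.9},
    {"rate": 2.4, "interference": 0.2},
    {"rate": 5.0, "interference": 1.5},  # best rate, but infeasible
]
best = pick_beam(beams, interference_limit=1.0)
assert best["rate"] == 3.1
```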

  3. Interference-aware random beam selection for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed M.; Sayed, Mostafa M.; Alouini, Mohamed-Slim; Qaraqe, Khalid A.

    2012-01-01

    In this paper, we develop interference-aware random beam selection schemes that provide enhanced throughput for the secondary link under the condition that the interference observed at the primary link is within a predetermined acceptable value. For a secondary

  4. Enumeration and estimation of insect attack fruits of some cultivars ...

    African Journals Online (AJOL)

    In this study, five cultivars of Punica granatum identified (two of which are endemic, while the other three are new) were grown in certain farms at Al-Taif, Saudi Arabia. Enumeration to the insects attack its' fruits illustrated that, there are three insects, namely, Virchola livia, Ectomyelois ceratonia and Pseudococcus maitimus ...

  5. Spatial working memory load affects counting but not subitizing in enumeration.

    Science.gov (United States)

    Shimomura, Tomonari; Kumada, Takatsune

    2011-08-01

    The present study investigated whether subitizing reflects capacity limitations associated with two types of working memory tasks. Under a dual-task situation, participants performed an enumeration task in conjunction with either a spatial (Experiment 1) or a nonspatial visual (Experiment 2) working memory task. Experiment 1 showed that spatial working memory load affected the slope of a counting function but did not affect subitizing performance or subitizing range. Experiment 2 showed that nonspatial visual working memory load affected neither enumeration efficiency nor subitizing range. Furthermore, in both spatial and nonspatial memory tasks, neither subitizing efficiency nor subitizing range was affected by amount of imposed memory load. In all the experiments, working memory load failed to influence slope, subitizing range, or overall reaction time. These findings suggest that subitizing is performed without either spatial or nonspatial working memory. A possible mechanism of subitizing with independent capacity of working memory is discussed.

  6. Performance of mycological media in enumerating desiccated food spoilage yeasts: an interlaboratory study.

    Science.gov (United States)

    Beuchat, L R; Frandberg, E; Deak, T; Alzamora, S M; Chen, J; Guerrero, A S; López-Malo, A; Ohlsson, I; Olsen, M; Peinado, J M; Schnurer, J; de Siloniz, M I; Tornai-Lehoczki, J

    2001-10-22

    Dichloran 18% glycerol agar (DG18) was originally formulated to enumerate nonfastidious xerophilic moulds in foods containing rapidly growing Eurotium species. Some laboratories are now using DG18 as a general purpose medium for enumerating yeasts and moulds, although its performance in recovering yeasts from dry foods has not been evaluated. An interlaboratory study compared DG18 with dichloran rose bengal chloramphenicol agar (DRBC), plate count agar supplemented with chloramphenicol (PCAC), tryptone glucose yeast extract chloramphenicol agar (TGYC), acidified potato dextrose agar (APDA), and orange serum agar (OSA) for their suitability to enumerate 14 species of lyophilized yeasts. The coefficient of variation for within-laboratory repeatability was 1.39%, and the reproducibility of counts among laboratories was 7.1%. The order of performance of media for recovering yeasts was TGYC > PCAC = OSA > APDA > DRBC > DG18. A second study was done to determine the combined effects of storage time and temperature on viability of yeasts and suitability of media for recovery. Higher viability was retained at -18 degrees C than at 5 degrees C or 25 degrees C for up to 42 weeks, although the difference in mean counts of yeasts stored at -18 degrees C and 25 degrees C was only 0.78 log10 cfu/ml of rehydrated suspension. TGYC was equal to PCAC and superior to the other four media in recovering yeasts stored at -18 degrees C, 5 degrees C, or 25 degrees C for up to 42 weeks. Results from both the interlaboratory study and the storage study support the use of TGYC for enumerating desiccated yeasts. DG18 is not recommended as a general purpose medium for recovering yeasts from a desiccated condition.

  7. Comparative analysis of chemical similarity methods for modular natural products with a hypothetical structure enumeration algorithm.

    Science.gov (United States)

    Skinnider, Michael A; Dejong, Chris A; Franczak, Brian C; McNicholas, Paul D; Magarvey, Nathan A

    2017-08-16

    Natural products represent a prominent source of pharmaceutically and industrially important agents. Calculating the chemical similarity of two molecules is a central task in cheminformatics, with applications at multiple stages of the drug discovery pipeline. Quantifying the similarity of natural products is a particularly important problem, as the biological activities of these molecules have been extensively optimized by natural selection. The large and structurally complex scaffolds of natural products distinguish their physical and chemical properties from those of synthetic compounds. However, no analysis of the performance of existing methods for molecular similarity calculation specific to natural products has been reported to date. Here, we present LEMONS, an algorithm for the enumeration of hypothetical modular natural product structures. We leverage this algorithm to conduct a comparative analysis of molecular similarity methods within the unique chemical space occupied by modular natural products using controlled synthetic data, and comprehensively investigate the impact of diverse biosynthetic parameters on similarity search. We additionally investigate a recently described algorithm for natural product retrobiosynthesis and alignment, and find that when rule-based retrobiosynthesis can be applied, this approach outperforms conventional two-dimensional fingerprints, suggesting it may represent a valuable approach for the targeted exploration of natural product chemical space and microbial genome mining. Our open-source algorithm is an extensible method of enumerating hypothetical natural product structures with diverse potential applications in bioinformatics.
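    Two-dimensional similarity of the kind benchmarked in this study typically reduces to the Tanimoto (Jaccard) coefficient over fingerprint bit sets. A minimal sketch with invented fingerprints (not the LEMONS algorithm or any particular fingerprint scheme):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) coefficient between two fingerprint bit sets."""
    a, b = set(fp_a), set(fp_b)
    if not a and not b:
        return 1.0  # convention: two empty fingerprints are identical
    return len(a & b) / len(a | b)

# Hypothetical on-bit indices of two 2-D structural fingerprints
mol1 = {3, 17, 42, 88, 129}
mol2 = {3, 17, 42, 90, 129, 204}

print(round(tanimoto(mol1, mol2), 3))  # 4 shared bits / 7 total bits
```

    In practice the on-bit indices would come from a fingerprinting method such as extended-connectivity fingerprints; the indices above are purely illustrative.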

  8. Development of an enumeration method for arsenic methylating bacteria from mixed culture samples.

    Science.gov (United States)

    Islam, S M Atiqul; Fukushi, Kensuke; Yamamoto, Kazuo

    2005-12-01

    Bacterial methylation of arsenic converts inorganic arsenic into volatile and non-volatile methylated species. It plays an important role in the arsenic cycle in the environment. Despite the potential environmental significance of arsenic-methylating bacteria (AsMB), their population size and activity remain largely unknown. This study established a protocol for enumeration of AsMB by means of the anaerobic-culture-tube, most probable number (MPN) method. Direct detection of volatile arsenic species is then done by GC-MS. This method is advantageous as it can simultaneously enumerate AsMB and acetate- and formate-utilizing methanogens. The incubation time for this method was determined to be 6 weeks, sufficient time for AsMB growth.
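    The MPN estimate behind such a tube protocol can be computed from the counts by maximum likelihood: with t_i tubes per dilution, inoculum volumes v_i and g_i positive tubes, the density estimate lambda solves sum_i g_i*v_i/(1 - exp(-lambda*v_i)) = sum_i t_i*v_i. A sketch (assumes at least one positive and one negative tube, so a finite root exists):

```python
import math

def mpn(tubes, positives, volumes, lo=1e-9, hi=1e9, iters=200):
    """Most-probable-number estimate (organisms per unit volume) by maximum
    likelihood: solve sum g_i*v_i/(1-exp(-lam*v_i)) = sum t_i*v_i for lam."""
    total = sum(t * v for t, v in zip(tubes, volumes))

    def f(lam):
        return sum(g * v / (1.0 - math.exp(-lam * v))
                   for g, v in zip(positives, volumes)) - total

    # f is strictly decreasing in lam; bisect (geometrically, since the
    # plausible range spans many orders of magnitude) for the root.
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)
```

    For the classical 5-tube series with volumes 10, 1 and 0.1 ml and outcome 5-3-0 positives, this yields roughly 0.79 organisms per ml, matching the standard MPN table value of 79 per 100 ml.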

  9. The signature of positive selection at randomly chosen loci.

    OpenAIRE

    Przeworski, Molly

    2002-01-01

    In Drosophila and humans, there are accumulating examples of loci with a significant excess of high-frequency-derived alleles or high levels of linkage disequilibrium, relative to a neutral model of a random-mating population of constant size. These are features expected after a recent selective sweep. Their prevalence suggests that positive directional selection may be widespread in both species. However, as I show here, these features do not persist long after the sweep ends: The high-frequ...

  10. Simulated Performance Evaluation of a Selective Tracker Through Random Scenario Generation

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar

    2006-01-01

    The paper presents a simulation study on the performance of a target tracker using a selective track splitting filter algorithm through a random scenario implemented on a digital signal processor. In a typical track splitting filter, all the observations which fall inside a likelihood ellipse are used for update; in our proposed selective track splitting filter, however, fewer observations are used for track update. Much of the previous performance work [1] has been done on specific (deterministic) scenarios. One of the reasons for considering the specific scenarios, which were … performance assessment. Therefore, a random target motion scenario is adopted. Its implementation, in particular for testing the proposed selective track splitting algorithm using Kalman filters, is investigated through a number of performance parameters which give the activity profile of the tracking scenario…

  11. Minimization over randomly selected lines

    Directory of Open Access Journals (Sweden)

    Ismet Sahin

    2013-07-01

    Full Text Available This paper presents a population-based evolutionary optimization method for minimizing a given cost function. The mutation operator of this method selects randomly oriented lines in the cost function domain, constructs quadratic functions interpolating the cost function at three different points over each line, and uses extrema of the quadratics as mutated points. The crossover operator modifies each mutated point based on components of two points in population, instead of one point as is usually performed in other evolutionary algorithms. The stopping criterion of this method depends on the number of almost degenerate quadratics. We demonstrate that the proposed method with these mutation and crossover operations achieves faster and more robust convergence than the well-known Differential Evolution and Particle Swarm algorithms.
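    The mutation operator described above can be sketched as follows: sample the cost at three points along a random line, fit the interpolating parabola, and jump to its vertex. This is a simplified reading of that operator only (crossover and the degenerate-quadratic stopping rule are omitted), and `scale` is an assumed step parameter:

```python
import random

def quadratic_extremum(t, f):
    """Vertex of the parabola interpolating (t[i], f[i]) at three distinct t."""
    t0, t1, t2 = t
    f0, f1, f2 = f
    num = f0 * (t1**2 - t2**2) + f1 * (t2**2 - t0**2) + f2 * (t0**2 - t1**2)
    den = 2.0 * (f0 * (t1 - t2) + f1 * (t2 - t0) + f2 * (t0 - t1))
    if abs(den) < 1e-12:
        return None  # (almost) degenerate quadratic
    return num / den

def mutate(point, cost, scale=1.0):
    """Move `point` to the extremum of a quadratic fit along a random line."""
    direction = [random.gauss(0.0, 1.0) for _ in point]
    ts = (-scale, 0.0, scale)
    fs = [cost([x + t * d for x, d in zip(point, direction)]) for t in ts]
    t_star = quadratic_extremum(ts, fs)
    if t_star is None:
        return point  # degenerate fit: leave the point unchanged
    return [x + t_star * d for x, d in zip(point, direction)]
```

    For a cost function that is itself quadratic, a single mutation lands exactly on the line minimum, which is why the interpolation step accelerates convergence near smooth minima.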

  12. Selection for altruism through random drift in variable size populations

    Directory of Open Access Journals (Sweden)

    Houchmandzadeh Bahram

    2012-05-01

    Full Text Available Abstract Background Altruistic behavior is defined as helping others at a cost to oneself and a lowered fitness. The lower fitness implies that altruists should be selected against, which is in contradiction with their widespread presence in nature. Present models of selection for altruism (kin or multilevel) show that altruistic behaviors can have ‘hidden’ advantages if the ‘common good’ produced by altruists is restricted to some related or unrelated groups. These models are mostly deterministic, or assume a frequency-dependent fitness. Results Evolutionary dynamics is a competition between deterministic selection pressure and stochastic events due to random sampling from one generation to the next. We show here that an altruistic allele extending the carrying capacity of the habitat can win by increasing the random drift of “selfish” alleles. In other terms, the fixation probability of altruistic genes can be higher than that of selfish ones, even though altruists have a smaller fitness. Moreover, when populations are geographically structured, the altruists’ advantage can be highly amplified and the fixation probability of selfish genes can tend toward zero. The above results are obtained both by numerical and analytical calculations. Analytical results are obtained in the limit of large populations. Conclusions The theory we present does not involve kin or multilevel selection, but is based on the existence of random drift in variable size populations. The model is a generalization of the original Fisher-Wright and Moran models where the carrying capacity depends on the number of altruists.
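    The drift-versus-selection competition can be illustrated with the standard fixed-size Moran process that the paper generalizes: fixation probability is a tug-of-war between fitness and sampling noise. A minimal simulation sketch (fixed population size N; the paper's variable-carrying-capacity mechanism is not modeled here):

```python
import random

def moran_fixation(n0, N, fitness_a, fitness_b, trials=2000, rng=random):
    """Estimate the fixation probability of allele A starting from n0 copies
    in a fixed-size Moran population of N individuals."""
    fixed = 0
    for _ in range(trials):
        n = n0
        while 0 < n < N:
            # birth: parent chosen proportional to fitness
            wa, wb = n * fitness_a, (N - n) * fitness_b
            birth_a = rng.random() < wa / (wa + wb)
            # death: uniform over the population
            death_a = rng.random() < n / N
            n += (1 if birth_a else 0) - (1 if death_a else 0)
        fixed += (n == N)
    return fixed / trials
```

    In the neutral case the simulated fixation probability approaches the textbook value n0/N, and a fitness advantage raises it above that baseline; the paper's point is that altruists can achieve the same effect through drift even with a fitness disadvantage.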

  13. Selection bias and subject refusal in a cluster-randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Rochelle Yang

    2017-07-01

    Full Text Available Abstract Background Selection bias and non-participation bias are major methodological concerns which impact external validity. Cluster-randomized controlled trials are especially prone to selection bias as it is impractical to blind clusters to their allocation into intervention or control. This study assessed the impact of selection bias in a large cluster-randomized controlled trial. Methods The Improved Cardiovascular Risk Reduction to Enhance Rural Primary Care (ICARE study examined the impact of a remote pharmacist-led intervention in twelve medical offices. To assess eligibility, a standardized form containing patient demographics and medical information was completed for each screened patient. Eligible patients were approached by the study coordinator for recruitment. Both the study coordinator and the patient were aware of the site’s allocation prior to consent. Patients who consented or declined to participate were compared across control and intervention arms for differing characteristics. Statistical significance was determined using a two-tailed, equal variance t-test and a chi-square test with adjusted Bonferroni p-values. Results were adjusted for random cluster variation. Results There were 2749 completed screening forms returned to research staff with 461 subjects who had either consented or declined participation. Patients with poorly controlled diabetes were found to be significantly more likely to decline participation in intervention sites compared to those in control sites. A higher mean diastolic blood pressure was seen in patients with uncontrolled hypertension who declined in the control sites compared to those who declined in the intervention sites. However, these findings were no longer significant after adjustment for random variation among the sites. After this adjustment, females were now found to be significantly more likely to consent than males (odds ratio = 1.41; 95% confidence interval = 1.03, 1

  14. Aboriginal fractions: enumerating identity in Taiwan.

    Science.gov (United States)

    Liu, Jennifer A

    2012-01-01

    Notions of identity in Taiwan are configured in relation to numbers. I examine the polyvalent capacities of enumerative technologies in both the production of ethnic identities and claims to political representation and justice. By critically historicizing the manner in which Aborigines in Taiwan have been, and continue to be, constructed as objects and subjects of scientific knowledge production through technologies of measuring, I examine the genetic claim made by some Taiwanese to be "fractionally" Aboriginal. Numbers and techniques of measuring are used ostensibly to know the Aborigines, but they are also used to construct a genetically unique Taiwanese identity and to incorporate the Aborigines within projects of democratic governance. Technologies of enumeration thus serve within multiple, and sometimes contradictory, projects of representation and knowledge production.

  15. An information theory criteria based blind method for enumerating active users in DS-CDMA system

    Science.gov (United States)

    Samsami Khodadad, Farid; Abed Hodtani, Ghosheh

    2014-11-01

    In this paper, a new and blind algorithm for active user enumeration in asynchronous direct sequence code division multiple access (DS-CDMA) in a multipath channel scenario is proposed. The proposed method is based on information theory criteria. There are two main categories of information criteria widely used in active user enumeration: the Akaike Information Criterion (AIC) and the Minimum Description Length (MDL) criterion. The main difference between these two criteria is their penalty functions. Due to this difference, MDL is a consistent enumerator with better performance at higher signal-to-noise ratios (SNRs), whereas AIC is preferred at lower SNRs. We therefore propose an SNR-compliant method based on subspace analysis and a trained genetic algorithm that combines the advantages of both. Moreover, our method uses only a single antenna, in contrast to previous methods, which decreases hardware complexity. Simulation results show that the proposed method is capable of estimating the number of active users without any prior knowledge, and confirm the efficiency of the method.
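    For reference, the MDL criterion this abstract builds on (in its classical Wax-Kailath form) scores each candidate source count k by how "white" the smallest M-k sample-covariance eigenvalues look, plus a log-N penalty; AIC differs only in the penalty term. A sketch assuming the sorted eigenvalues are already available (not the paper's full single-antenna method):

```python
import math

def mdl_enumerate(eigs, n_snapshots):
    """Wax-Kailath-style MDL estimate of the source count from the sorted
    (descending) eigenvalues of an M x M sample covariance matrix."""
    m = len(eigs)
    best_k, best_score = 0, float("inf")
    for k in range(m):
        tail = eigs[k:]                       # presumed noise eigenvalues
        gm = math.exp(sum(math.log(x) for x in tail) / len(tail))
        am = sum(tail) / len(tail)
        # log-likelihood term (0 when the tail is perfectly flat) + penalty
        score = (-n_snapshots * len(tail) * math.log(gm / am)
                 + 0.5 * k * (2 * m - k) * math.log(n_snapshots))
        if score < best_score:
            best_k, best_score = k, score
    return best_k
```

    Replacing the penalty with 2*k*(2*m - k) gives the AIC variant; MDL's log-N penalty is what makes it consistent as the number of snapshots grows.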

  16. A shared, flexible neural map architecture reflects capacity limits in both visual short-term memory and enumeration.

    Science.gov (United States)

    Knops, André; Piazza, Manuela; Sengupta, Rakesh; Eger, Evelyn; Melcher, David

    2014-07-23

    Human cognition is characterized by severe capacity limits: we can accurately track, enumerate, or hold in mind only a small number of items at a time. It remains debated whether capacity limitations across tasks are determined by a common system. Here we measure brain activation of adult subjects performing either a visual short-term memory (vSTM) task consisting of holding in mind precise information about the orientation and position of a variable number of items, or an enumeration task consisting of assessing the number of items in those sets. We show that task-specific capacity limits (three to four items in enumeration and two to three in vSTM) are neurally reflected in the activity of the posterior parietal cortex (PPC): an identical set of voxels in this region, commonly activated during the two tasks, changed its overall response profile reflecting task-specific capacity limitations. These results, replicated in a second experiment, were further supported by multivariate pattern analysis in which we could decode the number of items presented over a larger range during enumeration than during vSTM. Finally, we simulated our results with a computational model of PPC using a saliency map architecture in which the level of mutual inhibition between nodes gives rise to capacity limitations and reflects the task-dependent precision with which objects need to be encoded (high precision for vSTM, lower precision for enumeration). Together, our work supports the existence of a common, flexible system underlying capacity limits across tasks in PPC that may take the form of a saliency map. Copyright © 2014 the authors 0270-6474/14/349857-10$15.00/0.

  17. Discontinuity in the Enumeration of Sequentially Presented Auditory and Visual Stimuli

    Science.gov (United States)

    Camos, Valerie; Tillmann, Barbara

    2008-01-01

    The search for discontinuity in enumeration was recently renewed because Cowan [Cowan, N. (2001). "The magical number 4 in short-term memory: A reconsideration of mental storage capacity." "Behavioral and Brain Sciences," 24, 87-185; Cowan, N. (2005). "Working memory capacity." Hove: Psychology Press] suggested that it allows evaluating the limit…

  18. Simple method for correct enumeration of Staphylococcus aureus

    DEFF Research Database (Denmark)

    Haaber, J.; Cohn, M. T.; Petersen, A.

    2016-01-01

    …culture. When grown in such liquid cultures, the human pathogen Staphylococcus aureus is characterized by its aggregation of single cells into clusters of variable size. Here, we show that aggregation during growth in the laboratory standard medium tryptic soy broth (TSB) is common among clinical and laboratory S. aureus isolates, and that aggregation may introduce significant bias when applying standard enumeration methods on S. aureus growing in laboratory batch cultures. We provide a simple and efficient sonication procedure, which can be applied prior to optical density measurements to give…

  19. Interference-aware random beam selection schemes for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed; Qaraqe, Khalid; Alouini, Mohamed-Slim

    2012-01-01

    …users. In this work, we develop interference-aware random beam selection schemes that provide enhanced performance for the secondary network under the condition that the interference observed by the receivers of the primary network is below a…

  20. Monitor and Protect Wigwam River Bull Trout for Koocanusa Reservoir : Summary of the Skookumchuck Creek Bull Trout Enumeration Project Final Report 2000-2002.

    Energy Technology Data Exchange (ETDEWEB)

    Baxter, Jeremy; Baxter, James S.

    2002-12-01

    This report summarizes the third and final year of a bull trout (Salvelinus confluentus) enumeration project on Skookumchuck Creek in southeastern British Columbia. The fence and traps were operated from September 6th to October 11th 2002 in order to enumerate post-spawning bull trout. During the study period a total of 309 bull trout were captured at the fence. In total, 16 fish of undetermined sex, 114 males and 179 females were processed at the fence. Length and weight data, as well as recapture information, were collected for these fish. An additional 41 bull trout were enumerated upstream of the fence by snorkeling prior to fence removal. Coupled with the fence count, the total bull trout enumerated during the project was 350 individuals. Several fish that were tagged in the lower Bull River were recaptured in 2002, as were repeat and alternate year spawners previously enumerated in past years at the fence. A total of 149 bull trout redds were enumerated on the ground in 2002, of which 143 were in the 3.0 km index section (river km 27.5-30.5) that has been surveyed over the past six years. The results of the three year project are summarized, and population characteristics are discussed.

  1. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    International Nuclear Information System (INIS)

    Yu, Zhiyong

    2013-01-01

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right

  2. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.
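    For contrast with the random-horizon results above, the deterministic-horizon single-period problem has the familiar closed-form frontier: with A = 1'Σ⁻¹1, B = 1'Σ⁻¹μ, C = μ'Σ⁻¹μ and D = AC - B², the variance of the fully invested efficient portfolio with target mean m is (A·m² - 2·B·m + C)/D. A numerical sketch with hypothetical asset parameters:

```python
import numpy as np

def min_variance_portfolio(cov):
    """Global minimum-variance weights: w = inv(S)·1 / (1'·inv(S)·1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def efficient_frontier_variance(cov, mu, target):
    """Variance of the efficient portfolio with mean `target`:
    (A*m^2 - 2*B*m + C) / D, A=1'S^-1 1, B=1'S^-1 mu, C=mu'S^-1 mu, D=AC-B^2."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    a = ones @ inv @ ones
    b = ones @ inv @ mu
    c = mu @ inv @ mu
    d = a * c - b * b
    return (a * target ** 2 - 2 * b * target + c) / d

cov = np.array([[0.04, 0.006], [0.006, 0.09]])   # hypothetical covariance matrix
mu = np.array([0.10, 0.16])                      # hypothetical mean returns
print(min_variance_portfolio(cov))
print(efficient_frontier_variance(cov, mu, 0.12))
```

    At the global minimum-variance mean B/A the frontier variance reduces to 1/A, a useful sanity check on any implementation; the random-horizon paper shows how these closed forms change when the exit time is itself stochastic.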

  3. TEHRAN AIR POLLUTANTS PREDICTION BASED ON RANDOM FOREST FEATURE SELECTION METHOD

    Directory of Open Access Journals (Sweden)

    A. Shamsoddini

    2017-09-01

    Full Text Available Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability, and has harmful and undesirable effects on the environment. Modern prediction methods of the pollutant concentration are able to improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple-linear regression and Multilayer Perceptron Artificial Neural Network methods, in order to achieve an efficient model to estimate carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that Artificial Neural Networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models for the modeling of all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.

  4. Tehran Air Pollutants Prediction Based on Random Forest Feature Selection Method

    Science.gov (United States)

    Shamsoddini, A.; Aboodi, M. R.; Karami, J.

    2017-09-01

    Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability, and has harmful and undesirable effects on the environment. Modern prediction methods of the pollutant concentration are able to improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple-linear regression and Multilayer Perceptron Artificial Neural Network methods, in order to achieve an efficient model to estimate carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that Artificial Neural Networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models for the modeling of all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.
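    The two-stage pipeline described above (Random Forest importances to rank attributes, then a neural network trained on the selected subset) can be sketched with scikit-learn on synthetic data; the dataset, sizes and hyper-parameters below are invented stand-ins, not those of the Tehran study:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a pollutant table: 10 candidate predictors,
# the first 3 informative (shuffle=False keeps them in the first columns).
X, y = make_regression(n_samples=400, n_features=10, n_informative=3,
                       noise=1.0, shuffle=False, random_state=0)
y = (y - y.mean()) / y.std()  # standardize the target for the MLP

# Step 1: rank predictors with Random Forest impurity importances.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:3]

# Step 2: feed only the selected attributes to an MLP regressor.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, random_state=0)
scaler = StandardScaler().fit(X_tr)
mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                   random_state=0).fit(scaler.transform(X_tr), y_tr)
print("held-out R^2:", round(mlp.score(scaler.transform(X_te), y_te), 3))
```

    Impurity-based importances are one of several reasonable ranking choices; permutation importance is a common alternative when predictors are correlated, as meteorological and pollutant variables typically are.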

  5. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex

    Science.gov (United States)

    Lindsay, Grace W.

    2017-01-01

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear “mixed” selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli—and in particular, to combinations of stimuli (
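    The mechanism studied above can be caricatured in a few lines: start from random feedforward weights and apply a normalized Hebbian update on rectified responses, so that units gradually align with repeatedly presented input patterns. This is a toy sketch of the general idea only; the paper's learning rules, normalization and task structure differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 20, 50
# random feedforward connectivity, rows roughly unit-norm
W = rng.normal(0, 1 / np.sqrt(n_in), size=(n_out, n_in))

def hebbian_step(W, x, eta=0.05):
    """One Hebbian update dW = eta * y x^T on rectified responses, with row
    normalization to keep weights bounded (a common stabilizer; an assumption
    here, not the paper's exact rule)."""
    y = np.maximum(W @ x, 0.0)
    W = W + eta * np.outer(y, x)
    return W / np.linalg.norm(W, axis=1, keepdims=True)

# repeated presentation of a few task patterns strengthens selective responses
patterns = rng.normal(size=(4, n_in))
for _ in range(100):
    for x in patterns:
        W = hebbian_step(W, x)
```

    After training, responses to the trained patterns exceed responses to novel random inputs on average, the qualitative signature of learning-induced selectivity the paper analyzes.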

  6. Performance Evaluation of User Selection Protocols in Random Networks with Energy Harvesting and Hardware Impairments

    Directory of Open Access Journals (Sweden)

    Tan Nhat Nguyen

    2016-01-01

    Full Text Available In this paper, we evaluate the performance of various user selection protocols under the impact of hardware impairments. In the considered protocols, a Base Station (BS) selects one of the available Users (USs) to serve, while the remaining USs harvest energy from the Radio Frequency (RF) signal transmitted by the BS. We assume that all of the USs appear randomly around the BS. In the Random Selection Protocol (RAN), the BS randomly selects a US to transmit the data. In the second proposed protocol, named the Minimum Distance Protocol (MIND), the US that is nearest to the BS will be chosen. In the Optimal Selection Protocol (OPT), the US providing the highest channel gain between itself and the BS will be served. For performance evaluation, we derive exact and asymptotic closed-form expressions for the average Outage Probability (OP) over Rayleigh fading channels. We also consider the average harvested energy per US. Finally, Monte-Carlo simulations are performed to verify the theoretical results.
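    The expected ordering of the three protocols (OPT best, then MIND, then RAN in outage probability) is easy to reproduce by Monte-Carlo; the disc geometry, path-loss exponent and SNR threshold below are illustrative assumptions, and hardware impairments and energy harvesting are omitted:

```python
import math
import random

def simulate_op(protocol, n_users=5, gamma_th=1.0, beta=3.0,
                radius=1.0, trials=20000, rng=random.Random(7)):
    """Monte-Carlo outage probability sketch for the three selection rules:
    RAN = random user, MIND = nearest user, OPT = best instantaneous gain.
    Users are uniform in a disc around the BS; Rayleigh fading gives
    exponential channel gains scaled by path loss d^-beta; outage occurs
    when the selected user's SNR falls below gamma_th."""
    outages = 0
    for _ in range(trials):
        # uniform positions in the disc: r = R * sqrt(u)
        dists = [radius * math.sqrt(rng.random()) for _ in range(n_users)]
        gains = [rng.expovariate(1.0) * d ** (-beta) for d in dists]
        if protocol == "RAN":
            snr = gains[rng.randrange(n_users)]
        elif protocol == "MIND":
            snr = gains[min(range(n_users), key=dists.__getitem__)]
        else:  # "OPT"
            snr = max(gains)
        outages += snr < gamma_th
    return outages / trials
```

    MIND beats RAN because the nearest user suffers the least path loss, while OPT also exploits the instantaneous fading realization, which is why its outage curve sits lowest.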

  7. Validation of a Rapid Bacteria Endospore Enumeration System for Planetary Protection Application

    Science.gov (United States)

    Chen, Fei; Kern, Roger; Kazarians, Gayane; Venkateswaran, Kasthuri

    NASA monitors spacecraft surfaces to assure that the presence of bacterial endospores meets strict criteria at launch, to minimize the risk of inadvertent contamination of the surface of Mars. Currently, the only approved method for enumerating the spores is a culture-based assay that requires three days to produce results. In order to meet the demanding schedules of spacecraft assembly, a more rapid spore detection assay is being considered as an alternate method to the NASA standard culture-based assay. The Millipore Rapid Microbiology Detection System (RMDS) has been used successfully for rapid bioburden enumeration in the pharmaceutical and food industries. The RMDS is rapid and simple, shows high sensitivity (to 1 colony forming unit [CFU]/sample), and correlates well with traditional culture-based methods. It combines membrane filtration, adenosine triphosphate (ATP) bioluminescence chemistry, and image analysis based on photon detection with a Charge Coupled Device (CCD) camera. In this study, we have optimized the assay conditions and evaluated the use of the RMDS as a rapid spore detection tool for NASA applications. In order to select for spores, the samples were subjected to a heat shock step before proceeding with the RMDS incubation protocol. Seven species of Bacillus (nine strains) that have been repeatedly isolated from clean room environments were assayed. All strains were detected by the RMDS in 5 hours and these assay times were repeatedly demonstrated along with low image background noise. Validation experiments to compare the Rapid Spore Assay (RSA) and the NASA standard assay (NSA) were also performed. The evaluation criteria were modeled after the FDA Guideline of Process Validation and Analytical Test Methods. This body of research demonstrates that the Rapid Spore Assay (RSA) is quick and of equivalent sensitivity to the NASA standard assay, potentially reducing the assay time for bacterial endospores from over 72 hours to less than 8 hours.

  8. The reliability of randomly selected final year pharmacy students in ...

    African Journals Online (AJOL)

    Employing ANOVA, factorial experimental analysis, and the theory of error, reliability studies were conducted on the assessment of the drug product chloroquine phosphate tablets. The G–Study employed equal numbers of the factors for uniform control, and involved three analysts (randomly selected final year Pharmacy ...

  9. Seven-hour fluorescence in situ hybridization technique for enumeration of Enterobacteriaceae in food and environmental water sample.

    Science.gov (United States)

    Ootsubo, M; Shimizu, T; Tanaka, R; Sawabe, T; Tajima, K; Ezura, Y

    2003-01-01

    A fluorescence in situ hybridization (FISH) technique using an Enterobacteriaceae-specific probe (probe D) to target 16S rRNA was improved in order to enumerate, within a single working day, Enterobacteriaceae present in food and environmental water samples. In order to minimize the time required for the FISH procedure, each step of FISH with probe D was re-evaluated using cultured Escherichia coli. Five minutes of ethanol treatment for cell fixation and hybridization were sufficient to visualize cultured E. coli, and FISH could be performed within 1 h. Because of the difficulties in detecting low levels of bacterial cells by FISH without cultivation, a FISH technique for detecting microcolonies on membrane filters was investigated to improve the bacterial detection limit. FISH with probe D following 6 h of cultivation to grow microcolonies on a 13-mm-diameter membrane filter was performed, and whole Enterobacteriaceae microcolonies on the filter were then detected and enumerated by manual epifluorescence microscopic scanning at a magnification of ×100 in ca. 5 min. The total time for FISH with probe D following cultivation (FISHFC) was reduced to within 7 h. FISHFC can be applied to enumerate cultivable Enterobacteriaceae in food (above 100 cells g-1) and environmental water samples (above 1 cell ml-1). Cultivable Enterobacteriaceae in food and water samples were enumerated accurately within 7 h using the FISHFC method. A FISHFC method capable of evaluating Enterobacteriaceae contamination in food and environmental water within a single working day was developed.

  10. Random selection of items. Selection of n1 samples among N items composing a stratum

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1987-02-01

    STR-224 provides generalized procedures to determine required sample sizes, for instance in the course of a Physical Inventory Verification at Bulk Handling Facilities. The present report describes procedures to generate random numbers and select groups of items to be verified in a given stratum through each of the measurement methods involved in the verification. (author). 3 refs
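    Operationally, selecting n1 of the N items in a stratum is a simple random sample without replacement. A sketch using Python's generator with a recorded seed so the draw can be reproduced for audit (the item IDs are hypothetical, and the report specifies its own random-number generation procedures, which this sketch does not reproduce):

```python
import random

def select_items(n1, item_ids, seed):
    """Draw n1 distinct items from a stratum, reproducibly from a seed."""
    rng = random.Random(seed)
    return sorted(rng.sample(item_ids, n1))

# hypothetical stratum of 200 container IDs
stratum = [f"UF6-{i:04d}" for i in range(1, 201)]
chosen = select_items(8, stratum, seed=20240101)
print(chosen)
```

    Recording the seed alongside the selection lets a second inspector regenerate exactly the same sample, which is the practical point of formalizing the random-selection procedure.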

  11. The use of antibiotics to improve phage detection and enumeration by the double-layer agar technique

    Directory of Open Access Journals (Sweden)

    Ferreira Eugénio C

    2009-07-01

    Full Text Available Abstract. Background: The Double-Layer Agar (DLA) technique is extensively used in phage research to enumerate and identify phages and to isolate mutants and new phages. Many phages form large and well-defined plaques that are easily observed, so that they can be enumerated when plated by the DLA technique. However, some give rise to small and turbid plaques that are very difficult to detect and count. To overcome these problems, some authors have suggested the use of dyes to improve the contrast between the plaques and the turbid host lawns. It has been reported that some antibiotics stimulate bacteria to produce phages, resulting in an increase in final titer. Thus, antibiotics might contribute to increasing plaque size in solid media. Results: Antibiotics with different mechanisms of action were tested for their ability to enhance plaque morphology without suppressing phage development. Some antibiotics increased the phage plaque surface by up to 50-fold. Conclusion: This work presents a modification of the DLA technique that can be used routinely in the laboratory, leading to a more accurate enumeration of phages that would be difficult or even impossible otherwise.

  12. Separating the Classes of Recursively Enumerable Languages Based on Machine Size

    Czech Academy of Sciences Publication Activity Database

    van Leeuwen, J.; Wiedermann, Jiří

    2015-01-01

    Roč. 26, č. 6 (2015), s. 677-695 ISSN 0129-0541 R&D Projects: GA ČR GAP202/10/1333 Grant - others:GA ČR(CZ) GA15-04960S Institutional support: RVO:67985807 Keywords : recursively enumerable languages * RE hierarchy * finite languages * machine size * descriptional complexity * Turing machines with advice Subject RIV: IN - Informatics, Computer Science Impact factor: 0.467, year: 2015

  13. Enumeration versus Multiple Object Tracking: The Case of Action Video Game Players

    Science.gov (United States)

    Green, C. S.; Bavelier, D.

    2006-01-01

    Here, we demonstrate that action video game play enhances subjects' ability in two tasks thought to indicate the number of items that can be apprehended. Using an enumeration task, in which participants have to determine the number of quickly flashed squares, accuracy measures showed a near ceiling performance for low numerosities and a sharp drop…

  14. The mathematics of random mutation and natural selection for multiple simultaneous selection pressures and the evolution of antimicrobial drug resistance.

    Science.gov (United States)

    Kleinman, Alan

    2016-12-20

    The random mutation and natural selection phenomenon acts in a mathematically predictable manner, which, when understood, leads to approaches to reduce and prevent the failure of the use of these selection pressures when treating infections and cancers. The underlying principle for impairing the random mutation and natural selection phenomenon is to use combination therapy, which forces the population to evolve under multiple selection pressures simultaneously, thereby invoking the multiplication rule of probabilities. Recently, it has been seen that combination therapy for the treatment of malaria has failed to prevent the emergence of drug-resistant variants. Using this empirical example and the principles of probability theory, the derivation of the equations describing this treatment failure is carried out. These equations give guidance as to how to use combination therapy for the treatment of cancers and infectious diseases and prevent the emergence of drug resistance. Copyright © 2016 John Wiley & Sons, Ltd.
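    The multiplication rule invoked above can be illustrated numerically. The per-replication mutation probabilities and population size below are hypothetical, chosen only to show why simultaneous selection pressures are so much harder to defeat than sequential ones:

```python
def joint_resistance_probability(p_drug_a, p_drug_b):
    """Probability that a single replication yields a variant resistant to
    both drugs at once, assuming the two resistance mutations arise
    independently (the multiplication rule)."""
    return p_drug_a * p_drug_b

# Hypothetical per-replication probabilities of a resistance mutation to each drug
p_a, p_b = 1e-8, 1e-8
population = 1e10  # hypothetical number of replications

# Against either drug alone, this population expects on the order of 100
# resistant variants; against both drugs applied simultaneously, the
# expectation drops by eight orders of magnitude.
expected_single = population * p_a
expected_double = population * joint_resistance_probability(p_a, p_b)
```

Sequential monotherapy instead lets the population fix resistance to each drug one at a time, so the probabilities are never multiplied together.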

  15. Monitor and Protect Wigwam River Bull Trout for Koocanusa Reservoir : Summary of the Skookumchuck Creek Bull Trout Enumeration Project, Annual Report 2001.

    Energy Technology Data Exchange (ETDEWEB)

    Baxter, James S.; Baxter, Jeremy

    2002-03-01

    This report summarizes the second year of a bull trout (Salvelinus confluentus) enumeration project on Skookumchuck Creek in southeastern British Columbia. An enumeration fence and traps were installed on the creek from September 6th to October 12th, 2001 to enable the capture of post-spawning bull trout emigrating out of the watershed. During the study period, a total of 273 bull trout were sampled through the enumeration fence. Length and weight were determined for all bull trout captured. In total, 39 fish of undetermined sex, 61 males and 173 females were processed through the fence. An additional 19 bull trout were observed on a snorkel survey prior to the fence being removed on October 12th. Coupled with the fence count, this brought the total number of bull trout enumerated during the project to 292. Several other species of fish were captured at the enumeration fence, including westslope cutthroat trout (Oncorhynchus clarki lewisi), Rocky Mountain whitefish (Prosopium williamsoni), and kokanee (O. nerka). A total of 143 bull trout redds were enumerated on the ground in two different locations (river km 27.5-30.5, and km 24.0-25.5) on October 3rd. The majority of redds (n=132) were observed in the 3.0 km index section (river km 27.5-30.5) that has been surveyed over the past five years. The additional 11 redds were observed in a 1.5 km section (river km 24.0-25.5). Summary plots of water temperature for Bradford Creek, Sandown Creek, Buhl Creek, and Skookumchuck Creek at three locations suggested that water temperatures were within the range preferred by bull trout for spawning, egg incubation, and rearing.

  16. Enumerating all maximal frequent subtrees in collections of phylogenetic trees.

    Science.gov (United States)

    Deepak, Akshay; Fernández-Baca, David

    2014-01-01

    A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, for computing congruence indices, and for identifying horizontal gene transfer events. We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees.

  17. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    Science.gov (United States)

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    We present a Bayesian random-effects model to assess resource selection by modeling the probability of use of land units characterized by discrete and continuous measures. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types.
    The example analysis illustrates that, while sometimes computationally intense, a

  18. The signature of positive selection at randomly chosen loci.

    Science.gov (United States)

    Przeworski, Molly

    2002-03-01

    In Drosophila and humans, there are accumulating examples of loci with a significant excess of high-frequency-derived alleles or high levels of linkage disequilibrium, relative to a neutral model of a random-mating population of constant size. These are features expected after a recent selective sweep. Their prevalence suggests that positive directional selection may be widespread in both species. However, as I show here, these features do not persist long after the sweep ends: The high-frequency alleles drift to fixation and no longer contribute to polymorphism, while linkage disequilibrium is broken down by recombination. As a result, loci chosen without independent evidence of recent selection are not expected to exhibit either of these features, even if they have been affected by numerous sweeps in their genealogical history. How then can we explain the patterns in the data? One possibility is population structure, with unequal sampling from different subpopulations. Alternatively, positive selection may not operate as is commonly modeled. In particular, the rate of fixation of advantageous mutations may have increased in the recent past.

  19. Alternative microbial methods: An overview and selection criteria.

    NARCIS (Netherlands)

    Jasson, V.; Jacxsens, L.; Luning, P.A.; Rajkovic, A.; Uyttendaele, M.

    2010-01-01

    This study provides an overview and criteria for the selection of a method, other than the reference method, for microbial analysis of foods. In a first part an overview of the general characteristics of rapid methods available, both for enumeration and detection, is given with reference to relevant

  20. A comparison of test statistics for the recovery of rapid growth-based enumeration tests

    NARCIS (Netherlands)

    van den Heuvel, Edwin R.; IJzerman-Boon, Pieta C.

    This paper considers five test statistics for comparing the recovery of a rapid growth-based enumeration test with respect to the compendial microbiological method using a specific nonserial dilution experiment. The finite sample distributions of these test statistics are unknown, because they are

  1. Enumeration for spanning trees and forests of join graphs based on the combinatorial decomposition

    Directory of Open Access Journals (Sweden)

    Sung Sik U

    2016-10-01

    Full Text Available This paper discusses the enumeration of rooted spanning trees and forests of the labelled join graphs $K_m+H_n$ and $K_m+K_{n,p}$, where $H_n$ is a graph with $n$ isolated vertices.
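    For readers without the closed-form counts derived in the paper, spanning trees of any small labelled graph, including joins such as $K_m+H_n$, can be counted directly with Kirchhoff's matrix-tree theorem. The sketch below uses fraction-free (Bareiss) elimination so the determinant stays in exact integers; it assumes a connected graph on at least two vertices:

```python
def count_spanning_trees(adj):
    """Count spanning trees of a connected labelled graph via the matrix-tree
    theorem: any cofactor of the graph Laplacian equals the number of
    spanning trees. `adj` is a symmetric 0/1 adjacency matrix."""
    n = len(adj)
    lap = [[sum(adj[i]) if i == j else -adj[i][j] for j in range(n)] for i in range(n)]
    m = [row[:-1] for row in lap[:-1]]  # delete last row and column (a cofactor)
    # Bareiss fraction-free Gaussian elimination (integer-exact determinant);
    # pivots stay nonzero for the Laplacian minor of a connected graph
    prev = 1
    for k in range(len(m) - 1):
        for i in range(k + 1, len(m)):
            for j in range(k + 1, len(m)):
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev
        prev = m[k][k]
    return m[-1][-1]

# K_4 = K_1 + K_3: Cayley's formula gives 4^(4-2) = 16 spanning trees
k4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
print(count_spanning_trees(k4))  # 16
```

The paper's contribution is closed-form counts for the join families; the matrix-tree computation is the general-purpose check one can run against them.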

  2. Enumeration of Mars years and seasons since the beginning of telescopic exploration

    Science.gov (United States)

    Piqueux, Sylvain; Byrne, Shane; Titus, Timothy N.; Hansen, Candice J.; Kieffer, Hugh H.

    2015-01-01

    A clarification for the enumeration of Mars Years prior to 1955 is presented, along with a table providing the Julian dates associated with Ls = 0° for Mars Years -183 (beginning of the telescopic study of Mars) to 100. A practical algorithm for computing Ls as a function of the Julian Date is provided. No new science results are presented.
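    A minimal version of such an enumeration can be sketched from two widely quoted constants: Mars Year 1 is conventionally taken to begin at Ls = 0° on 11 April 1955 (Julian Date ≈ 2435208.5), and the Mars tropical year averages about 686.9726 Earth days. The constant-period sketch below is an illustration only; the paper's algorithm accounts for the orbital variations this model ignores, and its `approx_ls` neglects Mars's orbital eccentricity entirely:

```python
import math

# Assumed epoch: start of Mars Year 1 (Ls = 0 deg, 11 April 1955), as a Julian Date
JD_MY1 = 2435208.5
MARS_YEAR_DAYS = 686.9726  # mean length of the Mars tropical year, in Earth days

def mars_year(jd):
    """Approximate Mars Year number for a given Julian Date (constant-period model);
    dates before the MY 1 epoch fall into years 0, -1, -2, ..."""
    return math.floor((jd - JD_MY1) / MARS_YEAR_DAYS) + 1

def approx_ls(jd):
    """Crude mean-motion Ls in degrees (0-360), ignoring orbital eccentricity."""
    return ((jd - JD_MY1) / MARS_YEAR_DAYS % 1.0) * 360.0
```

Because Mars's orbit is noticeably eccentric, the true Ls can differ from this mean-motion value by several degrees, which is exactly why a table of exact Ls = 0° dates, as published in the paper, is needed.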

  3. The Enumeration Structure of 爾雅 Ěryǎ's "Semantic Lists"

    Science.gov (United States)

    Teboul (戴明德), Michel

    Modern linguistic enumeration theory is applied to a study of 爾雅 Ěryǎ's Semantic Lists, leading to an in-depth analysis of the work's first three sections without any recourse to the traditional methods of Chinese classical philology. It is hoped that an extension of the same method can lead to a better understanding of the remaining 16 sections.

  4. Differential privacy-based evaporative cooling feature selection and classification with relief-F and random forests.

    Science.gov (United States)

    Le, Trang T; Simmons, W Kyle; Misaki, Masaya; Bodurka, Jerzy; White, Bill C; Savitz, Jonathan; McKinney, Brett A

    2017-09-15

    Classification of individuals into disease or clinical categories from high-dimensional biological data with low prediction error is an important challenge of statistical learning in bioinformatics. Feature selection can improve classification accuracy but must be incorporated carefully into cross-validation to avoid overfitting. Recently, feature selection methods based on differential privacy, such as differentially private random forests and reusable holdout sets, have been proposed. However, for domains such as bioinformatics, where the number of features is much larger than the number of observations (p ≫ n), these differential privacy methods are susceptible to overfitting. We introduce private Evaporative Cooling, a stochastic privacy-preserving machine learning algorithm that uses Relief-F for feature selection and random forest for privacy-preserving classification that also prevents overfitting. We relate the privacy-preserving threshold mechanism to a thermodynamic Maxwell-Boltzmann distribution, where the temperature represents the privacy threshold. We use the thermal statistical physics concept of evaporative cooling of atomic gases to perform backward stepwise privacy-preserving feature selection. On simulated data with main effects and statistical interactions, we compare accuracies on holdout and validation sets for three privacy-preserving methods: the reusable holdout, reusable holdout with random forest, and private Evaporative Cooling, which uses Relief-F feature selection and random forest classification. In simulations where interactions exist between attributes, private Evaporative Cooling provides higher classification accuracy without overfitting based on an independent validation set. In simulations without interactions, thresholdout with random forest and private Evaporative Cooling give comparable accuracies. We also apply these privacy methods to human brain resting-state fMRI data from a study of major depressive disorder. Code
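    The Relief-F step mentioned above rests on a simple idea that can be sketched in a few lines. The following is a minimal binary-class Relief (the ancestor of Relief-F, which adds k neighbors and multi-class handling); it is not the paper's private Evaporative Cooling algorithm, only an illustration of the feature-weighting building block, with toy data invented for the example:

```python
import random

def relief_weights(X, y, n_iter=100, seed=0):
    """Basic binary-class Relief: reward features on which an instance differs
    from its nearest neighbor of the other class (nearest miss), and penalize
    features on which it differs from its nearest same-class neighbor (nearest hit)."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    w = [0.0] * p

    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    for _ in range(n_iter):
        i = rng.randrange(n)
        hits = [j for j in range(n) if j != i and y[j] == y[i]]
        misses = [j for j in range(n) if y[j] != y[i]]
        h = min(hits, key=lambda j: dist(X[i], X[j]))
        m = min(misses, key=lambda j: dist(X[i], X[j]))
        for f in range(p):
            w[f] += abs(X[i][f] - X[m][f]) - abs(X[i][f] - X[h][f])
    return w

# Toy example: feature 0 separates the classes, feature 1 is noise
X = [[0.0, 0.5], [0.1, 0.2], [1.0, 0.4], [0.9, 0.1]]
y = [0, 0, 1, 1]
w = relief_weights(X, y, n_iter=40, seed=3)
print(w[0] > w[1])  # True
```

Unlike univariate filters, this neighbor-based weighting can detect statistical interactions between attributes, which is why the paper pairs it with random forests for interaction-rich data.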

  5. Enumerating all maximal frequent subtrees in collections of phylogenetic trees

    Science.gov (United States)

    2014-01-01

    Background A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, for computing congruence indices, and for identifying horizontal gene transfer events. Results We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474

  6. Assessment of the geoavailability of trace elements from minerals in mine wastes: analytical techniques and assessment of selected copper minerals

    Science.gov (United States)

    Driscoll, Rhonda; Hageman, Phillip L.; Benzel, William M.; Diehl, Sharon F.; Adams, David T.; Morman, Suzette; Choate, LaDonna M.

    2012-01-01

    In this study, four randomly selected copper-bearing minerals were examined—azurite, malachite, bornite, and chalcopyrite. The objectives were to examine and enumerate the crystalline and chemical properties of each of the minerals, to determine which, if any, of the Cu-bearing minerals might adversely affect systems biota, and to provide a multi-procedure reference. Laboratory work included use of computational software for quantifying crystalline and amorphous material and optical and electron imaging instruments to model and project crystalline structures. Chemical weathering, human fluid, and enzyme simulation studies were also conducted. The analyses were conducted systematically: X-ray diffraction and microanalytical studies followed by a series of chemical, bio-leaching, and toxicity experiments.

  7. Interference-aware random beam selection schemes for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed

    2012-10-19

    Spectrum sharing systems have been recently introduced to alleviate the problem of spectrum scarcity by allowing secondary unlicensed networks to share the spectrum with primary licensed networks under acceptable interference levels to the primary users. In this work, we develop interference-aware random beam selection schemes that provide enhanced performance for the secondary network under the condition that the interference observed by the receivers of the primary network is below a predetermined/acceptable value. We consider a secondary link composed of a transmitter equipped with multiple antennas and a single-antenna receiver sharing the same spectrum with a primary link composed of a single-antenna transmitter and a single-antenna receiver. The proposed schemes select a beam, among a set of power-optimized random beams, that maximizes the signal-to-interference-plus-noise ratio (SINR) of the secondary link while satisfying the primary interference constraint for different levels of feedback information describing the interference level at the primary receiver. For the proposed schemes, we develop a statistical analysis for the SINR statistics as well as the capacity and bit error rate (BER) of the secondary link.
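    The selection rule described above, picking from a set of random beams the one that maximizes secondary-link SINR subject to the primary interference constraint, can be sketched as follows. The candidate-beam values are randomly generated placeholders, not a fading model or feedback scheme from the paper:

```python
import random

def select_beam(beams, interference_limit):
    """From candidate beams, keep those whose interference at the primary
    receiver is within the limit, then pick the one with the highest
    secondary-link SINR. Returns None if no beam satisfies the constraint.
    Each beam is a dict with 'sinr' and 'primary_interference' keys."""
    feasible = [b for b in beams if b["primary_interference"] <= interference_limit]
    if not feasible:
        return None
    return max(feasible, key=lambda b: b["sinr"])

# Illustrative random candidate beams (placeholder channel values)
rng = random.Random(7)
candidates = [{"sinr": rng.uniform(0, 20), "primary_interference": rng.uniform(0, 1)}
              for _ in range(8)]
best = select_beam(candidates, interference_limit=0.5)
```

The paper's schemes differ in how much feedback about the primary-receiver interference is available; with only partial feedback the feasibility test above would be replaced by a coarser, quantized check.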

  8. Topology-selective jamming of fully-connected, code-division random-access networks

    Science.gov (United States)

    Polydoros, Andreas; Cheng, Unjeng

    1990-01-01

    The purpose is to introduce certain models of topology selective stochastic jamming and examine its impact on a class of fully-connected, spread-spectrum, slotted ALOHA-type random access networks. The theory covers dedicated as well as half-duplex units. The dominant role of the spatial duty factor is established, and connections with the dual concept of time selective jamming are discussed. The optimal choices of coding rate and link access parameters (from the users' side) and the jamming spatial fraction are numerically established for DS and FH spreading.

  9. Geometrical critical phenomena on a random surface of arbitrary genus

    International Nuclear Information System (INIS)

    Duplantier, B.; Kostov, I.K.

    1990-01-01

    The statistical mechanics of self-avoiding walks (SAW) or of the O(n)-loop model on a two-dimensional random surface are shown to be exactly solvable. The partition functions of SAW and surface configurations (possibly in the presence of vacuum loops) are calculated by planar diagram enumeration techniques. Two critical regimes are found: a dense phase where the infinite walks and loops fill the infinite surface, the non-filled part staying finite, and a dilute phase where the infinite surface singularity on the one hand, and walk and loop singularities on the other, merge together. The configuration critical exponents of self-avoiding networks of any fixed topology G, on a surface with arbitrary genus H, are calculated as universal functions of G and H. For self-avoiding walks, the exponents are built from an infinite set of basic conformal dimensions associated with central charges c = -2 (dense phase) and c = 0 (dilute phase). The conformal spectrum Δ_L, L ≥ 1, associated with L-leg star polymers is calculated exactly, for c = -2 and c = 0. This is generalized to the set of L-line 'watermelon' exponents Δ_L of the O(n) model on a random surface. The divergences of the partition functions of self-avoiding networks on the random surface, possibly in the presence of vacuum loops, are shown to satisfy a factorization theorem over the vertices of the network. This provides a proof, in the presence of a fluctuating metric, of a result conjectured earlier in the standard plane. From this, the value of the string susceptibility γ_str(H,c) is extracted for a random surface of arbitrary genus H, bearing a field theory of central charge c, or equivalently, embedded in d=c dimensions. Lastly, by enumerating spanning trees on a random lattice, we solve the similar problem of hamiltonian walks on the (fluctuating) Manhattan covering lattice. We also obtain new results for dilute trees on a random surface. (orig./HSI)

  10. An insight into the isolation, enumeration and molecular detection of Listeria monocytogenes in food

    Directory of Open Access Journals (Sweden)

    Jodi Woan-Fei Law

    2015-11-01

    Full Text Available Listeria monocytogenes is a foodborne pathogen that can cause listeriosis through the consumption of food contaminated with this pathogen. The ability of L. monocytogenes to survive in extreme conditions and cause food contamination has become a major concern. Hence, routine microbiological food testing is necessary to prevent food contamination and outbreaks of foodborne illness. This review provides insight into the methods for cultural detection, enumeration and molecular identification of L. monocytogenes in various food samples. There are a number of enrichment and plating media that can be used for the isolation of L. monocytogenes from food samples. Enrichment media such as buffered Listeria Enrichment Broth (BLEB), Fraser broth and University of Vermont Medium (UVM) Listeria enrichment broth are recommended by regulatory agencies such as FDA-BAM, USDA-FSIS and ISO. Many plating media are available for the isolation of L. monocytogenes, for instance, PALCAM, Oxford and other chromogenic media. Besides, reference methods like FDA-BAM, the ISO 11290 method and the USDA-FSIS method are usually applied for the cultural detection or enumeration of L. monocytogenes. The MPN technique is applied for the enumeration of L. monocytogenes in the case of low-level contamination. Molecular methods including polymerase chain reaction (PCR), multiplex polymerase chain reaction (mPCR), real-time/quantitative polymerase chain reaction (qPCR), nucleic acid sequence-based amplification (NASBA), loop-mediated isothermal amplification (LAMP), DNA microarray and Next Generation Sequencing (NGS) technology for the detection and identification of L. monocytogenes are discussed in this review. Overall, molecular methods are rapid, sensitive, specific, time- and labour-saving. In future, there are chances for the development of new techniques for the detection and identification of foodborne pathogens with improved features.

  11. Peculiarities of the statistics of spectrally selected fluorescence radiation in laser-pumped dye-doped random media

    Science.gov (United States)

    Yuvchenko, S. A.; Ushakova, E. V.; Pavlova, M. V.; Alonova, M. V.; Zimnyakov, D. A.

    2018-04-01

    We consider the practical realization of a new optical probe method for random media, defined as reference-free path-length interferometry with intensity-moments analysis. A peculiarity in the statistics of the spectrally selected fluorescence radiation in a laser-pumped dye-doped random medium is discussed. Previously established correlations between the second- and third-order moments of the intensity fluctuations in the random interference patterns, the coherence function of the probe radiation, and the path-difference probability density for the interfering partial waves in the medium are confirmed. The correlations were verified using the statistical analysis of the spectrally selected fluorescence radiation emitted by a laser-pumped dye-doped random medium. An aqueous solution of Rhodamine 6G was applied as the doping fluorescent agent for the ensembles of densely packed silica grains, which were pumped by the 532 nm radiation of a solid-state laser. The spectrum of the mean path length for a random medium was reconstructed.

  12. Enumeration of microbial populations in radioactive environments by epifluorescence microscopy

    International Nuclear Information System (INIS)

    Pansoy-Hjelvik, M.E.; Strietelmeier, B.A.; Paffett, M.T.

    1997-01-01

    Epifluorescence microscopy was utilized to enumerate halophilic bacterial populations in two studies involving inoculated, actual waste/brine mixtures and pure brine solutions. The studies include an initial set of experiments designed to elucidate potential transformations of actinide-containing wastes under salt-repository conditions, including microbially mediated changes. The first study included periodic enumeration of bacterial populations of a mixed inoculum initially added to a collection of test containers. The contents of the test containers are the different types of actual radioactive waste that could potentially be stored in nuclear waste repositories in a salt environment. The transuranic waste was generated from materials used in actinide laboratory research. The results show that cell numbers decreased with time. Sorption of the bacteria to solid surfaces in the test system is discussed as a possible mechanism for the decrease in cell numbers. The second study was designed to determine radiological and/or chemical effects of ²³⁹Pu, ²⁴³Am, ²³⁷Np, ²³²Th and ²³⁸U on the growth of pure and mixed anaerobic, denitrifying bacterial cultures in brine media. Pu, Am, and Np isotopes at concentrations of ≤1×10⁻⁶ M, ≤5×10⁻⁶ M and ≤5×10⁻⁴ M, respectively, and Th and U isotopes ≤4×10⁻³ M were tested in these media. The results indicate that high concentrations of certain actinides affected both the bacterial growth rate and morphology. However, relatively minor effects from Am were observed at all tested concentrations with the pure culture.

  13. Enumeration of virtual libraries of combinatorial modular macrocyclic (bracelet, necklace) architectures and their linear counterparts.

    Science.gov (United States)

    Taniguchi, Masahiko; Du, Hai; Lindsey, Jonathan S

    2013-09-23

    A wide variety of cyclic molecular architectures are built of modular subunits and can be formed combinatorially. The mathematics for enumeration of such objects is well-developed yet lacks key features of importance in chemistry, such as specifying (i) the structures of individual members among a set of isomers, (ii) the distribution (i.e., relative amounts) of products, and (iii) the effect of nonequal ratios of reacting monomers on the product distribution. Here, a software program (Cyclaplex) has been developed to determine the number, identity (including isomers), and relative amounts of linear and cyclic architectures from a given number and ratio of reacting monomers. The program includes both mathematical formulas and generative algorithms for enumeration; the latter go beyond the former to provide desired molecular-relevant information and data-mining features. The program is equipped to enumerate four types of architectures: (i) linear architectures with directionality (macroscopic equivalent = electrical extension cords), (ii) linear architectures without directionality (batons), (iii) cyclic architectures with directionality (necklaces), and (iv) cyclic architectures without directionality (bracelets). The program can be applied to cyclic peptides, cycloveratrylenes, cyclens, calixarenes, cyclodextrins, crown ethers, cucurbiturils, annulenes, expanded meso-substituted porphyrin(ogen)s, and diverse supramolecular (e.g., protein) assemblies. The size of accessible architectures encompasses up to 12 modular subunits derived from 12 reacting monomers or larger architectures (e.g. 13-17 subunits) from fewer types of monomers (e.g. 2-4). A particular application concerns understanding the possible heterogeneity of (natural or biohybrid) photosynthetic light-harvesting oligomers (cyclic, linear) formed from distinct peptide subunits.
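    The enumeration mathematics referred to above rests on Burnside-style orbit counts of necklaces (cyclic architectures with directionality) and bracelets (cyclic architectures without directionality). A minimal sketch of those counts is given below; it counts colorings by monomer type only and, unlike the Cyclaplex program described in the abstract, does not identify individual isomers or product distributions:

```python
from math import gcd

def necklaces(n, k):
    """Number of necklaces: n beads, k monomer types, distinct up to rotation.
    Burnside's lemma: average the colorings fixed by each of the n rotations."""
    return sum(k ** gcd(i, n) for i in range(n)) // n

def bracelets(n, k):
    """Number of bracelets: n beads, k monomer types, distinct up to rotation
    and reflection. Adds the colorings fixed by the n reflections."""
    if n % 2 == 0:
        reflections = (k ** (n // 2) + k ** (n // 2 + 1)) * n // 2
    else:
        reflections = n * k ** ((n + 1) // 2)
    return (sum(k ** gcd(i, n) for i in range(n)) + reflections) // (2 * n)

# A 4-subunit macrocycle built from 2 monomer types:
print(necklaces(4, 2), bracelets(4, 2))  # 6 6
```

For nonequal monomer ratios or per-isomer structures, these closed-form counts no longer suffice, which is the gap the generative algorithms in Cyclaplex are described as filling.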

  14. Sample handling factors affecting the enumeration of lactobacilli and cellulolytic bacteria in equine feces

    Science.gov (United States)

    The objectives were to compare media types and evaluate the effects of fecal storage time and temperature on the enumeration of cellulolytic bacteria and lactobacilli from horses. Fecal samples were collected from horses (n = 3) and transported to the lab (CO2, 37 ºC, 0.5 h). The samples were assign...

  15. An insight into the isolation, enumeration, and molecular detection of Listeria monocytogenes in food

    Science.gov (United States)

    Law, Jodi Woan-Fei; Ab Mutalib, Nurul-Syakima; Chan, Kok-Gan; Lee, Learn-Han

    2015-01-01

    Listeria monocytogenes is a foodborne pathogen that can cause listeriosis through the consumption of food contaminated with this pathogen. The ability of L. monocytogenes to survive in extreme conditions and cause food contamination has become a major concern. Hence, routine microbiological food testing is necessary to prevent food contamination and outbreaks of foodborne illness. This review provides insight into the methods for cultural detection, enumeration, and molecular identification of L. monocytogenes in various food samples. There are a number of enrichment and plating media that can be used for the isolation of L. monocytogenes from food samples. Enrichment media such as buffered Listeria enrichment broth, Fraser broth, and University of Vermont Medium (UVM) Listeria enrichment broth are recommended by regulatory agencies such as the Food and Drug Administration Bacteriological Analytical Manual (FDA-BAM), the US Department of Agriculture Food Safety and Inspection Service (USDA-FSIS), and the International Organization for Standardization (ISO). Many plating media are available for the isolation of L. monocytogenes, for instance, polymyxin acriflavin lithium-chloride ceftazidime aesculin mannitol (PALCAM), Oxford, and other chromogenic media. Besides, reference methods like FDA-BAM, the ISO 11290 method, and the USDA-FSIS method are usually applied for the cultural detection or enumeration of L. monocytogenes. The most probable number technique is applied for the enumeration of L. monocytogenes in the case of low-level contamination. Molecular methods including polymerase chain reaction, multiplex polymerase chain reaction, real-time/quantitative polymerase chain reaction, nucleic acid sequence-based amplification, loop-mediated isothermal amplification, DNA microarray, and next generation sequencing technology for the detection and identification of L. monocytogenes are discussed in this review. Overall, molecular methods are rapid, sensitive, specific, time- and labor-saving. In future, there are chances for the development of new techniques for the detection and identification of foodborne pathogens with improved features.

  16. HEMATOPOIETIC PROGENITOR CELLS AS A PREDICTIVE OF CD34+ ENUMERATION PRIOR TO PERIPHERAL BLOOD STEM CELLS HARVESTING

    Directory of Open Access Journals (Sweden)

    Z. Zulkafli

    2014-09-01

    Full Text Available Background: To date, CD34+ cell enumeration has relied predominantly on flow cytometry, which is time consuming and operator dependent. The hematopoietic progenitor cell (HPC) channel of the Sysmex XE-2100, a fully automated hematology analyzer, offers an alternative approach that requires minimal sample manipulation and is less operator dependent. This study evaluates the utility of HPC counts as a predictor of CD34+ counts prior to peripheral blood stem cell harvesting. Materials and methods: HPC, CD34+, white blood cell (WBC), reticulocyte (retic), immature platelet fraction (IPF) and immature reticulocyte fraction (IRF) counts were determined in 61 samples from 19 patients with hematological malignancies (15 lymphoma and 4 multiple myeloma patients) at Hospital Universiti Sains Malaysia (Hospital USM) who had received granulocyte colony-stimulating factor (G-CSF) and were planned for autologous transplantation. Results: The CD34+ count showed a strong and significant correlation with the HPC count. Receiver operating characteristic (ROC) curve analysis revealed that an HPC count >21.5 × 10⁶/L predicts a pre-harvest CD34+ count of >20 × 10⁶/L with a sensitivity of 77%, a specificity of 64% and an area under the curve (AUC) of 0.802. Conclusion: We conclude that the HPC count can be a useful parameter for optimizing the timing of CD34+ enumeration prior to leukapheresis.
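
    The threshold analysis in this record can be checked for any candidate cutoff with a few lines of code. A minimal sketch, assuming paired marker values and a boolean gold standard (the numbers below are invented, not the study's data):

```python
def sensitivity_specificity(values, positives, cutoff):
    """Sensitivity and specificity of the decision rule `value > cutoff`
    against a boolean gold standard."""
    tp = sum(1 for v, p in zip(values, positives) if p and v > cutoff)
    fn = sum(1 for v, p in zip(values, positives) if p and v <= cutoff)
    tn = sum(1 for v, p in zip(values, positives) if not p and v <= cutoff)
    fp = sum(1 for v, p in zip(values, positives) if not p and v > cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical HPC counts (x 10^6/L) and whether CD34+ exceeded 20 x 10^6/L:
hpc = [10, 30, 15, 5, 25]
cd34_high = [False, True, True, False, False]
print(sensitivity_specificity(hpc, cd34_high, 21.5))  # → (0.5, 0.6666666666666666)
```

Sweeping `cutoff` over the observed values and plotting the resulting pairs traces out the ROC curve from which the study's AUC of 0.802 would be computed.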

  17. Sieving for pseudosquares and pseudocubes in parallel using doubly-focused enumeration and wheel datastructures

    OpenAIRE

    Sorenson, Jonathan P.

    2010-01-01

    We extend the known tables of pseudosquares and pseudocubes, discuss the implications of these new data on the conjectured distribution of pseudosquares and pseudocubes, and present the details of the algorithm used to do this work. Our algorithm is based on the space-saving wheel data structure combined with doubly-focused enumeration, run in parallel on a cluster supercomputer.

  18. Blind Measurement Selection: A Random Matrix Theory Approach

    KAUST Repository

    Elkhalil, Khalil

    2016-12-14

    This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$ dimensional parameter vector. The exhaustive search inspecting each of the $\binom{n}{k}$ possible choices would require a very high computational complexity and as such is not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed form the most important error measures that are commonly used. The asymptotic approximations are then leveraged to properly select $k$ measurements exhibiting low values for the asymptotic error measures. Two heuristic algorithms are proposed: the first one merely consists in applying a convex optimization artifice to the asymptotic error measure. The second algorithm is a low-complexity greedy algorithm that attempts to find a sufficiently good solution to the original minimization problem. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can thus be implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large-scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented and support the efficiency of the proposed blind methods in reaching the performance of channel-aware algorithms.
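
    The flavor of the greedy approach can be illustrated concretely. A minimal pure-Python sketch, assuming a D-optimality-style objective (pick the $k$ rows of the measurement matrix $H$ that maximize det(H_S^T H_S + eps*I)); the paper's asymptotic error measures are not reproduced here:

```python
def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    n, d = len(M), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        if abs(A[p][c]) < 1e-12:
            return 0.0
        if p != c:
            A[c], A[p] = A[p], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n):
                A[r][j] -= f * A[c][j]
    return d

def greedy_select(H, k, eps=1e-6):
    """Greedily pick k row indices of H maximizing det(H_S^T H_S + eps*I),
    an assumed D-optimality-style proxy for estimation accuracy."""
    m = len(H[0])
    M = [[eps * (a == b) for b in range(m)] for a in range(m)]
    chosen = []
    for _ in range(k):
        best_i, best_val = None, float("-inf")
        for i, row in enumerate(H):
            if i in chosen:
                continue
            cand = [[M[a][b] + row[a] * row[b] for b in range(m)]
                    for a in range(m)]
            val = det(cand)
            if val > best_val:
                best_i, best_val = i, val
        chosen.append(best_i)
        row = H[best_i]
        for a in range(m):
            for b in range(m):
                M[a][b] += row[a] * row[b]
    return chosen

H = [[1.0, 0.0], [0.0, 1.0], [10.0, 0.0], [0.0, 10.0]]
print(greedy_select(H, 2))  # → [2, 3]: the two strongest, complementary rows
```

The channel-aware variant above needs the exact $H$; the paper's blind variant would replace `det(cand)` with an asymptotic error measure that does not depend on the realization of $H$.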

  19. Specific dot-immunobinding assay for detection and enumeration of Thiobacillus ferrooxidans

    International Nuclear Information System (INIS)

    Arredondo, R.; Jerez, C.A.

    1989-01-01

    A specific and very sensitive dot-immunobinding assay for the detection and enumeration of the bioleaching microorganism Thiobacillus ferrooxidans was developed. Nitrocellulose spotted with samples was incubated with polyclonal antisera against whole T. ferrooxidans cells and then in ¹²⁵I-labeled protein A or ¹²⁵I-labeled goat antirabbit immunoglobulin G; incubation was followed by autoradiography. Since a minimum of 10³ cells per dot could be detected, the method offers the possibility of simultaneous processing of numerous samples in a short time to monitor the levels of T. ferrooxidans in bioleaching operations.

  20. Multivariable Christoffel-Darboux Kernels and Characteristic Polynomials of Random Hermitian Matrices

    Directory of Open Access Journals (Sweden)

    Hjalmar Rosengren

    2006-12-01

    Full Text Available We study multivariable Christoffel-Darboux kernels, which may be viewed as reproducing kernels for antisymmetric orthogonal polynomials, and also as correlation functions for products of characteristic polynomials of random Hermitian matrices. Using their interpretation as reproducing kernels, we obtain simple proofs of Pfaffian and determinant formulas, as well as Schur polynomial expansions, for such kernels. In subsequent work, these results are applied in combinatorics (enumeration of marked shifted tableaux and number theory (representation of integers as sums of squares.

  1. Using ArcMap, Google Earth, and Global Positioning Systems to select and locate random households in rural Haiti.

    Science.gov (United States)

    Wampler, Peter J; Rediske, Richard R; Molla, Azizur R

    2013-01-18

    A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground-based surveys to collect household location data and can be performed in a rapid, cost-effective manner. The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. A total of 537 homes initially were mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. The method used to generate and field locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent and only rarely was local knowledge required to identify and locate households.
This
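
    The randomization step described above (537 mapped homes reduced to a survey subset of 96) is a simple random sample and can be reproduced outside Excel. A minimal sketch with invented coordinates standing in for the digitized Google Earth waypoints:

```python
import random

# Hypothetical mapped homes as (id, latitude, longitude) triples, standing in
# for waypoints digitized from Google Earth imagery (coordinates invented).
homes = [(i,
          18.9 + random.Random(i).random() * 0.01,
          -72.5 - random.Random(i + 1000).random() * 0.01)
         for i in range(537)]

def select_households(homes, n, seed=42):
    """Draw a reproducible simple random sample of n households,
    mirroring the study's Excel-based randomization step."""
    rng = random.Random(seed)
    return sorted(rng.sample(homes, n))

survey_sites = select_households(homes, 96)
print(len(survey_sites))  # → 96
```

Fixing the seed makes the draw auditable: the same mapped-home list always yields the same 96 survey locations, which can then be exported to a handheld GPS as waypoints.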

  2. Using ArcMap, Google Earth, and Global Positioning Systems to select and locate random households in rural Haiti

    Directory of Open Access Journals (Sweden)

    Wampler Peter J

    2013-01-01

    Full Text Available Abstract Background A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground based surveys to collect household location data and can be performed in a rapid, cost-effective manner. Methods The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. Results A total of 537 homes initially were mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. Conclusions The method used to generate and field locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent and only

  3. Optimizing Event Selection with the Random Grid Search

    Energy Technology Data Exchange (ETDEWEB)

    Bhat, Pushpalatha C. [Fermilab; Prosper, Harrison B. [Florida State U.; Sekmen, Sezen [Kyungpook Natl. U.; Stewart, Chip [Broad Inst., Cambridge

    2017-06-29

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
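
    The essence of RGS is that candidate cut points are drawn from the signal events themselves rather than laid out on a fixed grid, so the candidate cuts automatically concentrate where the signal lives. A toy sketch, assuming simple rectangular greater-than cuts and an s/sqrt(b+1) figure of merit (the published RGS code supports richer cut types):

```python
import random

def random_grid_search(signal, background, n_trials=200, seed=0):
    """Toy RGS: each candidate cut is the coordinate vector of a randomly
    chosen signal event; an event passes if every feature is >= the cut.
    The figure of merit s/sqrt(b + 1) is an assumed toy choice."""
    rng = random.Random(seed)

    def passes(event, cut):
        return all(x >= c for x, c in zip(event, cut))

    best_cut, best_fom = None, float("-inf")
    for _ in range(n_trials):
        cut = rng.choice(signal)                     # cut drawn from the data
        s = sum(passes(ev, cut) for ev in signal)    # surviving signal
        b = sum(passes(ev, cut) for ev in background)  # surviving background
        fom = s / (b + 1) ** 0.5
        if fom > best_fom:
            best_cut, best_fom = cut, fom
    return best_cut, best_fom

signal = [(2.0, 2.0), (2.5, 2.2), (3.0, 3.0)]
background = [(0.0, 0.0), (0.5, 0.2), (1.0, 1.0), (0.2, 0.8)]
print(random_grid_search(signal, background))  # best cut keeps all signal, no background
```

Because every candidate cut coincides with an actual signal event, no trial is wasted in empty regions of the feature space, which is what makes the stochastic search efficient.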

  4. Non-random mating for selection with restricted rates of inbreeding and overlapping generations

    NARCIS (Netherlands)

    Sonesson, A.K.; Meuwissen, T.H.E.

    2002-01-01

    Minimum coancestry mating with a maximum of one offspring per mating pair (MC1) is compared with random mating schemes for populations with overlapping generations. Optimum contribution selection is used, whereby $\Delta F$ is restricted. For schemes with $\Delta F$ restricted to 0.25% per

  5. Coupling graph perturbation theory with scalable parallel algorithms for large-scale enumeration of maximal cliques in biological graphs

    International Nuclear Information System (INIS)

    Samatova, N F; Schmidt, M C; Hendrix, W; Breimyer, P; Thomas, K; Park, B-H

    2008-01-01

    Data-driven construction of predictive models for biological systems faces challenges from data intensity, uncertainty, and computational complexity. Data-driven model inference is often considered a combinatorial graph problem where an enumeration of all feasible models is sought. The data-intensive and the NP-hard nature of such problems, however, challenges existing methods to meet the required scale of data size and uncertainty, even on modern supercomputers. Maximal clique enumeration (MCE) in a graph derived from such biological data is often a rate-limiting step in detecting protein complexes in protein interaction data, finding clusters of co-expressed genes in microarray data, or identifying clusters of orthologous genes in protein sequence data. We report two key advances that address this challenge. We designed and implemented the first (to the best of our knowledge) parallel MCE algorithm that scales linearly on thousands of processors running MCE on real-world biological networks with thousands and hundreds of thousands of vertices. In addition, we proposed and developed the Graph Perturbation Theory (GPT) that establishes a foundation for efficiently solving the MCE problem in perturbed graphs, which model the uncertainty in the data. GPT formulates necessary and sufficient conditions for detecting the differences between the sets of maximal cliques in the original and perturbed graphs and reduces the enumeration time by more than 80% compared to complete recomputation.
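
    For orientation, the serial algorithm that parallel MCE implementations build on is the Bron-Kerbosch recursion with pivoting. A minimal sketch (not the authors' parallel implementation):

```python
def maximal_cliques(adj):
    """Serial Bron-Kerbosch enumeration with pivoting.
    `adj` maps each vertex to the set of its neighbours."""
    cliques = []

    def expand(R, P, X):
        if not P and not X:
            cliques.append(R)  # R is maximal: nothing can extend it
            return
        # Pivot on the vertex covering the most of P to shrink the branching.
        pivot = max(P | X, key=lambda u: len(P & adj[u]))
        for v in list(P - adj[pivot]):
            expand(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)

    expand(set(), set(adj), set())
    return cliques

# Triangle 0-1-2 with a pendant vertex 3 attached to vertex 2:
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(sorted(sorted(c) for c in maximal_cliques(adj)))  # → [[0, 1, 2], [2, 3]]
```

Parallel versions like the one in this record distribute the branches of this recursion tree across processors; the set `X` of already-explored vertices is what guarantees each maximal clique is reported exactly once.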

  6. Inactivation of viable Ascaris eggs by reagents during enumeration.

    Science.gov (United States)

    Nelson, K L; Darby, J L

    2001-12-01

    Various reagents commonly used to enumerate viable helminth eggs from wastewater and sludge were evaluated for their potential to inactivate Ascaris eggs under typical laboratory conditions. Two methods were used to enumerate indigenous Ascaris eggs from sludge samples. All steps in the methods were the same except that in method I a phase extraction step with acid-alcohol (35% ethanol in 0.1 N H₂SO₄) and diethyl ether was used whereas in method II the extraction step was avoided by pouring the sample through a 38-microm-mesh stainless steel sieve that retained the eggs. The concentration of eggs and their viability were lower in the samples processed by method I than in the samples processed by method II by an average of 48 and 70%, respectively. A second set of experiments was performed using pure solutions of Ascaris suum eggs to elucidate the effect of the individual reagents and relevant combination of reagents on the eggs. The percentages of viable eggs in samples treated with acid-alcohol alone and in combination with diethyl ether or ethyl acetate were 52, 27, and 4%, respectively, whereas in the rest of the samples the viability was about 80%. Neither the acid nor the diethyl ether alone caused any decrease in egg viability. Thus, the observed inactivation was attributed primarily to the 35% ethanol content of the acid-alcohol solution. Inactivation of the eggs was prevented by limiting the direct exposure to the extraction reagents to 30 min and diluting the residual concentration of acid-alcohol in the sample by a factor of 100 before incubation. Also, the viability of the eggs was maintained if the acid-alcohol solution was replaced with an acetoacetic buffer. None of the reagents used for the flotation step of the sample cleaning procedure (ZnSO₄, MgSO₄, and NaCl) or during incubation (0.1 N H₂SO₄ and 0.5% formalin) inactivated the Ascaris eggs under the conditions studied.

  7. Comparative Evaluations of Randomly Selected Four Point-of-Care Glucometer Devices in Addis Ababa, Ethiopia.

    Science.gov (United States)

    Wolde, Mistire; Tarekegn, Getahun; Kebede, Tedla

    2018-05-01

    Point-of-care glucometer (PoCG) devices play a significant role in self-monitoring of the blood sugar level, particularly in the follow-up of high blood sugar therapeutic response. The aim of this study was to evaluate blood glucose test results performed with four randomly selected glucometers on diabetes and control subjects versus standard wet chemistry (hexokinase) methods in Addis Ababa, Ethiopia. A prospective cross-sectional study was conducted on 200 randomly selected study participants (100 participants with diabetes and 100 healthy controls). Four randomly selected PoCG devices (CareSens N, DIAVUE Prudential, On Call Extra, i-QARE DS-W) were evaluated against the hexokinase method and the ISO 15197:2003 and ISO 15197:2013 standards. The minimum and maximum blood sugar values were recorded by CareSens N (21 mg/dl) and the hexokinase method (498.8 mg/dl), respectively. The mean sugar values of all PoCG devices except On Call Extra showed significant differences compared with the reference hexokinase method. Meanwhile, all four PoCG devices had a strong positive relationship (>80%) with the reference method (hexokinase). On the other hand, none of the four PoCG devices fulfilled the minimum accuracy measurement set by the ISO 15197:2003 and ISO 15197:2013 standards. In addition, the linear regression analysis revealed that all four selected PoCG devices overestimated the glucose concentrations. Overall, the measurements of the four selected PoCG devices correlated poorly with the standard reference method. Therefore, before introducing PoCG devices to the market, there should be a standardized evaluation platform for validation. Further similar large-scale studies on other PoCG devices also need to be undertaken.

  8. Geography and genography: prediction of continental origin using randomly selected single nucleotide polymorphisms

    Directory of Open Access Journals (Sweden)

    Ramoni Marco F

    2007-03-01

    Full Text Available Abstract Background Recent studies have shown that when individuals are grouped on the basis of genetic similarity, group membership corresponds closely to continental origin. There has been considerable debate about the implications of these findings in the context of larger debates about race and the extent of genetic variation between groups. Some have argued that clustering according to continental origin demonstrates the existence of significant genetic differences between groups and that these differences may have important implications for differences in health and disease. Others argue that clustering according to continental origin requires the use of large amounts of genetic data or specifically chosen markers and is indicative only of very subtle genetic differences that are unlikely to have biomedical significance. Results We used small numbers of randomly selected single nucleotide polymorphisms (SNPs from the International HapMap Project to train naïve Bayes classifiers for prediction of ancestral continent of origin. Predictive accuracy was tested on two independent data sets. Genetically similar groups should be difficult to distinguish, especially if only a small number of genetic markers are used. The genetic differences between continentally defined groups are sufficiently large that one can accurately predict ancestral continent of origin using only a minute, randomly selected fraction of the genetic variation present in the human genome. Genotype data from only 50 random SNPs was sufficient to predict ancestral continent of origin in our primary test data set with an average accuracy of 95%. Genetic variations informative about ancestry were common and widely distributed throughout the genome. Conclusion Accurate characterization of ancestry is possible using small numbers of randomly selected SNPs. 
The results presented here show how investigators conducting genetic association studies can use small numbers of arbitrarily
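
    The classifier behind these results is simple to reproduce: a categorical naive Bayes over genotypes coded as 0/1/2 allele counts, with Laplace smoothing. A minimal sketch on invented toy data (not HapMap genotypes):

```python
import math
from collections import defaultdict

def train_nb(genotypes, labels, alpha=1.0):
    """Categorical naive Bayes over SNP genotypes coded 0/1/2
    (copies of the minor allele), with Laplace smoothing `alpha`."""
    counts = defaultdict(lambda: defaultdict(lambda: [alpha] * 3))
    class_n = defaultdict(int)
    for g, y in zip(genotypes, labels):
        class_n[y] += 1
        for j, v in enumerate(g):
            counts[y][j][v] += 1
    model = {}
    for y, n in class_n.items():
        snp_logp = []
        for j in range(len(genotypes[0])):
            total = sum(counts[y][j])
            snp_logp.append([math.log(c / total) for c in counts[y][j]])
        model[y] = (math.log(n / len(labels)), snp_logp)
    return model

def predict_nb(model, genotype):
    """Return the class with the highest posterior log-probability."""
    def log_posterior(y):
        log_prior, snp_logp = model[y]
        return log_prior + sum(snp_logp[j][v] for j, v in enumerate(genotype))
    return max(model, key=log_posterior)

# Toy data: class "A" carries mostly 0 alleles, class "B" mostly 2.
genotypes = [[0, 0, 0], [0, 1, 0], [2, 2, 2], [2, 1, 2]]
labels = ["A", "A", "B", "B"]
model = train_nb(genotypes, labels)
print(predict_nb(model, [0, 0, 1]))  # → A
print(predict_nb(model, [2, 2, 1]))  # → B
```

The naive-independence assumption is what lets as few as 50 markers carry useful ancestry signal: each SNP contributes its own small log-likelihood term, and the terms simply add.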

  9. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    Science.gov (United States)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach the investment goal, one has to select a combination of securities among different portfolios containing large numbers of securities. Past records of each security alone do not guarantee its future return. As there are many uncertain factors which directly or indirectly influence the stock market, and there are also some newer stock markets which do not have enough historical data, experts' expectations and experience must be combined with the past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors on the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred as it uses the semi-absolute deviation of the rate of return of a portfolio instead of the variance as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from the BSE is used for illustration.
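
    The risk measure at the heart of the λ-MSAD model, semi-absolute deviation, counts only returns that fall below the portfolio mean. A minimal sketch of that computation (the λ weighting and the ACO solver are not reproduced):

```python
def semi_absolute_deviation(returns, weights):
    """Mean semi-absolute deviation: average shortfall of the portfolio
    return below its own mean (only downside deviations count as risk).
    `returns` is a list of scenarios, each a list of per-security returns."""
    portfolio = [sum(w * r for w, r in zip(weights, scenario))
                 for scenario in returns]
    mean = sum(portfolio) / len(portfolio)
    return sum(max(mean - x, 0.0) for x in portfolio) / len(portfolio)

# Two equally likely scenarios for a single security: returns 0.1 and 0.3.
# Mean is 0.2; only the 0.1 scenario falls short, by 0.1, so SAD = 0.05.
print(semi_absolute_deviation([[0.1], [0.3]], [1.0]))  # → 0.05
```

Because the shortfall terms are piecewise linear in the weights, minimizing this measure subject to budget and return constraints is an LP, which is the reduction to an LPP the abstract refers to.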

  10. Pediatric selective mutism therapy: a randomized controlled trial.

    Science.gov (United States)

    Esposito, Maria; Gimigliano, Francesca; Barillari, Maria R; Precenzano, Francesco; Ruberto, Maria; Sepe, Joseph; Barillari, Umberto; Gimigliano, Raffaele; Militerni, Roberto; Messina, Giovanni; Carotenuto, Marco

    2017-10-01

    Selective mutism (SM) is a rare disease in children coded by DSM-5 as an anxiety disorder. Despite the disabling nature of the disease, there is still no specific treatment. The aims of this study were to verify the efficacy of a six-month standard psychomotor treatment, and the positive changes in lifestyle, in a population of children affected by SM. Randomized controlled trial registered in the European Clinical Trials Registry (EudraCT 2015-001161-36). University third-level centre (Child and Adolescent Neuropsychiatry Clinic). The study population was composed of 67 children in group A (psychomotricity treatment) (35 M, mean age 7.84±1.15) and 71 children in group B (behavioral and educational counseling) (37 M, mean age 7.75±1.36). Psychomotor treatment was administered by trained child therapists in residential settings three times per week. Each child was treated for the whole period by the same therapist and all the therapists shared the same protocol. The standard psychomotor session length is 45 minutes. At T0 and after 6 months (T1) of treatment, patients underwent a behavioral and SM severity assessment. To verify the effects of the psychomotor management, the Child Behavior Checklist questionnaire (CBCL) and Selective Mutism Questionnaire (SMQ) were administered to the parents. After 6 months of psychomotor treatment, SM children showed a significant reduction in CBCL scores such as social relations, anxious/depressed, social problems and total problems (P < 0.05), supporting its use in selective mutism, even if further studies are needed. The present study identifies psychomotricity as a safe and effective therapy for pediatric selective mutism.

  11. Free Energy Self-Averaging in Protein-Sized Random Heteropolymers

    International Nuclear Information System (INIS)

    Chuang, Jeffrey; Grosberg, Alexander Yu.; Kardar, Mehran

    2001-01-01

    Current theories of heteropolymers are inherently macroscopic, but are applied to mesoscopic proteins. To compute the free energy over sequences, one assumes self-averaging -- a property established only in the macroscopic limit. By enumerating the states and energies of compact 18, 27, and 36mers on a lattice with an ensemble of random sequences, we test the self-averaging approximation. We find that fluctuations in the free energy between sequences are weak, and that self-averaging is valid at the scale of real proteins. The results validate sequence design methods which exponentially speed up computational design and simplify experimental realizations.

  12. Comparison of methods for detection and enumeration of airborne microorganisms collected by liquid impingement.

    OpenAIRE

    Terzieva, S; Donnelly, J; Ulevicius, V; Grinshpun, S A; Willeke, K; Stelma, G N; Brenner, K P

    1996-01-01

    Bacterial agents and cell components can be spread as bioaerosols, producing infections and asthmatic problems. This study compares four methods for the detection and enumeration of aerosolized bacteria collected in an AGI-30 impinger. Changes in the total and viable concentrations of Pseudomonas fluorescens in the collection fluid with respect to time of impingement were determined. Two direct microscopic methods (acridine orange and BacLight) and aerodynamic aerosol-size spectrometry (Aeros...

  13. Primitive polynomials selection method for pseudo-random number generator

    Science.gov (United States)

    Anikin, I. V.; Alnajjar, Kh

    2018-01-01

    In this paper we suggest a method for selecting primitive polynomials of a special type. Such polynomials can be efficiently used as characteristic polynomials for linear feedback shift registers in pseudo-random number generators. The proposed method consists of two basic steps: finding minimum-cost irreducible polynomials of the desired degree and applying primitivity tests to get the primitive ones. Finally, two primitive polynomials found by the proposed method were used in a pseudo-random number generator based on fuzzy logic (FRNG) which had been suggested before by the authors. The sequences generated by the new version of FRNG have low correlation magnitude, high linear complexity and less power consumption, are more balanced, and have better statistical properties.
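
    Why primitivity matters for LFSR-based generators: a primitive characteristic polynomial of degree n drives the register through all 2^n - 1 nonzero states before repeating, giving the maximal period. A minimal Fibonacci-LFSR sketch illustrating this (the example polynomials are chosen here for illustration, not taken from the paper):

```python
def lfsr_period(taps, nbits, seed=1):
    """Length of the state cycle of an nbits-wide Fibonacci LFSR whose
    characteristic polynomial has the given nonzero tap exponents
    (plus the constant term 1). Returns None if the cycle misses the seed."""
    state = seed
    for step in range(1, (1 << nbits) + 1):
        feedback = 0
        for t in taps:
            feedback ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (feedback << (nbits - 1))
        if state == seed:
            return step
    return None

# x^4 + x + 1 is primitive: maximal period 2**4 - 1 = 15.
print(lfsr_period([4, 1], 4))        # → 15
# x^4 + x^3 + x^2 + x + 1 is irreducible but NOT primitive: period only 5.
print(lfsr_period([4, 3, 2, 1], 4))  # → 5
```

Irreducibility alone is not enough, as the second example shows; the primitivity test in the second step of the proposed method is what rules such polynomials out.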

  14. Development and application of a new method for specific and sensitive enumeration of spores of nonproteolytic Clostridium botulinum types B, E, and F in foods and food materials.

    Science.gov (United States)

    Peck, Michael W; Plowman, June; Aldus, Clare F; Wyatt, Gary M; Izurieta, Walter Penaloza; Stringer, Sandra C; Barker, Gary C

    2010-10-01

    The highly potent botulinum neurotoxins are responsible for botulism, a severe neuroparalytic disease. Strains of nonproteolytic Clostridium botulinum form neurotoxins of types B, E, and F and are the main hazard associated with minimally heated refrigerated foods. Recent developments in quantitative microbiological risk assessment (QMRA) and food safety objectives (FSO) have made food safety more quantitative and include, as inputs, probability distributions for the contamination of food materials and foods. A new method that combines a selective enrichment culture with multiplex PCR has been developed and validated to enumerate specifically the spores of nonproteolytic C. botulinum. Key features of this new method include the following: (i) it is specific for nonproteolytic C. botulinum (and does not detect proteolytic C. botulinum), (ii) the detection limit has been determined for each food tested (using carefully structured control samples), and (iii) a low detection limit has been achieved by the use of selective enrichment and large test samples. The method has been used to enumerate spores of nonproteolytic C. botulinum in 637 samples of 19 food materials included in pasta-based minimally heated refrigerated foods and in 7 complete foods. A total of 32 samples (5 egg pastas and 27 scallops) contained spores of nonproteolytic C. botulinum type B or F. The majority of samples contained <100 spores/kg, but one sample of scallops contained 444 spores/kg. Nonproteolytic C. botulinum type E was not detected. Importantly, for QMRA and FSO, the construction of probability distributions will enable the frequency of packs containing particular levels of contamination to be determined.

  15. Alternative microbial methods: An overview and selection criteria.

    Science.gov (United States)

    Jasson, Vicky; Jacxsens, Liesbeth; Luning, Pieternel; Rajkovic, Andreja; Uyttendaele, Mieke

    2010-09-01

    This study provides an overview of, and selection criteria for, methods other than the reference method for microbial analysis of foods. In a first part an overview of the general characteristics of available rapid methods, both for enumeration and detection, is given with reference to relevant bibliography. Perspectives on future development and the potential of rapid methods for routine application in food diagnostics are discussed. As various alternative "rapid" methods in different formats are available on the market, it can be very difficult for a food business operator or a control authority to select the most appropriate method for its purpose. Validation of a method by a third party, according to an internationally accepted protocol based upon ISO 16140, may increase confidence in the performance of a method. A list of currently validated methods for the enumeration of both utility indicators (aerobic plate count) and hygiene indicators (Enterobacteriaceae, Escherichia coli, coagulase-positive Staphylococcus), as well as for the detection of the four major pathogens (Salmonella spp., Listeria monocytogenes, E. coli O157 and Campylobacter spp.), is included with reference to relevant websites to check for updates. In a second part of this study, selection criteria are introduced to underpin the choice of the appropriate method(s) for a defined application. The selection criteria link the definition of the context in which the user of the method functions - and thus the prospective use of the microbial test results - with the technical information on the method and its operational requirements and sustainability. The selection criteria can help the end user of the method to obtain a systematic insight into all relevant factors to be taken into account for selection of a method for microbial analysis.

  16. 75 FR 1589 - Submission for OMB Review; Comment Request

    Science.gov (United States)

    2010-01-12

    ... testing life cycle. Four panels of random digit dialing (RDD) respondents will be interviewed during May... coverage error (omissions and erroneous enumerations) for housing units and persons in housing units. The... inclusions. The 2010 CCM will be comprised of two samples selected to measure census coverage of housing...

  17. An interlaboratory study to find an alternative to the MPN technique for enumerating Escherichia coli in shellfish

    DEFF Research Database (Denmark)

    Ogden, I.D.; Brown, G.C.; Gallacher, S.

    1998-01-01

    Nine laboratories in eight countries tested 16 batches of common mussels (Mytilus edulis) over a 32 week period in order to find an alternative to the Most Probable Number (MPN) technique to enumerate E. coli. The alternatives investigated included the 3M Petrifilm system, the Merck Chromocult agar...
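
    For context, the MPN technique being compared estimates organism density by maximum likelihood from the pattern of positive tubes in a dilution series. A minimal sketch solving the MPN score equation by bisection (a generic illustration, not the ISO or laboratory protocol):

```python
import math

def mpn_estimate(series):
    """Maximum-likelihood most-probable-number estimate from a dilution
    series given as (tubes, positives, volume) triples. Returns the organism
    density solving the MPN score equation (no finite MLE exists when every
    tube is positive)."""
    target = sum((n - p) * v for n, p, v in series)

    def score(lam):
        # Derivative of the log-likelihood in lambda, with the constant part
        # collected into `target`; the root of this function is the MPN.
        s = -target
        for _, p, v in series:
            x = lam * v
            if p and x < 700:  # guard against exp overflow; term ~ 0 anyway
                s += p * v / math.expm1(x)
        return s

    lo, hi = 1e-9, 1e9
    for _ in range(200):       # geometric bisection across 18 decades
        mid = (lo * hi) ** 0.5
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5

# Classic 3-tube series at 0.1, 0.01 and 0.001 g with a 3-1-0 positive pattern:
series = [(3, 3, 0.1), (3, 1, 0.01), (3, 0, 0.001)]
print(round(mpn_estimate(series)))  # → 43, matching the tabulated 3-1-0 value
```

The published MPN tables are precomputed solutions of exactly this likelihood equation, which is why the technique is slow in the laboratory (many tubes, long incubations) but simple to evaluate numerically.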

  18. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    Science.gov (United States)

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  19. Materials selection for oxide-based resistive random access memories

    International Nuclear Information System (INIS)

    Guo, Yuzheng; Robertson, John

    2014-01-01

    The energies of atomic processes in resistive random access memories (RRAMs) are calculated for four typical oxides, HfO2, TiO2, Ta2O5, and Al2O3, to define a materials selection process. O vacancies have the lowest defect formation energy in the O-poor limit and dominate the processes. A band diagram defines the operating Fermi energy and O chemical potential range. It is shown how the scavenger metal can be used to vary the O vacancy formation energy, via controlling the O chemical potential, and the mean Fermi energy. The high endurance of Ta2O5 RRAM is related to its more stable amorphous phase and the adaptive lattice rearrangements of its O vacancy.

  20. Detection and enumeration of Salmonella enteritidis in homemade ice cream associated with an outbreak: comparison of conventional and real-time PCR methods.

    Science.gov (United States)

    Seo, K H; Valentin-Bon, I E; Brackett, R E

    2006-03-01

    Salmonellosis caused by Salmonella Enteritidis (SE) is a significant cause of foodborne illness in the United States. Consumption of undercooked eggs and egg-containing products has been the primary risk factor for the disease. The importance of bacterial enumeration techniques has been strongly emphasized because of quantitative risk analysis of SE in shell eggs. Traditional enumeration methods mainly depend on slow and tedious most-probable-number (MPN) methods. Therefore, specific, sensitive, and rapid methods for SE quantitation are needed to collect sufficient data for risk assessment and food safety policy development. We previously developed a real-time quantitative PCR assay for the direct detection and enumeration of SE and, in this study, applied it to naturally contaminated ice cream samples with and without enrichment. The detection limit of the real-time PCR assay was determined with artificially inoculated ice cream. When applied to the direct detection and quantification of SE in ice cream, the real-time PCR assay was as sensitive as the conventional plate count method in frequency of detection. However, populations of SE derived from real-time quantitative PCR were approximately 1 log higher than provided by MPN and CFU values obtained by conventional culture methods. The detection and enumeration of SE in naturally contaminated ice cream can be completed in 3 h by this real-time PCR method, whereas the cultural enrichment method requires 5 to 7 days. A commercial immunoassay for the specific detection of SE was also included in the study. The real-time PCR assay proved to be a valuable tool that may be useful to the food industry in monitoring its processes to improve product quality and safety.

  1. Development and Application of a New Method for Specific and Sensitive Enumeration of Spores of Nonproteolytic Clostridium botulinum Types B, E, and F in Foods and Food Materials ▿

    Science.gov (United States)

    Peck, Michael W.; Plowman, June; Aldus, Clare F.; Wyatt, Gary M.; Penaloza Izurieta, Walter; Stringer, Sandra C.; Barker, Gary C.

    2010-01-01

    The highly potent botulinum neurotoxins are responsible for botulism, a severe neuroparalytic disease. Strains of nonproteolytic Clostridium botulinum form neurotoxins of types B, E, and F and are the main hazard associated with minimally heated refrigerated foods. Recent developments in quantitative microbiological risk assessment (QMRA) and food safety objectives (FSO) have made food safety more quantitative and include, as inputs, probability distributions for the contamination of food materials and foods. A new method that combines a selective enrichment culture with multiplex PCR has been developed and validated to enumerate specifically the spores of nonproteolytic C. botulinum. Key features of this new method include the following: (i) it is specific for nonproteolytic C. botulinum (and does not detect proteolytic C. botulinum), (ii) the detection limit has been determined for each food tested (using carefully structured control samples), and (iii) a low detection limit has been achieved by the use of selective enrichment and large test samples. The method has been used to enumerate spores of nonproteolytic C. botulinum in 637 samples of 19 food materials included in pasta-based minimally heated refrigerated foods and in 7 complete foods. A total of 32 samples (5 egg pastas and 27 scallops) contained spores of nonproteolytic C. botulinum type B or F. The majority of samples contained <100 spores/kg, but one sample of scallops contained 444 spores/kg. Nonproteolytic C. botulinum type E was not detected. Importantly, for QMRA and FSO, the construction of probability distributions will enable the frequency of packs containing particular levels of contamination to be determined. PMID:20709854

  2. The influence of the microbial quality of wastewater, lettuce cultivars and enumeration technique when estimating the microbial contamination of wastewater-irrigated lettuce.

    Science.gov (United States)

    Makkaew, P; Miller, M; Cromar, N J; Fallowfield, H J

    2017-04-01

    This study investigated the volume of wastewater retained on the surface of three different varieties of lettuce, Iceberg, Cos, and Oak leaf, following submersion in wastewater of different microbial qualities (10, 10^2, 10^3, and 10^4 E. coli MPN/100 mL) as a surrogate method for estimating the contamination of spray-irrigated lettuce. Uniquely, Escherichia coli was enumerated, after submersion, on both the outer and inner leaves and in a composite sample of lettuce. E. coli were enumerated using two techniques: firstly, directly from samples of leaves (the direct method); secondly, using an indirect method, where the E. coli concentrations were estimated from the volume of wastewater retained by the lettuce and the E. coli concentration of the wastewater. The results showed that different varieties of lettuce retained significantly different volumes of wastewater, while no significant differences (p > 0.01) were detected between E. coli counts obtained from different parts of the lettuce, nor between the direct and indirect enumeration methods. Statistically significant linear relationships were derived relating the E. coli concentration of the wastewater in which the lettuces were submerged to the subsequent E. coli count on each variety of lettuce.
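    The indirect enumeration method described above multiplies the volume of wastewater retained by the leaves by the concentration of the wastewater. A minimal sketch of that calculation (function and variable names are ours, not the study's):

    ```python
    def indirect_e_coli_count(retained_volume_ml, wastewater_mpn_per_100ml):
        """Estimate the E. coli load on a lettuce sample from the volume of
        wastewater it retained and the wastewater's E. coli concentration."""
        # MPN/100 mL -> MPN/mL, then scale by the retained volume
        return retained_volume_ml * wastewater_mpn_per_100ml / 100.0

    # e.g. a lettuce retaining 2.5 mL of wastewater at 10**4 MPN/100 mL
    estimate = indirect_e_coli_count(2.5, 10**4)  # 250.0 MPN per sample
    ```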

  3. Specificity for field enumeration of Escherichia coli in tropical surface waters

    DEFF Research Database (Denmark)

    Jensen, Peter Kjær Mackie; Aalbaek, B; Aslam, R

    2001-01-01

    In remote rural areas in developing countries, bacteriological monitoring often depends on the use of commercial field media. This paper evaluates a commercial field medium used for the enumeration of Escherichia coli in different surface waters under primitive field conditions in rural Pakistan. In order to verify the field kit, 117 presumptive E. coli isolates were tested, finding a specificity of only 40%. By excluding some strains based on colony colours, the calculated specificity could be increased to 65%. Thus, it is suggested that, prior to use in a tropical environment, the specificity of any commercial medium should be tested with representative tropical isolates.

  4. Emergence of multilevel selection in the prisoner's dilemma game on coevolving random networks

    International Nuclear Information System (INIS)

    Szolnoki, Attila; Perc, Matjaz

    2009-01-01

    We study the evolution of cooperation in the prisoner's dilemma game, whereby a coevolutionary rule is introduced that molds the random topology of the interaction network in two ways. First, existing links are deleted whenever a player adopts a new strategy or its degree exceeds a threshold value; second, new links are added randomly after a given number of game iterations. These coevolutionary processes correspond to the generic formation of new links and deletion of existing links that, especially in human societies, appear frequently as a consequence of ongoing socialization, change of lifestyle or death. Due to the counteraction of deletions and additions of links the initial heterogeneity of the interaction network is qualitatively preserved, and thus cannot be held responsible for the observed promotion of cooperation. Indeed, the coevolutionary rule evokes the spontaneous emergence of a powerful multilevel selection mechanism, which despite the sustained random topology of the evolving network, maintains cooperation across the whole span of defection temptation values.
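    The coevolutionary rule described above can be sketched as a simple simulation. The following is an illustrative reconstruction under our own assumptions (weak prisoner's dilemma payoffs R=1, P=S=0, T=b; all parameter names and values are ours), not the authors' code:

    ```python
    import random

    def coevolving_pd(n=100, p_link=0.05, b=1.5, degree_max=20,
                      add_every=10, steps=200, seed=1):
        """Prisoner's dilemma on a coevolving random graph: imitate a
        better-earning neighbour, delete a link when a player changes
        strategy or its degree exceeds a threshold, and add a random
        link every `add_every` iterations."""
        rng = random.Random(seed)
        nbrs = {i: set() for i in range(n)}
        for i in range(n):                       # Erdos-Renyi start
            for j in range(i + 1, n):
                if rng.random() < p_link:
                    nbrs[i].add(j); nbrs[j].add(i)
        strat = [rng.choice((0, 1)) for _ in range(n)]  # 1 = cooperate

        def payoff(i):
            # only games against cooperators pay: R=1 for C, T=b for D
            return sum(1 if strat[i] else b for j in nbrs[i] if strat[j])

        for t in range(steps):
            i = rng.randrange(n)
            if nbrs[i]:
                j = rng.choice(sorted(nbrs[i]))
                if payoff(j) > payoff(i) and strat[j] != strat[i]:
                    strat[i] = strat[j]          # imitation
                    k = rng.choice(sorted(nbrs[i]))  # strategy change:
                    nbrs[i].discard(k); nbrs[k].discard(i)  # delete a link
            if len(nbrs[i]) > degree_max:        # degree threshold: prune
                k = rng.choice(sorted(nbrs[i]))
                nbrs[i].discard(k); nbrs[k].discard(i)
            if t % add_every == 0:               # periodic random addition
                a, c = rng.sample(range(n), 2)
                nbrs[a].add(c); nbrs[c].add(a)
        return strat, nbrs
    ```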

  5. Algebraic computability and enumeration models recursion theory and descriptive complexity

    CERN Document Server

    Nourani, Cyrus F

    2016-01-01

    This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas on pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Functorial models computability is presented with algebraic trees realizing intuitionistic type...

  6. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  7. Optimization of the Dutch Matrix Test by Random Selection of Sentences From a Preselected Subset

    Directory of Open Access Journals (Sweden)

    Rolph Houben

    2015-04-01

    Matrix tests are available for speech recognition testing in many languages. For an accurate measurement, a steep psychometric function of the speech materials is required. For existing tests, it would be beneficial if it were possible to further optimize the available materials by increasing the function’s steepness. The objective is to show if the steepness of the psychometric function of an existing matrix test can be increased by selecting a homogeneous subset of recordings with the steepest sentence-based psychometric functions. We took data from a previous multicenter evaluation of the Dutch matrix test (45 normal-hearing listeners). Based on half of the data set, first the sentences (140 out of 311) with a similar speech reception threshold and with the steepest psychometric function (≥9.7%/dB) were selected. Subsequently, the steepness of the psychometric function for this selection was calculated from the remaining (unused) second half of the data set. The calculation showed that the slope increased from 10.2%/dB to 13.7%/dB. The resulting subset did not allow the construction of enough balanced test lists. Therefore, the measurement procedure was changed to randomly select the sentences during testing. Random selection may interfere with a representative occurrence of phonemes. However, in our material, the median phonemic occurrence remained close to that of the original test. This finding indicates that phonemic occurrence is not a critical factor. The work highlights the possibility that existing speech tests might be improved by selecting sentences with a steep psychometric function.

  8. Random drift versus selection in academic vocabulary: an evolutionary analysis of published keywords.

    Science.gov (United States)

    Bentley, R Alexander

    2008-08-27

    The evolution of vocabulary in academic publishing is characterized via keyword frequencies recorded in the ISI Web of Science citations database. In four distinct case-studies, evolutionary analysis of keyword frequency change through time is compared to a model of random copying used as the null hypothesis, such that selection may be identified against it. The case studies from the physical sciences indicate greater selection in keyword choice than in the social sciences. Similar evolutionary analyses can be applied to a wide range of phenomena; wherever the popularity of multiple items through time has been recorded, as with web searches, or sales of popular music and books, for example.
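    The "random copying" null model used above can be sketched as a Wright-Fisher-style simulation: each new keyword use copies a use from the previous time step, except that with some small probability it is a novel keyword. This is an illustrative reconstruction (all parameter names and values are ours):

    ```python
    import random

    def random_copying(n_uses=1000, mu=0.01, steps=100, seed=0):
        """Neutral random-copying model of keyword popularity: returns the
        keyword frequency counts after `steps` generations."""
        rng = random.Random(seed)
        pop = list(range(n_uses))        # start with all-distinct keywords
        next_label = n_uses
        for _ in range(steps):
            new_pop = []
            for _ in range(n_uses):
                if rng.random() < mu:    # innovation: a brand-new keyword
                    new_pop.append(next_label)
                    next_label += 1
                else:                    # copy a randomly chosen earlier use
                    new_pop.append(rng.choice(pop))
            pop = new_pop
        counts = {}
        for k in pop:
            counts[k] = counts.get(k, 0) + 1
        return counts
    ```

    Keyword frequency trajectories observed in real data can then be compared against the drift expected under this neutral process, so that selection is identified as departure from it.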

  9. Use of enrichment real-time PCR to enumerate salmonella on chicken parts.

    Science.gov (United States)

    Oscar, T P

    2014-07-01

    Salmonella bacteria that survive cooking or that cross-contaminate other food during meal preparation and serving represent primary routes of consumer exposure to this pathogen from chicken. In the present study, enrichment real-time PCR (qPCR) was used to enumerate Salmonella bacteria that contaminate raw chicken parts at retail or that cross-contaminate cooked chicken during simulated meal preparation and serving. Whole raw chickens obtained at retail were partitioned into wings, breasts, thighs, and drumsticks using a sterilized knife and cutting board, which were then used to partition a cooked chicken breast to assess cross-contamination. After enrichment in buffered peptone water (400 ml, 8 h, 40°C, 80 rpm), subsamples were used for qPCR and cultural isolation of Salmonella. In some experiments, chicken parts were spiked with 0 to 3.6 log of Salmonella Typhimurium var. 5- to generate a standard curve for enumeration by qPCR. Of 10 raw chickens examined, 7 (70%) had one or more parts contaminated with Salmonella. Of 80 raw parts examined, 15 (19%) were contaminated with Salmonella. Of 20 cooked chicken parts examined, 2 (10%) were cross-contaminated with Salmonella. Predominant serotypes identified were Typhimurium (71%) and its variants (var. 5-, monophasic, and nonmotile) and Kentucky (18%). The number of Salmonella bacteria on contaminated parts ranged from one to two per part. Results of this study indicated that retail chicken parts examined were contaminated with low levels of Salmonella, which resulted in low levels of cross-contamination during simulated meal preparation and serving. Thus, if consumers properly handle and prepare the chicken, it should pose no or very low risk of consumer exposure to Salmonella.
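    Enumeration from spiked samples, as above, relies on a standard curve relating qPCR cycle threshold (Ct) to the log of the inoculum. A minimal sketch of that calibration (the numbers below are made-up examples, not the study's data):

    ```python
    def fit_standard_curve(log_counts, ct_values):
        """Least-squares line Ct = slope * log10(count) + intercept,
        the usual form of a qPCR standard curve."""
        n = len(log_counts)
        mx = sum(log_counts) / n
        my = sum(ct_values) / n
        sxx = sum((x - mx) ** 2 for x in log_counts)
        sxy = sum((x - mx) * (y - my) for x, y in zip(log_counts, ct_values))
        slope = sxy / sxx
        return slope, my - slope * mx

    def log_count_from_ct(ct, slope, intercept):
        """Invert the curve to estimate log10 count from an observed Ct."""
        return (ct - intercept) / slope

    # hypothetical spiking data; a perfectly efficient assay gives
    # roughly -3.3 Ct per log10 of template
    slope, intercept = fit_standard_curve([1, 2, 3], [35.0, 31.7, 28.4])
    ```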

  10. Multicentre evaluation of stable reference whole blood for enumeration of lymphocyte subsets by flow cytometry.

    Science.gov (United States)

    Edwards, Cherry; Belgrave, Danielle; Janossy, George; Bradley, Nicholas J; Stebbings, Richard; Gaines-Das, Rose; Thorpe, Robin; Sawle, Alex; Arroz, Maria Jorge; Brando, Bruno; Gratama, Jan Willem; Orfao de Matos, Alberto; Papa, Stephano; Papamichail, Michael; Lenkei, Rodica; Rothe, Gregor; Barnett, David

    2005-06-22

    BACKGROUND: Clinical indications for lymphocyte subset enumeration by flow cytometry include monitoring of disease progression and timing of therapeutic intervention in infection with human immunodeficiency virus. Until recently, international standardisation has not been possible due to a lack of suitable stable reference material. METHODS: This study consisted of two trials of a stabilised whole blood preparation. Eleven participants were sent two standard protocols for staining plus gating strategy and asked to report absolute counts for lymphocyte subsets. RESULTS: No significant difference was detected between the two methods when results from the two assays and all partners were pooled. Significant differences in results from the different partners were observed. However, representative mean counts were obtained for geometric means, geometric coefficients of variation, and 95% confidence intervals for CD3 (910 cells/μl, 9%, and 888 to 933, respectively), CD4 (495 cells/μl, 12%, and 483 to 507), and CD8 (408 cells/μl, 13%, and 393 to 422). CONCLUSION: We have introduced a stabilised blood preparation and a well-characterized biological standard. The availability of this reference material greatly simplifies the validation of new techniques for CD4(+) T-cell enumeration and the expansion of external quality assurance programmes for clinical laboratories, including those that operate in resource-restricted environments. (c) 2005 Wiley-Liss, Inc.

  11. Materials selection for oxide-based resistive random access memories

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yuzheng; Robertson, John [Engineering Department, Cambridge University, Cambridge CB2 1PZ (United Kingdom)

    2014-12-01

    The energies of atomic processes in resistive random access memories (RRAMs) are calculated for four typical oxides, HfO2, TiO2, Ta2O5, and Al2O3, to define a materials selection process. O vacancies have the lowest defect formation energy in the O-poor limit and dominate the processes. A band diagram defines the operating Fermi energy and O chemical potential range. It is shown how the scavenger metal can be used to vary the O vacancy formation energy, via controlling the O chemical potential, and the mean Fermi energy. The high endurance of Ta2O5 RRAM is related to its more stable amorphous phase and the adaptive lattice rearrangements of its O vacancy.

  12. Micro-Randomized Trials: An Experimental Design for Developing Just-in-Time Adaptive Interventions

    Science.gov (United States)

    Klasnja, Predrag; Hekler, Eric B.; Shiffman, Saul; Boruvka, Audrey; Almirall, Daniel; Tewari, Ambuj; Murphy, Susan A.

    2015-01-01

    Objective This paper presents an experimental design, the micro-randomized trial, developed to support optimization of just-in-time adaptive interventions (JITAIs). JITAIs are mHealth technologies that aim to deliver the right intervention components at the right times and locations to optimally support individuals’ health behaviors. Micro-randomized trials offer a way to optimize such interventions by enabling modeling of causal effects and time-varying effect moderation for individual intervention components within a JITAI. Methods The paper describes the micro-randomized trial design, enumerates research questions that this experimental design can help answer, and provides an overview of the data analyses that can be used to assess the causal effects of studied intervention components and investigate time-varying moderation of those effects. Results Micro-randomized trials enable causal modeling of proximal effects of the randomized intervention components and assessment of time-varying moderation of those effects. Conclusions Micro-randomized trials can help researchers understand whether their interventions are having intended effects, when and for whom they are effective, and what factors moderate the interventions’ effects, enabling creation of more effective JITAIs. PMID:26651463

  13. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...

  14. Absolute Enumeration of Probiotic Strains Lactobacillus acidophilus NCFM® and Bifidobacterium animalis subsp. lactis Bl-04® via Chip-Based Digital PCR

    Directory of Open Access Journals (Sweden)

    Sarah J. Z. Hansen

    2018-04-01

    The current standard for enumeration of probiotics to obtain colony forming units by plate counts has several drawbacks: long time to results, high variability and the inability to discern between bacterial strains. Accurate probiotic cell counts are important to confirm the delivery of a clinically documented dose for its associated health benefits. A method is described using chip-based digital PCR (cdPCR) to enumerate Bifidobacterium animalis subsp. lactis Bl-04 and Lactobacillus acidophilus NCFM both as single strains and in combination. Primers and probes were designed to differentiate the target strains against other strains of the same species using known single copy, genetic differences. The assay was optimized to include propidium monoazide pre-treatment to prevent amplification of DNA associated with dead probiotic cells as well as liberation of DNA from cells with intact membranes using bead beating. The resulting assay was able to successfully enumerate each strain whether alone or in multiplex. The cdPCR method had a 4 and 5% relative standard deviation (RSD) for Bl-04 and NCFM, respectively, making it more precise than plate counts with an industry accepted RSD of 15%. cdPCR has the potential to replace traditional plate counts because of its precision, strain specificity and the ability to obtain results in a matter of hours.
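    Digital PCR derives an absolute count from the fraction of positive partitions via a Poisson correction. The abstract does not give the formula, but the standard dPCR calculation (not specific to the chip used in this study) looks like this; partition volume is an illustrative value:

    ```python
    import math

    def copies_per_ul(positive, total, partition_volume_ul):
        """Estimate template concentration from a digital PCR run: the
        fraction of positive partitions gives the Poisson mean copies per
        partition, lambda = -ln(1 - p), which is then scaled by volume."""
        frac_positive = positive / total
        lam = -math.log(1.0 - frac_positive)   # mean copies per partition
        return lam / partition_volume_ul

    # e.g. half of 20,000 partitions positive, 0.001-uL partitions
    conc = copies_per_ul(10_000, 20_000, 0.001)   # ~693 copies/uL
    ```

    The Poisson correction is what lets dPCR count accurately even when some partitions contain more than one template copy.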

  15. From Protocols to Publications: A Study in Selective Reporting of Outcomes in Randomized Trials in Oncology

    Science.gov (United States)

    Raghav, Kanwal Pratap Singh; Mahajan, Sminil; Yao, James C.; Hobbs, Brian P.; Berry, Donald A.; Pentz, Rebecca D.; Tam, Alda; Hong, Waun K.; Ellis, Lee M.; Abbruzzese, James; Overman, Michael J.

    2015-01-01

    Purpose The decision by journals to append protocols to published reports of randomized trials was a landmark event in clinical trial reporting. However, limited information is available on how this initiative affected transparency and selective reporting of clinical trial data. Methods We analyzed 74 oncology-based randomized trials published in Journal of Clinical Oncology, the New England Journal of Medicine, and The Lancet in 2012. To ascertain integrity of reporting, we compared published reports with their respective appended protocols with regard to primary end points, nonprimary end points, unplanned end points, and unplanned analyses. Results A total of 86 primary end points were reported in 74 randomized trials; nine trials had greater than one primary end point. Nine trials (12.2%) had some discrepancy between their planned and published primary end points. A total of 579 nonprimary end points (median, seven per trial) were planned, of which 373 (64.4%; median, five per trial) were reported. A significant positive correlation was found between the number of planned and nonreported nonprimary end points (Spearman r = 0.66; P < .001). Twenty-eight studies (37.8%) reported a total of 65 unplanned end points, 52 (80.0%) of which were not identified as unplanned. Thirty-one (41.9%) and 19 (25.7%) of 74 trials reported a total of 52 unplanned analyses involving primary end points and 33 unplanned analyses involving nonprimary end points, respectively. Studies reported positive unplanned end points and unplanned analyses more frequently than negative outcomes in abstracts (unplanned end points odds ratio, 6.8; P = .002; unplanned analyses odds ratio, 8.4; P = .007). Conclusion Despite public and reviewer access to protocols, selective outcome reporting persists and is a major concern in the reporting of randomized clinical trials. To foster credible evidence-based medicine, additional initiatives are needed to minimize selective reporting. PMID:26304898

  16. Joint random beam and spectrum selection for spectrum sharing systems with partial channel state information

    KAUST Repository

    Abdallah, Mohamed M.

    2013-11-01

    In this work, we develop a joint interference-aware random beam and spectrum selection scheme that provides enhanced performance for the secondary network under the condition that the interference observed at the primary receiver is below a predetermined acceptable value. We consider a secondary link composed of a transmitter equipped with multiple antennas and a single-antenna receiver sharing the same spectrum with a set of primary links composed of a single-antenna transmitter and a single-antenna receiver. The proposed scheme jointly selects a beam, among a set of power-optimized random beams, as well as the primary spectrum that maximizes the signal-to-interference-plus-noise ratio (SINR) of the secondary link while satisfying the primary interference constraint. In particular, we consider the case where the interference level is described by a q-bit description of its magnitude, whereby we propose a technique to find the optimal quantizer thresholds in a mean square error (MSE) sense. © 2013 IEEE.
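    The core selection step, picking the beam/spectrum pair that maximizes the secondary link's SINR subject to the primary interference constraint, can be sketched as a constrained argmax. The data layout below is our assumption, not the paper's formulation:

    ```python
    def select_beam_and_spectrum(sinr, interference, threshold):
        """sinr[b][s]: secondary-link SINR for beam b on spectrum s;
        interference[b][s]: interference caused at the primary receiver.
        Return the admissible (beam, spectrum) pair with maximal SINR,
        or None if no pair satisfies the interference constraint."""
        best = None
        for b, row in enumerate(sinr):
            for s, value in enumerate(row):
                if interference[b][s] <= threshold:
                    if best is None or value > sinr[best[0]][best[1]]:
                        best = (b, s)
        return best

    # beam 1 on spectrum 0 has the highest SINR (12.0) but violates the
    # interference constraint, so beam 0 on spectrum 1 is chosen instead
    pair = select_beam_and_spectrum(
        [[5.0, 9.0], [12.0, 7.0]],
        [[0.1, 0.3], [0.9, 0.2]],
        threshold=0.5)
    ```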

  17. Joint random beam and spectrum selection for spectrum sharing systems with partial channel state information

    KAUST Repository

    Abdallah, Mohamed M.; Sayed, Mostafa M.; Alouini, Mohamed-Slim; Qaraqe, Khalid A.

    2013-01-01

    In this work, we develop a joint interference-aware random beam and spectrum selection scheme that provides enhanced performance for the secondary network under the condition that the interference observed at the primary receiver is below a predetermined acceptable value. We consider a secondary link composed of a transmitter equipped with multiple antennas and a single-antenna receiver sharing the same spectrum with a set of primary links composed of a single-antenna transmitter and a single-antenna receiver. The proposed scheme jointly selects a beam, among a set of power-optimized random beams, as well as the primary spectrum that maximizes the signal-to-interference-plus-noise ratio (SINR) of the secondary link while satisfying the primary interference constraint. In particular, we consider the case where the interference level is described by a q-bit description of its magnitude, whereby we propose a technique to find the optimal quantizer thresholds in a mean square error (MSE) sense. © 2013 IEEE.

  18. The RANDOM computer program: A linear congruential random number generator

    Science.gov (United States)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
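    The linear congruential form x_{n+1} = (a*x_n + c) mod m is simple to sketch. The multiplier and increment below are the well-known Numerical Recipes parameters, chosen for illustration; they are not necessarily the parameters selected in the RANDOM program:

    ```python
    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        """Linear congruential generator: yields the sequence
        x_{n+1} = (a * x_n + c) mod m starting from `seed`."""
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    gen = lcg(42)
    first = next(gen)   # 1083814273 with these parameters
    ```

    Good parameter choices (full-period conditions on a, c, m) are exactly the kind of analysis the RANCYCLE and ARITH programs assist with.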

  19. Analysis and applications of a frequency selective surface via a random distribution method

    International Nuclear Information System (INIS)

    Xie Shao-Yi; Huang Jing-Jian; Yuan Nai-Chang; Liu Li-Guo

    2014-01-01

    A novel frequency selective surface (FSS) for reducing radar cross section (RCS) is proposed in this paper. This FSS is based on the random distribution method, so it can be called random surface. In this paper, the stacked patches serving as periodic elements are employed for RCS reduction. Previous work has demonstrated the efficiency by utilizing the microstrip patches, especially for the reflectarray. First, the relevant theory of the method is described. Then a sample of a three-layer variable-sized stacked patch random surface with a dimension of 260 mm×260 mm is simulated, fabricated, and measured in order to demonstrate the validity of the proposed design. For the normal incidence, the 8-dB RCS reduction can be achieved both by the simulation and the measurement in 8 GHz–13 GHz. The oblique incidence of 30° is also investigated, in which the 7-dB RCS reduction can be obtained in a frequency range of 8 GHz–14 GHz.

  20. Random drift versus selection in academic vocabulary: an evolutionary analysis of published keywords.

    Directory of Open Access Journals (Sweden)

    R Alexander Bentley

    The evolution of vocabulary in academic publishing is characterized via keyword frequencies recorded in the ISI Web of Science citations database. In four distinct case-studies, evolutionary analysis of keyword frequency change through time is compared to a model of random copying used as the null hypothesis, such that selection may be identified against it. The case studies from the physical sciences indicate greater selection in keyword choice than in the social sciences. Similar evolutionary analyses can be applied to a wide range of phenomena; wherever the popularity of multiple items through time has been recorded, as with web searches, or sales of popular music and books, for example.

  1. On theoretical models of gene expression evolution with random genetic drift and natural selection.

    Directory of Open Access Journals (Sweden)

    Osamu Ogasawara

    2009-11-01

    The relative contributions of natural selection and random genetic drift are a major source of debate in the study of gene expression evolution, which is hypothesized to serve as a bridge from molecular to phenotypic evolution. It has been suggested that the conflict between views is caused by the lack of a definite model of the neutral hypothesis, which can describe the long-run behavior of evolutionary change in mRNA abundance. Therefore, previous studies have used inadequate analogies with the neutral prediction of other phenomena, such as amino acid or nucleotide sequence evolution, as the null hypothesis of their statistical inference. In this study, we introduced two novel theoretical models, one based on neutral drift and the other assuming natural selection, by focusing on a common property of the distribution of mRNA abundance among a variety of eukaryotic cells, which reflects the result of long-term evolution. Our results demonstrated that (1) our models can reproduce two independently found phenomena simultaneously: the time development of gene expression divergence and Zipf's law of the transcriptome; (2) cytological constraints can be explicitly formulated to describe long-term evolution; (3) the model assuming that natural selection optimized relative mRNA abundance was more consistent with previously published observations than the model of optimized absolute mRNA abundances. The models introduced in this study give a formulation of evolutionary change in the mRNA abundance of each gene as a stochastic process, on the basis of previously published observations. This model provides a foundation for interpreting observed data in studies of gene expression evolution, including identifying an adequate time scale for discriminating the effect of natural selection from that of random genetic drift of selectively neutral variations.

  2. Enumeration of an extremely high particle-to-PFU ratio for Varicella-zoster virus.

    Science.gov (United States)

    Carpenter, John E; Henderson, Ernesto P; Grose, Charles

    2009-07-01

    Varicella-zoster virus (VZV) is renowned for its low titers. Yet investigations to explore the low infectivity are hampered by the fact that the VZV particle-to-PFU ratio has never been determined with precision. Herein, we accomplish that task by applying newer imaging technology. More than 300 images were taken of VZV-infected cells on 4 different samples at high magnification. We enumerated the total number of viral particles within 25 cm² of the infected monolayer at 415 million. Based on these numbers, the VZV particle:PFU ratio was approximately 40,000:1 for a cell-free inoculum.
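    The arithmetic behind the reported ratio is a single division; the implied PFU count below is back-calculated from the numbers given above, not reported directly by the authors:

    ```python
    total_particles = 415_000_000   # particles enumerated in 25 cm^2
    ratio = 40_000                  # reported particle-to-PFU ratio

    # the infectious titer implied by those two figures
    implied_pfu = total_particles / ratio   # about 1.0e4 PFU in the same area
    ```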

  3. From Protocols to Publications: A Study in Selective Reporting of Outcomes in Randomized Trials in Oncology.

    Science.gov (United States)

    Raghav, Kanwal Pratap Singh; Mahajan, Sminil; Yao, James C; Hobbs, Brian P; Berry, Donald A; Pentz, Rebecca D; Tam, Alda; Hong, Waun K; Ellis, Lee M; Abbruzzese, James; Overman, Michael J

    2015-11-01

    The decision by journals to append protocols to published reports of randomized trials was a landmark event in clinical trial reporting. However, limited information is available on how this initiative affected transparency and selective reporting of clinical trial data. We analyzed 74 oncology-based randomized trials published in Journal of Clinical Oncology, the New England Journal of Medicine, and The Lancet in 2012. To ascertain integrity of reporting, we compared published reports with their respective appended protocols with regard to primary end points, nonprimary end points, unplanned end points, and unplanned analyses. A total of 86 primary end points were reported in 74 randomized trials; nine trials had greater than one primary end point. Nine trials (12.2%) had some discrepancy between their planned and published primary end points. A total of 579 nonprimary end points (median, seven per trial) were planned, of which 373 (64.4%; median, five per trial) were reported. A significant positive correlation was found between the number of planned and nonreported nonprimary end points (Spearman r = 0.66; P < .001). Despite the improved transparency that appended protocols have brought to medicine, additional initiatives are needed to minimize selective reporting. © 2015 by American Society of Clinical Oncology.
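    The Spearman correlation reported above can be sketched in a few lines. The per-trial counts below are invented for illustration (the study's raw data are not in the abstract); the implementation computes the rank correlation directly.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    (No tie correction -- adequate for untied toy data like this.)"""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Illustrative (not the study's) per-trial counts: trials that plan more
# nonprimary end points tend to leave more of them unreported.
planned     = np.array([3, 5, 6, 8, 10, 12, 15, 20])
nonreported = np.array([0, 1, 3, 2,  4,  5,  6, 10])
rho = spearman_rho(planned, nonreported)
```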

  4. A system for household enumeration and re-identification in densely populated slums to facilitate community research, education, and advocacy.

    Directory of Open Access Journals (Sweden)

    Dana R Thomson

    Full Text Available We devised and implemented an innovative Location-Based Household Coding System (LBHCS) appropriate to a densely populated informal settlement in Mumbai, India. LBHCS codes were designed to double as unique household identifiers and as walking directions; when an entire community is enumerated, LBHCS codes can be used to identify the number of households located per road (or lane) segment. LBHCS was used in community-wide biometric, mental health, diarrheal disease, and water poverty studies. It also facilitated targeted health interventions by a research team of youth from Mumbai, including intensive door-to-door education of residents, targeted follow-up meetings, and a full census. In addition, LBHCS permitted rapid and low-cost preparation of GIS mapping of all households in the slum, and spatial summation and spatial analysis of survey data. LBHCS was an effective, easy-to-use, affordable approach to household enumeration and re-identification in a densely populated informal settlement where alternative satellite imagery and GPS technologies could not be used.
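    The abstract does not specify the actual code format, so the sketch below invents a hypothetical "R<road>-L<lane>-H<household>" scheme purely to illustrate the idea that one string can serve both as a unique identifier and as walking directions, and that per-segment household counts fall out of a simple tally once enumeration is complete.

```python
# Hypothetical sketch in the spirit of LBHCS: the "Rxx-Lxx-Hxxx" format
# below is invented for illustration and is NOT the system's real format.
import re
from collections import Counter

def make_code(road: int, lane: int, household: int) -> str:
    """Encode road segment, lane segment, and household sequence number."""
    return f"R{road:02d}-L{lane:02d}-H{household:03d}"

def parse_code(code: str):
    """Decode a code back into (road, lane, household) walking directions."""
    m = re.fullmatch(r"R(\d{2})-L(\d{2})-H(\d{3})", code)
    if m is None:
        raise ValueError(f"not an LBHCS-style code: {code!r}")
    return tuple(int(g) for g in m.groups())

# After full enumeration, households per (road, lane) segment are a tally:
codes = [make_code(3, 1, h) for h in range(1, 18)] + \
        [make_code(3, 2, h) for h in range(1, 5)]
per_segment = Counter(parse_code(c)[:2] for c in codes)
```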

  5. High Entropy Random Selection Protocols

    NARCIS (Netherlands)

    H. Buhrman (Harry); M. Christandl (Matthias); M. Koucky (Michal); Z. Lotker (Zvi); B. Patt-Shamir; M. Charikar; K. Jansen; O. Reingold; J. Rolim

    2007-01-01

    In this paper, we construct protocols for two parties that do not trust each other, to generate random variables with high Shannon entropy. We improve known bounds for the trade-off between the number of rounds, the length of communication, and the entropy of the outcome.

  6. Integrated Behavior Therapy for Selective Mutism: a randomized controlled pilot study.

    Science.gov (United States)

    Bergman, R Lindsey; Gonzalez, Araceli; Piacentini, John; Keller, Melody L

    2013-10-01

    To evaluate the feasibility, acceptability, and preliminary efficacy of a novel behavioral intervention for reducing symptoms of selective mutism and increasing functional speech. A total of 21 children ages 4 to 8 with primary selective mutism were randomized to 24 weeks of Integrated Behavior Therapy for Selective Mutism (IBTSM) or a 12-week Waitlist control. Clinical outcomes were assessed using blind independent evaluators, parent and teacher reports, and an objective behavioral measure. Treatment recipients completed a three-month follow-up to assess durability of treatment gains. Data indicated increased functional speaking behavior post-treatment as rated by parents and teachers, with a high rate of treatment responders as rated by blind independent evaluators (75%). Conversely, children in the Waitlist comparison group did not experience significant improvements in speaking behaviors. Children who received IBTSM also demonstrated significant improvements in the number of words spoken at school compared to baseline; however, significant group differences did not emerge. Treatment recipients also experienced significant reductions in social anxiety per parent, but not teacher, report. Clinical gains were maintained over the 3-month follow-up. IBTSM appears to be a promising new intervention that is efficacious in increasing functional speaking behaviors, feasible, and acceptable to parents and teachers. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Rapid detection, characterization, and enumeration of foodborne pathogens.

    Science.gov (United States)

    Hoorfar, J

    2011-11-01

    As food safety management further develops, microbiological testing will continue to play an important role in assessing whether Food Safety Objectives are achieved. However, traditional microbiological culture-based methods are limited, particularly in their ability to provide timely data. The present review discusses the reasons for the increasing interest in rapid methods, current developments in the field, the research needs, and the future trends. The advent of biotechnology has introduced new technologies that led to the emergence of rapid diagnostic methods and altered food testing practices. Rapid methods encompass many different detection technologies, including specialized enzyme substrates, antibodies and DNA, ranging from simple differential plating media to the use of sophisticated instruments. The use of non-invasive sampling techniques for live animals especially came into focus with the 1990s outbreak of bovine spongiform encephalopathy that was linked to the human outbreak of Creutzfeldt-Jakob disease. Serology is still an important tool in preventing foodborne pathogens from entering the human food supply through meat and milk from animals. One of the primary uses of rapid methods is for fast screening of large numbers of samples, where most of them are expected to be test-negative, leading to faster product release for sale. This has been the main strength of rapid methods such as real-time Polymerase Chain Reaction (PCR). Enrichment PCR, where a primary culture broth is tested in PCR, is the most common approach in rapid testing. Recent reports show that it is possible both to enrich a sample and enumerate by pathogen-specific real-time PCR, if the enrichment time is short. This can be especially useful in situations where food producers ask for the level of pathogen in a contaminated product. Another key issue is automation, where the key drivers are miniaturization and multiple testing, which mean that not only one instrument is flexible

  8. Field-based random sampling without a sampling frame: control selection for a case-control study in rural Africa.

    Science.gov (United States)

    Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E

    2001-01-01

    Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.

  9. Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.

    Science.gov (United States)

    Ma, Yunbei; Zhou, Xiao-Hua

    2017-02-01

    For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and the difference between covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and the difference between each pair of covariate-specific treatment effect curve over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed method in a real-world data set.

  10. Validation of Microcapillary Flow Cytometry for Community-Based CD4+ T Lymphocyte Enumeration in Remote Burkina Faso

    Science.gov (United States)

    Renault, Cybèle A; Traore, Arouna; Machekano, Rhoderick N; Israelski, Dennis M

    2010-01-01

    Background: CD4+ T lymphocyte enumeration plays a critical role in the initiation and monitoring of HIV-infected patients on antiretroviral therapy. There is an urgent need for low-cost CD4+ enumeration technologies, particularly for use in dry, dusty climates characteristic of many small cities in Sub-Saharan Africa. Design: Cross-sectional study. Methods: Blood samples from 98 HIV-infected patients followed in a community HIV clinic in Ouahigouya, Burkina Faso were obtained for routine CD4+ T lymphocyte count monitoring. The blood samples were divided into two aliquots, on which parallel CD4+ measurements were performed using microcapillary (Guava EasyCD4) and dedicated (Becton Dickinson FACSCount) CD4+ enumeration systems. Spearman rank correlation coefficient was calculated, and the sensitivity, specificity and positive predictive value (PPV) for EasyCD4 <200 cells/µL were determined compared to the reference standard FACSCount CD4 <200 cells/µL. Results: Mean CD4 counts for the EasyCD4 and FACSCount were 313.75 cells/µL and 303.47 cells/µL, respectively. The Spearman rank correlation coefficient was 0.92 (p<0.001). Median values using EasyCD4 were higher than those with the FACSCount (p=0.004). For a CD4 <350 cells/µL, sensitivity of the EasyCD4 was 93.9% (95% CI 85.2-98.3%), specificity was 90.6% (95% CI 75.0-98.0%), and PPV was 95.4% (95% CI 87.1-99.0%). Conclusion: Use of the EasyCD4 system was feasible and highly accurate in the harsh conditions of this remote city in Sub-Saharan Africa, demonstrating acceptable sensitivity and specificity compared to a standard operating system. Microcapillary flow cytometry offers a cost-effective alternative for community-based, point-of-care CD4+ testing and could play a substantial role in scaling up HIV care in remote, resource-limited settings. PMID:21253463
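    Sensitivity, specificity, and PPV at the <350 cells/µL threshold all come from one 2×2 table comparing the index test with the reference standard. A minimal sketch follows; the abstract reports only the derived percentages, so the counts below are illustrative values chosen to be consistent with the reported percentages and n = 98, not the study's raw table.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and PPV from a 2x2 comparison of an
    index test (e.g., EasyCD4) against a reference (e.g., FACSCount)."""
    sensitivity = tp / (tp + fn)   # true positives among reference-positives
    specificity = tn / (tn + fp)   # true negatives among reference-negatives
    ppv = tp / (tp + fp)           # reference-positives among test-positives
    return sensitivity, specificity, ppv

# Illustrative counts only: consistent with sens 93.9%, spec 90.6%,
# PPV 95.4% and n = 98, but not taken from the publication.
sens, spec, ppv = diagnostic_metrics(tp=62, fp=3, fn=4, tn=29)
```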

  11. Selective decontamination in pediatric liver transplants. A randomized prospective study.

    Science.gov (United States)

    Smith, S D; Jackson, R J; Hannakan, C J; Wadowsky, R M; Tzakis, A G; Rowe, M I

    1993-06-01

    Although it has been suggested that selective decontamination of the digestive tract (SDD) decreases postoperative aerobic Gram-negative and fungal infections in orthotopic liver transplantation (OLT), no controlled trials exist in pediatric patients. This prospective, randomized controlled study of 36 pediatric OLT patients examines the effect of short-term SDD on postoperative infection and digestive tract flora. Patients were randomized into two groups. The control group received perioperative parenteral antibiotics only. The SDD group received in addition polymyxin E, tobramycin, and amphotericin B enterally and by oropharyngeal swab postoperatively until oral intake was tolerated (6 +/- 4 days). Indications for operation, preoperative status, age, and intensive care unit and hospital length of stay were no different in SDD (n = 18) and control (n = 18) groups. A total of 14 Gram-negative infections (intraabdominal abscess 7, septicemia 5, pneumonia 1, urinary tract 1) developed in the 36 patients studied. Mortality was not significantly different in the two groups. However, there were significantly fewer patients with Gram-negative infections in the SDD group: 3/18 patients (11%) vs. 11/18 patients (50%) in the control group, P < 0.001. There was also significant reduction in aerobic Gram-negative flora in the stool and pharynx in patients receiving SDD. Gram-positive and anaerobic organisms were unaffected. We conclude that short-term postoperative SDD significantly reduces Gram-negative infections in pediatric OLT patients.

  12. Day-ahead load forecast using random forest and expert input selection

    International Nuclear Information System (INIS)

    Lahouar, A.; Ben Hadj Slama, J.

    2015-01-01

    Highlights: • A model based on random forests for short term load forecast is proposed. • An expert feature selection is added to refine inputs. • Special attention is paid to customers' behavior, load profile and special holidays. • The model is flexible and able to handle complex load signal. • A technical comparison is performed to assess the forecast accuracy. - Abstract: The electrical load forecast is getting more and more important in recent years due to the electricity market deregulation and integration of renewable resources. To overcome the incoming challenges and ensure accurate power prediction for different time horizons, sophisticated intelligent methods are elaborated. Utilization of intelligent forecast algorithms is among the main characteristics of smart grids, and is an efficient tool to face uncertainty. Several crucial tasks of power operators such as load dispatch rely on the short term forecast, thus it should be as accurate as possible. To this end, this paper proposes a short term load predictor, able to forecast the next 24 h of load. Using random forest, characterized by immunity to parameter variations and internal cross validation, the model is constructed following an online learning process. The inputs are refined by expert feature selection using a set of if-then rules, in order to incorporate the user's own specifications about the country's weather or market, and to generalize the forecast ability. The proposed approach is tested on a real historical set from the Tunisian Power Company, and the simulation shows accurate and satisfactory results for one day in advance, with an average error rarely exceeding 2.3%. The model is validated for regular working days and weekends, and special attention is paid to moving holidays, which follow a non-Gregorian calendar.
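    The core day-ahead setup described above (lagged hourly loads in, a random-forest regressor out) can be sketched in a few lines. This is a minimal illustration on synthetic data, not the paper's model: the Tunisian utility data are not public, and the expert if-then input refinement is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic hourly "load" with a daily cycle plus noise -- a stand-in for
# the real utility data used in the paper.
hours = np.arange(24 * 120)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# Each sample: the previous 24 hourly loads; target: the load 24 h after
# the end of that window (a day-ahead horizon).
t_idx = range(24, load.size - 24)
X = np.array([load[t - 24:t] for t in t_idx])
y = np.array([load[t + 23] for t in t_idx])

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:-24], y[:-24])          # hold out the final day
next_day = model.predict(X[-24:])    # 24 hourly day-ahead forecasts
```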

  13. Enumeration and stability analysis of simple periodic orbits in β-Fermi Pasta Ulam lattice

    International Nuclear Information System (INIS)

    Sonone, Rupali L.; Jain, Sudhir R.

    2014-01-01

    We study the well-known one-dimensional problem of N particles with a nonlinear interaction. The special case of quadratic and quartic interaction potential among nearest neighbours is the β-Fermi-Pasta-Ulam model. We enumerate and classify the simple periodic orbits for this system and find the stability zones, employing Floquet theory. Such stability analysis is crucial to understand the transition of FPU lattice from recurrences to globally chaotic behavior, energy transport in lower dimensional system, dynamics of optical lattices and also its impact on shape parameter of bio-polymers such as DNA and RNA.
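    The β-FPU dynamics underlying this analysis are compact enough to sketch. The snippet below integrates the fixed-end β-FPU chain with velocity Verlet, started in the classic lowest-mode initial condition; a full Floquet analysis (integrating the variational equations to get the monodromy matrix) is beyond a snippet, but this is the system whose orbits the paper classifies. Parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def fpu_force(x, beta):
    """beta-FPU chain with fixed ends:
    F_i = (x_{i+1} - 2 x_i + x_{i-1}) + beta [(x_{i+1}-x_i)^3 - (x_i-x_{i-1})^3]."""
    xp = np.concatenate(([0.0], x, [0.0]))   # fixed boundary particles
    d = np.diff(xp)                          # bond stretches
    bond = d + beta * d**3                   # bond tensions
    return bond[1:] - bond[:-1]

def energy(x, p, beta):
    """Kinetic energy plus quadratic + quartic bond potential."""
    xp = np.concatenate(([0.0], x, [0.0]))
    d = np.diff(xp)
    return 0.5 * np.sum(p**2) + np.sum(0.5 * d**2 + 0.25 * beta * d**4)

# Velocity-Verlet integration of N = 8 particles started in mode 1,
# the classic FPU initial condition (illustrative parameters).
N, beta, dt, steps = 8, 1.0, 0.01, 20000
x = 0.5 * np.sin(np.pi * np.arange(1, N + 1) / (N + 1))
p = np.zeros(N)
E0 = energy(x, p, beta)
for _ in range(steps):
    p += 0.5 * dt * fpu_force(x, beta)
    x += dt * p
    p += 0.5 * dt * fpu_force(x, beta)
drift = abs(energy(x, p, beta) - E0) / E0    # symplectic => tiny drift
```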

  15. Comparative evaluation of direct plating and most probable number for enumeration of low levels of Listeria monocytogenes in naturally contaminated ice cream products.

    Science.gov (United States)

    Chen, Yi; Pouillot, Régis; S Burall, Laurel; Strain, Errol A; Van Doren, Jane M; De Jesus, Antonio J; Laasri, Anna; Wang, Hua; Ali, Laila; Tatavarthy, Aparna; Zhang, Guodong; Hu, Lijun; Day, James; Sheth, Ishani; Kang, Jihun; Sahu, Surasri; Srinivasan, Devayani; Brown, Eric W; Parish, Mickey; Zink, Donald L; Datta, Atin R; Hammack, Thomas S; Macarisin, Dumitru

    2017-01-16

    A precise and accurate method for enumeration of low levels of Listeria monocytogenes in foods is critical to a variety of studies. In this study, paired comparison of most probable number (MPN) and direct plating enumeration of L. monocytogenes was conducted on a total of 1730 outbreak-associated ice cream samples that were naturally contaminated with low levels of L. monocytogenes. MPN was performed on all 1730 samples. Direct plating was performed on all samples using the RAPID'L.mono (RLM) agar (1600 samples) and agar Listeria Ottaviani and Agosti (ALOA; 130 samples). Probabilistic analysis with a Bayesian inference model was used to compare paired direct plating and MPN estimates of L. monocytogenes in ice cream samples because assumptions implicit in ordinary least squares (OLS) linear regression analyses were not met for such a comparison. The probabilistic analysis revealed good agreement between the MPN and direct plating estimates, and this agreement showed that the MPN schemes and direct plating schemes using ALOA or RLM evaluated in the present study were suitable for enumerating low levels of L. monocytogenes in these ice cream samples. The statistical analysis further revealed that OLS linear regression analyses of direct plating and MPN data did introduce bias that incorrectly characterized systematic differences between estimates from the two methods. Published by Elsevier B.V.
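    The MPN estimate itself is a maximum-likelihood calculation over a dilution series. The sketch below solves the standard MPN likelihood equation by bisection; the classic 3-tube example is textbook, not this study's data, and the study's Bayesian pairing analysis is not reproduced here.

```python
import math

def mpn_mle(positives, tubes, volumes, lo=1e-6, hi=1e6):
    """Maximum-likelihood MPN (organisms per unit) from a dilution series.
    Each tube receiving volume v is positive with probability
    1 - exp(-lambda * v); solves d(log L)/d(lambda) = 0 by bisection.
    Requires at least one negative tube so the likelihood has a maximum."""
    def score(lam):
        s = 0.0
        for g, n, v in zip(positives, tubes, volumes):
            if g:
                s += g * v * math.exp(-lam * v) / (1.0 - math.exp(-lam * v))
            s -= (n - g) * v
        return s
    for _ in range(200):
        mid = math.sqrt(lo * hi)     # bisect on a log scale
        if score(mid) > 0:           # score decreases in lambda:
            lo = mid                 # positive score => lambda too small
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Classic 3-tube series, 0.1/0.01/0.001 g inocula, pattern (3, 1, 0);
# standard MPN tables give roughly 43/g for this pattern.
mpn = mpn_mle(positives=[3, 1, 0], tubes=[3, 3, 3], volumes=[0.1, 0.01, 0.001])
```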

  16. Distribution of orientation selectivity in recurrent networks of spiking neurons with different random topologies.

    Science.gov (United States)

    Sadeh, Sadra; Rotter, Stefan

    2014-01-01

    Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity.
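    The linear rate-based theory described above can be sketched directly: steady-state rates of a stable random recurrent network driven by weakly tuned input, with an orientation selectivity index (OSI) computed per neuron. This is a minimal illustration of the approach, with invented parameters (network size, coupling scale, input modulation), not the paper's specific model.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200

# Random recurrent coupling, rescaled so the spectral radius is < 1,
# keeping the linearized dynamics (the asynchronous state) stable.
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))

# Weakly tuned feedforward input: baseline plus a small cosine modulation
# around each neuron's randomly assigned preferred orientation.
phi = rng.uniform(0, np.pi, N)
thetas = np.linspace(0, np.pi, 16, endpoint=False)
H = 1.0 + 0.1 * np.cos(2.0 * (thetas[:, None] - phi[None, :]))  # (16, N)

# Steady state of tau * dr/dt = -r + W @ r + h, one row per stimulus,
# rectified so firing rates stay non-negative.
R = np.maximum(np.linalg.solve(np.eye(N) - W, H.T).T, 0.0)

# Orientation selectivity index from the circular mean of each tuning
# curve; the recurrence broadens the distribution of OSI across neurons.
sums = R.sum(axis=0)
active = sums > 1e-9
z = (R[:, active] * np.exp(2j * thetas[:, None])).sum(axis=0)
osi = np.abs(z) / sums[active]
```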

  17. The Application of Digital Pathology to Improve Accuracy in Glomerular Enumeration in Renal Biopsies.

    Directory of Open Access Journals (Sweden)

    Avi Z Rosenberg

    Full Text Available In renal biopsy reporting, quantitative measurements, such as glomerular number and percentage of globally sclerotic glomeruli, are central to diagnostic accuracy and prognosis. The aim of this study is to determine the number of glomeruli and percent globally sclerotic in renal biopsies by means of registration of serial tissue sections and manual enumeration, compared to the numbers in pathology reports from routine light microscopic assessment. We reviewed 277 biopsies from the Nephrotic Syndrome Study Network (NEPTUNE) digital pathology repository, enumerating 9,379 glomeruli by means of whole slide imaging. Glomerular number and the percentage of globally sclerotic glomeruli are values routinely recorded in the official renal biopsy pathology report from the 25 participating centers. Two general trends in reporting were noted: total number per biopsy or average number per level/section. Both of these approaches were assessed for their accuracy in comparison to the analogous numbers of annotated glomeruli on WSI. The number of glomeruli annotated was consistently higher than those reported (p < 0.001); this difference was proportional to the number of glomeruli. In contrast, percentages of globally sclerotic glomeruli were similar when calculated on total glomeruli, but greater in FSGS when calculated on the average number of glomeruli (p < 0.01). The difference in percent globally sclerotic between annotated glomeruli and those recorded in pathology reports was significant when global sclerosis was greater than 40%. Although glass slides were not available for direct comparison to whole slide image annotation, this study indicates that routine manual light microscopy assessment of the number of glomeruli is inaccurate, and the magnitude of this error is proportional to the total number of glomeruli.

  18. The design strategy of selective PTP1B inhibitors over TCPTP.

    Science.gov (United States)

    Li, XiangQian; Wang, LiJun; Shi, DaYong

    2016-08-15

    Protein tyrosine phosphatase 1B (PTP1B) has already been well studied as a highly validated therapeutic target for diabetes and obesity. However, the lack of selectivity limited further studies and clinical applications of PTP1B inhibitors, especially over T-cell protein tyrosine phosphatase (TCPTP). In this review, we enumerate the published specific inhibitors of PTP1B, discuss the structure-activity relationships by analysis of their X-ray structures or docking results, and summarize the characteristic of selectivity related residues and groups. Furthermore, the design strategy of selective PTP1B inhibitors over TCPTP is also proposed. We hope our work could provide an effective way to gain specific PTP1B inhibitors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Factors influencing selection of a HTR for a developing country

    International Nuclear Information System (INIS)

    Karim, C.S.

    1989-01-01

    Consumption of commercial energy and electricity in Bangladesh has to grow rapidly in order to attain socio-economic development of the country. Nuclear power is considered to be an appropriate proposition due to the inadequacy of indigenous primary energy resources. A technical, economic and financial feasibility study of a 300-500 MWe nuclear power plant is underway now. Responses from different suppliers in the SMPR range were enumerated jointly by the Consultants and BAEC under the feasibility study. Criteria for selection of technology and the factors influencing the selection of a Modular HTR for Bangladesh are described in the paper. Some indicative results of cost-economics calculations are included to help form an idea about the various limiting conditions under which an SMPR with the selected technology could become competitive with the other conventional alternatives. Problems in decision-making associated with the uncertainties in estimating plant and fuel cycle costs are enumerated. The implications of not having a reference plant vis-a-vis the advantageous safety features are described to show how these aspects can influence the selection of a new technology like the HTR for a developing country. Financing is identified as the major problem in implementing a nuclear power project in a developing country like Bangladesh. The entire scope of supplies and services may be broken down into components, so that the burden of financing could be shared by more than one exporting country. Some indicative ideas about the packaging of supplies and services are presented in the paper in order to identify different types of financing sources that could be explored for implementation of the project. Some salient features of the effect of joint ventures on project financing and implementation are described in the paper. (author). 3 refs, 1 fig

  20. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Full Text Available Achieving relatively high-accuracy short-term wind speed forecasting estimates is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research is focused on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. The candidate input variables for various leading periods are selected, and random forests (RF) are employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of input variable selection based on RF. The results of the case study show that by removing the uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the training time required, and substantially improves the model’s accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
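    The RF-based ranking step described above can be sketched with impurity-based feature importances on synthetic data. This is an illustration of the selection idea only: the data, the 95% importance cutoff, and the feature layout are invented, and the paper's kernel-based extreme learning machine evaluation stage is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
# Candidate inputs: one informative column (a stand-in for, say, wind
# speed at the previous hour), one redundant copy, and noise columns.
X = rng.normal(size=(n, 5))
X[:, 1] = X[:, 0] + rng.normal(0, 0.1, n)      # redundant copy of col 0
y = 3.0 * X[:, 0] + rng.normal(0, 0.5, n)      # target driven by col 0

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importance = rf.feature_importances_           # sums to 1 across features

# Keep the smallest feature subset covering 95% of total importance
# (the cutoff is an illustrative choice, not the paper's criterion).
order = np.argsort(importance)[::-1]
cumulative = np.cumsum(importance[order])
selected = order[: int(np.searchsorted(cumulative, 0.95) + 1)]
```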

  1. Campylobacter, Salmonella, Listeria monocytogenes, verotoxigenic Escherichia coli, and Escherichia coli prevalence, enumeration, and subtypes on retail chicken breasts with and without skin.

    Science.gov (United States)

    Cook, Angela; Odumeru, Joseph; Lee, Susan; Pollari, Frank

    2012-01-01

    This study examined the prevalence, counts, and subtypes of Campylobacter, Salmonella, Listeria monocytogenes, verotoxigenic Escherichia coli (VTEC), and E. coli on raw retail chicken breast with the skin on versus the skin off. From January to December 2007, 187 raw skin-on chicken breasts and 131 skin-off chicken breasts were collected from randomly selected retail grocery stores in the Region of Waterloo, Ontario, Canada. Campylobacter isolates were recovered from a higher proportion of the skin-off chicken breasts, 55 (42%) of 131, than of the skin-on chicken breasts tested, 55 (29%) of 187 (P = 0.023). There was no difference in the proportion of Salmonella isolates recovered from the two meat types (P = 0.715): 40 (31%) of 131 skin-off chicken breasts versus 61 (33%) of 187 skin-on chicken breasts. L. monocytogenes isolates were recovered from a statistically lower proportion of the skin-off chicken breasts, 15 (15%) of 99, than of the skin-on chicken breasts, 64 (34%) of 187 (P = 0.001). There was no difference in the proportion of E. coli isolates recovered from the skin-off chicken breasts, 33 (33%) of 99, versus the skin-on chicken breasts, 77 (41%) of 187 (P = 0.204). VTEC was detected on a single skin-off chicken breast. Campylobacter jejuni was the most frequent species isolated on both types of chicken meat: skin-on, 48 (87%) of 55, and skin-off, 51 (94%) of 54. Salmonella serotypes Kentucky and Heidelberg and L. monocytogenes serotype 1/2a were the most frequently detected serotypes from both skin-off and skin-on chicken breasts. Although there appeared to be a trend toward higher enumeration values of these pathogens and E. coli on the skin-on chicken, the differences did not exceed 1 log. This study suggested that skin-off chicken breast may represent a higher risk of consumer exposure to Campylobacter, a similar risk for Salmonella, VTEC, and E. coli, and a lower risk for L. monocytogenes than skin-on chicken breast.
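    Comparisons like the Campylobacter one above are two-proportion tests. The sketch below uses a two-proportion z-test (a normal-approximation cousin of the chi-square test such studies typically use; the abstract does not name the exact test) on the reported counts.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test via the pooled normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided tail probability
    return z, p_value

# Campylobacter: 55/131 skin-off vs 55/187 skin-on (reported P = 0.023;
# the z-test gives a p-value in the same neighborhood).
z, p = two_proportion_z(55, 131, 55, 187)
```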

  2. Molecular structures enumeration and virtual screening in the chemical space with RetroPath2.0.

    Science.gov (United States)

    Koch, Mathilde; Duigou, Thomas; Carbonell, Pablo; Faulon, Jean-Loup

    2017-12-19

    Network generation tools coupled with chemical reaction rules have been mainly developed for synthesis planning and more recently for metabolic engineering. Using the same core algorithm, these tools apply a set of rules to a source set of compounds, stopping when a sink set of compounds has been produced. When using the appropriate sink, source and rules, this core algorithm can be used for a variety of applications beyond those it has been developed for. Here, we showcase the use of the open source workflow RetroPath2.0. First, we mathematically prove that we can generate all structural isomers of a molecule using a reduced set of reaction rules. We then use this enumeration strategy to screen the chemical space around a set of monomers and predict their glass transition temperatures, as well as around aminoglycosides to search for structures maximizing antibacterial activity. We also perform a screening around aminoglycosides with enzymatic reaction rules to ensure biosynthetic accessibility. We finally use our workflow on an E. coli model to complete the E. coli metabolome, with novel molecules generated using promiscuous enzymatic reaction rules. These novel molecules are searched against the MS spectra of an E. coli cell lysate, interfacing our workflow with OpenMS through the KNIME Analytics Platform. We provide an easy-to-use, easy-to-modify, modular, open-source workflow. We demonstrate its versatility through a variety of use cases including molecular structure enumeration, virtual screening in the chemical space, and metabolome completion. Because it is open source and freely available on MyExperiment.org, workflow community contributions should likely expand further the features of the tool, even beyond the use cases presented in the paper.

  3. Limitations of the A-1M method for fecal coliform enumeration in the Pacific oyster (Crassostrea gigas).

    Science.gov (United States)

    Kaysner, C A; Weagant, S D

    1987-01-01

    Use of the A-1M method, which was originally devised for testing water samples, has recently been extended for enumeration of fecal coliforms and Escherichia coli in shellfish and other food products. Results of our study indicate that while this method is reliable for analysis of growing waters, the use of the A-1M method for testing Pacific oysters may be less reliable because bacteria not belonging to the coliform group but which are sometimes present in these animals also give a positive reaction.

  4. How Metastrategic Considerations Influence the Selection of Frequency Estimation Strategies

    Science.gov (United States)

    Brown, Norman R.

    2008-01-01

    Prior research indicates that enumeration-based frequency estimation strategies become increasingly common as memory for relevant event instances improves and that moderate levels of context memory are associated with moderate rates of enumeration [Brown, N. R. (1995). Estimation strategies and the judgment of event frequency. Journal of…

  5. Implications of structural genomics target selection strategies: Pfam5000, whole genome, and random approaches

    Energy Technology Data Exchange (ETDEWEB)

    Chandonia, John-Marc; Brenner, Steven E.

    2004-07-14

    The structural genomics project is an international effort to determine the three-dimensional shapes of all important biological macromolecules, with a primary focus on proteins. Target proteins should be selected according to a strategy which is medically and biologically relevant, of good value, and tractable. As an option to consider, we present the Pfam5000 strategy, which involves selecting the 5000 most important families from the Pfam database as sources for targets. We compare the Pfam5000 strategy to several other proposed strategies that would require similar numbers of targets. These include complete solution of several small to moderately sized bacterial proteomes, partial coverage of the human proteome, and random selection of approximately 5000 targets from sequenced genomes. We measure the impact that successful implementation of these strategies would have upon structural interpretation of the proteins in Swiss-Prot, TrEMBL, and 131 complete proteomes (including 10 eukaryotic proteomes) from the Proteome Analysis database at EBI. Solving the structures of proteins from the 5000 largest Pfam families would allow accurate fold assignment for approximately 68 percent of all prokaryotic proteins (covering 59 percent of residues) and 61 percent of eukaryotic proteins (40 percent of residues). More fine-grained coverage that would allow accurate modeling of these proteins would require an order of magnitude more targets. The Pfam5000 strategy may be modified in several ways, for example to focus on larger families, bacterial sequences, or eukaryotic sequences; as long as secondary consideration is given to large families within Pfam, coverage results vary only slightly. In contrast, focusing structural genomics on a single tractable genome would have only a limited impact on structural knowledge of other proteomes: a significant fraction (about 30-40 percent of the proteins, and 40-60 percent of the residues) of each proteome is classified in small

  6. Genome-wide association data classification and SNPs selection using two-stage quality-based Random Forests.

    Science.gov (United States)

    Nguyen, Thanh-Tung; Huang, Joshua; Wu, Qingyao; Nguyen, Thuy; Li, Mark

    2015-01-01

    Single-nucleotide polymorphism (SNP) selection and identification are the most important tasks in genome-wide association data analysis. The problem is difficult because genome-wide association data are very high dimensional and a large portion of the SNPs in the data are irrelevant to the disease. Advanced machine learning methods have been used successfully in genome-wide association studies (GWAS) to identify genetic variants that have relatively large effects in some common, complex diseases. Among them, the most successful is Random Forests (RF). Despite performing well in terms of prediction accuracy on some data sets of moderate size, RF still struggles in GWAS settings to select informative SNPs and build accurate prediction models. In this paper, we propose a new two-stage quality-based sampling method for random forests, named ts-RF, for SNP subspace selection in GWAS. The method first applies a p-value assessment to find a cut-off point that separates the SNPs into informative and irrelevant groups. The informative group is further divided into two sub-groups: highly informative and weakly informative SNPs. When sampling the SNP subspace for building trees for the forest, only SNPs from these two sub-groups are taken into account. The feature subspaces therefore always contain highly informative SNPs when used to split a node in a tree. This approach enables the generation of more accurate trees with a lower prediction error, while helping to avoid overfitting. It allows the detection of interactions of multiple SNPs with the diseases, and reduces the dimensionality and the amount of genome-wide association data needed for learning the RF model.
    Extensive experiments on two genome-wide SNP data sets (Parkinson case-control data comprising 408,803 SNPs and Alzheimer case-control data comprising 380,157 SNPs) and 10 gene data sets have demonstrated that the proposed model significantly reduced prediction errors and outperformed
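    The two-stage sampling idea described in the abstract can be sketched in a few lines. The function below is an illustrative reconstruction, not the authors' code: the cut-off, the strong/weak split and the subspace size are hypothetical settings chosen for the example, while the two constraints the abstract does specify (irrelevant SNPs are never sampled; every subspace contains highly informative SNPs) are enforced.

```python
import numpy as np

def two_stage_subspace(p_values, rng, cutoff=0.05, strong_frac=0.1, subspace_size=10):
    """Illustrative sketch of ts-RF-style two-stage subspace sampling.

    Stage 1: a p-value cut-off separates informative from irrelevant SNPs,
    and the informative group is split into 'strong' and 'weak' sub-groups.
    Stage 2: each subspace is drawn from the two sub-groups only, with a
    guaranteed quota of strong SNPs. All thresholds here are hypothetical.
    """
    p_values = np.asarray(p_values)
    n_informative = int(np.sum(p_values < cutoff))
    informative = np.argsort(p_values)[:n_informative]   # sorted by p-value
    n_strong = max(1, int(n_informative * strong_frac))
    strong, weak = informative[:n_strong], informative[n_strong:]
    n_from_strong = max(1, subspace_size // 2)           # strong-SNP quota
    return np.concatenate([
        rng.choice(strong, size=min(n_from_strong, len(strong)), replace=False),
        rng.choice(weak, size=min(subspace_size - n_from_strong, len(weak)), replace=False),
    ])

# 100 informative "SNPs" (p < 0.05) followed by 900 irrelevant ones.
p = np.concatenate([np.linspace(0.001, 0.04, 100), np.linspace(0.1, 0.9, 900)])
subspace = two_stage_subspace(p, np.random.default_rng(0))
print(sorted(subspace))  # every sampled index has p < 0.05
```

    Because irrelevant SNPs can never enter a subspace, every node split sees at least some signal, which is the property the abstract credits for the lower prediction error.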

  7. Evaluation of dispersion methods for enumeration of microorganisms from peat and activated carbon biofilters treating volatile organic compounds.

    Science.gov (United States)

    Khammar, Nadia; Malhautier, Luc; Degrange, Valérie; Lensi, Robert; Fanlo, Jean-Louis

    2004-01-01

    Before enumerating the microorganisms that have colonized biofilters treating volatile organic compounds, it is first necessary to evaluate dispersion methods. Crushing, shaking and sonication were therefore tested for the removal of microflora from biofilter packing materials (peat and activated carbon). Continuous or discontinuous procedures, and the addition of glass beads, had no effect on the number of microorganisms removed from peat particles. The duration of treatment also had no effect for shaking and crushing, but the number of microorganisms recovered after 60 min of ultrasound treatment was significantly higher than that obtained after 0.5 min. The comparison between these methods showed that crushing was the most efficient for the removal of microorganisms from both peat and activated carbon. The comparison between three chemical dispersion agents showed that 1% Na-pyrophosphate was less efficient than 200 mM phosphate buffer or 1% Na-hexametaphosphate. To optimize the cultivation of microorganisms, three different agar media were compared. Tenfold-diluted tryptic soy agar (TSA 1/10) was the most suitable medium for culturing the microflora from a peat biofilter. For the activated carbon biofilter, there was no significant difference between Luria-Bertani, TSA 1/10, and plate count agar. The optimized extraction and enumeration protocols were used to perform a quantitative characterization of microbial populations in an operating laboratory activated carbon biofilter and in two parallel peat biofilters.

  8. Rapid enumeration of low numbers of moulds in tea based drinks using an automated system.

    Science.gov (United States)

    Tanaka, Kouichi; Yamaguchi, Nobuyasu; Baba, Takashi; Amano, Norihide; Nasu, Masao

    2011-01-31

    Aseptically prepared cold drinks based on tea have become popular worldwide. Contamination of these drinks with harmful microbes is a potential health problem because such drinks are kept free from preservatives to maximize aroma and flavour. Heat-tolerant conidia and ascospores of fungi can survive pasteurization and need to be detected as quickly as possible. We were able to rapidly and accurately detect low numbers of conidia and ascospores in tea-based drinks using fluorescent staining followed by an automated counting system. Conidia or ascospores were inoculated into green tea and oolong tea, and samples were immediately filtered through nitrocellulose membranes (pore size: 0.8 μm) to concentrate fungal propagules. These were transferred onto potato dextrose agar and incubated for 23 h at 28 °C. Fungi germinating on the membranes were fluorescently stained for 30 min. The stained mycelia were counted selectively within 90 s using an automated counting system (MGS-10LD; Chuo Electric Works, Osaka, Japan). Very low numbers (1 CFU/100 ml) of conidia or ascospores could be rapidly counted, in contrast to traditional labour-intensive techniques. All tested mould strains were detected within 24 h, whereas conventional plate counting required 72 h for colony enumeration. Counts of a slow-growing fungus (Cladosporium cladosporioides) obtained by automated counting and by conventional plate counting were close (r² = 0.986). Our combination of methods enables counting of both fast- and slow-growing fungi, and should be useful for microbiological quality control of tea-based and other drinks. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Randomized study of control of the primary tumor and survival using preoperative radiation, radiation alone, or surgery alone in head and neck carcinomas

    International Nuclear Information System (INIS)

    Hintz, B.; Charyulu, K.; Chandler, J.R.; Sudarsanam, A.; Garciga, C.

    1979-01-01

    Fifty-five selected patients with previously untreated squamous cell carcinoma of the head and neck regions were studied in a randomized, prospective manner. The three treatment categories were primary radiation (Gp R), primary surgery (Gp S), and preoperative radiation of 4000 rads in four weeks (Gp R/S). The local control rates for the 44 evaluable patients with a two-year minimum follow-up were 24%, 39%, and 43%, respectively. Further treatment attempts in patients failing initial therapy yielded local control rates of 35%, 39%, and 43% for Gp R, Gp S, and Gp R/S, respectively. Neither the local control rates nor the corresponding survival curves differed significantly at P < 0.10. However, the group sizes were sufficiently small that true differences might not have been detected. Postoperative complications were more frequent in the primary radiation failures subsequently operated upon than in the primary surgery group (P = 0.07). A table is included in which the types of postoperative complications are listed and enumerated according to treatment regimen.

  10. Participant-selected music and physical activity in older adults following cardiac rehabilitation: a randomized controlled trial.

    Science.gov (United States)

    Clark, Imogen N; Baker, Felicity A; Peiris, Casey L; Shoebridge, Georgie; Taylor, Nicholas F

    2017-03-01

    To evaluate effects of participant-selected music on older adults' achievement of activity levels recommended in the physical activity guidelines following cardiac rehabilitation. A parallel-group randomized controlled trial with measurements at Weeks 0, 6 and 26. A multisite outpatient rehabilitation programme of a publicly funded metropolitan health service. Adults aged 60 years and older who had completed a cardiac rehabilitation programme. Experimental participants selected music to support walking with guidance from a music therapist. Control participants received usual care only. The primary outcome was the proportion of participants achieving activity levels recommended in physical activity guidelines. Secondary outcomes compared amounts of physical activity, exercise capacity, cardiac risk factors, and exercise self-efficacy. A total of 56 participants, mean age 68.2 years (SD = 6.5), were randomized to the experimental (n = 28) and control (n = 28) groups. There were no differences between groups in proportions of participants achieving activity recommended in physical activity guidelines at Week 6 or 26. Secondary outcomes demonstrated between-group differences in male waist circumference at both measurements (Week 6 difference -2.0 cm, 95% CI -4.0 to 0; Week 26 difference -2.8 cm, 95% CI -5.4 to -0.1), and observed effect sizes favoured the experimental group for amounts of physical activity (d = 0.30), exercise capacity (d = 0.48), and blood pressure (d = -0.32). Participant-selected music did not increase the proportion of participants achieving recommended amounts of physical activity, but may have contributed to exercise-related benefits.

  11. r2VIM: A new variable selection method for random forests in genome-wide association studies.

    Science.gov (United States)

    Szymczak, Silke; Holzinger, Emily; Dasgupta, Abhijit; Malley, James D; Molloy, Anne M; Mills, James L; Brody, Lawrence C; Stambolian, Dwight; Bailey-Wilson, Joan E

    2016-01-01

    Machine learning methods, and in particular random forests (RFs), are a promising alternative to standard single-SNP analyses in genome-wide association studies (GWAS). RFs provide variable importance measures (VIMs) to rank SNPs according to their predictive power. However, in contrast to the established genome-wide significance threshold, no clear criteria exist to determine how many SNPs should be selected for downstream analyses. We propose a new variable selection approach, the recurrent relative variable importance measure (r2VIM). Importance values are calculated relative to an observed minimal importance score over several runs of RF, and only SNPs with large relative VIMs in all of the runs are selected as important. Evaluations on simulated GWAS data show that the new method controls the number of false-positives under the null hypothesis. Under a simple alternative hypothesis with several independent main effects, it is only slightly less powerful than logistic regression. In an experimental GWAS data set, the same strong signal is identified, while the approach selects none of the SNPs in an underpowered GWAS. The novel variable selection method r2VIM is a promising extension to standard RF for objectively selecting relevant SNPs in GWAS while controlling the number of false-positive results.
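    The selection rule lends itself to a compact sketch. The function below is an illustrative reading of the abstract, not the authors' implementation: per run, importances are scaled by the absolute value of that run's minimum score (taken as the noise level), and a variable is kept only if its relative importance exceeds a threshold in every run.

```python
import numpy as np

def r2vim_select(importances, threshold=1.0):
    """Sketch of r2VIM-style selection (threshold value is an assumption).

    `importances` is a (runs x variables) array of RF importance scores.
    Truly irrelevant variables have importances fluctuating around zero, so
    the most negative score per run estimates the noise magnitude; dividing
    by it yields relative VIMs comparable across runs.
    """
    importances = np.asarray(importances, dtype=float)
    noise = np.abs(importances.min(axis=1, keepdims=True))  # per-run noise level
    relative = importances / noise
    return np.flatnonzero((relative > threshold).all(axis=0))

# Three runs over five variables; only variable 0 is consistently important.
imp = np.array([
    [0.50, 0.02, -0.01, 0.00, 0.01],
    [0.40, -0.02, 0.01, 0.02, 0.00],
    [0.45, 0.01, 0.00, -0.02, 0.03],
])
print(r2vim_select(imp))  # variable 0 only
```

    Requiring the threshold to be cleared in every run, rather than on average, is what suppresses variables that score highly in a single lucky run, which matches the false-positive control reported in the abstract.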

  12. Development of quantitative real-time PCR for detection and enumeration of Enterobacteriaceae.

    Science.gov (United States)

    Takahashi, Hajime; Saito, Rumi; Miya, Satoko; Tanaka, Yuichiro; Miyamura, Natsumi; Kuda, Takashi; Kimura, Bon

    2017-04-04

    The family Enterobacteriaceae, members of which are widely distributed in the environment, includes many important human pathogens. In this study, a rapid real-time PCR method targeting rplP, which encodes the ribosomal large-subunit protein L16, was developed for enumerating Enterobacteriaceae strains, and its efficiency was evaluated using naturally contaminated food products. The rplP-targeted real-time PCR amplified Enterobacteriaceae species with Ct values of 14.0-22.8, whereas the Ct values for non-Enterobacteriaceae species were >30, indicating the specificity of this method for the Enterobacteriaceae. Using a calibration curve of Ct = -3.025 (log CFU/g) + 37.35, calculated from plots of cell numbers at different concentrations of 5 Enterobacteriaceae species, the rplP-targeted real-time PCR was applied to 51 food samples. The method enumerates Enterobacteriaceae species in foods rapidly and accurately, and therefore it can be used for the microbiological risk analysis of foods. Copyright © 2017 Elsevier B.V. All rights reserved.
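    A calibration curve of this form is straightforward to invert for enumeration. The sketch below assumes only the slope and intercept quoted in the abstract, plus the standard qPCR amplification-efficiency formula; the function names are ours.

```python
# Calibration constants quoted in the abstract: Ct = -3.025 * log10(CFU/g) + 37.35
SLOPE = -3.025
INTERCEPT = 37.35

def ct_to_log_cfu(ct: float) -> float:
    """Invert the calibration curve to estimate log10(CFU/g) from a Ct value."""
    return (ct - INTERCEPT) / SLOPE

def amplification_efficiency(slope: float = SLOPE) -> float:
    """Standard qPCR efficiency estimate from a standard-curve slope:
    E = 10**(-1/slope) - 1, where E = 1.0 means perfect doubling per cycle."""
    return 10 ** (-1 / slope) - 1

# The upper end of the reported Enterobacteriaceae Ct range (22.8) maps to
# roughly 4.8 log CFU/g under this curve.
print(round(ct_to_log_cfu(22.8), 2))
```

    Note that a slope of -3.025 corresponds to an apparent efficiency above 100%, which is why standard-curve slopes are usually reported alongside Ct ranges when such assays are validated.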

  13. Two-year Randomized Clinical Trial of Self-etching Adhesives and Selective Enamel Etching.

    Science.gov (United States)

    Pena, C E; Rodrigues, J A; Ely, C; Giannini, M; Reis, A F

    2016-01-01

    The aim of this randomized, controlled prospective clinical trial was to evaluate the clinical effectiveness of restoring noncarious cervical lesions with two self-etching adhesive systems applied with or without selective enamel etching. A one-step self-etching adhesive (Xeno V(+)) and a two-step self-etching system (Clearfil SE Bond) were used. The effectiveness of phosphoric acid selective etching of enamel margins was also evaluated. Fifty-six cavities were restored with each adhesive system and divided into two subgroups (n = 28; etch and non-etch). All 112 cavities were restored with the nanohybrid composite Esthet.X HD. The clinical effectiveness of restorations was recorded in terms of retention, marginal integrity, marginal staining, caries recurrence, and postoperative sensitivity after 3, 6, 12, 18, and 24 months (modified United States Public Health Service criteria). The Friedman test detected significant differences only after 18 months for marginal staining in the groups Clearfil SE non-etch (p=0.009) and Xeno V(+) etch (p=0.004). One restoration was lost during the trial (Xeno V(+) etch; p>0.05). Although an increase in marginal staining was recorded for groups Clearfil SE non-etch and Xeno V(+) etch, the clinical effectiveness of restorations was considered acceptable for the single-step and two-step self-etching systems with or without selective enamel etching in this 24-month clinical trial.

  14. A comparison of random forest and its Gini importance with standard chemometric methods for the feature selection and classification of spectral data

    Directory of Open Access Journals (Sweden)

    Himmelreich Uwe

    2009-07-01

    Full Text Available Abstract Background Regularized regression methods such as principal component or partial least squares regression perform well in learning tasks on high-dimensional spectral data, but cannot explicitly eliminate irrelevant features. The random forest classifier with its associated Gini feature importance, on the other hand, allows for explicit feature elimination, but may not be optimally adapted to spectral data due to the topology of its constituent classification trees, which are based on orthogonal splits in feature space. Results We propose to combine the best of both approaches, and evaluated the joint use of feature selection based on recursive feature elimination using the Gini importance of random forests together with regularized classification methods on spectral data sets from medical diagnostics, chemotaxonomy, biomedical analytics, food science, and synthetically modified spectral data. Here, feature selection using the Gini feature importance with regularized classification by discriminant partial least squares regression performed as well as or better than filtering according to different univariate statistical tests, or using regression coefficients in a backward feature elimination. It outperformed the direct application of the random forest classifier, and the direct application of the regularized classifiers on the full set of features. Conclusion The Gini importance of the random forest provided superior means for measuring feature relevance on spectral data, but, on an optimal subset of features, the regularized classifiers might be preferable over the random forest classifier, in spite of their limitation to modelling linear dependencies only. A feature selection based on Gini importance, however, may precede a regularized linear classification to identify this optimal subset of features, and to earn the double benefit of dimensionality reduction and the elimination of noise from the classification task.
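    The recursive-elimination loop itself can be sketched without any random-forest dependency. In the sketch below, a squared-correlation score stands in for the Gini importance so the example stays self-contained (this substitution, and all names, are our assumptions); the loop structure, scoring the current features, dropping the weakest fraction, and repeating until the target subset size is reached, is the same either way.

```python
import numpy as np

def recursive_elimination(X, y, n_keep, drop_frac=0.5):
    """Recursive feature elimination sketch. A squared Pearson correlation
    with the target stands in for RF Gini importance; in the paper's setup
    the scores would come from a fitted random forest instead."""
    features = list(range(X.shape[1]))
    while len(features) > n_keep:
        # Score every surviving feature against the target.
        scores = np.array([np.corrcoef(X[:, f], y)[0, 1] ** 2 for f in features])
        n_drop = max(1, min(int(len(features) * drop_frac), len(features) - n_keep))
        order = np.argsort(scores)                 # weakest features first
        features = sorted(features[i] for i in order[n_drop:])
    return features

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = 3 * X[:, 4] + 0.1 * rng.normal(size=200)      # only feature 4 is informative
print(recursive_elimination(X, y, n_keep=2))      # feature 4 survives elimination
```

    The surviving subset would then be handed to the regularized classifier, which is the two-step combination the abstract reports as performing best.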

  15. First-principles study of ternary bcc alloys using special quasi-random structures

    International Nuclear Information System (INIS)

    Jiang Chao

    2009-01-01

    Using a combination of exhaustive enumeration and Monte Carlo simulated annealing, we have developed special quasi-random structures (SQSs) for ternary body-centered cubic (bcc) alloys with compositions A1B1C1, A2B1C1, A6B1C1 and A2B3C3, respectively. The structures possess local pair and multisite correlation functions that closely mimic those of the random bcc alloy. We employed the SQSs to predict the mixing enthalpies, nearest-neighbor bond length distributions and electronic densities of states of bcc Mo-Nb-Ta and Mo-Nb-V solid solutions. Our convergence tests indicate that even small SQSs can give reliable results. Based on the SQS energetics, the predictive power of the existing empirical ternary extrapolation models was assessed. The present results suggest that it is important to take the ternary interaction parameter into account in order to accurately describe the thermodynamic behavior of ternary alloys. The proposed SQSs are quite general and can be applied to other ternary bcc alloys.

  16. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to onfarm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...

  17. Evaluation of pre-PCR processing approaches for enumeration of Salmonella enterica in naturally contaminated animal feed

    DEFF Research Database (Denmark)

    Schelin, Jenny; Andersson, Gunnar; Vigre, Håkan

    2014-01-01

    Three pre‐PCR processing strategies for the detection and/or quantification of Salmonella in naturally contaminated soya bean meal were evaluated. Methods included: (i) flotation‐qPCR [enumeration of intact Salmonella cells prior to quantitative PCR (qPCR)], (ii) MPN‐PCR (modified most probable...... be due to the presence of nonculturable Salmonella and/or a heterogeneous distribution of Salmonella in the material. The evaluated methods provide different possibilities to assess the prevalence of Salmonella in feed, together with the numbers of culturable, as well as nonculturable cells, and can...... be applied to generate data to allow more accurate quantitative microbial risk assessment for Salmonella in the feed chain....

  18. From Enumerating to Generating: A Linear Time Algorithm for Generating 2D Lattice Paths with a Given Number of Turns

    Directory of Open Access Journals (Sweden)

    Ting Kuo

    2015-05-01

    Full Text Available We propose a linear time algorithm, called G2DLP, for generating 2D lattice L(n1, n2) paths, equivalent to two-item multiset permutations, with a given number of turns. The term turn has three meanings: in the context of multiset permutations, it means that two consecutive elements of a permutation belong to two different items; in lattice path enumeration, it means that the path changes direction, either from eastward to northward or from northward to eastward; in open shop scheduling, it means that a job is transferred from one type of machine to another. The strategy of G2DLP is divide-and-combine; the division is based on the enumeration results of a previous study and is achieved with the aid of an integer partition algorithm and a multiset permutation algorithm; the combination is accomplished by a concatenation algorithm that constructs the required paths. The advantage of G2DLP is twofold. First, it is optimal in the sense that it directly generates all feasible paths without visiting any infeasible one. Second, it can generate all paths in any specified order of turns, for example, decreasing or increasing. In practice, two applications, scheduling and cryptography, are discussed.
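    The objects being generated are easy to pin down with a brute-force reference enumeration: filter all multiset permutations of n1 east steps and n2 north steps by their number of direction changes. This is a cross-check for small cases, not the linear-time G2DLP generator itself.

```python
from itertools import permutations  # brute force over multiset permutations

def count_turns(path: str) -> int:
    """Number of direction changes in a path over the alphabet {E, N}."""
    return sum(a != b for a, b in zip(path, path[1:]))

def paths_with_turns(n1: int, n2: int, turns: int):
    """Reference enumeration of L(n1, n2) lattice paths with exactly the given
    number of turns, by filtering all distinct arrangements of the steps."""
    distinct = sorted(set(permutations('E' * n1 + 'N' * n2)))
    return [''.join(p) for p in distinct if count_turns(''.join(p)) == turns]

# Paths from (0,0) to (2,2) with exactly one turn: EENN and NNEE.
print(paths_with_turns(2, 2, 1))
```

    The filter visits all C(n1+n2, n1) paths, so it is exponential where G2DLP is linear per generated path; its value is as an oracle for validating a fast generator on small n1, n2.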

  19. Selection of terrestrial transfer factors for radioecological assessment models and regulatory guides

    International Nuclear Information System (INIS)

    Ng, Y.C.; Hoffman, F.O.

    1983-01-01

    A parameter value for a radioecological assessment model is not a single value but a distribution of values about a central value. The sources that contribute to the variability of transfer factors to predict foodchain transport of radionuclides are enumerated. Knowledge of these sources, judgement in interpreting the available data, consideration of collateral information, and established criteria that specify the desired level of conservatism in the resulting predictions are essential elements when selecting appropriate parameter values for radioecological assessment models and regulatory guides. 39 references, 4 figures, 5 tables

  20. Selection of terrestrial transfer factors for radioecological assessment models and regulatory guides

    Energy Technology Data Exchange (ETDEWEB)

    Ng, Y.C.; Hoffman, F.O.

    1983-01-01

    A parameter value for a radioecological assessment model is not a single value but a distribution of values about a central value. The sources that contribute to the variability of transfer factors to predict foodchain transport of radionuclides are enumerated. Knowledge of these sources, judgement in interpreting the available data, consideration of collateral information, and established criteria that specify the desired level of conservatism in the resulting predictions are essential elements when selecting appropriate parameter values for radioecological assessment models and regulatory guides. 39 references, 4 figures, 5 tables.

  1. Does availability of physical activity and food outlets differ by race and income? Findings from an enumeration study in a health disparate region.

    Science.gov (United States)

    Hill, Jennie L; Chau, Clarice; Luebbering, Candice R; Kolivras, Korine K; Zoellner, Jamie

    2012-09-06

    Low-income, ethnic/racial minority and rural populations are at increased risk for obesity and related chronic health conditions when compared to white, urban and higher-socio-economic status (SES) peers. Recent systematic reviews highlight the influence of the built environment on obesity, yet very few of these studies consider rural areas or populations. Utilizing a CBPR process, this study advances community-driven causal models to address obesity by exploring differences in resources for physical activity and food outlets by block-group race and income in a small regional city that anchors a rural health-disparate region. To guide this inquiry we hypothesized that lower-income and racially diverse block groups would have fewer food outlets, including fewer grocery stores, and fewer physical activity outlets. We further hypothesized that walkability, as defined by a computed walkability index, would be lower in the lower-income block groups. Using census data and GIS, base maps of the region were created and block groups categorized by income and race. All food outlets and physical activity resources were enumerated and geocoded, and a walkability index computed. Analyses included one-way MANOVA and spatial autocorrelation. In total, 49 stores, 160 restaurants and 79 physical activity outlets were enumerated. There were no differences in the number of outlets by block-group income or race. Further, spatial analyses suggest that the distribution of outlets is dispersed across all block groups. Under the larger CBPR process, this enumeration study advances the causal models set forth by the community members to address obesity by providing an overview of the food and physical activity environment in this region. These data reflect the food and physical activity resources available to residents in the region and will aid many of the community-academic partners as they pursue intervention strategies targeting obesity.

  2. Does availability of physical activity and food outlets differ by race and income? Findings from an enumeration study in a health disparate region

    Directory of Open Access Journals (Sweden)

    Hill Jennie L

    2012-09-01

    Full Text Available Abstract Background Low-income, ethnic/racial minority and rural populations are at increased risk for obesity and related chronic health conditions when compared to white, urban and higher-socio-economic status (SES) peers. Recent systematic reviews highlight the influence of the built environment on obesity, yet very few of these studies consider rural areas or populations. Utilizing a CBPR process, this study advances community-driven causal models to address obesity by exploring differences in resources for physical activity and food outlets by block-group race and income in a small regional city that anchors a rural health-disparate region. To guide this inquiry we hypothesized that lower-income and racially diverse block groups would have fewer food outlets, including fewer grocery stores, and fewer physical activity outlets. We further hypothesized that walkability, as defined by a computed walkability index, would be lower in the lower-income block groups. Methods Using census data and GIS, base maps of the region were created and block groups categorized by income and race. All food outlets and physical activity resources were enumerated and geocoded, and a walkability index computed. Analyses included one-way MANOVA and spatial autocorrelation. Results In total, 49 stores, 160 restaurants and 79 physical activity outlets were enumerated. There were no differences in the number of outlets by block-group income or race. Further, spatial analyses suggest that the distribution of outlets is dispersed across all block groups. Conclusions Under the larger CBPR process, this enumeration study advances the causal models set forth by the community members to address obesity by providing an overview of the food and physical activity environment in this region. These data reflect the food and physical activity resources available to residents in the region and will aid many of the community-academic partners as they pursue intervention

  3. Random forest variable selection in spatial malaria transmission modelling in Mpumalanga Province, South Africa

    Directory of Open Access Journals (Sweden)

    Thandi Kapwata

    2016-11-01

    Full Text Available Malaria is an environmentally driven disease. In order to quantify the spatial variability of malaria transmission, it is imperative to understand the interactions between environmental variables and malaria epidemiology at a micro-geographic level using a novel statistical approach. The random forest (RF) statistical learning method, a relatively new variable-importance ranking method, measures the importance of potentially influential parameters through the percent increase of the mean squared error. As this value increases, so does the relative importance of the associated variable. The principal aim of this study was to create predictive malaria maps generated using the selected variables based on the RF algorithm in the Ehlanzeni District of Mpumalanga Province, South Africa. Of the seven environmental variables used [temperature, lag temperature, rainfall, lag rainfall, humidity, altitude, and the normalized difference vegetation index (NDVI)], altitude was identified as the most influential predictor variable due to its high selection frequency. It was selected as the top predictor for 4 out of 12 months of the year, followed by NDVI, temperature and lag rainfall, which were each selected twice. The combination of climatic variables that produced the highest prediction accuracy was altitude, NDVI, and temperature. This suggests that these three variables have high predictive capability in relation to malaria transmission. Furthermore, it is anticipated that the predictive maps generated by the RF algorithm could be used to monitor the progression of malaria and assist in intervention and prevention efforts.

  4. Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Heide, J; Zhang, Qi; Fitzek, F H P

    2013-01-01

    This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant...... reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters...... present in order to obtain the lowest energy consumption per transmitted bit. This problem is analyzed and suitable coding parameters are determined for the popular Tmote Sky platform. Compared to the use of traditional RLNC, these parameters enable a reduction in the energy spent per bit which grows...

  5. Diversity and enumeration of halophilic and alkaliphilic bacteria in Spanish-style green table-olive fermentations.

    Science.gov (United States)

    Lucena-Padrós, Helena; Ruiz-Barba, José Luis

    2016-02-01

    The presence and enumeration of halophilic and alkaliphilic bacteria in Spanish-style table-olive fermentations were studied. Twenty 10-tonne fermenters at two large manufacturing companies in Spain, previously studied through both culture-dependent and culture-independent (PCR-DGGE) methodologies, were selected. Virtually all this microbiota was isolated during the initial fermentation stage. A total of 203 isolates were obtained and identified based on 16S rRNA gene sequences. They belonged to 13 bacterial species distributed among 11 genera. The abundance of halophilic and alkaliphilic lactic acid bacteria (HALAB) was notable. These HALAB belonged to the three genera of this group: Alkalibacterium, Marinilactibacillus and Halolactibacillus. Ten bacterial species were isolated for the first time from table olive fermentations, including the genera Amphibacillus, Natronobacillus, Catenococcus and Streptohalobacillus. The isolates were genotyped through RAPD and clustered in a dendrogram where 65 distinct strains were identified. Biodiversity indices revealed statistically significant differences between the two patios regarding genotype richness, diversity and dominance. However, the Jaccard similarity index suggested that the halophilic/alkaliphilic microbiota in both patios was more similar than the overall microbiota at the initial fermentation stage. Thus, up to 7 genotypes of 6 different species were shared, suggesting adaptation of some strains to this fermentation stage. The Morisita-Horn similarity index indicated a high level of codominance of the same species in both patios. Halophilic and alkaliphilic bacteria, especially HALAB, appeared to be part of the characteristic microbiota at the initial stage of this table-olive fermentation, and they could contribute to the conditioning of the fermenting brines in readiness for growth of common lactic acid bacteria. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Application of random coherence order selection in gradient-enhanced multidimensional NMR

    International Nuclear Information System (INIS)

    Bostock, Mark J.; Nietlispach, Daniel

    2016-01-01

    Development of multidimensional NMR is essential to many applications, for example in high resolution structural studies of biomolecules. Multidimensional techniques enable separation of NMR signals over several dimensions, improving signal resolution, whilst also allowing identification of new connectivities. However, these advantages come at a significant cost. The Fourier transform theorem requires acquisition of a grid of regularly spaced points to satisfy the Nyquist criterion, while frequency discrimination and acquisition of a pure phase spectrum require acquisition of both quadrature components for each time point in every indirect (non-acquisition) dimension, adding a factor of 2^(N−1) to the number of free-induction decays which must be acquired, where N is the number of dimensions. Compressed sensing (CS) ℓ1-norm minimisation in combination with non-uniform sampling (NUS) has been shown to be extremely successful in overcoming the Nyquist criterion. Previously, maximum entropy reconstruction has also been used to overcome the limitation of frequency discrimination, processing data acquired with only one quadrature component at a given time interval, known as random phase detection (RPD), allowing a factor of two reduction in the number of points for each indirect dimension (Maciejewski et al., 2011, PNAS 108:16640). However, whilst this approach can be easily applied in situations where the quadrature components are acquired as amplitude modulated data, the same principle is not easily extended to phase modulated (P-/N-type) experiments where data is acquired in the form exp(iωt) or exp(−iωt), and which make up many of the multidimensional experiments used in modern NMR. Here we demonstrate a modification of the CS ℓ1-norm approach to allow random coherence order selection (RCS) for phase modulated experiments; we generalise the nomenclature for RCS and RPD as random quadrature detection (RQD). With this method, the power of RQD can be extended

  7. On Random Numbers and Design

    Science.gov (United States)

    Ben-Ari, Morechai

    2004-01-01

    The term "random" is frequently used in discussion of the theory of evolution, even though the mathematical concept of randomness is problematic and of little relevance in the theory. Therefore, since the core concept of the theory of evolution is the non-random process of natural selection, the term random should not be used in teaching the…

  8. Strategies for enumeration of circulating microvesicles on a conventional flow cytometer: Counting beads and scatter parameters.

    Science.gov (United States)

    Alkhatatbeh, Mohammad J; Enjeti, Anoop K; Baqar, Sara; Ekinci, Elif I; Liu, Dorothy; Thorne, Rick F; Lincz, Lisa F

    2018-01-01

    Enumeration of circulating microvesicles (MVs) by conventional flow cytometry is accomplished by the addition of a known amount of counting beads and calculated from the formula: MV/μl = (MV count/bead count) × final bead concentration. We sought to optimize each variable in the equation by determining the best parameters for detecting 'MV count' and examining the effects of different bead preparations and concentrations on the final calculation. Three commercially available bead preparations (TruCount, Flow-Count and CountBright) were tested, and MV detection on a BD FACSCanto was optimized for gating by either forward scatter (FSC) or side scatter (SSC); the results were compared by calculating different subsets of MV on a series of 74 typical patient plasma samples. The relationship between the number of beads added to each test and the number of beads counted by flow cytometry remained linear over a wide range of bead concentrations (R² ≥ 0.997). However, TruCount beads produced the most consistent (concentration variation = 3.8%) calculated numbers of plasma CD41+/Annexin V+ MV, which were significantly higher than those calculated using either Flow-Count or CountBright ( p beads by FSC and 0.16 μm beads by SSC, but there were significantly more background events using SSC compared with FSC (3113 vs. 470; p = 0.008). In general, sample analysis by SSC resulted in significantly higher numbers of MV ( p beads provided linear results at concentrations ranging from 6 beads/μl to 100 beads/μl, but TruCount was the most consistent. Using SSC to gate MV events produced high background which negatively affected counting bead enumeration and overall MV calculations. Strategies to reduce SSC background should be employed in order to reliably use this technique.
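    The counting-bead formula quoted at the start of the abstract translates directly into code; a trivial helper (the function name and example numbers are mine) makes the dependence on the final bead concentration explicit:

```python
def mv_per_ul(mv_events: int, bead_events: int,
              final_bead_conc_per_ul: float) -> float:
    """MV/μl = (MV count / bead count) × final bead concentration."""
    if bead_events == 0:
        raise ValueError("no beads counted")
    return (mv_events / bead_events) * final_bead_conc_per_ul

# 5000 MV events gated against 2500 bead events, beads at 50/μl:
print(mv_per_ul(5000, 2500, 50.0))  # → 100.0
```

    The abstract's point follows directly from this form: any bias in the bead count (for instance, background events miscounted as beads under SSC gating) propagates multiplicatively into the reported MV concentration.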

  9. Selecting for Fast Protein-Protein Association As Demonstrated on a Random TEM1 Yeast Library Binding BLIP.

    Science.gov (United States)

    Cohen-Khait, Ruth; Schreiber, Gideon

    2018-04-27

    Protein-protein interactions mediate the vast majority of cellular processes. Though protein interactions obey basic chemical principles also within the cell, the in vivo physiological environment may not allow for equilibrium to be reached. Thus, in vitro measured thermodynamic affinity may not provide a complete picture of protein interactions in the biological context. Binding kinetics composed of the association and dissociation rate constants are relevant and important in the cell. Therefore, changes in protein-protein interaction kinetics have a significant impact on the in vivo activity of the proteins. The common protocol for the selection of tighter binders from a mutant library selects for protein complexes with slower dissociation rate constants. Here we describe a method to specifically select for variants with faster association rate constants by using pre-equilibrium selection, starting from a large random library. Toward this end, we refine the selection conditions of a TEM1-β-lactamase library against its natural nanomolar affinity binder β-lactamase inhibitor protein (BLIP). The optimal selection conditions depend on the ligand concentration and on the incubation time. In addition, we show that a second sort of the library helps to separate signal from noise, resulting in a higher percent of faster binders in the selected library. Fast associating protein variants are of particular interest for drug development and other biotechnological applications.

  10. An improved automated procedure for informal and temporary dwellings detection and enumeration, using mathematical morphology operators on VHR satellite data

    Science.gov (United States)

    Jenerowicz, Małgorzata; Kemper, Thomas

    2016-10-01

    Every year thousands of people are displaced by conflicts or natural disasters and often gather in large camps. Knowing how many people have gathered is crucial for an efficient relief operation. However, it is often difficult to collect exact information on the total number of the population. This paper presents an improved morphological methodology for estimating dwelling structures in several Internally Displaced Persons (IDP) camps, based on Very High Resolution (VHR) multispectral satellite imagery with pixel sizes of 1 meter or less, including GeoEye-1, WorldView-2, QuickBird-2, Ikonos-2, Pléiades-A and Pléiades-B. The main topics of this paper are the enhancement of the approach through selection of the feature extraction algorithm, and the improvement and automation of pre-processing and results verification. Extraction of informal and temporary dwellings requires high-quality input data. The pre-processing has therefore been extended to include input data hierarchy level assignment and the selection and evaluation of the data fusion method. The feature extraction algorithm follows the procedure presented in Jenerowicz, M., Kemper, T., 2011. Optical data are analysed in a cyclic approach comprising image segmentation and geometrical, textural and spectral class modeling, aiming at camp area identification. The successive steps of morphological processing have been combined into a single stand-alone application for automatic dwelling detection and enumeration. Actively implemented, these approaches can provide reliable and consistent results, independent of imaging satellite type and study-site location, providing decision support in emergency response for humanitarian organizations such as the United Nations, the European Union and non-governmental relief organizations.

  11. A Permutation Importance-Based Feature Selection Method for Short-Term Electricity Load Forecasting Using Random Forest

    Directory of Open Access Journals (Sweden)

    Nantian Huang

    2016-09-01

    Full Text Available The prediction accuracy of short-term load forecast (STLF) depends on prediction model choice and feature selection result. In this paper, a novel random forest (RF)-based feature selection method for STLF is proposed. First, 243 related features were extracted from historical load data and the time information of prediction points to form the original feature set. Subsequently, the original feature set was used to train an RF as the original model. After the training process, the prediction error of the original model on the test set was recorded and the permutation importance (PI) value of each feature was obtained. Then, an improved sequential backward search method was used to select the optimal forecasting feature subset based on the PI value of each feature. Finally, the optimal forecasting feature subset was used to train a new RF model as the final prediction model. Experiments showed that the prediction accuracy of the RF trained on the optimal forecasting feature subset was higher than that of the original model and of comparative models based on support vector regression and artificial neural networks.
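    The pipeline this abstract describes — train an RF, score each feature by permutation importance, then sequentially drop the weakest while tracking held-out error — can be sketched as follows. This is a simplified stand-in using scikit-learn on synthetic data (only three of ten features are informative; all names and sizes are illustrative, not the paper's 243-feature setup):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, p_inf, p_noise = 400, 3, 7
X = rng.normal(size=(n, p_inf + p_noise))
y = X[:, :p_inf] @ np.array([3.0, 2.0, 1.0]) + rng.normal(scale=0.5, size=n)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
features = list(range(X.shape[1]))
best_feats, best_err = None, np.inf

# Sequential backward elimination driven by permutation importance (PI):
# drop the least important feature each round, keep the subset with the
# lowest held-out error.
while len(features) > 1:
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(Xtr[:, features], ytr)
    err = np.mean((yte - rf.predict(Xte[:, features])) ** 2)
    if err < best_err:
        best_err, best_feats = err, list(features)
    pi = permutation_importance(rf, Xte[:, features], yte,
                                n_repeats=5, random_state=0)
    features.pop(int(np.argmin(pi.importances_mean)))

print(sorted(best_feats))
```

    The retained subset should keep the informative features while shedding most of the noise columns, mirroring the paper's finding that a PI-selected subset can outperform the full feature set.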

  12. Direct random insertion mutagenesis of Helicobacter pylori

    NARCIS (Netherlands)

    de Jonge, Ramon; Bakker, Dennis; van Vliet, Arnoud H. M.; Kuipers, Ernst J.; Vandenbroucke-Grauls, Christina M. J. E.; Kusters, Johannes G.

    2003-01-01

    Random insertion mutagenesis is a widely used technique for the identification of bacterial virulence genes. Most strategies for random mutagenesis involve cloning in Escherichia coli for passage of plasmids or for phenotypic selection. This can result in biased selection due to restriction or

  14. Enumeration of Salmonellae in Table Eggs, Pasteurized Egg Products, and Egg-Containing Dishes by Using Quantitative Real-Time PCR

    DEFF Research Database (Denmark)

    Jakočiūnė, Džiuginta; Pasquali, Frédérique; da Silva, Cristiana Soares

    2014-01-01

    Quantitative real-time PCR (qPCR) was employed for enumeration of salmonellae in different matrices: table eggs, pasteurized egg products, and egg-containing dishes. Salmonella enterica serovar Enteritidis and S. enterica serovar Tennessee were used to artificially contaminate these matrices. The results showed a linear regression between the numbers of salmonellae and the quantification cycle (Cq) values for all matrices used, with the exception of pasteurized egg white. Standard curves were constructed by using both stationary-phase cells and heat-stressed cells, with similar results. Finally, this method was used to evaluate the fate...
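    The linear relationship between counts and Cq values reported above is the basis of qPCR enumeration. A minimal numpy sketch (the standard-curve numbers are invented for illustration, not the study's data): fit the curve, check the amplification efficiency, and invert it to enumerate an unknown sample from its Cq.

```python
import numpy as np

# Hypothetical standard curve: log10 CFU of spiked salmonellae vs.
# measured quantification cycle (Cq). Values are illustrative only.
log10_cfu = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
cq = np.array([30.1, 26.8, 23.4, 20.0, 16.7])

slope, intercept = np.polyfit(log10_cfu, cq, 1)
# Amplification efficiency: ~1.0 means a perfectly doubling PCR,
# corresponding to a slope near -3.32 cycles per log10.
efficiency = 10 ** (-1.0 / slope) - 1.0

def estimate_log10_cfu(sample_cq: float) -> float:
    """Invert the standard curve to enumerate salmonellae from a Cq."""
    return (sample_cq - intercept) / slope

print(round(slope, 2), round(efficiency, 2), round(estimate_log10_cfu(25.0), 2))
```

    A sample with Cq = 25.0 on this curve would be read back as roughly 10^4.5 CFU, which is how the abstract's linear regressions are used quantitatively.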

  15. Selective oropharyngeal decontamination versus selective digestive decontamination in critically ill patients: a meta-analysis of randomized controlled trials

    Directory of Open Access Journals (Sweden)

    Zhao D

    2015-07-01

    Full Text Available Di Zhao,1,* Jian Song,2,* Xuan Gao,3 Fei Gao,4 Yupeng Wu,2 Yingying Lu,5 Kai Hou1 1Department of Neurosurgery, The First Hospital of Hebei Medical University, 2Department of Neurosurgery, 3Department of Neurology, The Second Hospital of Hebei Medical University, 4Hebei Provincial Procurement Centers for Medical Drugs and Devices, 5Department of Neurosurgery, The Second Hospital of Hebei Medical University, Shijiazhuang, People’s Republic of China *These authors contributed equally to this work Background: Selective digestive decontamination (SDD) and selective oropharyngeal decontamination (SOD) are associated with reduced mortality and infection rates among patients in intensive care units (ICUs); however, whether SOD has a superior effect to SDD remains uncertain. Hence, we conducted a meta-analysis of randomized controlled trials (RCTs) to compare SOD with SDD in terms of clinical outcomes and antimicrobial resistance rates in patients who were critically ill. Methods: RCTs published in PubMed, Embase, and Web of Science were systematically reviewed to compare the effects of SOD and SDD in patients who were critically ill. Outcomes included day-28 mortality, length of ICU stay, length of hospital stay, duration of mechanical ventilation, ICU-acquired bacteremia, and prevalence of antibiotic-resistant Gram-negative bacteria. Results were expressed as risk ratios (RRs) with 95% confidence intervals (CIs), and weighted mean differences (WMDs) with 95% CIs. Pooled estimates were performed using a fixed-effects model or random-effects model, depending on the heterogeneity among studies. Results: A total of four RCTs involving 23,822 patients met the inclusion criteria and were included in this meta-analysis. Among patients whose admitting specialty was surgery, cardiothoracic surgery (57.3%) and neurosurgery (29.7%) were the two main types of surgery being performed. Pooled results showed that SOD had similar effects as SDD in day-28 mortality (RR =1
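    For readers unfamiliar with the pooling step, a fixed-effects (inverse-variance) pooled risk ratio can be sketched in a few lines; the per-trial counts below are invented for illustration and are not the meta-analysis data:

```python
import math

# Hypothetical 2x2 counts per trial: (events, total) in the SOD arm,
# then (events, total) in the SDD arm. Illustrative numbers only.
trials = [(120, 1000, 110, 1000),
          (45, 400, 50, 410),
          (300, 2500, 280, 2450)]

num = den = 0.0
for a, n1, c, n2 in trials:
    log_rr = math.log((a / n1) / (c / n2))
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # variance of log RR
    w = 1.0 / var                            # inverse-variance weight
    num += w * log_rr
    den += w

pooled = math.exp(num / den)
se = math.sqrt(1.0 / den)
lo, hi = math.exp(num / den - 1.96 * se), math.exp(num / den + 1.96 * se)
print(round(pooled, 3), round(lo, 3), round(hi, 3))
```

    A 95% CI straddling 1.0 corresponds to the abstract's "similar effects" conclusion; a random-effects model would additionally widen the interval by a between-trial variance term.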

  16. Strategyproof Peer Selection using Randomization, Partitioning, and Apportionment

    OpenAIRE

    Aziz, Haris; Lev, Omer; Mattei, Nicholas; Rosenschein, Jeffrey S.; Walsh, Toby

    2016-01-01

    Peer review, evaluation, and selection is a fundamental aspect of modern science. Funding bodies the world over employ experts to review and select the best proposals of those submitted for funding. The problem of peer selection, however, is much more general: a professional society may want to give a subset of its members awards based on the opinions of all members; an instructor for a MOOC or online course may want to crowdsource grading; or a marketing company may select ideas from group b...

  17. Burden of Surgical Conditions in Uganda: A Cross-sectional Nationwide Household Survey.

    Science.gov (United States)

    Tran, Tu M; Fuller, Anthony T; Butler, Elissa K; Makumbi, Fredrick; Luboga, Samuel; Muhumuza, Christine; Ssennono, Vincent F; Chipman, Jeffrey G; Galukande, Moses; Haglund, Michael M

    2017-08-01

    To quantify the burden of surgical conditions in Uganda. Data on the burden of disease have long served as a cornerstone of health policymaking, planning, and resource allocation. Population-based data are the gold standard, but no data on surgical burden at a national scale exist; therefore, we adapted the Surgeons OverSeas Assessment of Surgical Need survey and conducted a nation-wide, cross-sectional survey of Uganda to quantify the burden of surgically treatable conditions. The 2-stage cluster sample included 105 enumeration areas, representing 74 districts and Kampala Capital City Authority. Enumeration occurred from August 20 to September 12, 2014. In each enumeration area, 24 households were randomly selected; the head of the household provided details regarding any household deaths within the previous 12 months. Two household members were randomly selected for a head-to-toe verbal interview to determine existing untreated and treated surgical conditions. In 2315 households, we surveyed 4248 individuals: 461 (10.6%) reported 1 or more conditions requiring at least surgical consultation [95% confidence interval (CI) 8.9%-12.4%]. The most frequent barrier to surgical care was the lack of financial resources for the direct cost of care. Of the 153 household deaths recalled, 53 deaths (34.2%; 95% CI 22.1%-46.3%) were associated with surgically treatable signs/symptoms. Shortage of time was the most frequently cited reason (25.8%) among the 11.6% of household deaths that should have, but did not, receive surgical care (95% CI 6.4%-16.8%). Unmet surgical need is prevalent in Uganda. There is an urgent need to expand the surgical care delivery system starting with the district-level hospitals. Routine surgical data collection at both the health facility and household level should be implemented.
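    The two-stage cluster design described above (105 enumeration areas, then 24 households per area, then 2 respondents per household) can be sketched with the standard library alone. The sampling frames below are simulated placeholders, not the Ugandan census frame:

```python
import random

random.seed(42)

# Placeholder frame: 500 enumeration areas (EAs), each listing 200
# households. Real surveys draw EAs with probability proportional to size;
# simple random sampling is used here for brevity.
frame = {f"EA-{i:03d}": [f"HH-{i:03d}-{j:02d}" for j in range(200)]
         for i in range(500)}

eas = random.sample(sorted(frame), 105)                      # stage 1: EAs
sampled = {ea: random.sample(frame[ea], 24) for ea in eas}   # stage 2: households
respondents = {hh: random.sample(range(1, 9), 2)             # 2 of 8 members
               for ea in eas for hh in sampled[ea]}

print(len(eas), sum(len(v) for v in sampled.values()), len(respondents))
```

    The design target here is 105 × 24 = 2520 households; the survey's realized 2315 households reflect non-response and field conditions rather than the design itself.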

  18. 47 CFR 1.1604 - Post-selection hearings.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...

  19. Random genetic drift, natural selection, and noise in human cranial evolution.

    Science.gov (United States)

    Roseman, Charles C

    2016-08-01

    This study assesses the extent to which relationships among groups complicate comparative studies of adaptation in recent human cranial variation and the extent to which departures from neutral additive models of evolution hinder the reconstruction of population relationships among groups using cranial morphology. Using a maximum likelihood evolutionary model fitting approach and a mixed population genomic and cranial data set, I evaluate the relative fits of several widely used models of human cranial evolution. Moreover, I compare the goodness of fit of models of cranial evolution constrained by genomic variation to test hypotheses about population specific departures from neutrality. Models from population genomics are much better fits to cranial variation than are traditional models from comparative human biology. There is not enough evolutionary information in the cranium to reconstruct much of recent human evolution but the influence of population history on cranial variation is strong enough to cause comparative studies of adaptation serious difficulties. Deviations from a model of random genetic drift along a tree-like population history show the importance of environmental effects, gene flow, and/or natural selection on human cranial variation. Moreover, there is a strong signal of the effect of natural selection or an environmental factor on a group of humans from Siberia. The evolution of the human cranium is complex and no one evolutionary process has prevailed at the expense of all others. A holistic unification of phenome, genome, and environmental context, gives us a strong point of purchase on these problems, which is unavailable to any one traditional approach alone. Am J Phys Anthropol 160:582-592, 2016. © 2016 Wiley Periodicals, Inc.

  20. Evaluation of modified dichloran 18% glycerol (DG18) agar for enumerating fungi in wheat flour: a collaborative study.

    Science.gov (United States)

    Beuchat, L R; Hwang, C A

    1996-04-01

    Dichloran 18% glycerol agar base supplemented with 100 micrograms of chloramphenicol ml-1 (DG18 agar) was compared to DG18 agar supplemented with 100 micrograms of Triton X-301 ml-1 (DG18T) and DG18 agar supplemented with 1 microgram of iprodione [3-(3,5-dichlorophenyl)-N-(1-methyl-ethyl)-2,4-dioxo-1-imidazolidine- carboxamide] ml-1 (DG18I agar) for enumeration of fungi in ten brands of wheat flour. As the flours contained low fungal populations, all were inoculated with two to four strains of xerophilic fungi (Aspergillus candidus, A. penicillioides, Eurotium amstelodami, E. intermedium, E. repens, E. rubrum, E. tonophilum, E. umbrosum and Wallemia sebi), after which counts ranged from 3.87 to 6.37 log10 CFU g-1. Significantly higher populations (p repens or E. tonophilum had also been inoculated into at least one of the three flours showing significantly higher numbers of CFU on DG18T agar. Analysis of collapsed data from all samples showed that DG18T agar was significantly better than DG18 or DG18I agars at p < 0.10 but not at p < 0.05. Coefficients of variation for reproducibility (among-laboratory variation) were 8.4%, 7.5% and 8.6%, respectively, for DG18, DG18T and DG18I agars. DG18I agar restricted colony development most, especially for Eurotium species. Naturally occurring Penicillium species grew equally well on DG18 and DG18T agars, whereas W. sebi grew well on all three media. DG18T agar was judged to be superior to DG18 and DG18I agars for enumerating fungi in wheat flours.

  1. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. 
A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in

  2. Using pattern enumeration to accelerate process development and ramp yield

    Science.gov (United States)

    Zhuang, Linda; Pang, Jenny; Xu, Jessy; Tsai, Mengfeng; Wang, Amy; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh; Ding, Hua

    2016-03-01

    During a new technology node process setup phase, foundries do not initially have enough product chip designs to conduct exhaustive process development. Different operational teams use manually designed simple test keys to set up their process flows and recipes. When the very first version of the design rule manual (DRM) is ready, foundries enter the process development phase where new experiment design data is manually created based on these design rules. However, these IP/test keys contain very uniform or simple design structures. Such designs normally do not contain critical design structures or process-unfriendly design patterns that pass design rule checks but are found to be less manufacturable. A method is therefore desired to generate exhaustive test patterns allowed by the design rules at the development stage, to verify the gap between design rules and process. This paper presents a novel method for generating test key patterns which contain known problematic patterns as well as any constructs that designers could possibly draw under the current design rules. The enumerated test key patterns contain the most critical design structures allowed by any particular design rule. A layout profiling method is used for design chip analysis to find potential weak points in new incoming products, so the fab can take preemptive action to avoid yield loss. This is achieved by comparing different products and leveraging knowledge learned from previously manufactured chips to find possible yield detractors.

  3. Development of a rapid approach for the enumeration of Escherichia coli in riverbed sediment: case study, the Apies River, South Africa

    CSIR Research Space (South Africa)

    Abia

    2015-12-01

    Full Text Available — Journal of Soils and Sediments. Development of a rapid approach for the enumeration of Escherichia coli in riverbed sediment: case study, the Apies River, South Africa. L. K. A. Abia; M. N. B. Momba, Department of Environmental, Water and Earth Science, Tshwane University...

  4. A Comparative Study of Enumeration Techniques for Free-Roaming Dogs in Rural Baramati, District Pune, India

    Directory of Open Access Journals (Sweden)

    Harish Kumar Tiwari

    2018-05-01

    Full Text Available The presence of unvaccinated free-roaming dogs (FRD) amidst human settlements is a major contributor to the high incidence of rabies in countries such as India, where the disease is endemic. Estimating FRD population size is crucial to the planning and evaluation of interventions, such as mass immunisation against rabies. Enumeration techniques for FRD are resource intensive and can vary from simple direct counts to statistically complex capture-recapture techniques primarily developed for ecological studies. In this study we compared eight capture-recapture enumeration methods (Lincoln–Petersen’s index, Chapman’s correction estimate, Beck’s method, the Schumacher-Eschmeyer method, the regression method, the mark-resight logit-normal method, Huggins closed capture models and the Application SuperDuplicates on-line tool) using direct count data collected from Shirsuphal village of Baramati town in Western India, to recommend a method which yields a reasonably accurate count to use for effective vaccination coverage against rabies with minimal resource inputs. A total of 263 unique dogs were sighted at least once over 6 observation occasions with no new dogs sighted on the 7th occasion. Besides this direct count, the methods that do not account for individual heterogeneity yielded population estimates in the range of 248–270, which likely underestimate the real FRD population size. Higher estimates were obtained using the Huggins Mh-Jackknife (437 ± 33), Huggins Mth-Chao (391 ± 26) and Huggins Mh-Chao (385 ± 30) models and the Application “SuperDuplicates” tool (392 ± 20), and were considered more robust. When the sampling effort was reduced to only two surveys, the Application SuperDuplicates online tool gave the closest estimate of 349 ± 36, which is 74% of the estimated highest population of free-roaming dogs in Shirsuphal village. This method may thus be considered the most reliable method for estimating the FRD population with
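    Two of the simpler estimators compared above have exact closed forms and are easy to sketch; the sighting counts below are hypothetical, not the Shirsuphal survey data:

```python
def lincoln_petersen(M: int, C: int, R: int) -> float:
    """Naive two-survey mark-resight estimate: N ≈ M*C/R, where M dogs
    were recorded on survey 1, C on survey 2, and R were seen on both."""
    return M * C / R

def chapman(M: int, C: int, R: int) -> float:
    """Chapman's bias-corrected variant, defined even when R == 0."""
    return (M + 1) * (C + 1) / (R + 1) - 1

# Hypothetical counts: 180 dogs photographed on day 1, 150 on day 2,
# 100 of which were resighted.
M, C, R = 180, 150, 100
print(lincoln_petersen(M, C, R), round(chapman(M, C, R), 1))
```

    Both estimators assume every dog is equally sightable; the abstract's point is precisely that such homogeneity assumptions bias the estimate downward, which is why the heterogeneity-aware Huggins models returned substantially larger populations.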

  5. Solving binary-state multi-objective reliability redundancy allocation series-parallel problem using efficient epsilon-constraint, multi-start partial bound enumeration algorithm, and DEA

    International Nuclear Information System (INIS)

    Khalili-Damghani, Kaveh; Amiri, Maghsoud

    2012-01-01

    In this paper, a procedure based on an efficient epsilon-constraint method and data envelopment analysis (DEA) is proposed for solving the binary-state multi-objective reliability redundancy allocation series-parallel problem (MORAP). In the first module, a set of qualified non-dominated solutions on the Pareto front of the binary-state MORAP is generated using an efficient epsilon-constraint method. To test the quality of the non-dominated solutions generated in this module, a multi-start partial bound enumeration algorithm is also proposed for MORAP. The performance of both procedures is compared using different metrics on a well-known benchmark instance. The statistical analysis shows that the proposed efficient epsilon-constraint method not only outperforms the multi-start partial bound enumeration algorithm but also improves the known upper bound of the benchmark instance. Then, in the second module, a DEA model is applied to prune the non-dominated solutions generated by the efficient epsilon-constraint method. This reduces the set of non-dominated solutions in a systematic manner and eases decision making in practical implementations. - Highlights: ► A procedure based on an efficient epsilon-constraint method and DEA was proposed for solving MORAP. ► The performance of the proposed procedure was compared with a multi-start PBEA. ► Methods were statistically compared using multi-objective metrics.
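    The epsilon-constraint idea — optimise one objective while bounding the other, then sweep the bound to trace the Pareto front — can be shown on a toy series-parallel redundancy problem small enough to enumerate exhaustively. The component reliabilities and costs below are invented; this is not the paper's benchmark instance:

```python
from itertools import product

# Toy series-parallel system: two subsystems in series, each with 1-4
# redundant components of (reliability r, unit cost c). Illustrative only.
subs = [(0.90, 2.0), (0.80, 3.0)]

def evaluate(k):
    """System reliability and cost for redundancy vector k."""
    rel = 1.0
    for (r, _), ki in zip(subs, k):
        rel *= 1.0 - (1.0 - r) ** ki          # parallel block survives
    cost = sum(c * ki for (_, c), ki in zip(subs, k))
    return rel, cost

designs = [evaluate(k) for k in product(range(1, 5), repeat=2)]

# Epsilon-constraint sweep: for each cost budget eps, keep the most
# reliable feasible design; the distinct winners trace the Pareto front.
front = set()
for eps in sorted({c for _, c in designs}):
    best = max((d for d in designs if d[1] <= eps), key=lambda d: d[0])
    front.add(best)

pareto = sorted(front, key=lambda d: d[1])
print(len(pareto))
```

    The resulting list is non-dominated by construction: moving down the front always trades lower cost for lower reliability. A DEA model, as in the second module of the paper, would then score these front points to prune the set further for the decision maker.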

  6. Blind Measurement Selection: A Random Matrix Theory Approach

    KAUST Repository

    Elkhalil, Khalil; Kammoun, Abla; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2016-01-01

    We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large-scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented.

  7. Automatic Slide-Loader Fluorescence Microscope for Discriminative Enumeration of Subseafloor Life

    Directory of Open Access Journals (Sweden)

    Fumio Inagaki

    2010-04-01

    Full Text Available The marine subsurface environment is considered the potentially largest ecosystem on Earth, harboring one-tenth of all living biota (Whitman et al., 1998) and comprising diverse microbial components (Inagaki et al., 2003, 2006; Teske, 2006; Inagaki and Nakagawa, 2008). In deep marine sediments, the discrimination of life is significantly more difficult than in surface sediments and terrestrial soils because buried cells generally have extremely low metabolic activities (D’Hondt et al., 2002, 2004), and a highly consolidated sediment matrix produces auto-fluorescence from diatomaceous spicules and other mineral particles (Kallmeyer et al., 2008). The cell abundance in marine subsurface sediments has conventionally been evaluated by acridine orange direct count (AODC; Cragg et al., 1995; Parkes et al., 2000) down to 1613 meters below the seafloor (mbsf) (Roussel et al., 2008). Since the cell-derived AO signals often fade out in a short exposure time, recognizing and counting cells require special training. Hence, such efforts to enumerate AO-stained cells from the subseafloor on photographic images have been difficult, and a verification of counts by other methods has been impossible. In addition, providing mean statistical values from low-biomass sedimentary habitats has been complicated by physical and time limitations, yet these habitats are considered critical for understanding the Earth’s biosphere close to the limits of habitable zones (Hoehler, 2004; D’Hondt et al., 2007).

  8. Effects of choice architecture and chef-enhanced meals on the selection and consumption of healthier school foods: a randomized clinical trial.

    Science.gov (United States)

    Cohen, Juliana F W; Richardson, Scott A; Cluggish, Sarah A; Parker, Ellen; Catalano, Paul J; Rimm, Eric B

    2015-05-01

    Little is known about the long-term effect of a chef-enhanced menu on healthier food selection and consumption in school lunchrooms. In addition, it remains unclear if extended exposure to other strategies to promote healthier foods (eg, choice architecture) also improves food selection or consumption. To evaluate the short- and long-term effects of chef-enhanced meals and extended exposure to choice architecture on healthier school food selection and consumption. A school-based randomized clinical trial was conducted during the 2011-2012 school year among 14 elementary and middle schools in 2 urban, low-income school districts (intent-to-treat analysis). Included in the study were 2638 students in grades 3 through 8 attending participating schools (38.4% of eligible participants). Schools were first randomized to receive a professional chef to improve school meal palatability (chef schools) or to a delayed intervention (control group). To assess the effect of choice architecture (smart café), all schools after 3 months were then randomized to the smart café intervention or to the control group. School food selection was recorded, and consumption was measured using plate waste methods. After 3 months, vegetable selection increased in chef vs control schools (odds ratio [OR], 1.75; 95% CI, 1.36-2.24), but there was no effect on the selection of other components or on meal consumption. After long-term or extended exposure to the chef or smart café intervention, fruit selection increased in the chef (OR, 3.08; 95% CI, 2.23-4.25), smart café (OR, 1.45; 95% CI, 1.13-1.87), and chef plus smart café (OR, 3.10; 95% CI, 2.26-4.25) schools compared with the control schools, and consumption increased in the chef schools (OR, 0.17; 95% CI, 0.03-0.30 cups/d). Vegetable selection increased in the chef (OR, 2.54; 95% CI, 1.83-3.54), smart café (OR, 1.91; 95% CI, 1.46-2.50), and chef plus smart café schools (OR, 7.38, 95% CI, 5.26-10.35) compared with the control schools.

  9. Modified random hinge transport mechanics and multiple scattering step-size selection in EGS5

    International Nuclear Information System (INIS)

    Wilderman, S.J.; Bielajew, A.F.

    2005-01-01

    The new transport mechanics in EGS5 allows for significantly longer electron transport step sizes and hence shorter computation times than required for identical problems in EGS4. But as with all Monte Carlo electron transport algorithms, certain classes of problems exhibit step-size dependencies even when operating within recommended ranges, sometimes making selection of step sizes a daunting task for novice users. Further contributing to this problem, because of the decoupling of multiple scattering and continuous energy loss in the dual random hinge transport mechanics of EGS5, there are two independent step sizes in EGS5, one for multiple scattering and one for continuous energy loss, each of which influences speed and accuracy in a different manner. Further, whereas EGS4 used a single value of fractional energy loss (ESTEPE) to determine step sizes at all energies, to increase performance by decreasing the amount of effort expended simulating lower energy particles, EGS5 permits the fractional energy loss values which are used to determine both the multiple scattering and continuous energy loss step sizes to vary with energy. This results in requiring the user to specify four fractional energy loss values when optimizing computations for speed. Thus, in order to simplify step-size selection and to mitigate step-size dependencies, a method has been devised to automatically optimize step-size selection based on a single material-dependent input related to the size of the problem tally region. In this paper we discuss the new transport mechanics in EGS5 and describe the automatic step-size optimization algorithm. (author)

  10. Evolving artificial metalloenzymes via random mutagenesis

    Science.gov (United States)

    Yang, Hao; Swartz, Alan M.; Park, Hyun June; Srivastava, Poonam; Ellis-Guardiola, Ken; Upp, David M.; Lee, Gihoon; Belsare, Ketaki; Gu, Yifan; Zhang, Chen; Moellering, Raymond E.; Lewis, Jared C.

    2018-03-01

    Random mutagenesis has the potential to optimize the efficiency and selectivity of protein catalysts without requiring detailed knowledge of protein structure; however, introducing synthetic metal cofactors complicates the expression and screening of enzyme libraries, and activity arising from free cofactor must be eliminated. Here we report an efficient platform to create and screen libraries of artificial metalloenzymes (ArMs) via random mutagenesis, which we use to evolve highly selective dirhodium cyclopropanases. Error-prone PCR and combinatorial codon mutagenesis enabled multiplexed analysis of random mutations, including at sites distal to the putative ArM active site that are difficult to identify using targeted mutagenesis approaches. Variants that exhibited significantly improved selectivity for each of the cyclopropane product enantiomers were identified, and higher activity than previously reported ArM cyclopropanases obtained via targeted mutagenesis was also observed. This improved selectivity carried over to other dirhodium-catalysed transformations, including N-H, S-H and Si-H insertion, demonstrating that ArMs evolved for one reaction can serve as starting points to evolve catalysts for others.

  11. Enumeration of Vibrio parahaemolyticus in the viable but nonculturable state using direct plate counts and recognition of individual gene fluorescence in situ hybridization.

    Science.gov (United States)

    Griffitt, Kimberly J; Noriea, Nicholas F; Johnson, Crystal N; Grimes, D Jay

    2011-05-01

    Vibrio parahaemolyticus is a gram-negative, halophilic bacterium indigenous to marine and estuarine environments, and it is capable of causing food- and water-borne illness in humans. It can also cause disease in marine animals, including cultured species. Currently, culture-based techniques are used for quantification of V. parahaemolyticus in environmental samples; however, these can be misleading as they fail to detect V. parahaemolyticus in a viable but nonculturable (VBNC) state, which leads to an underestimation of the population density. In this study, we used a novel fluorescence visualization technique, called recognition of individual gene fluorescence in situ hybridization (RING-FISH), which targets chromosomal DNA for enumeration. A polynucleotide probe labeled with Cyanine 3 (Cy3) was created corresponding to the ubiquitous V. parahaemolyticus gene that codes for thermolabile hemolysin (tlh). When coupled with the Kogure method to distinguish viable from dead cells, RING-FISH probes reliably enumerated total, viable V. parahaemolyticus. The probe was tested for sensitivity and specificity against a pure culture of tlh(+), tdh(-), trh(-) V. parahaemolyticus, pure cultures of Vibrio vulnificus, Vibrio harveyi, Vibrio alginolyticus and Vibrio fischeri, and a mixed environmental sample. This research will provide additional tools for a better understanding of the risk these environmental organisms pose to human health. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Comparing the mannitol-egg yolk-polymyxin agar plating method with the three-tube most-probable-number method for enumeration of Bacillus cereus spores in raw and high-temperature, short-time pasteurized milk.

    Science.gov (United States)

    Harper, Nigel M; Getty, Kelly J K; Schmidt, Karen A; Nutsch, Abbey L; Linton, Richard H

    2011-03-01

    The U.S. Food and Drug Administration's Bacteriological Analytical Manual recommends two enumeration methods for Bacillus cereus: (i) a standard plate count method with mannitol-egg yolk-polymyxin (MYP) agar and (ii) a most-probable-number (MPN) method with tryptic soy broth (TSB) supplemented with 0.1% polymyxin sulfate. This study compared the effectiveness of the MYP and MPN methods for detecting and enumerating B. cereus in raw and high-temperature, short-time pasteurized skim (0.5%), 2%, and whole (3.5%) bovine milk stored at 4°C for 96 h. Each milk sample was inoculated with B. cereus EZ-Spores and sampled at 0, 48, and 96 h after inoculation. There were no differences (P > 0.05) in B. cereus populations among sampling times for all milk types, so data were pooled to obtain overall mean values for each treatment. The overall B. cereus population mean of pooled sampling times for the MPN method (2.59 log CFU/ml) was greater (P < 0.05) than that obtained with the MYP plate count method. B. cereus populations in milk samples ranged from 2.36 to 3.46 and 2.66 to 3.58 log CFU/ml for inoculated milk treatments for the MYP plate count and MPN methods, respectively, which is below the level necessary for toxin production. The MPN method recovered more B. cereus, which makes it useful for validation research. However, the MYP plate count method for enumeration of B. cereus also had advantages, including its ease of use and faster time to results (2 versus 5 days for MPN).
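A three-tube MPN is normally read from the BAM lookup table, but Thomas's approximation gives a close closed-form estimate of the same quantity; a sketch (the function name and example tube pattern are illustrative):

```python
import math

def thomas_mpn(positives, tubes, grams):
    """Thomas's MPN approximation:

        MPN/g = P / sqrt(V_neg * V_total)

    where P is the total number of positive tubes, V_neg the grams of sample
    in negative tubes, and V_total the grams of sample in all tubes.
    (Undefined when every tube is positive, since V_neg would be zero.)"""
    p = sum(positives)
    v_total = sum(n * g for n, g in zip(tubes, grams))
    v_neg = sum((n - pos) * g for n, pos, g in zip(tubes, positives, grams))
    return p / math.sqrt(v_neg * v_total)
```

A 3-1-0 positive pattern across 0.1, 0.01 and 0.001 g portions (3 tubes each) gives roughly 46 MPN/g, close to the tabulated BAM value of about 43 MPN/g for that pattern.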

  13. Probabilistic pathway construction.

    Science.gov (United States)

    Yousofshahi, Mona; Lee, Kyongbum; Hassoun, Soha

    2011-07-01

    Expression of novel synthesis pathways in host organisms amenable to genetic manipulations has emerged as an attractive metabolic engineering strategy to overproduce natural products, biofuels, biopolymers and other commercially useful metabolites. We present a pathway construction algorithm for identifying viable synthesis pathways compatible with balanced cell growth. Rather than exhaustive exploration, we investigate probabilistic selection of reactions to construct the pathways. Three different selection schemes are investigated for the selection of reactions: high metabolite connectivity, low connectivity and uniformly random. For all case studies, which involved a diverse set of target metabolites, the uniformly random selection scheme resulted in the highest average maximum yield. When compared to an exhaustive search enumerating all possible reaction routes, our probabilistic algorithm returned nearly identical distributions of yields, while requiring far less computing time (minutes vs. years). The pathways identified by our algorithm have previously been confirmed in the literature as viable, high-yield synthesis routes. Prospectively, our algorithm could facilitate the design of novel, non-native synthesis routes by efficiently exploring the diversity of biochemical transformations in nature. Copyright © 2011 Elsevier Inc. All rights reserved.
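The probabilistic reaction-selection step can be sketched as a weighted draw over the current candidate reactions; the connectivity values and reaction names below are illustrative, not from the paper.

```python
import random

def pick_reaction(candidates, connectivity, scheme="uniform", rng=random):
    """Draw one candidate reaction under one of the three selection schemes:
    'high' weights a reaction by the connectivity of its metabolites,
    'low' by the inverse connectivity, and 'uniform' ignores connectivity."""
    if scheme == "high":
        weights = [connectivity[r] for r in candidates]
    elif scheme == "low":
        weights = [1.0 / connectivity[r] for r in candidates]
    else:
        weights = [1.0] * len(candidates)
    return rng.choices(candidates, weights=weights, k=1)[0]
```

Repeating the draw while a pathway is extended (and checking each partial pathway for compatibility with balanced growth, as the paper does) yields one sampled synthesis route per run; the abstract's finding is that the uniform scheme gave the highest average maximum yield.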

  14. Enumeration of phenanthrene-degrading bacteria by an overlayer technique and its use in evaluation of petroleum-contaminated sites

    International Nuclear Information System (INIS)

    Bogardt, A.H.; Hemmingsen, B.B.

    1992-01-01

    Bacteria that are capable of degrading polycyclic aromatic hydrocarbons were enumerated by incorporating soil and water dilutions together with fine particles of phenanthrene, a polycyclic aromatic hydrocarbon, into an agarose overlayer and pouring the mixture over a mineral salts underlayer. The phenanthrene-degrading bacteria embedded in the overlayer were recognized by a halo of clearing in the opaque phenanthrene layer. Diesel fuel- or creosote-contaminated soil and water that were undergoing bioremediation contained 6 x 10 6 to 100 x 10 6 phenanthrene-degrading bacteria per g and ca. 5 x 10 5 phenanthrene-degrading bacteria per ml, respectively, whereas samples from untreated polluted sites contained substantially lower numbers. Unpolluted soil and water contained no detectable phenanthrene degraders or only very modest numbers of these organisms

  15. Comparison of confirmed inactive and randomly selected compounds as negative training examples in support vector machine-based virtual screening.

    Science.gov (United States)

    Heikamp, Kathrin; Bajorath, Jürgen

    2013-07-22

    The choice of negative training data for machine learning is a little-explored issue in chemoinformatics. In this study, the influence of alternative sets of negative training data and different background databases on support vector machine (SVM) modeling and virtual screening has been investigated. Target-directed SVM models have been derived on the basis of differently composed training sets containing confirmed inactive molecules or randomly selected database compounds as negative training instances. These models were then applied to search background databases consisting of biological screening data or randomly assembled compounds for available hits. Negative training data were found to systematically influence compound recall in virtual screening. In addition, different background databases had a strong influence on the search results. Our findings also indicated that typical benchmark settings lead to an overestimation of SVM-based virtual screening performance compared to search conditions that are more relevant for practical applications.

  16. Variable Selection in Time Series Forecasting Using Random Forests

    Directory of Open Access Journals (Sweden)

    Hristos Tyralis

    2017-10-01

    Full Text Available Time series forecasting using machine learning algorithms has gained popularity recently. Random forest is a machine learning algorithm that has been applied to time series forecasting; however, most of its forecasting properties remain unexplored. Here we focus on assessing the performance of random forests in one-step forecasting using two large datasets of short time series, with the aim of suggesting an optimal set of predictor variables. Furthermore, we compare its performance to benchmarking methods. The first dataset is composed of 16,000 simulated time series from a variety of Autoregressive Fractionally Integrated Moving Average (ARFIMA) models. The second dataset consists of 135 mean annual temperature time series. The highest predictive performance of random forests is observed when using a low number of recent lagged predictor variables. This outcome could be useful in relevant future applications, with the prospect of achieving higher predictive accuracy.
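One-step forecasting with a random forest reduces to supervised learning on lagged copies of the series. A minimal helper for building that design matrix (the function name is mine); the resulting (X, y) could then be fed to, e.g., scikit-learn's RandomForestRegressor:

```python
def lag_matrix(series, n_lags):
    """Build (X, y) for one-step-ahead forecasting: each row of X holds the
    n_lags most recent values, and the matching y entry is the next value."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(list(series[t - n_lags:t]))
        y.append(series[t])
    return X, y
```

The abstract's finding suggests keeping n_lags small, i.e., feeding the forest only a few recent values rather than long histories.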

  17. The Long-Term Effectiveness of a Selective, Personality-Targeted Prevention Program in Reducing Alcohol Use and Related Harms: A Cluster Randomized Controlled Trial

    Science.gov (United States)

    Newton, Nicola C.; Conrod, Patricia J.; Slade, Tim; Carragher, Natacha; Champion, Katrina E.; Barrett, Emma L.; Kelly, Erin V.; Nair, Natasha K.; Stapinski, Lexine; Teesson, Maree

    2016-01-01

    Background: This study investigated the long-term effectiveness of Preventure, a selective personality-targeted prevention program, in reducing the uptake of alcohol, harmful use of alcohol, and alcohol-related harms over a 3-year period. Methods: A cluster randomized controlled trial was conducted to assess the effectiveness of Preventure.…

  18. Generating equilateral random polygons in confinement III

    International Nuclear Information System (INIS)

    Diao, Y; Ernst, C; Montemayor, A; Ziegler, U

    2012-01-01

    In this paper we continue our earlier studies (Diao et al 2011 J. Phys. A: Math. Theor. 44 405202, Diao et al J. Phys. A: Math. Theor. 45 275203) on the generation methods of random equilateral polygons confined in a sphere. The first half of this paper is concerned with the generation of confined equilateral random walks. We show that if the selection of a vertex is uniform subject to the position of its previous vertex and the confining condition, then the distributions of the vertices are not uniform, although there exists a distribution such that if the initial vertex is selected following this distribution, then all vertices of the random walk follow this same distribution. Thus in order to generate a confined equilateral random walk, the selection of a vertex cannot be uniform subject to the position of its previous vertex and the confining condition. We provide a simple algorithm capable of generating confined equilateral random walks whose vertex distribution is almost uniform in the confinement sphere. In the second half of this paper we show that any process generating confined equilateral random walks can be turned into a process generating confined equilateral random polygons with the property that the vertex distribution of the polygons approaches the vertex distribution of the walks as the polygons get longer and longer. In our earlier studies, the starting point of the confined polygon is fixed at the center of the sphere. The new approach here allows us to move the starting point of the confined polygon off the center of the sphere. (paper)
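The conditional-uniform generation rule discussed in the abstract can be sketched with rejection sampling. As the paper shows, this naive rule does not produce uniformly distributed vertices in the confinement sphere; the sketch below is only the baseline construction, made concrete.

```python
import math
import random

def random_unit_vector(rng=random):
    # Uniform direction on the unit sphere via a normalised Gaussian vector.
    while True:
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in v))
        if n > 1e-12:
            return [x / n for x in v]

def confined_walk(n_steps, radius, rng=random):
    """Equilateral random walk of unit steps starting at the sphere's center,
    confined to a sphere of the given radius (radius >= 1). Each step direction
    is resampled (rejection) until the new vertex stays inside the sphere --
    i.e. uniform subject to the previous vertex and the confining condition."""
    walk = [[0.0, 0.0, 0.0]]
    for _ in range(n_steps):
        while True:
            d = random_unit_vector(rng)
            nxt = [a + b for a, b in zip(walk[-1], d)]
            if math.sqrt(sum(x * x for x in nxt)) <= radius:
                walk.append(nxt)
                break
    return walk
```

The paper's contribution is precisely that this rule biases the vertex distribution, and that an initial-vertex distribution exists which the walk preserves; the authors' algorithm produces nearly uniform vertices instead.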

  19. Affinity selection of Nipah and Hendra virus-related vaccine candidates from a complex random peptide library displayed on bacteriophage virus-like particles

    Energy Technology Data Exchange (ETDEWEB)

    Peabody, David S.; Chackerian, Bryce; Ashley, Carlee; Carnes, Eric; Negrete, Oscar

    2017-01-24

    The invention relates to virus-like particles of bacteriophage MS2 (MS2 VLPs) displaying peptide epitopes or peptide mimics of epitopes of the Nipah Virus envelope glycoprotein that elicit an immune response against Nipah Virus upon vaccination of humans or animals. Affinity selection on Nipah Virus-neutralizing monoclonal antibodies using random sequence peptide libraries on MS2 VLPs selected peptides with sequence similarity to peptide sequences found within the envelope glycoprotein of Nipah itself, thus identifying the epitopes the antibodies recognize. The selected peptide sequences themselves are not necessarily identical in all respects to a sequence within the Nipah Virus glycoprotein, and therefore may be referred to as epitope mimics. VLPs displaying these epitope mimics can serve as vaccines. On the other hand, display of the corresponding wild-type sequence derived from Nipah Virus, corresponding to the epitope mapped by affinity selection, may also be used as a vaccine.

  20. Fast selection of miRNA candidates based on large-scale pre-computed MFE sets of randomized sequences.

    Science.gov (United States)

    Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter

    2014-01-13

    Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification.
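Under the normal model described above, scoring a candidate reduces to a one-sided z-test. A sketch (the function name is mine; mu and sigma are assumed to come from the interpolated, pre-computed tables for the candidate's nucleotide composition):

```python
import math

def mfe_p_value(mfe, mu, sigma):
    """One-sided P-value: the probability that a randomized sequence of the
    same nucleotide composition has an MFE at or below the candidate's,
    assuming the pre-computed MFE distribution is N(mu, sigma).
    Uses the normal CDF via the error function."""
    z = (mfe - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

A genuine pre-miRNA hairpin whose MFE sits several standard deviations below the mean of the randomized set scores a very small P-value and is kept as a candidate for experimental verification.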

  1. [The hazard of Legionella pneumophila in hospitals and selected public buildings].

    Science.gov (United States)

    Sikora, Agnieszka; Kozioł-Montewka, Maria; Wójtowicz-Bobin, Małgorzata; Gładysz, Iwona; Dobosz, Paulina

    2013-11-01

    Registered infections and epidemic outbreaks prompt the monitoring of potential reservoirs of Legionella infection. According to the Act of 29 March 2007 on the requirements for the quality of water intended for human consumption, hospitals are required to test for the presence and number of Legionella in their water systems. If L. pneumophila serogroup 1 (SG 1) is detected, or other serogroups exceed the permissible number, these bacteria must be eradicated from the water system. The aim of this study was to assess the degree of contamination of the water supply systems of selected public buildings and to analyze the effectiveness of disinfection methods for the elimination of L. pneumophila from hot water systems. The materials for this study were hot and cold water samples collected from the water supply systems of 23 different buildings. Enumeration of Legionella bacteria in water samples was performed by membrane filtration (MF) and/or surface inoculation according to the standards PN-ISO 11731:2002, "Water quality. Detection and enumeration of Legionella", and PN-EN ISO 11731-2:2008, "Water quality--Detection and enumeration of Legionella--Part 2: Direct membrane filtration method for waters with low bacterial counts". L. pneumophila was present in 164 samples of hot water (76.99%). All positive samples contained L. pneumophila SG 2-14 strains; the most virulent strain, L. pneumophila SG 1, was not detected. In 12 of the 23 buildings examined, L. pneumophila exceeded the acceptable level of 100 CFU/100 ml. The presence of L. pneumophila SG 2-14 in all examined buildings indicates a risk of infection and the need for permanent monitoring of the water supply systems. Thermal disinfection is the most common, inexpensive, and effective method of controlling L. pneumophila in the buildings examined, but it does not eliminate the bacterial biofilm; disinfection using filters is among the alternatives.

  2. Sampling solution traces for the problem of sorting permutations by signed reversals

    Science.gov (United States)

    2012-01-01

    Background Traditional algorithms to solve the problem of sorting by signed reversals output just one optimal solution while the space of all optimal solutions can be huge. A so-called trace represents a group of solutions which share the same set of reversals that must be applied to sort the original permutation following a partial ordering. By using traces, we therefore can represent the set of optimal solutions in a more compact way. Algorithms for enumerating the complete set of traces of solutions were developed. However, due to their exponential complexity, their practical use is limited to small permutations. A partial enumeration of traces is a sampling of the complete set of traces and can be an alternative for the study of distinct evolutionary scenarios of large permutations. Ideally, the sampling should be done uniformly from the space of all optimal solutions. This is however conjectured to be ♯P-complete. Results We propose and evaluate three algorithms for producing a sampling of the complete set of traces that instead can be shown in practice to preserve some of the characteristics of the space of all solutions. The first algorithm (RA) performs the construction of traces through a random selection of reversals on the list of optimal 1-sequences. The second algorithm (DFALT) consists in a slight modification of an algorithm that performs the complete enumeration of traces. Finally, the third algorithm (SWA) is based on a sliding window strategy to improve the enumeration of traces. All proposed algorithms were able to enumerate traces for permutations with up to 200 elements. Conclusions We analysed the distribution of the enumerated traces with respect to their height and average reversal length. Various works indicate that the reversal length can be an important aspect in genome rearrangements. The algorithms RA and SWA show a tendency to lose traces with high average reversal length. Such traces are however rare, and qualitatively our results remain representative of the solution space.

  3. Accessing completeness of pregnancy, delivery, and death registration by Accredited Social Health Activists [ASHA] in an innovative mHealth project in the tribal areas of Gujarat: A cross-sectional study

    Directory of Open Access Journals (Sweden)

    D Modi

    2016-01-01

    Full Text Available Background: The Innovative Mobile-phone Technology for Community Health Operation (ImTeCHO) is a mobile-phone application that helps Accredited Social Health Activists (ASHAs) achieve complete registration through the strategies employed during implementation, that is, linking ASHAs' incentives to digital records, regular feedback, onsite data entry, and demand generation among beneficiaries. Objective: To determine the proportion of pregnancies, deliveries, and infant deaths (events) registered through the ImTeCHO application against the actual number of events in a random sample of villages. Materials and Methods: Five representative villages were randomly selected from the ImTeCHO project area in the tribal areas of Gujarat, India to obtain the required sample of 98 recently delivered women. A household survey was done in the entire villages to enumerate each family and create a line-listing of events since January 2014; the line-listing was compared with the list of women registered through the ImTeCHO application. The proportion of events registered through the ImTeCHO application was compared against the actual number of events to find the sensitivity of the ImTeCHO application. Results: A total of 844 families were found during household enumeration. Out of the actual line-listing of pregnancies (N = 39), deliveries (N = 102), and infant deaths (N = 5) found during household enumeration, 38 (97.43%), 101 (99.01%), and 5 (100%) were registered by ASHAs through the ImTeCHO application. Conclusion: The use of mobile-phone technology and the strategies applied during the ImTeCHO implementation should be upscaled to supplement efforts to improve the completeness of registration.

  4. Varying levels of difficulty index of skills-test items randomly selected by examinees on the Korean emergency medical technician licensing examination.

    Science.gov (United States)

    Koh, Bongyeun; Hong, Sunggi; Kim, Soon-Sim; Hyun, Jin-Sook; Baek, Milye; Moon, Jundong; Kwon, Hayran; Kim, Gyoungyong; Min, Seonggi; Kang, Gu-Hyun

    2016-01-01

    The goal of this study was to characterize the difficulty index of the items in the skills test components of the class I and II Korean emergency medical technician licensing examination (KEMTLE), which requires examinees to select items randomly. The results of 1,309 class I KEMTLE examinations and 1,801 class II KEMTLE examinations in 2013 were subjected to analysis. Items from the basic and advanced skills test sections of the KEMTLE were compared to determine whether some were significantly more difficult than others. In the class I KEMTLE, all 4 of the items on the basic skills test showed significant variation in difficulty index (P<0.01), as well as 4 of the 5 items on the advanced skills test (P<0.05). In the class II KEMTLE, 4 of the 5 items on the basic skills test showed significantly different difficulty index (P<0.01), as well as all 3 of the advanced skills test items (P<0.01). In the skills test components of the class I and II KEMTLE, the procedure in which examinees randomly select questions should be revised to require examinees to respond to a set of fixed items in order to improve the reliability of the national licensing examination.

  5. Varying levels of difficulty index of skills-test items randomly selected by examinees on the Korean emergency medical technician licensing examination

    Directory of Open Access Journals (Sweden)

    Bongyeun Koh

    2016-01-01

    Full Text Available Purpose: The goal of this study was to characterize the difficulty index of the items in the skills test components of the class I and II Korean emergency medical technician licensing examination (KEMTLE), which requires examinees to select items randomly. Methods: The results of 1,309 class I KEMTLE examinations and 1,801 class II KEMTLE examinations in 2013 were subjected to analysis. Items from the basic and advanced skills test sections of the KEMTLE were compared to determine whether some were significantly more difficult than others. Results: In the class I KEMTLE, all 4 of the items on the basic skills test showed significant variation in difficulty index (P<0.01), as well as 4 of the 5 items on the advanced skills test (P<0.05). In the class II KEMTLE, 4 of the 5 items on the basic skills test showed significantly different difficulty index (P<0.01), as well as all 3 of the advanced skills test items (P<0.01). Conclusion: In the skills test components of the class I and II KEMTLE, the procedure in which examinees randomly select questions should be revised to require examinees to respond to a set of fixed items in order to improve the reliability of the national licensing examination.

  6. Comparison of the quantitative dry culture methods with both conventional media and most probable number method for the enumeration of coliforms and Escherichia coli/coliforms in food.

    Science.gov (United States)

    Teramura, H; Sota, K; Iwasaki, M; Ogihara, H

    2017-07-01

    Sanita-kun™ CC (coliform count) and EC (Escherichia coli/coliform count), sheet quantitative culture systems which can avoid chromogenic interference by lactase in food, were evaluated in comparison with conventional methods for these bacteria. Based on the results of inclusivity and exclusivity studies using 77 micro-organisms, the sensitivity and specificity of both Sanita-kun™ media met the criteria of ISO 16140. Both media were compared with deoxycholate agar, violet red bile agar, Merck Chromocult™ coliform agar (CCA), 3M Petrifilm™ CC and EC (PEC) and the 3-tube MPN, as reference methods, in 100 naturally contaminated food samples. The correlation coefficients of both Sanita-kun™ media for coliform detection were more than 0·95 for all comparisons. For E. coli detection, Sanita-kun™ EC was compared with CCA, PEC and MPN in 100 artificially contaminated food samples. The correlation coefficients for E. coli detection of Sanita-kun™ EC were more than 0·95 for all comparisons. There were no significant differences in any comparison when conducting a one-way analysis of variance (ANOVA). Both Sanita-kun™ media significantly inhibited colour interference by lactase when inhibition of enzymatic staining was assessed using 40 natural cheese samples spiked with coliform. Our results demonstrated that Sanita-kun™ CC and EC are suitable alternatives for the enumeration of coliforms and E. coli/coliforms, respectively, in a variety of foods, and specifically in fermented foods. Current chromogenic media for coliforms and Escherichia coli/coliforms suffer enzymatic coloration due to the breakdown of chromogenic substrates by food lactase. Novel sheet culture media with a film layer that avoids coloration by food lactase have been developed for the enumeration of coliforms and E. coli/coliforms, respectively. In this study, we demonstrated that these media had comparable performance with reference methods and less interference by food lactase.

  7. Comparison of culture media, simplate, and petrifilm for enumeration of yeasts and molds in food.

    Science.gov (United States)

    Taniwaki, M H; Silva, N; Banhe, A A; Iamanaka, B T

    2001-10-01

The efficacy of three culture media, dichloran rose bengal chloramphenicol (DRBC), dichloran 18% glycerol agar (DG18), and potato dextrose agar (PDA) supplemented with two antibiotics, was compared with the Simplate and Petrifilm techniques for mold and yeast enumeration. The following foods were analyzed: corn meal, wheat flour, cassava flour, bread crumbs, whole meal, sliced bread, ground peanuts, mozzarella cheese, grated parmesan cheese, cheese rolls, orange juice, pineapple pulp, pineapple cake, and mushrooms in conserve. Correlation coefficients of DRBC versus PDA and DG18 for recovering total mold and yeast counts from the composite of 14 foods indicated that the three media were generally equivalent. Correlation coefficients for Petrifilm versus the culture media were acceptable, although not as good as those between culture media. Correlation coefficients of Simplate versus DRBC, DG18, PDA, and Petrifilm for recovering total yeasts and molds from a composite of 11 foods demonstrated that there was no equivalence between the counts obtained by Simplate and the other culture media and Petrifilm, with significant differences observed for most of the foods analyzed.
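Comparisons like the one above hinge on correlating log-transformed plate counts between paired methods. A minimal sketch of that calculation (the CFU counts below are hypothetical, not taken from the study):

```python
import math

def log10_counts(cfu_counts):
    """Convert CFU counts to log10 values; plate counts are compared on a log scale."""
    return [math.log10(c) for c in cfu_counts]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired counts (CFU/g) from two media for the same five food samples
drbc = [1.2e3, 5.0e4, 3.1e2, 8.0e5, 2.2e4]
dg18 = [1.0e3, 6.3e4, 2.5e2, 7.1e5, 1.9e4]
r = pearson_r(log10_counts(drbc), log10_counts(dg18))
```

A coefficient above roughly 0.95, as reported for the media comparisons, indicates the two methods rank and scale samples nearly identically.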

  8. Enumeration of antigen-specific CD8+ T lymphocytes by single-platform, HLA tetramer-based flow cytometry: a European multicenter evaluation.

    Science.gov (United States)

    Heijnen, Ingmar A F M; Barnett, David; Arroz, Maria J; Barry, Simon M; Bonneville, Marc; Brando, Bruno; D'hautcourt, Jean-Luc; Kern, Florian; Tötterman, Thomas H; Marijt, Erik W A; Bossy, David; Preijers, Frank W M B; Rothe, Gregor; Gratama, Jan W

    2004-11-01

    HLA class I peptide tetramers represent powerful diagnostic tools for detection and monitoring of antigen-specific CD8(+) T cells. The impetus for the current multicenter study is the critical need to standardize tetramer flow cytometry if it is to be implemented as a routine diagnostic assay. Hence, the European Working Group on Clinical Cell Analysis set out to develop and evaluate a single-platform tetramer-based method that used cytomegalovirus (CMV) as the antigenic model. Absolute numbers of CMV-specific CD8(+) T cells were obtained by combining the percentage of tetramer-binding cells with the absolute CD8(+) T-cell count. Six send-outs of stabilized blood from healthy individuals or CMV-carrying donors with CMV-specific CD8(+) T-cell counts of 3 to 10 cells/microl were distributed to 7 to 16 clinical sites. These sites were requested to enumerate CD8(+) T cells and, in the case of CMV-positive donors, CMV-specific subsets on three separate occasions using the standard method. Between-site coefficients of variation of less than 10% (absolute CD8(+) T-cell counts) and approximately 30% (percentage and absolute numbers of CMV-specific CD8(+) T cells) were achieved. Within-site coefficients of variation were approximately 5% (absolute CD8(+) T-cell counts), approximately 9% (percentage CMV-specific CD8(+) T cells), and approximately 17% (absolute CMV-specific CD8(+) T-cell counts). The degree of variation tended to correlate inversely with the proportion of CMV-specific CD8(+) T-cell subsets. The single-platform MHC tetramer-based method for antigen-specific CD8(+) T-cell counting has been evaluated by a European group of laboratories and can be considered a reproducible assay for routine enumeration of antigen-specific CD8(+) T cells. (c) 2004 Wiley-Liss, Inc.
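The between-site and within-site variability figures above are coefficients of variation (SD as a percentage of the mean). A small sketch with hypothetical cell counts:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation: sample standard deviation as a percentage of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical absolute CMV-specific CD8+ T-cell counts (cells/ul) reported by five sites
site_counts = [5.1, 4.4, 6.0, 3.8, 5.5]
between_site_cv = cv_percent(site_counts)
```

A between-site CV of ~30% for antigen-specific counts, as in the study, means the site-to-site SD is about a third of the mean count.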

  9. A collaborative study on a Nordic standard protocol for detection and enumeration of thermotolerant Campylobacter in food (NMKL 119, 3. Ed., 2007)

    DEFF Research Database (Denmark)

    Rosenquist, Hanne; Bengtsson, Anja; Hansen, Tina Beck

    2007-01-01

    A Nordic standard protocol for detection and enumeration of thermotolerant Campylobacter in food has been elaborated (NMKL 119, 3. Ed., 2007). Performance and precision characteristics of this protocol were evaluated in a collaborative study with participation of 14 laboratories from seven European...... jejuni (SLV-542). Expected concentrations (95% C.I.) (cfu g(-1) or ml(-1)) of both strains in matrices were 0.6-1.4 and 23-60 for qualitative detection, and 0.6-1.4; 23-60; and 420-1200 for semi-quantitative detection. For quantitative determination, the expected concentrations of C. jejuni/C. coli were...

  10. Random survival forests for competing risks

    DEFF Research Database (Denmark)

    Ishwaran, Hemant; Gerds, Thomas A; Kogalur, Udaya B

    2014-01-01

    We introduce a new approach to competing risks using random forests. Our method is fully non-parametric and can be used for selecting event-specific variables and for estimating the cumulative incidence function. We show that the method is highly effective for both prediction and variable selection...

  11. An altered Pseudomonas diversity is recovered from soil by using nutrient-poor Pseudomonas-selective soil extract media

    DEFF Research Database (Denmark)

    Aagot, N.; Nybroe, O.; Nielsen, P.

    2001-01-01

    We designed five Pseudomonas-selective soil extract NAA media containing the selective properties of trimethoprim and sodium lauroyl sarcosine and 0 to 100% of the amount of Casamino Acids used in the classical Pseudomonas-selective Gould's S1 medium. All of the isolates were confirmed to be Pseu......We designed five Pseudomonas-selective soil extract NAA media containing the selective properties of trimethoprim and sodium lauroyl sarcosine and 0 to 100% of the amount of Casamino Acids used in the classical Pseudomonas-selective Gould's S1 medium. All of the isolates were confirmed....... Several of these analyses showed that the amount of Casamino Acids significantly influenced the diversity of the recovered Pseudomonas isolates. Furthermore, the data suggested that specific Pseudomonas subpopulations were represented on the nutrient-poor media. The NAA 1:100 medium, containing ca. 15 mg...... of organic carbon per liter, consistently gave significantly higher Pseudomonas CFU counts than Gould's S1 when tested on four Danish soils. NAA 1:100 may, therefore, be a better medium than Gould's S1 for enumeration and isolation of Pseudomonas from the low-nutrient soil environment....

  12. Using Random Numbers in Science Research Activities.

    Science.gov (United States)

    Schlenker, Richard M.; And Others

    1996-01-01

    Discusses the importance of science process skills and describes ways to select sets of random numbers for selection of subjects for a research study in an unbiased manner. Presents an activity appropriate for grades 5-12. (JRH)

  13. Effectiveness of sanitizing products on controlling selected pathogen surrogates on retail deli slicers.

    Science.gov (United States)

    Yeater, Michael C; Kirsch, Katie R; Taylor, T Matthew; Mitchell, Jeff; Osburn, Wesley N

    2015-04-01

    The objectives of this study were (i) to assess the efficacy of quaternary ammonium chloride-based wet foam (WF) and dry foam (DF) sanitizer systems (600 ppm) for reducing Listeria innocua (a nonpathogenic surrogate of Listeria monocytogenes) or a 100.0 μg/ml rifampin-resistant Salmonella Typhimurium LT2 (a nonpathogenic surrogate of Salmonella enterica serovar Typhimurium) on niche and transfer point areas of an unwashed retail deli slicer as compared with traditional chlorine (Cl(-)) treatment (200 ppm) and (ii) to compare sanitizer surface contact times (10 and 15 min) for pathogen surrogate control. Turkey frankfurter slurries inoculated with L. innocua or Salmonella Typhimurium were used to inoculate seven high-risk sites on a commercial slicer. After 30 min of bacterial attachment, slicers were dry wiped to remove excess food matter, followed by a randomly assigned sanitizer treatment. Surviving pathogen surrogate cells were enumerated on modified Oxford's agar not containing antimicrobic supplement (L. innocua) or on tryptic soy agar supplemented with 100 μg/ml rifampin (Salmonella Typhimurium LT2). Replicate-specific L. innocua and Salmonella Typhimurium reductions were calculated as log CFU per square centimeter of control minus log CFU per square centimeter of enumerated survivors for each site. For both organisms, all sanitizer treatments differed from each other, with Cl(-) producing the least reduction and WF the greatest reduction. A significant (P < 0.05) site-by-treatment interaction was observed. The results of the study indicate that quaternary ammonium chloride sanitizers (600 ppm) applied by both WF and DF were more effective at reducing L. innocua and Salmonella Typhimurium than a traditional Cl sanitizer (200 ppm) on unwashed slicer surfaces.
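The replicate-specific reductions described above are simple log10 differences between control and survivor counts. A sketch with hypothetical numbers:

```python
import math

def log_reduction(control_cfu_cm2, survivor_cfu_cm2):
    """Replicate-specific reduction: log10(control) - log10(survivors), per cm^2."""
    return math.log10(control_cfu_cm2) - math.log10(survivor_cfu_cm2)

# Hypothetical counts for one slicer site: 10^6 CFU/cm^2 on the control,
# 10^3 CFU/cm^2 surviving after a sanitizer treatment -> a 3-log reduction
reduction = log_reduction(1.0e6, 1.0e3)
```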

  14. Product-line selection and pricing with remanufacturing under availability constraints

    Science.gov (United States)

Aras, Necati; Esenduran, Gökçe; Altinel, I. Kuban

    2004-12-01

Product line selection and pricing are two crucial decisions for the profitability of a manufacturing firm. Remanufacturing, on the other hand, may be a profitable strategy that captures the remaining value in used products. In this paper we develop a mixed-integer nonlinear programming model from the perspective of an original equipment manufacturer (OEM). The objective of the OEM is to select products to manufacture and remanufacture among a set of given alternatives and simultaneously determine their prices so as to maximize its profit. It is assumed that the probability that a customer selects a product is proportional to its utility and inversely proportional to its price. The utility of a product is an increasing function of its perceived quality. In our base model, products are discriminated by their unit production costs and utilities. We also analyze a case where remanufacturing is limited by the available quantity of collected remanufacturable products. We show that the resulting problem decomposes into pricing and product line selection subproblems. The pricing problem is solved by a variant of the simplex search procedure which can also handle constraints, while complete enumeration and a genetic algorithm are used for the solution of the product line selection problem. A number of experiments are carried out to identify conditions under which it is economically viable for the firm to sell remanufactured products. We also determine the optimal utility and unit production cost values of a remanufactured product, which maximize the total profit of the OEM.
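The complete-enumeration side of such an approach can be sketched on a toy instance: choice probability proportional to utility and inversely proportional to price, with prices held fixed here for simplicity (all numbers hypothetical; the paper's actual model also optimizes prices):

```python
from itertools import combinations

def market_shares(products):
    """Choice probability proportional to utility and inversely proportional to price."""
    weights = [u / p for (u, p, c) in products]
    total = sum(weights)
    return [w / total for w in weights]

def profit(products, market_size):
    """Expected profit of offering a given product line at fixed prices."""
    shares = market_shares(products)
    return market_size * sum(s * (p - c) for s, (u, p, c) in zip(shares, products))

def best_product_line(candidates, market_size):
    """Complete enumeration over all non-empty subsets of candidate products."""
    best, best_profit = None, float("-inf")
    for k in range(1, len(candidates) + 1):
        for subset in combinations(candidates, k):
            pi = profit(subset, market_size)
            if pi > best_profit:
                best, best_profit = subset, pi
    return best, best_profit

# Hypothetical candidates as (utility, price, unit cost)
candidates = [(10.0, 5.0, 2.0), (6.0, 4.0, 3.0)]
line, pi = best_product_line(candidates, market_size=100)
```

Here offering only the first product earns more than adding the low-margin second product, which would cannibalize demand, illustrating why enumeration over subsets is needed.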

  15. Correlates of smoking with socioeconomic status, leisure time physical activity and alcohol consumption among Polish adults from randomly selected regions.

    Science.gov (United States)

    Woitas-Slubowska, Donata; Hurnik, Elzbieta; Skarpańska-Stejnborn, Anna

    2010-12-01

To determine the association between smoking status and leisure time physical activity (LTPA), alcohol consumption, and socioeconomic status (SES) among Polish adults. 466 randomly selected men and women (aged 18-66 years) responded to an anonymous questionnaire regarding smoking, alcohol consumption, LTPA, and SES. Multiple logistic regression was used to examine the association of smoking status with six socioeconomic measures, level of LTPA, and frequency and type of alcohol consumed. Smokers were defined as individuals smoking occasionally or daily. The odds of being a smoker were 9 times (men) and 27 times (women) higher among respondents who drink alcohol several times a week or every day in comparison to non-drinkers. The odds of smoking were also higher among men with low educational attainment than among those with high educational attainment (p = 0.007). Among women we observed that students were the most frequent smokers. Female students were almost three times more likely to smoke than non-professional women, and two times more likely than physical workers (p = 0.018). The findings of this study indicated that among randomly selected Polish men and women aged 18-66, smoking and alcohol consumption tended to cluster. These results imply that intervention strategies need to target multiple risk factors simultaneously. The highest risk of smoking was observed among low educated men, female students, and both men and women drinking alcohol several times a week or every day. Information on subgroups at high risk of smoking will help in planning future preventive strategies.

  16. Comparative assessment of antibiotic susceptibility of coagulase-negative staphylococci in biofilm versus planktonic culture as assessed by bacterial enumeration or rapid XTT colorimetry.

    Science.gov (United States)

    Cerca, Nuno; Martins, Silvia; Cerca, Filipe; Jefferson, Kimberly K; Pier, Gerald B; Oliveira, Rosário; Azeredo, Joana

    2005-08-01

    To quantitatively compare the antibiotic susceptibility of biofilms formed by the coagulase-negative staphylococci (CoNS) Staphylococcus epidermidis and Staphylococcus haemolyticus with the susceptibility of planktonic cultures. Several CoNS strains were grown planktonically or as biofilms to determine the effect of the mode of growth on the level of susceptibility to antibiotics with different mechanisms of action. The utility of a new, rapid colorimetric method that is based on the reduction of a tetrazolium salt (XTT) to measure cell viability was tested by comparison with standard bacterial enumeration techniques. A 6 h kinetic study was performed using dicloxacillin, cefazolin, vancomycin, tetracycline and rifampicin at the peak serum concentration of each antibiotic. In planktonic cells, inhibitors of cell wall synthesis were highly effective over a 3 h period. Biofilms were much less susceptible than planktonic cultures to all antibiotics tested, particularly inhibitors of cell wall synthesis. The susceptibility to inhibitors of protein and RNA synthesis was affected by the biofilm phenotype to a lesser degree. Standard bacterial enumeration techniques and the XTT method produced equivalent results both in biofilms and planktonic assays. This study provides a more accurate comparison between the antibiotic susceptibilities of planktonic versus biofilm populations, because the cell densities in the two populations were similar and because we measured the concentration required to inhibit bacterial metabolism rather than to eradicate the entire bacterial population. While the biofilm phenotype is highly resistant to antibiotics that target cell wall synthesis, it is fairly susceptible to antibiotics that target RNA and protein synthesis.

  17. A census-weighted, spatially-stratified household sampling strategy for urban malaria epidemiology

    Directory of Open Access Journals (Sweden)

    Slutsker Laurence

    2008-02-01

Background: Urban malaria is likely to become increasingly important as a consequence of the growing proportion of Africans living in cities. A novel sampling strategy was developed for urban areas to generate a sample simultaneously representative of population and inhabited environments. Such a strategy should facilitate analysis of important epidemiological relationships in this ecological context. Methods: Census maps and summary data for Kisumu, Kenya, were used to create a pseudo-sampling frame using the geographic coordinates of census-sampled structures. For every enumeration area (EA) designated as urban by the census (n = 535), a sample of structures equal to one-tenth the number of households was selected. In EAs designated as rural (n = 32), a geographically random sample totalling one-tenth the number of households was selected from a grid of points at 100 m intervals. The selected samples were cross-referenced to a geographic information system, and coordinates transferred to handheld global positioning units. Interviewers found the closest eligible household to the sampling point and interviewed the caregiver of an age-eligible child. Results: 4,336 interviews were completed in 473 of the 567 study area EAs from June 2002 through February 2003. EAs without completed interviews were randomly distributed, and non-response was approximately 2%. Mean distance from the assigned sampling point to the completed interview was 74.6 m, and was significantly less in urban than rural EAs, even when controlling for number of households. The selected sample had significantly more children and females of childbearing age than the general population, and fewer older individuals. Conclusion: This method selected a sample that was simultaneously population-representative and inclusive of important environmental variation. The use of a pseudo-sampling frame and pre-programmed handheld GPS units is more efficient and may yield a more complete sample than
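The per-EA sampling rule (a sample equal to one-tenth the number of households) can be sketched as follows; the structure IDs, household count, and rounding-up convention are hypothetical illustrations:

```python
import math
import random

def ea_sample_size(n_households):
    """One-tenth the number of households per enumeration area, rounded up."""
    return math.ceil(n_households / 10)

def draw_structures(structure_ids, n_households, rng):
    """Randomly select the per-EA sample of census-mapped structures."""
    k = min(ea_sample_size(n_households), len(structure_ids))
    return rng.sample(structure_ids, k)

rng = random.Random(42)
# Hypothetical EA: 57 households, 60 census-mapped structures
sample = draw_structures(list(range(60)), 57, rng)
```

Each selected structure's coordinates would then be loaded into a GPS unit, and the interviewer visits the closest eligible household to that point.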

  18. Algorithm for complete enumeration based on a stroke graph to solve the supply network configuration and operations scheduling problem

    Directory of Open Access Journals (Sweden)

    Julien Maheut

    2013-07-01

Purpose: The purpose of this paper is to present an algorithm that solves the supply network configuration and operations scheduling problem in a mass customization company that faces alternative operations for one specific tool machine order in a multiplant context. Design/methodology/approach: To achieve this objective, the supply chain network configuration and operations scheduling problem is presented. A model based on stroke graphs allows the design of an algorithm that enumerates all the feasible solutions. The algorithm considers the arrival of a new customized order proposal which has to be inserted into a scheduled program. A selection function is then used to choose the solutions to be simulated in a specific simulation tool implemented in a Decision Support System. Findings and Originality/value: The algorithm proves efficient at finding all feasible solutions when alternative operations must be considered. The stroke structure is successfully used to schedule operations when considering more than one manufacturing and supply option in each step. Research limitations/implications: This paper includes only the algorithm structure for a one-by-one, sequenced introduction of new products into the list of units to be manufactured. Therefore, the lot-sizing process is done on a lot-per-lot basis. Moreover, the validation analysis is done through a case study and no generalization can be made without risk. Practical implications: The result of this research would help stakeholders determine all the feasible and practical solutions for their problem. It would also allow assessing the total costs and delivery times of each solution. Moreover, the Decision Support System proves useful for assessing alternative solutions. Originality/value: This research offers a simple algorithm that helps solve the supply network configuration problem and, simultaneously, the scheduling problem by considering alternative operations.
The proposed system

  19. Effects of one versus two bouts of moderate intensity physical activity on selective attention during a school morning in Dutch primary schoolchildren: A randomized controlled trial.

    Science.gov (United States)

    Altenburg, Teatske M; Chinapaw, Mai J M; Singh, Amika S

    2016-10-01

    Evidence suggests that physical activity is positively related to several aspects of cognitive functioning in children, among which is selective attention. To date, no information is available on the optimal frequency of physical activity on cognitive functioning in children. The current study examined the acute effects of one and two bouts of moderate-intensity physical activity on children's selective attention. Randomized controlled trial (ISRCTN97975679). Thirty boys and twenty-six girls, aged 10-13 years, were randomly assigned to three conditions: (A) sitting all morning working on simulated school tasks; (B) one 20-min physical activity bout after 90min; and (C) two 20-min physical activity bouts, i.e. at the start and after 90min. Selective attention was assessed at five time points during the morning (i.e. at baseline and after 20, 110, 130 and 220min), using the 'Sky Search' subtest of the 'Test of Selective Attention in Children'. We used GEE analysis to examine differences in Sky Search scores between the three experimental conditions, adjusting for school, baseline scores, self-reported screen time and time spent in sports. Children who performed two 20-min bouts of moderate-intensity physical activity had significantly better Sky Search scores compared to children who performed one physical activity bout or remained seated the whole morning (B=-0.26; 95% CI=[-0.52; -0.00]). Our findings support the importance of repeated physical activity during the school day for beneficial effects on selective attention in children. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  20. Selection of independent components based on cortical mapping of electromagnetic activity

    Science.gov (United States)

    Chan, Hui-Ling; Chen, Yong-Sheng; Chen, Li-Fen

    2012-10-01

    Independent component analysis (ICA) has been widely used to attenuate interference caused by noise components from the electromagnetic recordings of brain activity. However, the scalp topographies and associated temporal waveforms provided by ICA may be insufficient to distinguish functional components from artifactual ones. In this work, we proposed two component selection methods, both of which first estimate the cortical distribution of the brain activity for each component, and then determine the functional components based on the parcellation of brain activity mapped onto the cortical surface. Among all independent components, the first method can identify the dominant components, which have strong activity in the selected dominant brain regions, whereas the second method can identify those inter-regional associating components, which have similar component spectra between a pair of regions. For a targeted region, its component spectrum enumerates the amplitudes of its parceled brain activity across all components. The selected functional components can be remixed to reconstruct the focused electromagnetic signals for further analysis, such as source estimation. Moreover, the inter-regional associating components can be used to estimate the functional brain network. The accuracy of the cortical activation estimation was evaluated on the data from simulation studies, whereas the usefulness and feasibility of the component selection methods were demonstrated on the magnetoencephalography data recorded from a gender discrimination study.

  1. Mirnacle: machine learning with SMOTE and random forest for improving selectivity in pre-miRNA ab initio prediction.

    Science.gov (United States)

    Marques, Yuri Bento; de Paiva Oliveira, Alcione; Ribeiro Vasconcelos, Ana Tereza; Cerqueira, Fabio Ribeiro

    2016-12-15

MicroRNAs (miRNAs) are key gene expression regulators in plants and animals. Therefore, miRNAs are involved in several biological processes, making the study of these molecules one of the most relevant topics of molecular biology nowadays. However, characterizing miRNAs in vivo is still a complex task. As a consequence, in silico methods have been developed to predict miRNA loci. A common ab initio strategy to find miRNAs in genomic data is to search for sequences that can fold into the typical hairpin structure of miRNA precursors (pre-miRNAs). The current ab initio approaches, however, have selectivity issues, i.e., a high number of false positives is reported, which can lead to laborious and costly attempts to provide biological validation. This study presents an extension of the ab initio method miRNAFold, with the aim of improving selectivity through machine learning techniques, namely, random forest combined with the SMOTE procedure that copes with imbalanced datasets. By comparing our method, termed Mirnacle, with other important approaches in the literature, we demonstrate that Mirnacle substantially improves selectivity without compromising sensitivity. For the three datasets used in our experiments, our method achieved at least 97% of sensitivity and could deliver a two-fold, 20-fold, and 6-fold increase in selectivity, respectively, compared with the best results of current computational tools. The extension of miRNAFold by the introduction of machine learning techniques significantly increases selectivity in pre-miRNA ab initio prediction, which optimally contributes to advanced studies on miRNAs, as the need of biological validations is diminished. Hopefully, new research, such as studies of severe diseases caused by miRNA malfunction, will benefit from the proposed computational tool.
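SMOTE balances a dataset by generating synthetic minority-class samples interpolated between a minority point and one of its nearest neighbours. A simplified, dependency-free sketch of that idea (not Mirnacle's actual implementation, which combines SMOTE with a random forest classifier):

```python
import random

def smote_like_oversample(minority, n_new, k, rng):
    """Generate synthetic minority samples by interpolating between a point and
    one of its k nearest neighbours (simplified SMOTE, pure Python)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbours of the chosen base point (excluding itself)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist2(base, p))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + lam * (n - b) for b, n in zip(base, nb)))
    return synthetic

rng = random.Random(0)
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_points = smote_like_oversample(minority, n_new=4, k=2, rng=rng)
```

The synthetic points lie on segments between existing minority samples, so they stay inside the minority class's convex hull rather than duplicating records.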

  2. Randomized algorithms in automatic control and data mining

    CERN Document Server

    Granichin, Oleg; Toledano-Kitai, Dvora

    2015-01-01

    In the fields of data mining and control, the huge amount of unstructured data and the presence of uncertainty in system descriptions have always been critical issues. The book Randomized Algorithms in Automatic Control and Data Mining introduces the readers to the fundamentals of randomized algorithm applications in data mining (especially clustering) and in automatic control synthesis. The methods proposed in this book guarantee that the computational complexity of classical algorithms and the conservativeness of standard robust control techniques will be reduced. It is shown that when a problem requires "brute force" in selecting among options, algorithms based on random selection of alternatives offer good results with certain probability for a restricted time and significantly reduce the volume of operations.

  3. The basic science and mathematics of random mutation and natural selection.

    Science.gov (United States)

    Kleinman, Alan

    2014-12-20

The mutation and natural selection phenomenon can and often does cause the failure of antimicrobial, herbicidal, pesticide and cancer treatments, which act as selection pressures. This phenomenon operates in a mathematically predictable manner which, when understood, leads to approaches that reduce and prevent the failure of these selection pressures. The mathematical behavior of mutation and selection is derived using the principles given by probability theory. The derivation of the equations describing the mutation and selection phenomenon is carried out in the context of an empirical example. Copyright © 2014 John Wiley & Sons, Ltd.

  4. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    Science.gov (United States)

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
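Variable importance of the kind used in these FS methods is often approximated by permutation importance: the increase in prediction error when one feature's values are shuffled. A minimal sketch of that idea, using a fitted linear model as a stand-in for the random forest (the data are synthetic; only feature 0 carries signal):

```python
import numpy as np

def permutation_importance(model_predict, X, y, rng, n_repeats=10):
    """Importance of each feature = mean increase in MSE after permuting that
    feature's column (a simple stand-in for random forest variable importance)."""
    base_mse = np.mean((model_predict(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's information
            increases.append(np.mean((model_predict(Xp) - y) ** 2) - base_mse)
        importances.append(float(np.mean(increases)))
    return importances

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 2.0 * X[:, 0]                             # only feature 0 drives the response
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # fitted linear surrogate model
imp = permutation_importance(lambda A: A @ beta, X, y, rng)
```

Permuting the informative feature inflates the error sharply, while permuting the noise feature leaves it essentially unchanged, so ranking by `imp` recovers the useful predictor.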

  5. RANDOM WALK HYPOTHESIS IN FINANCIAL MARKETS

    Directory of Open Access Journals (Sweden)

    Nicolae-Marius JULA

    2017-05-01

The random walk hypothesis states that stock market prices do not follow a predictable trajectory but are simply random. Before attempting to predict a possibly random data set, one should test for randomness, because, despite the power and complexity of the models used, the results cannot otherwise be trusted. There are several methods for testing this hypothesis, and the computational power provided by the R environment makes the researcher's work easier and cost-effective. The increasing power of computing and the continuous development of econometric tests should give potential investors new tools for selecting commodities and investing in efficient markets.
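One of the simplest such randomness checks is the lag-1 autocorrelation of returns, which should be near zero if prices follow a random walk. A sketch (the alternating series below is a deliberately non-random, mean-reverting counterexample):

```python
def lag1_autocorrelation(returns):
    """Lag-1 autocorrelation of a return series; values near zero are
    consistent with the random walk hypothesis for the underlying prices."""
    n = len(returns)
    mean = sum(returns) / n
    num = sum((returns[t] - mean) * (returns[t + 1] - mean) for t in range(n - 1))
    den = sum((r - mean) ** 2 for r in returns)
    return num / den

# A perfectly mean-reverting (non-random-walk) return series
alternating = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
r = lag1_autocorrelation(alternating)
```

A strongly negative value, as here, signals mean reversion; a strongly positive one signals momentum; either would contradict the random walk hypothesis.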

  6. The Goodness of Covariance Selection Problem from AUC Bounds

    OpenAIRE

    Khajavi, Navid Tafaghodi; Kuh, Anthony

    2016-01-01

We conduct a study of graphical models and discuss the quality of model selection approximation by formulating the problem as a detection problem and examining the area under the curve (AUC). We specifically look at the model selection problem for jointly Gaussian random vectors. For Gaussian random vectors, this problem simplifies to the covariance selection problem, which is widely discussed in the literature following Dempster [1]. In this paper, we give the definition for the correlation appro...

  7. Organic Ferroelectric-Based 1T1T Random Access Memory Cell Employing a Common Dielectric Layer Overcoming the Half-Selection Problem.

    Science.gov (United States)

    Zhao, Qiang; Wang, Hanlin; Ni, Zhenjie; Liu, Jie; Zhen, Yonggang; Zhang, Xiaotao; Jiang, Lang; Li, Rongjin; Dong, Huanli; Hu, Wenping

    2017-09-01

    Organic electronics based on poly(vinylidenefluoride/trifluoroethylene) (P(VDF-TrFE)) dielectric is facing great challenges in flexible circuits. As one indispensable part of integrated circuits, there is an urgent demand for low-cost and easy-fabrication nonvolatile memory devices. A breakthrough is made on a novel ferroelectric random access memory cell (1T1T FeRAM cell) consisting of one selection transistor and one ferroelectric memory transistor in order to overcome the half-selection problem. Unlike complicated manufacturing using multiple dielectrics, this system simplifies 1T1T FeRAM cell fabrication using one common dielectric. To achieve this goal, a strategy for semiconductor/insulator (S/I) interface modulation is put forward and applied to nonhysteretic selection transistors with high performances for driving or addressing purposes. As a result, high hole mobility of 3.81 cm 2 V -1 s -1 (average) for 2,6-diphenylanthracene (DPA) and electron mobility of 0.124 cm 2 V -1 s -1 (average) for N,N'-1H,1H-perfluorobutyl dicyanoperylenecarboxydiimide (PDI-FCN 2 ) are obtained in selection transistors. In this work, we demonstrate this technology's potential for organic ferroelectric-based pixelated memory module fabrication. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Random-walk simulation of selected aspects of dissipative collisions

    International Nuclear Information System (INIS)

    Toeke, J.; Gobbi, A.; Matulewicz, T.

    1984-11-01

    Internuclear thermal equilibrium effects and shell structure effects in dissipative collisions are studied numerically within the framework of the model of stochastic exchanges by applying the random-walk technique. Effective blocking of the drift through the mass flux induced by the temperature difference, while leaving the variances of the mass distributions unaltered is found possible, provided an internuclear potential barrier is present. Presence of the shell structure is found to lead to characteristic correlations between the consecutive exchanges. Experimental evidence for the predicted effects is discussed. (orig.)

  9. Random number generation and creativity.

    Science.gov (United States)

    Bains, William

    2008-01-01

    A previous paper suggested that humans can generate genuinely random numbers. I tested this hypothesis by repeating the experiment with a larger number of highly numerate subjects, asking them to call out a sequence of digits selected from 0 through 9. The resulting sequences were substantially non-random, with an excess of sequential pairs of numbers and a deficit of repeats of the same number, in line with previous literature. However, the previous literature suggests that humans generate random numbers with substantial conscious effort, and distractions which reduce that effort reduce the randomness of the numbers. I reduced my subjects' concentration by asking them to call out in another language, and with alcohol - neither affected the randomness of their responses. This suggests that the ability to generate random numbers is a 'basic' function of the human mind, even if those numbers are not mathematically 'random'. I hypothesise that there is a 'creativity' mechanism which, while not truly random, provides novelty as part of the mind's defence against closed programming loops, and that testing for the effects seen here in people more or less familiar with numbers or with spontaneous creativity could identify more features of this process. It is possible that training to perform better at simple random generation tasks could help to increase creativity, through training people to reduce the conscious mind's suppression of the 'spontaneous', creative response to new questions.
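The two departures from randomness reported here (excess of sequential pairs, deficit of repeats) are simple adjacent-pair statistics. A small illustrative check; the human-like generator below is an assumption for contrast, not the paper's protocol:

```python
import random

def pair_stats(digits):
    """Fraction of adjacent pairs that repeat (d, d) or step by one (|a-b| = 1).
    For a uniform random digit stream the expected fractions are 0.10 and 0.18."""
    pairs = list(zip(digits, digits[1:]))
    repeats = sum(a == b for a, b in pairs) / len(pairs)
    steps = sum(abs(a - b) == 1 for a, b in pairs) / len(pairs)
    return repeats, steps

random.seed(1)
uniform = [random.randrange(10) for _ in range(10_000)]
rep, step = pair_stats(uniform)
print(f"uniform:    repeats={rep:.3f} (expect 0.100), steps={step:.3f} (expect 0.180)")

# A caricature of the reported human bias: never repeat, always step by +/-1.
humanlike = [5]
for _ in range(9_999):
    humanlike.append((humanlike[-1] + random.choice([-1, 1])) % 10)
rep_h, step_h = pair_stats(humanlike)
print(f"human-like: repeats={rep_h:.3f}, steps={step_h:.3f}")
```

Comparing a subject's sequence against the 0.10/0.18 baselines is the essence of the non-randomness test described in the abstract.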

  10. Selection and characterization of DNA aptamers

    NARCIS (Netherlands)

    Ruigrok, V.J.B.

    2013-01-01

    This thesis focusses on the selection and characterisation of DNA aptamers and the various aspects related to their selection from large pools of randomized oligonucleotides. Aptamers are affinity tools that can specifically recognize and bind predefined target molecules; this ability, however,

  11. Pseudo-Random Number Generators

    Science.gov (United States)

    Howell, L. W.; Rheinfurth, M. H.

    1984-01-01

    Package features comprehensive selection of probabilistic distributions. Monte Carlo simulations are resorted to whenever systems studied are not amenable to deterministic analyses or when direct experimentation is not feasible. Random numbers having certain specified distribution characteristics are an integral part of such simulations. Package consists of a collection of "pseudorandom" number generators for use in Monte Carlo simulations.
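The kind of generator such a package collects can be illustrated with a minimal linear congruential generator plus inverse-transform sampling to realize a specified distribution. The constants below are the common Numerical Recipes choice, an assumption rather than what this package used:

```python
import math

class LCG:
    """Minimal linear congruential generator, x_{n+1} = (a*x_n + c) mod 2^32."""
    def __init__(self, seed=12345):
        self.state = seed

    def uniform(self):
        # Advance the state and scale to a float in [0, 1).
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state / 2**32

    def exponential(self, rate=1.0):
        # Inverse-transform sampling: -ln(1 - U) / rate is Exp(rate) distributed.
        return -math.log(1.0 - self.uniform()) / rate

rng = LCG(seed=42)
samples = [rng.exponential(rate=2.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(f"sample mean {mean:.3f} (theory: 1/rate = 0.500)")
```

The same inverse-transform trick extends to any distribution with an invertible CDF, which is how a library of "probabilistic distributions" can be built on one underlying uniform generator.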

  12. Survival of Lactobacillus reuteri DSM 17938 and Lactobacillus rhamnosus GG in the human gastrointestinal tract with daily consumption of a low-fat probiotic spread.

    Science.gov (United States)

    Dommels, Yvonne E M; Kemperman, Robèr A; Zebregs, Yvonne E M P; Draaisma, René B; Jol, Arne; Wolvers, Danielle A W; Vaughan, Elaine E; Albers, Ruud

    2009-10-01

    Probiotics are live microorganisms which, when administered in adequate amounts, confer a health benefit on the host. Therefore, probiotic strains should be able to survive passage through the human gastrointestinal tract. Human gastrointestinal tract survival of probiotics in a low-fat spread matrix has, however, never been tested. The objective of this randomized, double-blind, placebo-controlled human intervention study was to test the human gastrointestinal tract survival of Lactobacillus reuteri DSM 17938 and Lactobacillus rhamnosus GG after daily consumption of a low-fat probiotic spread by using traditional culturing, as well as molecular methods. Forty-two healthy human volunteers were randomly assigned to one of three treatment groups provided with 20 g of placebo spread (n = 13), 20 g of spread with a target dose of 1 x 10(9) CFU of L. reuteri DSM 17938 (n = 13), or 20 g of spread with a target dose of 5 x 10(9) CFU of L. rhamnosus GG (n = 16) daily for 3 weeks. Fecal samples were obtained before and after the intervention period. A significant increase, compared to the baseline, in the recovery of viable probiotic lactobacilli in fecal samples was demonstrated after 3 weeks of daily consumption of the spread containing either L. reuteri DSM 17938 or L. rhamnosus GG by selective enumeration. In the placebo group, no increase was detected. The results of selective enumeration were supported by quantitative PCR, detecting a significant increase in DNA resulting from the probiotics after intervention. Overall, our results indicate for the first time that low-fat spread is a suitable carrier for these probiotic strains.

  13. Refractive error and visual impairment in private school children in Ghana.

    Science.gov (United States)

    Kumah, Ben D; Ebri, Anne; Abdul-Kabir, Mohammed; Ahmed, Abdul-Sadik; Koomson, Nana Ya; Aikins, Samual; Aikins, Amos; Amedo, Angela; Lartey, Seth; Naidoo, Kovin

    2013-12-01

    To assess the prevalence of refractive error and visual impairment in private school children in Ghana. A random selection of geographically defined classes in clusters was used to identify a sample of school children aged 12 to 15 years in the Ashanti Region. Children in 60 clusters were enumerated and examined in classrooms. The examination included visual acuity, retinoscopy, autorefraction under cycloplegia, and examination of anterior segment, media, and fundus. For quality assurance, a random sample of children with reduced and normal vision were selected and re-examined independently. A total of 2454 children attending 53 private schools were enumerated, and of these, 2435 (99.2%) were examined. Prevalence of uncorrected, presenting, and best visual acuity of 20/40 or worse in the better eye was 3.7, 3.5, and 0.4%, respectively. Refractive error was the cause of reduced vision in 71.7% of 152 eyes, amblyopia in 9.9%, retinal disorders in 5.9%, and corneal opacity in 4.6%. Exterior and anterior segment abnormalities occurred in 43 (1.8%) children. Myopia (at least -0.50 D) in one or both eyes was present in 3.2% of children when measured with retinoscopy and in 3.4% measured with autorefraction. Myopia was not significantly associated with gender (P = 0.82). Hyperopia (+2.00 D or more) in at least one eye was present in 0.3% of children with retinoscopy and autorefraction. The prevalence of reduced vision in Ghanaian private school children due to uncorrected refractive error was low. However, the prevalence of amblyopia, retinal disorders, and corneal opacities indicate the need for early interventions.

  14. Model Selection with the Linear Mixed Model for Longitudinal Data

    Science.gov (United States)

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  15. A Novel Strategy for Detection and Enumeration of Circulating Rare Cell Populations in Metastatic Cancer Patients Using Automated Microfluidic Filtration and Multiplex Immunoassay.

    Directory of Open Access Journals (Sweden)

    Mark Jesus M Magbanua

    Full Text Available Size selection via filtration offers an antigen-independent approach for the enrichment of rare cell populations in blood of cancer patients. We evaluated the performance of a novel approach for multiplex rare cell detection in blood samples from metastatic breast (n = 19) and lung cancer patients (n = 21), and healthy controls (n = 30) using an automated microfluidic filtration and multiplex immunoassay strategy. Captured cells were enumerated after sequential staining for specific markers to identify circulating tumor cells (CTCs), circulating mesenchymal cells (CMCs), putative circulating stem cells (CSCs), and circulating endothelial cells (CECs). Preclinical validation experiments using cancer cells spiked into healthy blood demonstrated high recovery rate (mean = 85%) and reproducibility of the assay. In clinical studies, CTCs and CMCs were detected in 35% and 58% of cancer patients, respectively, and were largely absent from healthy controls (3%, p = 0.001). Mean levels of CTCs were significantly higher in breast than in lung cancer patients (p = 0.03). Fifty-three percent (53%) of cancer patients harbored putative CSCs, while none were detectable in healthy controls (p < 0.0001). In contrast, CECs were observed in both cancer and control groups. Direct comparison of CellSearch® vs. our microfluidic filter method revealed moderate correlation (R2 = 0.46, kappa = 0.47). Serial blood analysis in breast cancer patients demonstrated the feasibility of monitoring circulating rare cell populations over time. Simultaneous assessment of CTCs, CMCs, CSCs and CECs may provide new tools to study mechanisms of disease progression and treatment response/resistance.

  16. A Novel Strategy for Detection and Enumeration of Circulating Rare Cell Populations in Metastatic Cancer Patients Using Automated Microfluidic Filtration and Multiplex Immunoassay.

    Science.gov (United States)

    Magbanua, Mark Jesus M; Pugia, Michael; Lee, Jin Sun; Jabon, Marc; Wang, Victoria; Gubens, Matthew; Marfurt, Karen; Pence, Julia; Sidhu, Harwinder; Uzgiris, Arejas; Rugo, Hope S; Park, John W

    2015-01-01

    Size selection via filtration offers an antigen-independent approach for the enrichment of rare cell populations in blood of cancer patients. We evaluated the performance of a novel approach for multiplex rare cell detection in blood samples from metastatic breast (n = 19) and lung cancer patients (n = 21), and healthy controls (n = 30) using an automated microfluidic filtration and multiplex immunoassay strategy. Captured cells were enumerated after sequential staining for specific markers to identify circulating tumor cells (CTCs), circulating mesenchymal cells (CMCs), putative circulating stem cells (CSCs), and circulating endothelial cells (CECs). Preclinical validation experiments using cancer cells spiked into healthy blood demonstrated high recovery rate (mean = 85%) and reproducibility of the assay. In clinical studies, CTCs and CMCs were detected in 35% and 58% of cancer patients, respectively, and were largely absent from healthy controls (3%, p = 0.001). Mean levels of CTCs were significantly higher in breast than in lung cancer patients (p = 0.03). Fifty-three percent (53%) of cancer patients harbored putative CSCs, while none were detectable in healthy controls (p<0.0001). In contrast, CECs were observed in both cancer and control groups. Direct comparison of CellSearch® vs. our microfluidic filter method revealed moderate correlation (R2 = 0.46, kappa = 0.47). Serial blood analysis in breast cancer patients demonstrated the feasibility of monitoring circulating rare cell populations over time. Simultaneous assessment of CTCs, CMCs, CSCs and CECs may provide new tools to study mechanisms of disease progression and treatment response/resistance.

  17. The adverse effect of selective cyclooxygenase-2 inhibitor on random skin flap survival in rats.

    Directory of Open Access Journals (Sweden)

    Haiyong Ren

    Full Text Available BACKGROUND: Cyclooxygenase-2 (COX-2) inhibitors provide desired analgesic effects after injury or surgery, but evidence suggests they also attenuate wound healing. This study investigates the effect of a COX-2 inhibitor on random skin flap survival. METHODS: The McFarlane flap model was established in 40 rats evaluated in two groups; each group received the same volume of parecoxib or saline injection for 7 days. The necrotic area of the flap was measured, and specimens of the flap were stained with haematoxylin-eosin (HE) for histologic analysis. Immunohistochemical staining was performed to analyse the levels of VEGF and COX-2. RESULTS: 7 days after operation, the flap necrotic area ratio in the study group (66.65 ± 2.81%) was significantly larger than that of the control group (48.81 ± 2.33%) (P < 0.01). Histological analysis demonstrated angiogenesis, with mean vessel density per mm² being lower in the study group (15.4 ± 4.4) than in the control group (27.2 ± 4.1) (P < 0.05). The expression of COX-2 and VEGF protein in intermediate area II in the two groups was evaluated by immunohistochemistry. The expression of COX-2 in the study group was (1022.45 ± 153.1), and in the control group (2638.05 ± 132.2) (P < 0.01). The expression of VEGF in the study and control groups was (2779.45 ± 472.0) vs (4938.05 ± 123.6) (P < 0.01). In the COX-2 inhibitor group, the expressions of COX-2 and VEGF protein were remarkably down-regulated as compared with the control group. CONCLUSION: The selective COX-2 inhibitor had an adverse effect on random skin flap survival. Suppression of neovascularization induced by a low level of VEGF was supposed to be the biological mechanism.

  18. Bias in random forest variable importance measures: Illustrations, sources and a solution

    Directory of Open Access Journals (Sweden)

    Hothorn Torsten

    2007-01-01

    Full Text Available Background: Variable importance measures for random forests have been receiving increased attention as a means of variable selection in many classification tasks in bioinformatics and related scientific fields, for instance to select a subset of genetic markers relevant for the prediction of a certain disease. We show that random forest variable importance measures are a sensible means for variable selection in many applications, but are not reliable in situations where potential predictor variables vary in their scale of measurement or their number of categories. This is particularly important in genomics and computational biology, where predictors often include variables of different types, for example when predictors include both sequence data and continuous variables such as folding energy, or when amino acid sequence data show different numbers of categories. Results: Simulation studies are presented illustrating that, when random forest variable importance measures are used with data of varying types, the results are misleading because suboptimal predictor variables may be artificially preferred in variable selection. The two mechanisms underlying this deficiency are biased variable selection in the individual classification trees used to build the random forest on the one hand, and effects induced by bootstrap sampling with replacement on the other hand. Conclusion: We propose to employ an alternative implementation of random forests that provides unbiased variable selection in the individual classification trees. When this method is applied using subsampling without replacement, the resulting variable importance measures can be used reliably for variable selection even in situations where the potential predictor variables vary in their scale of measurement or their number of categories. The usage of both random forest algorithms and their variable importance measures in the R system for statistical computing is illustrated and

  19. Multi-Label Learning via Random Label Selection for Protein Subcellular Multi-Locations Prediction.

    Science.gov (United States)

    Wang, Xiao; Li, Guo-Zheng

    2013-03-12

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most of the existing protein subcellular localization methods are only used to deal with the single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they only adopt a simple strategy, that is, transforming the multi-location proteins to multiple proteins with single location, which doesn't take correlations among different subcellular locations into account. In this paper, a novel method named RALS (multi-label learning via RAndom Label Selection), is proposed to learn from multi-location proteins in an effective and efficient way. Through five-fold cross validation test on a benchmark dataset, we demonstrate our proposed method with consideration of label correlations obviously outperforms the baseline BR method without consideration of label correlations, indicating correlations among different subcellular locations really exist and contribute to improvement of prediction performance. Experimental results on two benchmark datasets also show that our proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting subcellular multi-locations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for the public usage.

  20. Promising Therapeutics with Natural Bioactive Compounds for Improving Learning and Memory — A Review of Randomized Trials

    Directory of Open Access Journals (Sweden)

    Jin-Yong Choi

    2012-09-01

    Full Text Available Cognitive disorders can be associated with brain trauma, neurodegenerative disease or as a part of physiological aging. Aging in humans is generally associated with deterioration of cognitive performance and, in particular, learning and memory. Different therapeutic approaches are available to treat cognitive impairment during physiological aging and neurodegenerative or psychiatric disorders. Traditional herbal medicine and numerous plants, either directly as supplements or indirectly in the form of food, improve brain functions including memory and attention. More than a hundred herbal medicinal plants have been traditionally used for learning and memory improvement, but only a few have been tested in randomized clinical trials. Here, we will enumerate those medicinal plants that show positive effects on various cognitive functions in learning and memory clinical trials. Moreover, besides natural products that show promising effects in clinical trials, we briefly discuss medicinal plants that have promising experimental data or initial clinical data and might have potential to reach a clinical trial in the near future.

  1. Detecting and enumerating soil-transmitted helminth eggs in soil: New method development and results from field testing in Kenya and Bangladesh.

    Directory of Open Access Journals (Sweden)

    Lauren Steinbaum

    2017-04-01

    Full Text Available Globally, about 1.5 billion people are infected with at least one species of soil-transmitted helminth (STH). Soil is a critical environmental reservoir of STH, yet there is no standard method for detecting STH eggs in soil. We developed a field method for enumerating STH eggs in soil and tested the method in Bangladesh and Kenya. The US Environmental Protection Agency (EPA) method for enumerating Ascaris eggs in biosolids was modified through a series of recovery efficiency experiments; we seeded soil samples with a known number of Ascaris suum eggs and assessed the effect of protocol modifications on egg recovery. We found the use of 1% 7X as a surfactant compared to 0.1% Tween 80 significantly improved recovery efficiency (two-sided t-test, t = 5.03, p = 0.007), while other protocol modifications, including different agitation and flotation methods, did not have a significant impact. Soil texture affected the egg recovery efficiency; sandy samples resulted in higher recovery compared to loamy samples processed using the same method (two-sided t-test, t = 2.56, p = 0.083). We documented a recovery efficiency of 73% for the final improved method using loamy soil in the lab. To field test the improved method, we processed soil samples from 100 households in Bangladesh and 100 households in Kenya from June to November 2015. The prevalence of any STH (Ascaris, Trichuris or hookworm) egg in soil was 78% in Bangladesh and 37% in Kenya. The median concentration of STH eggs in soil in positive samples was 0.59 eggs/g dry soil in Bangladesh and 0.15 eggs/g dry soil in Kenya. The prevalence of STH eggs in soil was significantly higher in Bangladesh than Kenya (chi-square, χ2 = 34.39, p < 0.001), as was the concentration (Mann-Whitney, z = 7.10, p < 0.001). This new method allows for detecting STH eggs in soil in low-resource settings and could be used for standardizing soil STH detection globally.

  2. Survivor bias in Mendelian randomization analysis

    DEFF Research Database (Denmark)

    Vansteelandt, Stijn; Dukes, Oliver; Martinussen, Torben

    2017-01-01

    Mendelian randomization studies employ genotypes as experimental handles to infer the effect of genetically modified exposures (e.g. vitamin D exposure) on disease outcomes (e.g. mortality). The statistical analysis of these studies makes use of the standard instrumental variables framework. Many of these studies focus on elderly populations, thereby ignoring the problem of left truncation, which arises due to the selection of study participants being conditional upon surviving up to the time of study onset. Such selection, in general, invalidates the assumptions on which the instrumental variables analysis rests. We show that Mendelian randomization studies of adult or elderly populations will therefore, in general, return biased estimates of the exposure effect when the considered genotype affects mortality; in contrast, standard tests of the causal null hypothesis that the exposure does not affect...

  3. High-Tg Polynorbornene-Based Block and Random Copolymers for Butanol Pervaporation Membranes

    Science.gov (United States)

    Register, Richard A.; Kim, Dong-Gyun; Takigawa, Tamami; Kashino, Tomomasa; Burtovyy, Oleksandr; Bell, Andrew

    Vinyl addition polymers of substituted norbornene (NB) monomers possess desirably high glass transition temperatures (Tg); however, until very recently, the lack of an applicable living polymerization chemistry has precluded the synthesis of such polymers with controlled architecture, or copolymers with controlled sequence distribution. We have recently synthesized block and random copolymers of NB monomers bearing hydroxyhexafluoroisopropyl and n-butyl substituents (HFANB and BuNB) via living vinyl addition polymerization with Pd-based catalysts. Both series of polymers were cast into the selective skin layers of thin film composite (TFC) membranes, and these organophilic membranes investigated for the isolation of n-butanol from dilute aqueous solution (model fermentation broth) via pervaporation. The block copolymers show well-defined microphase-separated morphologies, both in bulk and as the selective skin layers on TFC membranes, while the random copolymers are homogeneous. Both block and random vinyl addition copolymers are effective as n-butanol pervaporation membranes, with the block copolymers showing a better flux-selectivity balance. While polyHFANB has much higher permeability and n-butanol selectivity than polyBuNB, incorporating BuNB units into the polymer (in either a block or random sequence) limits the swelling of the polyHFANB and thereby improves the n-butanol pervaporation selectivity.

  4. Random coil chemical shifts in acidic 8 M urea: Implementation of random coil shift data in NMRView

    International Nuclear Information System (INIS)

    Schwarzinger, Stephan; Kroon, Gerard J.A.; Foss, Ted R.; Wright, Peter E.; Dyson, H. Jane

    2000-01-01

    Studies of proteins unfolded in acid or chemical denaturant can help in unraveling events during the earliest phases of protein folding. In order for meaningful comparisons to be made of residual structure in unfolded states, it is necessary to use random coil chemical shifts that are valid for the experimental system under study. We present a set of random coil chemical shifts obtained for model peptides under experimental conditions used in studies of denatured proteins. This new set, together with previously published data sets, has been incorporated into a software interface for NMRView, allowing selection of the random coil data set that fits the experimental conditions best

  5. Detection and Specific Enumeration of Multi-Strain Probiotics in the Lumen Contents and Mucus Layers of the Rat Intestine After Oral Administration.

    Science.gov (United States)

    Lee, Hee Ji; Orlovich, David A; Tagg, John R; Fawcett, J Paul

    2009-12-01

    Although the detection of viable probiotic bacteria following their ingestion and passage through the gastrointestinal tract (GIT) has been well documented, their mucosal attachment in vivo is more difficult to assess. In this study, we investigated the survival and mucosal attachment of multi-strain probiotics transiting the rat GIT. Rats were administered a commercial mixture of the intestinal probiotics Lactobacillus acidophilus LA742, Lactobacillus rhamnosus L2H and Bifidobacterium lactis HN019 and the oral probiotic Streptococcus salivarius K12 every 12 h for 3 days. Intestinal contents, mucus and faeces were tested 6 h, 3 days and 7 days after the last dose by strain-specific enumeration on selective media and by denaturing gradient gel electrophoresis. At 6 h, viable cells and DNA corresponding to all four probiotics were detected in the faeces and in both the lumen contents and mucus layers of the ileum and colon. Viable probiotic cells of B. lactis and L. rhamnosus were detected for 7 days and L. acidophilus for 3 days after the last dose. B. lactis and L. rhamnosus persisted in the ileal mucus and colon contents, whereas the retention of L. acidophilus appeared to be relatively higher in colonic mucus. No viable cells of S. salivarius K12 were detected in any of the samples at either day 3 or 7. The study demonstrates that probiotic strains of intestinal origin but not of oral origin exhibit temporary colonisation of the rat GIT and that these strains may have differing relative affinities for colonic and ileal mucosa.

  6. Discriminative Projection Selection Based Face Image Hashing

    Science.gov (United States)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
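The core idea, scoring the rows of a random projection matrix by a Fisher-style criterion and keeping the most discriminative rows per user, can be sketched as follows. The dimensions, the two-class setup, and the variance-based Fisher score are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical users' face-feature vectors (several noisy samples each).
d, n = 64, 20
base_a, base_b = rng.normal(size=d), rng.normal(size=d)
user_a = base_a + 0.3 * rng.normal(size=(n, d))
user_b = base_b + 0.3 * rng.normal(size=(n, d))

# Candidate random projection matrix; each row is one projection direction.
P = rng.normal(size=(200, d))
ya, yb = user_a @ P.T, user_b @ P.T  # projected samples, shape (n, 200)

# Fisher criterion per row: between-class separation over within-class spread.
fisher = (ya.mean(0) - yb.mean(0)) ** 2 / (ya.var(0) + yb.var(0))
selected = np.argsort(fisher)[-32:]  # keep the 32 most discriminative rows
P_user = P[selected]                 # user-dependent projection matrix

print(P_user.shape)
```

Because the surviving rows depend on the enrolled user's own samples, the resulting hash is user-dependent, which is the property the paper exploits for verification.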

  7. The effect of selection on genetic parameter estimates

    African Journals Online (AJOL)

    Unknown

    The South African Journal of Animal Science is available online at ... A simulation study was carried out to investigate the effect of selection on the estimation of genetic ... The model contained a fixed effect, random genetic and random.

  8. A theory for the origin of a self-replicating chemical system. I - Natural selection of the autogen from short, random oligomers

    Science.gov (United States)

    White, D. H.

    1980-01-01

    A general theory is presented for the origin of a self-replicating chemical system, termed an autogen, which is capable of both crude replication and translation (protein synthesis). The theory requires the availability of free energy and monomers to the system, a significant background low-yield synthesis of kinetically stable oligopeptides and oligonucleotides, the localization of the oligomers, crude oligonucleotide selectivity of amino acids during oligopeptide synthesis, crude oligonucleotide replication, and two short peptide families which catalyze replication and translation, to produce a localized group of at least one copy each of two protogenes and two protoenzymes. The model posits a process of random oligomerization, followed by the random nucleation of functional components and the rapid autocatalytic growth of the functioning autogen to macroscopic amounts, to account for the origin of the first self-replicating system. Such a process contains steps of such high probability and short time periods that it is suggested that the emergence of an autogen in a laboratory experiment of reasonable time scale may be possible.

  9. Expansion and Compression of Time Correlate with Information Processing in an Enumeration Task.

    Directory of Open Access Journals (Sweden)

    Andreas Wutz

    Full Text Available Perception of temporal duration is subjective and is influenced by factors such as attention and context. For example, unexpected or emotional events are often experienced as if time subjectively expands, suggesting that the amount of information processed in a unit of time can be increased. Time dilation effects have been measured with an oddball paradigm in which an infrequent stimulus is perceived to last longer than standard stimuli in the rest of the sequence. Likewise, time compression for the oddball occurs when the duration of the standard items is relatively brief. Here, we investigated whether the amount of information processing changes when time is perceived as distorted. On each trial, an oddball stimulus of varying numerosity (1-14 items) and duration was presented along with standard items that were either short (70 ms) or long (1050 ms). Observers were instructed to count the number of dots within the oddball stimulus and to judge its relative duration with respect to the standards on that trial. Consistent with previous results, oddballs were reliably perceived as temporally distorted: expanded for longer standard stimuli blocks and compressed for shorter standards. The occurrence of these distortions of time perception correlated with perceptual processing; i.e. enumeration accuracy increased when time was perceived as expanded and decreased with temporal compression. These results suggest that subjective time distortions are not epiphenomenal, but reflect real changes in sensory processing. Such short-term plasticity in information processing rate could be evolutionarily advantageous in optimizing perception and action during critical moments.

  10. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.

  11. Selection gradients, the opportunity for selection, and the coefficient of determination.

    Science.gov (United States)

    Moorad, Jacob A; Wade, Michael J

    2013-03-01

    We derive the relationship between R² (the coefficient of determination), selection gradients, and the opportunity for selection for univariate and multivariate cases. Our main result is to show that the portion of the opportunity for selection that is caused by variation for any trait is equal to the product of its selection gradient and its selection differential. This relationship is a corollary of the first and second fundamental theorems of natural selection, and it permits one to investigate the portions of the total opportunity for selection that are involved in directional selection, stabilizing (and diversifying) selection, and correlational selection, which is important to morphological integration. It also allows one to determine the fraction of fitness variation not explained by variation in measured phenotypes and therefore attributable to random (or, at least, unknown) influences. We apply our methods to a human data set to show how sex-specific mating success as a component of fitness variance can be decoupled from that owing to prereproductive mortality. By quantifying linear sources of sexual selection and quadratic sources of sexual selection, we illustrate that the former is stronger in males, while the latter is stronger in females.
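The main identity is straightforward to check numerically: with relative fitness w = W / mean(W), the selection differential is S = cov(w, z), the gradient is beta = S / var(z), the opportunity for selection is I = var(w), and the trait's share of I is beta * S, so R² = beta * S / I in the univariate case. A small dependency-free sketch of this decomposition (illustrative, not the authors' code):

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def selection_decomposition(trait, fitness):
    """Univariate decomposition: with relative fitness w = W / mean(W),
    S = cov(w, z) is the selection differential, beta = S / var(z) the
    selection gradient, and I = var(w) the opportunity for selection.
    The trait-explained share of I is beta * S, so R2 = beta * S / I."""
    wbar = mean(fitness)
    w = [f / wbar for f in fitness]
    S = cov(w, trait)
    beta = S / cov(trait, trait)
    I = cov(w, w)
    return {"S": S, "beta": beta, "I": I, "R2": beta * S / I}
```

When fitness is an exact linear function of the trait, the trait accounts for the entire opportunity for selection and R² = 1.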

  12. Nitrates and bone turnover (NABT) - trial to select the best nitrate preparation: study protocol for a randomized controlled trial.

    Science.gov (United States)

    Bucur, Roxana C; Reid, Lauren S; Hamilton, Celeste J; Cummings, Steven R; Jamal, Sophie A

    2013-09-08

    'comparisons with the best' approach for data analyses, as this strategy allows practical considerations of ease of use and tolerability to guide selection of the preparation for future studies. Data from this protocol will be used to develop a randomized, controlled trial of nitrates to prevent osteoporotic fractures. ClinicalTrials.gov Identifier: NCT01387672. Controlled-Trials.com: ISRCTN08860742.

  13. Enumeration of CD4 and CD8 T-cells in HIV infection in Zimbabwe using a manual immunocytochemical method

    DEFF Research Database (Denmark)

    Gomo, E; Ndhlovu, P; Vennervald, B J

    2001-01-01

    OBJECTIVES: To enumerate CD4 and CD8 T-cells using the simple and cheap immuno-alkaline phosphatase (IA) method and to compare it with flow cytometry (FC); and to study the effects of duration of sample storage on the IA method results. DESIGN: Method comparison study. SETTING: Blair Research Laboratory, Harare, Zimbabwe. SUBJECTS: 41 HIV positive and 11 HIV negative men and women from Harare participating in HIV studies at Blair Research Laboratory, Zimbabwe. MAIN OUTCOME MEASURES: CD4 and CD8 T-cell counts by FC and the IA method. RESULTS: The IA method and FC were highly correlated for CD4 counts (Spearman rs = 0.91), CD4 percentage (rs = 0.84), CD8 count (rs = 0.83), CD8 percentage (rs = 0.96) and CD4/CD8 ratio (rs = 0.89). However, CD4 cell counts and percentage measured by the IA method were (mean difference +/- SE) 133 +/- 24 cells/microL and 6.7 +/- 1.1% higher than those...

  14. Performance of Universal Adhesive in Primary Molars After Selective Removal of Carious Tissue: An 18-Month Randomized Clinical Trial.

    Science.gov (United States)

    Lenzi, Tathiane Larissa; Pires, Carine Weber; Soares, Fabio Zovico Maxnuck; Raggio, Daniela Prócida; Ardenghi, Thiago Machado; de Oliveira Rocha, Rachel

    2017-09-15

    To evaluate the 18-month clinical performance of a universal adhesive, applied under different adhesion strategies, after selective carious tissue removal in primary molars. Forty-four subjects (five to 10 years old) contributed 90 primary molars presenting moderately deep dentin carious lesions on occlusal or occluso-proximal surfaces, which were randomly assigned following either the self-etch or etch-and-rinse protocol of Scotchbond Universal Adhesive (3M ESPE). Resin composite was incrementally inserted for all restorations. Restorations were evaluated at one, six, 12, and 18 months using the modified United States Public Health Service criteria. Survival estimates for restorations' longevity were evaluated using the Kaplan-Meier method, and multivariate Cox regression analysis with shared frailty was used to assess the factors associated with failures. Adhesion strategy did not influence the restorations' longevity (P = 0.06; 72.2 percent and 89.7 percent with etch-and-rinse and self-etch mode, respectively). Self-etch and etch-and-rinse strategies did not influence the clinical behavior of the universal adhesive used in primary molars after selective carious tissue removal, although there was a tendency for a better outcome with the self-etch strategy.

  15. Analysis of swaps in Radix selection

    DEFF Research Database (Denmark)

    Elmasry, Amr Ahmed Abd Elmoneim; Mahmoud, Hosam

    2011-01-01

    Radix Sort is a sorting algorithm based on analyzing digital data. We study the number of swaps made by Radix Select (a one-sided version of Radix Sort) to find an element with a randomly selected rank. This kind of grand average provides a smoothing over all individual distributions for specific...
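Radix Select can be sketched as a one-sided, bit-by-bit partitioning: partition the array on the current most significant bit (zero-bits to the front, one-bits to the back, each crossing pair costing one swap), then descend only into the side containing the target rank. A minimal sketch for non-negative integers, counting the swaps the paper analyzes (illustrative, not the authors' exact formulation):

```python
def radix_select(a, k, bits=16):
    """Find the element of rank k (0-based) in a list of non-negative
    ints below 2**bits, radix-style: partition on the current bit
    (zero-bits to the front, one-bits to the back), then recurse only
    into the side containing rank k.  Also returns the number of swaps,
    the cost measure under study."""
    a = list(a)
    swaps = 0
    lo, hi, bit = 0, len(a) - 1, bits - 1
    while bit >= 0 and lo < hi:
        i, j = lo, hi
        while i <= j:
            while i <= j and not (a[i] >> bit) & 1:
                i += 1              # a[i] already in the zero bucket
            while i <= j and (a[j] >> bit) & 1:
                j -= 1              # a[j] already in the one bucket
            if i < j:
                a[i], a[j] = a[j], a[i]
                swaps += 1
                i, j = i + 1, j - 1
        # Zero bucket is a[lo..j], one bucket is a[i..hi]; keep the one
        # containing rank k and move on to the next lower bit.
        if k <= j:
            hi = j
        else:
            lo = i
        bit -= 1
    return a[k], swaps
```

For example, `radix_select([5, 3, 8, 1, 9, 2], 2)` returns the rank-2 element 3 together with the swap count.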

  16. Integral Histogram with Random Projection for Pedestrian Detection.

    Directory of Open Access Journals (Sweden)

    Chang-Hua Liu

    Full Text Available In this paper, we present a systematic study that reports several insights into the HOG, one of the most widely used features in modern computer vision and image processing applications. We first show that its gradient magnitudes can be randomly projected with a random matrix. To handle over-fitting, an integral histogram based on the differences of randomly selected blocks is proposed. The experiments show that both the random projection and the integral histogram clearly outperform the HOG feature. Finally, the two ideas are combined into a new descriptor, termed IHRP, which outperforms the HOG feature with fewer dimensions and higher speed.
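The random-projection idea can be illustrated generically: multiplying a feature vector by a Gaussian random matrix scaled by 1/sqrt(out_dim) approximately preserves Euclidean distances (Johnson-Lindenstrauss). The sketch below is a plain random projection, not the paper's IHRP descriptor or its integral-histogram component:

```python
import math
import random

def random_projection(features, out_dim, seed=0):
    """Project a feature vector (e.g. HOG gradient magnitudes) down to
    out_dim dimensions with a Gaussian random matrix, scaled by
    1/sqrt(out_dim) so Euclidean norms are preserved in expectation
    (Johnson-Lindenstrauss).  A generic sketch, not the paper's IHRP."""
    rng = random.Random(seed)
    d = len(features)
    rows = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(out_dim)]
    scale = 1.0 / math.sqrt(out_dim)
    return [scale * sum(row[j] * features[j] for j in range(d)) for row in rows]
```

Because the projection is linear and the matrix is fixed by the seed, scaling the input scales the output by the same factor.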

  17. Elucidating the genotype-phenotype map by automatic enumeration and analysis of the phenotypic repertoire.

    Science.gov (United States)

    Lomnitz, Jason G; Savageau, Michael A

    The gap between genotype and phenotype is filled by complex biochemical systems most of which are poorly understood. Because these systems are complex, it is widely appreciated that quantitative understanding can only be achieved with the aid of mathematical models. However, formulating models and measuring or estimating their numerous rate constants and binding constants is daunting. Here we present a strategy for automating difficult aspects of the process. The strategy, based on a system design space methodology, is applied to a class of 16 designs for a synthetic gene oscillator that includes seven designs previously formulated on the basis of experimentally measured and estimated parameters. Our strategy provides four important innovations by automating: (1) enumeration of the repertoire of qualitatively distinct phenotypes for a system; (2) generation of parameter values for any particular phenotype; (3) simultaneous realization of parameter values for several phenotypes to aid visualization of transitions from one phenotype to another, in critical cases from functional to dysfunctional; and (4) identification of ensembles of phenotypes whose expression can be phased to achieve a specific sequence of functions for rationally engineering synthetic constructs. Our strategy, applied to the 16 designs, reproduced previous results and identified two additional designs capable of sustained oscillations that were previously missed. Starting with a system's relatively fixed aspects, its architectural features, our method enables automated analysis of nonlinear biochemical systems from a global perspective, without first specifying parameter values. The examples presented demonstrate the efficiency and power of this automated strategy.

  18. Unwilling or Unable to Cheat? Evidence from a Randomized Tax Audit Experiment in Denmark

    OpenAIRE

    Henrik J. Kleven; Martin B. Knudsen; Claus T. Kreiner; Søren Pedersen; Emmanuel Saez

    2010-01-01

    This paper analyzes a randomized tax enforcement experiment in Denmark. In the base year, a stratified and representative sample of over 40,000 individual income tax filers was selected for the experiment. Half of the tax filers were randomly selected to be thoroughly audited, while the rest were deliberately not audited. The following year, "threat-of-audit" letters were randomly assigned and sent to tax filers in both groups. Using comprehensive administrative tax data, we present four main...

  19. Detecting negative selection on recurrent mutations using gene genealogy

    Science.gov (United States)

    2013-01-01

    Background: Whether or not a mutant allele in a population is under selection is an important issue in population genetics, and various neutrality tests have been invented so far to detect selection. However, detection of negative selection has been notoriously difficult, partly because negatively selected alleles are usually rare in the population and have little impact on either population dynamics or the shape of the gene genealogy. Recently, through studies of genetic disorders and genome-wide analyses, many structural variations were shown to occur recurrently in the population. Such “recurrent mutations” might be revealed as deleterious by exploiting the signal of negative selection in the gene genealogy enhanced by their recurrence. Results: Motivated by the above idea, we devised two new test statistics. One is the total number of mutants at a recurrently mutating locus among sampled sequences, which is tested conditionally on the number of forward mutations mapped on the sequence genealogy. The other is the size of the most common class of identical-by-descent mutants in the sample, again tested conditionally on the number of forward mutations mapped on the sequence genealogy. To examine the performance of these two tests, we simulated recurrently mutated loci each flanked by sites with neutral single nucleotide polymorphisms (SNPs), with no recombination. Using neutral recurrent mutations as null models, we attempted to detect deleterious recurrent mutations. Our analyses demonstrated the high power of our new tests under constant population size, as well as their moderate power to detect selection in expanding populations. We also devised a new maximum parsimony algorithm that, given the states of the sampled sequences at a recurrently mutating locus and an incompletely resolved genealogy, enumerates mutation histories with a minimum number of mutations while partially resolving genealogical relationships when necessary. Conclusions: With their

  20. Pediatric Academic Productivity: Pediatric Benchmarks for the h- and g-Indices.

    Science.gov (United States)

    Tschudy, Megan M; Rowe, Tashi L; Dover, George J; Cheng, Tina L

    2016-02-01

    To describe h- and g-indices benchmarks in pediatric subspecialties and general academic pediatrics. Academic productivity is increasingly measured through bibliometrics that derive a statistical enumeration of academic output and impact. The h- and g-indices incorporate the number of publications and citations. Benchmarks for pediatrics have not been reported. Thirty programs were selected randomly from pediatric residency programs accredited by the Accreditation Council for Graduate Medical Education. The h- and g-indices of department chairs were calculated. For general academic pediatrics, pediatric gastroenterology, and pediatric nephrology, a random sample of 30 programs with fellowships was selected. Within each program, an MD faculty member from each academic rank was selected randomly. Google Scholar via Harzing's Publish or Perish was used to calculate the h-index, g-index, and total manuscripts. Only peer-reviewed and English-language publications were included. For Chairs, calculations from Google Scholar were compared with Scopus. For all specialties, the mean h- and g-indices significantly increased with academic rank; calculations using the different bibliographic databases differed by only ±1. Mean h-indices increased with academic rank and were not significantly different across the pediatric specialties. Benchmarks for h- and g-indices in pediatrics are provided and may be one measure of academic productivity and impact. Copyright © 2016 Elsevier Inc. All rights reserved.
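Both indices are simple functions of a sorted citation list: h is the largest h such that h papers each have at least h citations, and g is the largest g such that the g most-cited papers together accumulate at least g² citations. A direct sketch:

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the g most-cited papers together have >= g**2
    citations (g is capped at the number of papers in this sketch)."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g
```

For the citation list [10, 8, 5, 4, 3] this gives h = 4 and g = 5; the g-index always dominates the h-index because highly cited papers keep contributing to the cumulative sum.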

  1. Bayesian dose selection design for a binary outcome using restricted response adaptive randomization.

    Science.gov (United States)

    Meinzer, Caitlyn; Martin, Renee; Suarez, Jose I

    2017-09-08

    In phase II trials, the most efficacious dose is usually not known. Moreover, given limited resources, it is difficult to robustly identify a dose while also testing for a signal of efficacy that would support a phase III trial. Recent designs have sought to be more efficient by exploring multiple doses through the use of adaptive strategies. However, the added flexibility may potentially increase the risk of making incorrect assumptions and reduce the total amount of information available across the dose range as a function of imbalanced sample size. To balance these challenges, a novel placebo-controlled design is presented in which a restricted Bayesian response adaptive randomization (RAR) is used to allocate a majority of subjects to the optimal dose of active drug, defined as the dose with the lowest probability of poor outcome. However, the allocation between subjects who receive active drug or placebo is held constant to retain the maximum possible power for a hypothesis test of overall efficacy comparing the optimal dose to placebo. The design properties and optimization of the design are presented in the context of a phase II trial for subarachnoid hemorrhage. For a fixed total sample size, a trade-off exists between the ability to select the optimal dose and the probability of rejecting the null hypothesis. This relationship is modified by the allocation ratio between active and control subjects, the choice of RAR algorithm, and the number of subjects allocated to an initial fixed allocation period. While a responsive RAR algorithm improves the ability to select the correct dose, there is an increased risk of assigning more subjects to a worse arm as a function of ephemeral trends in the data. A subarachnoid treatment trial is used to illustrate how this design can be customized for specific objectives and available data. Bayesian adaptive designs are a flexible approach to addressing multiple questions surrounding the optimal dose for treatment efficacy.
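The restricted allocation scheme described above can be sketched with a Beta-Bernoulli model: the placebo arm keeps a fixed share, and the remaining share is split among active doses in proportion to each dose's posterior probability of having the lowest poor-outcome rate. The 25% placebo share, the uniform priors, and the function names below are illustrative assumptions, not the trial's actual design parameters:

```python
import random

def rar_allocation(successes, failures, placebo_share=0.25, sims=4000, seed=0):
    """Restricted response-adaptive randomization sketch (Beta-Bernoulli).
    Placebo keeps a fixed share of subjects; the active share is split
    among doses in proportion to each dose's posterior probability of
    having the lowest poor-outcome rate.  The 25% placebo share and
    uniform Beta(1, 1) priors are illustrative assumptions."""
    rng = random.Random(seed)
    k = len(successes)              # number of active doses
    wins = [0] * k
    for _ in range(sims):
        # Draw a poor-outcome rate per dose from its Beta posterior and
        # record which dose looks best (lowest rate) in this draw.
        draws = [rng.betavariate(1 + failures[d], 1 + successes[d])
                 for d in range(k)]
        wins[draws.index(min(draws))] += 1
    active = 1.0 - placebo_share
    return [placebo_share] + [active * w / sims for w in wins]
```

With 30 successes / 5 poor outcomes on dose 1 versus 10 / 25 on dose 2, nearly all of the active share goes to dose 1 while the placebo share stays fixed.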

  2. Distributional and efficiency results for subset selection

    NARCIS (Netherlands)

    Laan, van der P.

    1996-01-01

    Assume k (k ≥ 2) populations are given. The associated independent random variables have continuous distribution functions with an unknown location parameter. The statistical selection goal is to select a non-empty subset which contains the best population, that is the population with

  3. Sequence-Based Prediction of RNA-Binding Proteins Using Random Forest with Minimum Redundancy Maximum Relevance Feature Selection

    Directory of Open Access Journals (Sweden)

    Xin Ma

    2015-01-01

    Full Text Available The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features play important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and a Matthews correlation coefficient of 0.737). High prediction accuracy and successful prediction performance suggest that our method can be a useful approach to identify RNA-binding proteins from sequence information.
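The mRMR step can be sketched as a greedy ranking: repeatedly pick the feature maximizing relevance to the labels minus mean redundancy with the already-selected features; IFS then evaluates growing prefixes of that ranking. The paper scores relevance and redundancy with mutual information; the dependency-free sketch below substitutes absolute Pearson correlation, so it illustrates the scheme rather than reproducing the authors' method:

```python
def pearson(xs, ys):
    """Pearson correlation; 0.0 for a constant vector."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5 if sxx and syy else 0.0

def mrmr_rank(features, labels):
    """Greedy mRMR-style ranking: at each step pick the feature that
    maximizes relevance (|corr| with labels) minus mean redundancy
    (|corr| with already-selected features).  `features` maps a feature
    name to its list of values.  Absolute Pearson correlation stands in
    for the paper's mutual-information scores."""
    selected, remaining = [], dict(features)
    while remaining:
        def score(name):
            rel = abs(pearson(remaining[name], labels))
            red = (sum(abs(pearson(remaining[name], features[s]))
                       for s in selected) / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        del remaining[best]
    return selected  # IFS would then evaluate prefixes of this ranking
```

The first feature chosen is the one most correlated with the labels; later picks are penalized for resembling what is already selected.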

  4. Randomized Prediction Games for Adversarial Machine Learning.

    Science.gov (United States)

    Rota Bulo, Samuel; Biggio, Battista; Pillai, Ignazio; Pelillo, Marcello; Roli, Fabio

    In spam and malware detection, attackers exploit randomization to obfuscate malicious data and increase their chances of evading detection at test time, e.g., malware code is typically obfuscated using random strings or byte sequences to hide known exploits. Interestingly, randomization has also been proposed to improve security of learning algorithms against evasion attacks, as it results in hiding information about the classifier to the attacker. Recent work has proposed game-theoretical formulations to learn secure classifiers, by simulating different evasion attacks and modifying the classification function accordingly. However, both the classification function and the simulated data manipulations have been modeled in a deterministic manner, without accounting for any form of randomization. In this paper, we overcome this limitation by proposing a randomized prediction game, namely, a noncooperative game-theoretic formulation in which the classifier and the attacker make randomized strategy selections according to some probability distribution defined over the respective strategy set. We show that our approach allows one to improve the tradeoff between attack detection and false alarms with respect to the state-of-the-art secure classifiers, even against attacks that are different from those hypothesized during design, on application examples including handwritten digit recognition, spam, and malware detection.

  5. Application of a good manufacturing practices checklist and enumeration of total coliform in swine feed mills

    Directory of Open Access Journals (Sweden)

    Debora da Cruz Payao Pellegrini

    2014-02-01

    Full Text Available A cross-sectional study in four swine feed mills aimed to evaluate the correlation between the score of the inspection checklist defined in the Normative Instruction 4 (IN 4, Brazilian Ministry of Agriculture, Livestock and Food Supply) and the enumeration of total coliforms throughout the manufacturing process. Most of the non-conformities were found in the physical structure of the feed mills. Feed mill B showed the lowest number of non-conformities, while units A and D had the largest number. In 38.53% (489/1269) of the samples the presence of total coliforms was detected; however, no significant difference in the bacterial counts was observed between sampling sites and feed mills. The logistic regression pointed to higher odds ratios (OR) for total coliform isolation at dosing (OR = 9.51, 95% CI: 4.43-20.41), grinding (OR = 7.10, 95% CI: 3.27-15.40) and residues (OR = 6.21, 95% CI: 3.88-9.95). In spite of having the second-best score in the checklist inspection, feed mill C presented the highest odds for total coliform isolation (OR = 2.43, 95% CI: 1.68-3.53). The data indicate no association between the score of the checklist and the presence of hygiene indicators in feed mills.

  6. Pseudo-random number generation using a 3-state cellular automaton

    Science.gov (United States)

    Bhattacharjee, Kamalika; Paul, Dipanjyoti; Das, Sukanta

    This paper investigates the potential for pseudo-random number generation of a 3-neighborhood, 3-state cellular automaton (CA) under a periodic boundary condition. Theoretical and empirical tests are performed on the numbers generated by the CA to assess its quality as a pseudo-random number generator (PRNG). We analyze the strengths and weaknesses of the proposed PRNG and conclude that the selected CA is a good random number generator.
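The generator's structure can be sketched directly: each cell looks at its left neighbor, itself, and its right neighbor (3³ = 27 possible neighborhoods), a rule table maps each neighborhood to a next state, and the lattice wraps around (periodic boundary). The sketch below uses a randomly chosen rule table rather than the specific rule the paper selects:

```python
import random

class CA3PRNG:
    """PRNG sketch based on a 3-neighborhood, 3-state cellular automaton
    on a ring (periodic boundary).  The rule table here is drawn at
    random and is NOT the specific well-performing rule selected in the
    paper; it only illustrates the mechanism."""

    def __init__(self, width=32, rule_seed=42, init_seed=1):
        rng = random.Random(rule_seed)
        # One next-state entry for each of the 3**3 = 27 neighborhoods.
        self.rule = [rng.randrange(3) for _ in range(27)]
        rng = random.Random(init_seed)
        self.cells = [rng.randrange(3) for _ in range(width)]

    def step(self):
        """Apply the rule to every cell simultaneously, wrapping around."""
        n = len(self.cells)
        self.cells = [self.rule[9 * self.cells[(i - 1) % n]
                                + 3 * self.cells[i]
                                + self.cells[(i + 1) % n]]
                      for i in range(n)]

    def next_number(self):
        """Advance one generation and read the lattice as a base-3 number."""
        self.step()
        value = 0
        for c in self.cells:
            value = value * 3 + c
        return value
```

Each output lies in [0, 3**width), and two generators built with the same seeds produce identical streams, as any deterministic PRNG must.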

  7. Theory of Randomized Search Heuristics in Combinatorial Optimization

    DEFF Research Database (Denmark)

    The rigorous mathematical analysis of randomized search heuristics (RSHs) with respect to their expected runtime is a growing research area where many results have been obtained in recent years. This class of heuristics includes well-known approaches such as Randomized Local Search (RLS), the Metr...... analysis of randomized algorithms to RSHs. Mostly, the expected runtime of RSHs on selected problems is analyzed. Thereby, we understand why and when RSHs are efficient optimizers and, conversely, when they cannot be efficient. The tutorial will give an overview on the analysis of RSHs for solving...
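RLS, one of the heuristics covered, is easy to state concretely: flip a single uniformly chosen bit and accept the offspring iff fitness does not decrease. On the OneMax benchmark (maximize the number of ones) its expected runtime is Theta(n log n). A minimal sketch:

```python
import random

def rls_onemax(n, seed=0, max_steps=100_000):
    """Randomized Local Search on OneMax (maximize the number of ones):
    flip one uniformly chosen bit per step and accept iff fitness does
    not decrease.  Returns the number of steps until the optimum is
    reached, or None if max_steps is exceeded.  Expected runtime on
    OneMax is Theta(n log n)."""
    rng = random.Random(seed)
    x = [rng.randrange(2) for _ in range(n)]
    fitness = sum(x)
    if fitness == n:
        return 0
    for step in range(1, max_steps + 1):
        i = rng.randrange(n)
        if x[i] == 0:               # flipping 0 -> 1 improves fitness: accept
            x[i] = 1
            fitness += 1
            if fitness == n:
                return step
        # flipping 1 -> 0 would decrease fitness, so the flip is rejected
    return None
```

With n = 20 the optimum is typically found within a few dozen steps, in line with the n log n bound.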

  8. Discrete least squares polynomial approximation with random evaluations - application to PDEs with Random parameters

    KAUST Repository

    Nobile, Fabio

    2015-01-07

    We consider a general problem F(u, y) = 0 where u is the unknown solution, possibly Hilbert space valued, and y a set of uncertain parameters. We specifically address the situation in which the parameter-to-solution map u(y) is smooth, but y could be very high (or even infinite) dimensional. In particular, we are interested in cases in which F is a differential operator, u a Hilbert space valued function, and y a distributed, space- and/or time-varying, random field. We aim at reconstructing the parameter-to-solution map u(y) from random noise-free or noisy observations in random points by discrete least squares on polynomial spaces. The noise-free case is relevant whenever the technique is used to construct metamodels, based on polynomial expansions, for the output of computer experiments. In the case of PDEs with random parameters, the metamodel is then used to approximate statistics of the output quantity. We discuss the stability of discrete least squares on random points and show convergence estimates both in expectation and probability. We also present possible strategies to select, either a priori or by adaptive algorithms, sequences of approximating polynomial spaces that allow one to reduce, and in some cases break, the curse of dimensionality.
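On a simple instance the method reduces to fitting a polynomial by least squares at random evaluation points: in the noise-free case, if the target function lies in the polynomial space, the fit recovers it exactly (up to roundoff). A one-dimensional sketch via the normal equations (illustrative only; the paper works with high-dimensional parameter spaces):

```python
import random

def poly_least_squares(f, degree, n_samples, seed=0):
    """Discrete least-squares fit of a degree-`degree` polynomial to f
    at uniformly random points in [-1, 1] (noise-free metamodel
    setting).  Returns coefficients c with c[i] multiplying x**i.
    Solves the normal equations by Gaussian elimination with partial
    pivoting."""
    rng = random.Random(seed)
    pts = [rng.uniform(-1.0, 1.0) for _ in range(n_samples)]
    m = degree + 1
    # Normal equations A c = b with A[i][j] = sum_k x_k**(i+j) and
    # b[i] = sum_k x_k**i * f(x_k).
    A = [[sum(x ** (i + j) for x in pts) for j in range(m)] for i in range(m)]
    b = [sum(x ** i * f(x) for x in pts) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            factor = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= factor * A[col][c]
            b[r] -= factor * b[col]
    coeffs = [0.0] * m
    for i in range(m - 1, -1, -1):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, m))) / A[i][i]
    return coeffs
```

Fitting a quadratic target with a degree-2 space and 50 random points recovers the exact coefficients, illustrating the noise-free reconstruction regime.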

  9. Evolution in fluctuating environments: decomposing selection into additive components of the Robertson-Price equation.

    Science.gov (United States)

    Engen, Steinar; Saether, Bernt-Erik

    2014-03-01

    We analyze the stochastic components of the Robertson-Price equation for the evolution of quantitative characters, enabling decomposition of the selection differential into components due to demographic and environmental stochasticity. We show how these two types of stochasticity affect the evolution of multivariate quantitative characters by defining demographic and environmental variances as components of individual fitness. The exact covariance formula for selection is decomposed into three components, the deterministic mean value, as well as stochastic demographic and environmental components. We show that demographic and environmental stochasticity generate random genetic drift and fluctuating selection, respectively. This provides a common theoretical framework for linking ecological and evolutionary processes. Demographic stochasticity can cause random variation in selection differentials independent of fluctuating selection caused by environmental variation. We use this model of selection to illustrate that the effect on the expected selection differential of random variation in individual fitness is dependent on population size, and that the strength of fluctuating selection is affected by how environmental variation affects the covariance in Malthusian fitness between individuals with different phenotypes. Thus, our approach enables us to partition out the effects of fluctuating selection from the effects of selection due to random variation in individual fitness caused by demographic stochasticity. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  10. Integer Set Compression and Statistical Modeling

    DEFF Research Database (Denmark)

    Larsson, N. Jesper

    2014-01-01

    Compression of integer sets and sequences has been extensively studied for settings where elements follow a uniform probability distribution. In addition, methods exist that exploit clustering of elements in order to achieve higher compression performance. In this work, we address the case where enumeration of elements may be arbitrary or random, but where statistics is kept in order to estimate probabilities of elements. We present a recursive subset-size encoding method that is able to benefit from statistics, and explore the effects of permuting the enumeration order based on element probabilities......

  11. Programmable disorder in random DNA tilings

    Science.gov (United States)

    Tikhomirov, Grigory; Petersen, Philip; Qian, Lulu

    2017-03-01

    Scaling up the complexity and diversity of synthetic molecular structures will require strategies that exploit the inherent stochasticity of molecular systems in a controlled fashion. Here we demonstrate a framework for programming random DNA tilings and show how to control the properties of global patterns through simple, local rules. We constructed three general forms of planar network—random loops, mazes and trees—on the surface of self-assembled DNA origami arrays on the micrometre scale with nanometre resolution. Using simple molecular building blocks and robust experimental conditions, we demonstrate control of a wide range of properties of the random networks, including the branching rules, the growth directions, the proximity between adjacent networks and the size distribution. Much as combinatorial approaches for generating random one-dimensional chains of polymers have been used to revolutionize chemical synthesis and the selection of functional nucleic acids, our strategy extends these principles to random two-dimensional networks of molecules and creates new opportunities for fabricating more complex molecular devices that are organized by DNA nanostructures.

  12. Key Aspects of Nucleic Acid Library Design for in Vitro Selection

    Science.gov (United States)

    Vorobyeva, Maria A.; Davydova, Anna S.; Vorobjev, Pavel E.; Pyshnyi, Dmitrii V.; Venyaminova, Alya G.

    2018-01-01

    Nucleic acid aptamers capable of selectively recognizing their target molecules have nowadays been established as powerful and tunable tools for biospecific applications, be it therapeutics, drug delivery systems or biosensors. It is now generally acknowledged that in vitro selection enables one to generate aptamers to almost any target of interest. However, the success of selection and the affinity of the resulting aptamers depend to a large extent on the nature and design of an initial random nucleic acid library. In this review, we summarize and discuss the most important features of the design of nucleic acid libraries for in vitro selection such as the nature of the library (DNA, RNA or modified nucleotides), the length of a randomized region and the presence of fixed sequences. We also compare and contrast different randomization strategies and consider computer methods of library design and some other aspects. PMID:29401748

  13. DNABP: Identification of DNA-Binding Proteins Based on Feature Selection Using a Random Forest and Predicting Binding Residues.

    Science.gov (United States)

    Ma, Xin; Guo, Jing; Sun, Xiao

    2016-01-01

    DNA-binding proteins are fundamentally important in cellular processes. Several computational-based methods have been developed to improve the prediction of DNA-binding proteins in previous years. However, insufficient work has been done on the prediction of DNA-binding proteins from protein sequence information. In this paper, a novel predictor, DNABP (DNA-binding proteins), was designed to predict DNA-binding proteins using the random forest (RF) classifier with a hybrid feature. The hybrid feature contains two types of novel sequence features, which reflect information about the conservation of physicochemical properties of the amino acids, and the binding propensity of DNA-binding residues and non-binding propensities of non-binding residues. The comparisons with each feature demonstrated that these two novel features contributed most to the improvement in predictive ability. Furthermore, to improve the prediction performance of the DNABP model, feature selection using the minimum redundancy maximum relevance (mRMR) method combined with incremental feature selection (IFS) was carried out during the model construction. The results showed that the DNABP model could achieve 86.90% accuracy, 83.76% sensitivity, 90.03% specificity and a Matthews correlation coefficient of 0.727. High prediction accuracy and performance comparisons with previous research suggested that DNABP could be a useful approach to identify DNA-binding proteins from sequence information. The DNABP web server system is freely available at http://www.cbi.seu.edu.cn/DNABP/.

  14. A Heckman Selection-t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. Then, this allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.

  15. Profile of selected bacterial counts and Salmonella prevalence on raw poultry in a poultry slaughter establishment.

    Science.gov (United States)

    James, W O; Williams, W O; Prucha, J C; Johnston, R; Christensen, W

    1992-01-01

    The USDA Food Safety and Inspection Service determined populations of bacteria on poultry during processing at a slaughter plant in Puerto Rico in November and December 1987. The plant was selected because of its management's willingness to support important changes in equipment and processing procedures. The plant was representative of modern slaughter facilities. Eight-hundred samples were collected over 20 consecutive 8-hour days of operation from 5 sites in the processing plant. Results indicated that slaughter, dressing, and chilling practices significantly decreased the bacterial contamination on poultry carcasses, as determined by counts of aerobic bacteria, Enterobacteriaceae, and Escherichia coli. Salmonella was not enumerated; rather, it was determined to be present or absent by culturing almost the entire rinse. The prevalence of Salmonella in the study decreased during evisceration, then increased during immersion chilling.

  16. Feature-selective attention in healthy old age: a selective decline in selective attention?

    Science.gov (United States)

    Quigley, Cliodhna; Müller, Matthias M

    2014-02-12

    Deficient selection against irrelevant information has been proposed to underlie age-related cognitive decline. We recently reported evidence for maintained early sensory selection when older and younger adults used spatial selective attention to perform a challenging task. Here we explored age-related differences when spatial selection is not possible and feature-selective attention must be deployed. We additionally compared the integrity of feedforward processing by exploiting the well established phenomenon of suppression of visual cortical responses attributable to interstimulus competition. Electroencephalogram was measured while older and younger human adults responded to brief occurrences of coherent motion in an attended stimulus composed of randomly moving, orientation-defined, flickering bars. Attention was directed to horizontal or vertical bars by a pretrial cue, after which two orthogonally oriented, overlapping stimuli or a single stimulus were presented. Horizontal and vertical bars flickered at different frequencies and thereby elicited separable steady-state visual-evoked potentials, which were used to examine the effect of feature-based selection and the competitive influence of a second stimulus on ongoing visual processing. Age differences were found in feature-selective attentional modulation of visual responses: older adults did not show consistent modulation of magnitude or phase. In contrast, the suppressive effect of a second stimulus was robust and comparable in magnitude across age groups, suggesting that bottom-up processing of the current stimuli is essentially unchanged in healthy old age. Thus, it seems that visual processing per se is unchanged, but top-down attentional control is compromised in older adults when space cannot be used to guide selection.

  17. Status and distribution patterns of selected medicinal and food tree ...

    African Journals Online (AJOL)

    Tree species of ethno-botany and food relevance were identified and enumerated in the course of field survey in the study communities. The spatial distributions of six most-frequently utilized tree species were mapped using Geographical Information System (GIS). Data were statistically analyzed using descriptive statistics ...

  18. Elucidating the genotype–phenotype map by automatic enumeration and analysis of the phenotypic repertoire

    Science.gov (United States)

    Lomnitz, Jason G; Savageau, Michael A

    2015-01-01

    Background: The gap between genotype and phenotype is filled by complex biochemical systems most of which are poorly understood. Because these systems are complex, it is widely appreciated that quantitative understanding can only be achieved with the aid of mathematical models. However, formulating models and measuring or estimating their numerous rate constants and binding constants is daunting. Here we present a strategy for automating difficult aspects of the process. Methods: The strategy, based on a system design space methodology, is applied to a class of 16 designs for a synthetic gene oscillator that includes seven designs previously formulated on the basis of experimentally measured and estimated parameters. Results: Our strategy provides four important innovations by automating: (1) enumeration of the repertoire of qualitatively distinct phenotypes for a system; (2) generation of parameter values for any particular phenotype; (3) simultaneous realization of parameter values for several phenotypes to aid visualization of transitions from one phenotype to another, in critical cases from functional to dysfunctional; and (4) identification of ensembles of phenotypes whose expression can be phased to achieve a specific sequence of functions for rationally engineering synthetic constructs. Our strategy, applied to the 16 designs, reproduced previous results and identified two additional designs capable of sustained oscillations that were previously missed. Conclusions: Starting with a system’s relatively fixed aspects, its architectural features, our method enables automated analysis of nonlinear biochemical systems from a global perspective, without first specifying parameter values. The examples presented demonstrate the efficiency and power of this automated strategy. PMID:26998346

  19. Opportunistic Relay Selection with Cooperative Macro Diversity

    Directory of Open Access Journals (Sweden)

    Yu Chia-Hao

    2010-01-01

    Full Text Available We apply a fully opportunistic relay selection scheme to study cooperative diversity in a semianalytical manner. In our framework, idle Mobile Stations (MSs) are capable of being used as Relay Stations (RSs), and no relaying is required if the direct path is strong. Our relay selection scheme is fully selection based: either the direct path or one of the relaying paths is selected. Macro diversity, which is often ignored in analytical works, is taken into account together with micro diversity by using a complete channel model that includes both shadow fading and fast fading effects. The stochastic geometry of the network is taken into account by having a random number of randomly located MSs. The outage probability analysis of the selection differs from the case where only fast fading is considered. Under our framework, distribution of the received power is formulated using different Channel State Information (CSI) assumptions to simulate both optimistic and practical environments. The results show that the relay selection gain can be significant given a suitable amount of candidate RSs. Also, while relay selection according to incomplete CSI is diversity suboptimal compared to relay selection based on full CSI, the loss in average throughput is not too significant. This is a consequence of the dominance of geometry over fast fading.
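The outage benefit of fully selection-based path choice can be illustrated with a toy Monte Carlo over Rayleigh fast fading only; the paper's model additionally includes shadowing and random node geometry, which are omitted here, and the SNR values are illustrative assumptions.

```python
# Toy outage simulation: direct path versus "pick the best of direct
# plus n_relays relaying paths", i.i.d. Rayleigh fading (exponential
# SNR). Shadowing and geometry from the paper are omitted.
import math
import random

rng = random.Random(3)
gamma_bar, gamma_th = 1.0, 0.5          # mean SNR and outage threshold
n_relays, trials = 4, 50000

def outage(n_paths):
    out = 0
    for _ in range(trials):
        best = max(rng.expovariate(1 / gamma_bar) for _ in range(n_paths))
        out += best < gamma_th
    return out / trials

direct = outage(1)                       # ≈ 1 - exp(-0.5) ≈ 0.393
with_selection = outage(1 + n_relays)    # ≈ (1 - exp(-0.5))**5 ≈ 0.009
print(direct, with_selection)
```

With independent paths, selecting the single best path multiplies the per-path outage probabilities, which is the diversity gain the abstract quantifies under richer channel assumptions.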

  20. Statistical auditing and randomness test of lotto k/N-type games

    Science.gov (United States)

    Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Rapallo, F.; Scalas, E.

    2008-11-01

    One of the most popular lottery games worldwide is the so-called “lotto k/N”. It considers N numbers 1,2,…,N from which k are drawn randomly, without replacement. A player selects k or more numbers and the first prize is shared amongst those players whose selected numbers match all of the k randomly drawn. Exact rules may vary in different countries. In this paper, mean values and covariances for the random variables representing the numbers drawn from this kind of game are presented, with the aim of using them to audit statistically the consistency of a given sample of historical results with theoretical values coming from a hypergeometric statistical model. The method can be adapted to test pseudorandom number generators.
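The audit idea can be sketched by simulating a lotto 6/49 (an illustrative choice of k/N) and comparing the empirical mean of the drawn numbers with the value implied by the hypergeometric model: each of the N numbers is equally likely to occupy each drawn position, so any drawn number has mean (N + 1)/2.

```python
# Monte Carlo sketch of auditing lotto draws against the model mean;
# 6/49 is an illustrative choice of k/N, not from the paper.
import random
from statistics import mean

N, k, draws = 49, 6, 20000
rng = random.Random(42)
results = [rng.sample(range(1, N + 1), k) for _ in range(draws)]

theoretical = (N + 1) / 2                # = 25.0 for N = 49
empirical = mean(x for r in results for x in r)
print(round(empirical, 2))               # close to 25.0
```

A real audit would compare historical draws, not simulated ones, against these moments (and the covariances, which are negative because drawing is without replacement).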

  1. Random and non-random mating populations: Evolutionary dynamics in meiotic drive.

    Science.gov (United States)

    Sarkar, Bijan

    2016-01-01

    Game theoretic tools are utilized to analyze a one-locus continuous selection model of sex-specific meiotic drive by considering nonequivalence of the viabilities of reciprocal heterozygotes that might be noticed at an imprinted locus. The model draws attention to the role of viability selections of different types to examine the stable nature of polymorphic equilibrium. A bridge between population genetics and evolutionary game theory has been built up by applying the concept of the Fundamental Theorem of Natural Selection. In addition to pointing out the influences of male and female segregation ratios on selection, the configuration structure reveals several notable results: Hardy-Weinberg frequencies hold in replicator dynamics; evolution is faster when the variance in fitness is maximized; a mixed Evolutionarily Stable Strategy (ESS) exists in asymmetric games; and evolution tends toward not only a 1:1 sex ratio but also a 1:1 allele ratio at a particular gene locus. Through construction of replicator dynamics in the group selection framework, our selection model redefines a basis of game theory to incorporate non-random mating, where a mating parameter associated with population structure depends on the social structure. The model also shows that the number of polymorphic equilibria depends on the algebraic expression of population structure. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Velocity and Dispersion for a Two-Dimensional Random Walk

    International Nuclear Information System (INIS)

    Li Jinghui

    2009-01-01

    In this paper, we consider the transport of a two-dimensional random walk. The velocity and the dispersion of this two-dimensional random walk are derived. We mainly show that: (i) by controlling the values of the transition rates, the direction of the random walk can be reversed; (ii) for some suitably selected transition rates, our two-dimensional random walk can be efficient in comparison with the one-dimensional random walk. Our work is motivated in part by the challenge of explaining the unidirectional transport of motor proteins. When motor proteins move at the turn points of their tracks (i.e., the cytoskeleton filaments and the DNA molecular tubes), some of our results in this paper can be used to deal with the problem. (general)
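A minimal simulation illustrates the two quantities in the abstract, drift velocity and dispersion, for a biased walk on the square lattice. The transition probabilities below are illustrative assumptions, not the paper's rates.

```python
# Biased 2D lattice walk: estimate drift velocity and dispersion in x.
# Step probabilities are illustrative assumptions.
import random

p_right, p_left, p_up = 0.4, 0.1, 0.25   # remaining 0.25 goes down
steps, walkers = 1000, 2000
rng = random.Random(1)

finals = []
for _ in range(walkers):
    x = y = 0
    for _ in range(steps):
        r = rng.random()
        if r < p_right:
            x += 1
        elif r < p_right + p_left:
            x -= 1
        elif r < p_right + p_left + p_up:
            y += 1
        else:
            y -= 1
    finals.append((x, y))

vx = sum(fx for fx, _ in finals) / (walkers * steps)   # drift ≈ 0.4 - 0.1
mean_x = vx * steps
var_x = sum((fx - mean_x) ** 2 for fx, _ in finals) / walkers
print(round(vx, 3), round(var_x / steps, 3))           # ≈ 0.3 and ≈ 0.41
```

Per step, the drift in x is p_right − p_left = 0.3 and the variance is (p_right + p_left) − 0.3² = 0.41, so the simulated values should land near those; flipping which probability is larger reverses the walk's direction, as point (i) of the abstract describes.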

  3. Risk Attitudes, Sample Selection and Attrition in a Longitudinal Field Experiment

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Lau, Morten Igel

    … with respect to risk attitudes. Our design builds in explicit randomization on the incentives for participation. We show that there are significant sample selection effects on inferences about the extent of risk aversion, but that the effects of subsequent sample attrition are minimal. Ignoring sample selection leads to inferences that subjects in the population are more risk averse than they actually are. Correcting for sample selection and attrition affects utility curvature, but does not affect inferences about probability weighting. Properly accounting for sample selection and attrition effects leads to findings of temporal stability in overall risk aversion. However, that stability is around different levels of risk aversion than one might naively infer without the controls for sample selection and attrition we are able to implement. This evidence of “randomization bias” from sample selection …

  4. Effects of Video Game Training on Measures of Selective Attention and Working Memory in Older Adults: Results from a Randomized Controlled Trial

    Science.gov (United States)

    Ballesteros, Soledad; Mayas, Julia; Prieto, Antonio; Ruiz-Marquez, Eloísa; Toril, Pilar; Reales, José M.

    2017-01-01

    Video game training with older adults potentially enhances aspects of cognition that decline with aging and could therefore offer a promising training approach. Although previously published studies suggest that training can produce transfer, many of them have certain shortcomings. This randomized controlled trial (RCT; Clinicaltrials.gov ID: NCT02796508) tried to overcome some of these limitations by incorporating an active control group and the assessment of motivation and expectations. Seventy-five older volunteers were randomly assigned to the experimental group trained for 16 sessions with non-action video games from Lumosity, a commercial platform (http://www.lumosity.com/) or to an active control group trained for the same number of sessions with simulation strategy games. The final sample included 55 older adults (30 in the experimental group and 25 in the active control group). Participants were tested individually before and after training to assess working memory (WM) and selective attention and also reported their perceived improvement, motivation and engagement. The results showed improved performance across the training sessions. The main results were: (1) the experimental group did not show greater improvements in measures of selective attention and working memory than the active control group (the opposite occurred in the oddball task); (2) a marginal training effect was observed for the N-back task, but not for the Stroop task, while both groups improved in the Corsi Blocks task. Based on these results, one can conclude that training with non-action games provides modest benefits for untrained tasks. The effect is not specific to that kind of training, as a similar effect was observed for strategy video games. Groups did not differ in motivation, engagement or expectations. PMID:29163136

  5. Effects of Video Game Training on Measures of Selective Attention and Working Memory in Older Adults: Results from a Randomized Controlled Trial

    Directory of Open Access Journals (Sweden)

    Soledad Ballesteros

    2017-11-01

    Full Text Available Video game training with older adults potentially enhances aspects of cognition that decline with aging and could therefore offer a promising training approach. Although previously published studies suggest that training can produce transfer, many of them have certain shortcomings. This randomized controlled trial (RCT; Clinicaltrials.gov ID: NCT02796508) tried to overcome some of these limitations by incorporating an active control group and the assessment of motivation and expectations. Seventy-five older volunteers were randomly assigned to the experimental group trained for 16 sessions with non-action video games from Lumosity, a commercial platform (http://www.lumosity.com/), or to an active control group trained for the same number of sessions with simulation strategy games. The final sample included 55 older adults (30 in the experimental group and 25 in the active control group). Participants were tested individually before and after training to assess working memory (WM) and selective attention and also reported their perceived improvement, motivation and engagement. The results showed improved performance across the training sessions. The main results were: (1) the experimental group did not show greater improvements in measures of selective attention and working memory than the active control group (the opposite occurred in the oddball task); (2) a marginal training effect was observed for the N-back task, but not for the Stroop task, while both groups improved in the Corsi Blocks task. Based on these results, one can conclude that training with non-action games provides modest benefits for untrained tasks. The effect is not specific to that kind of training, as a similar effect was observed for strategy video games. Groups did not differ in motivation, engagement or expectations.

  6. Effects of Video Game Training on Measures of Selective Attention and Working Memory in Older Adults: Results from a Randomized Controlled Trial.

    Science.gov (United States)

    Ballesteros, Soledad; Mayas, Julia; Prieto, Antonio; Ruiz-Marquez, Eloísa; Toril, Pilar; Reales, José M

    2017-01-01

    Video game training with older adults potentially enhances aspects of cognition that decline with aging and could therefore offer a promising training approach. Although previously published studies suggest that training can produce transfer, many of them have certain shortcomings. This randomized controlled trial (RCT; Clinicaltrials.gov ID: NCT02796508) tried to overcome some of these limitations by incorporating an active control group and the assessment of motivation and expectations. Seventy-five older volunteers were randomly assigned to the experimental group trained for 16 sessions with non-action video games from Lumosity, a commercial platform (http://www.lumosity.com/) or to an active control group trained for the same number of sessions with simulation strategy games. The final sample included 55 older adults (30 in the experimental group and 25 in the active control group). Participants were tested individually before and after training to assess working memory (WM) and selective attention and also reported their perceived improvement, motivation and engagement. The results showed improved performance across the training sessions. The main results were: (1) the experimental group did not show greater improvements in measures of selective attention and working memory than the active control group (the opposite occurred in the oddball task); (2) a marginal training effect was observed for the N-back task, but not for the Stroop task, while both groups improved in the Corsi Blocks task. Based on these results, one can conclude that training with non-action games provides modest benefits for untrained tasks. The effect is not specific to that kind of training, as a similar effect was observed for strategy video games. Groups did not differ in motivation, engagement or expectations.

  7. Spice: discovery of phenotype-determining component interplays

    Directory of Open Access Journals (Sweden)

    Chen Zhengzhang

    2012-05-01

    system’s phenotype determination compared to individual classifiers and/or other ensemble methods, such as bagging, boosting, random forest, nearest shrunken centroid, and random forest variable selection method.

  8. Lines of Descent Under Selection

    Science.gov (United States)

    Baake, Ellen; Wakolbinger, Anton

    2017-11-01

    We review recent progress on ancestral processes related to mutation-selection models, both in the deterministic and the stochastic setting. We mainly rely on two concepts, namely, the killed ancestral selection graph and the pruned lookdown ancestral selection graph. The killed ancestral selection graph gives a representation of the type of a random individual from a stationary population, based upon the individual's potential ancestry back until the mutations that define the individual's type. The pruned lookdown ancestral selection graph allows one to trace the ancestry of individuals from a stationary distribution back into the distant past, thus leading to the stationary distribution of ancestral types. We illustrate the results by applying them to a prototype model for the error threshold phenomenon.

  9. Differential enumeration of subpopulations in concentrated frozen and lyophilized cultures of Lactobacillus delbrueckii ssp. bulgaricus.

    Science.gov (United States)

    Shao, Yuyu; Wang, Zhaoxia; Bao, Qiuhua; Zhang, Heping

    2017-11-01

    Differential enumeration of subpopulations in concentrated frozen and lyophilized cultures of Lactobacillus delbrueckii ssp. bulgaricus ND02 derived from 2 propagation procedures was determined. The subpopulations consisted of 3 categories (physiological states): viable cells capable of forming colonies on agar plates (VC+), viable cells incapable of forming colonies on agar plates (VC-), widely referred to as viable but nonculturable (VBNC) cells, and nonviable or dead cells (NVC). Counts of VC+ were recorded using a conventional plate count procedure. A fluorescent vital staining procedure that discriminates between viable (VC+ and VC-) and NVC cells was used to determine the number of viable and nonviable cells. Both propagation procedures had 2 variables: in procedure (P)1, the propagation medium was rich in yeast extract (4.0%) and the pH was maintained at 5.7; in P2, the medium was devoid of yeast extract and the pH was maintained at 5.1. The results showed that post-propagation operations (concentration of cells by centrifugation and subsequent freezing or lyophilization of the cell concentrate) induced different degrees of transience from VC+ to VC- states in cells derived from P1 and P2. Compared with cells derived from P2, cells from P1 were more labile to stress associated with centrifugation, freezing, and lyophilization, as revealed by differential counting. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  10. Feature Selection with the Boruta Package

    OpenAIRE

    Kursa, Miron B.; Rudnicki, Witold R.

    2010-01-01

    This article describes the R package Boruta, implementing a novel feature selection algorithm for finding all relevant variables. The algorithm is designed as a wrapper around a Random Forest classification algorithm. It iteratively removes the features which are proved by a statistical test to be less relevant than random probes. The Boruta package provides a convenient interface to the algorithm. A short description of the algorithm and examples of its application are presented.
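The core of Boruta, comparing real features against permuted "shadow" probes, can be sketched with scikit-learn. This is a simplified illustration of the idea rather than the package's exact statistical test.

```python
# Simplified Boruta-style selection: a feature is confirmed when its
# random forest importance beats the best shadow (permuted) feature in
# nearly every trial. Not the package's exact test.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, n_features=10, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)
rng = np.random.default_rng(0)
n_feat, trials = X.shape[1], 15
hits = np.zeros(n_feat)

for _ in range(trials):
    shadows = rng.permuted(X, axis=0)          # shuffle each column
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    imp = rf.fit(np.hstack([X, shadows]), y).feature_importances_
    hits += imp[:n_feat] > imp[n_feat:].max()  # beat the best shadow?

confirmed = np.where(hits >= 0.9 * trials)[0]
print(confirmed.tolist())   # typically among the informative columns 0-2
```

Because the shadows are pure noise with the same marginal distributions as the real features, any feature that cannot beat them consistently is unlikely to carry relevant signal; the real package replaces the 90%-of-trials rule with a proper binomial test.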

  11. Pseudo-random-number generators and the square site percolation threshold.

    Science.gov (United States)

    Lee, Michael J

    2008-09-01

    Selected pseudo-random-number generators are applied to a Monte Carlo study of the two-dimensional square-lattice site percolation model. A generator suitable for high precision calculations is identified from an application specific test of randomness. After extended computation and analysis, an ostensibly reliable value of p_{c}=0.59274598(4) is obtained for the percolation threshold.
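The Monte Carlo setup can be sketched at small scale with a union-find spanning test: at the quoted threshold, roughly half of random 32×32 lattices percolate top-to-bottom. The eight-digit precision in the abstract requires vastly larger lattices and careful finite-size scaling; this is only an illustration.

```python
# Small-scale site percolation check near p_c = 0.59274598 on the
# square lattice, using union-find with virtual top/bottom nodes.
import random

def percolates(L, p, rng):
    open_ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    parent = list(range(L * L + 2))
    TOP, BOT = L * L, L * L + 1

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(L):
        for j in range(L):
            if not open_[i][j]:
                continue
            idx = i * L + j
            if i == 0:
                union(idx, TOP)
            if i == L - 1:
                union(idx, BOT)
            if i > 0 and open_[i - 1][j]:
                union(idx, idx - L)
            if j > 0 and open_[i][j - 1]:
                union(idx, idx - 1)
    return find(TOP) == find(BOT)

rng = random.Random(7)
L, trials = 32, 200
frac = sum(percolates(L, 0.59274598, rng) for _ in range(trials)) / trials
print(frac)   # near 0.5 at threshold, up to finite-size effects
```

The sensitivity of such estimates to the underlying generator is exactly why the paper applies an application-specific randomness test before trusting any single PRNG.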

  12. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
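A stripped-down beta-binomial version of the inference goal, without the paper's judgmental high-risk group, shows the flavor of the calculation: after n randomly sampled items are all found acceptable, the posterior probability that at least a fraction q of the population is acceptable follows from a conjugate Beta update. The prior and the numbers below are illustrative assumptions.

```python
# Minimal conjugate sketch: uniform Beta(1, 1) prior on the acceptable
# fraction, n acceptable observations, no failures. Not the paper's
# two-group judgmental model.
from scipy.stats import beta

a0, b0 = 1.0, 1.0          # uniform prior
n_sampled, q = 59, 0.95    # all 59 samples acceptable; target fraction

posterior = beta(a0 + n_sampled, b0)   # Beta(60, 1)
prob = posterior.sf(q)                 # P(acceptable fraction >= q | data)
print(round(prob, 3))                  # ≈ 0.954, cf. the classic 95/95 rule
```

Because the Beta(60, 1) CDF at q is simply q**60, the posterior probability is 1 − 0.95**60 ≈ 0.954; the paper's contribution is extending this kind of statement to populations split into judgmentally and randomly sampled groups.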

  13. Effect of mirtazapine versus selective serotonin reuptake inhibitors on benzodiazepine use in patients with major depressive disorder: a pragmatic, multicenter, open-label, randomized, active-controlled, 24-week trial.

    Science.gov (United States)

    Hashimoto, Tasuku; Shiina, Akihiro; Hasegawa, Tadashi; Kimura, Hiroshi; Oda, Yasunori; Niitsu, Tomihisa; Ishikawa, Masatomo; Tachibana, Masumi; Muneoka, Katsumasa; Matsuki, Satoshi; Nakazato, Michiko; Iyo, Masaomi

    2016-01-01

    This study aimed to evaluate whether selecting mirtazapine as the first choice for current depressive episode instead of selective serotonin reuptake inhibitors (SSRIs) reduces benzodiazepine use in patients with major depressive disorder (MDD). We concurrently examined the relationship between clinical responses and serum mature brain-derived neurotrophic factor (BDNF) and its precursor, proBDNF. We conducted an open-label randomized trial in routine psychiatric practice settings. Seventy-seven MDD outpatients were randomly assigned to the mirtazapine or predetermined SSRIs groups, and investigators arbitrarily selected sertraline or paroxetine. The primary outcome was the proportion of benzodiazepine users at weeks 6, 12, and 24 between the groups. We defined patients showing a ≥50 % reduction in Hamilton depression rating scale (HDRS) scores from baseline as responders. Blood samples were collected at baseline, weeks 6, 12, and 24. Sixty-five patients prescribed benzodiazepines from prescription day 1 were analyzed for the primary outcome. The percentage of benzodiazepine users was significantly lower in the mirtazapine than in the SSRIs group at weeks 6, 12, and 24 (21.4 vs. 81.8 %; 11.1 vs. 85.7 %, both P  depressive episodes may reduce benzodiazepine use in patients with MDD. Trial registration UMIN000004144. Registered 2nd September 2010. The date of enrolment of the first participant to the trial was 24th August 2010. This study was retrospectively registered 9 days after the first participant was enrolled.

  14. Maximal Conflict Set Enumeration Algorithm Based on Locality of Petri Nets

    Institute of Scientific and Technical Information of China (English)

    潘理; 郑红; 刘显明; 杨勃

    2016-01-01

    Conflict is an essential concept in Petri net theory. Existing research focuses mainly on the modelling and resolution strategies of conflict, with little attention to the computational complexity of the problem itself. In this paper, we propose the conflict set problem for Petri nets and prove that it is NP-complete. Furthermore, we present a dynamic algorithm for maximal conflict set enumeration. Starting from all maximal conflict sets at the current marking, the algorithm exploits the locality of transition firing in Petri nets and computes only those maximal conflict sets at the next marking that are affected by the firing, which avoids re-enumerating all maximal conflict sets. The algorithm runs in time O(m^2 n), where m is the number of maximal conflict sets at the current marking and n is the number of transitions. Finally, we show that maximal conflict set enumeration can be solved in O(n^2) time for free-choice nets and asymmetric choice nets. These complexity results provide a theoretical reference for solving conflict problems of Petri nets.
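To make the object being enumerated concrete, here is a brute-force sketch on a toy place/transition net, where two enabled transitions are taken to conflict when they share an input place. This is a common simplification; the paper's definition and its dynamic O(m^2 n) algorithm are more refined, and the toy net below is an invented example.

```python
# Brute-force enumeration of maximal conflict sets on a toy net:
# transitions conflict when their input-place sets intersect, and a
# conflict set must be pairwise conflicting. Illustrative only.
from itertools import combinations

# Toy net: input places of each transition (the marking is assumed to
# enable all of them).
pre = {"t1": {"p1"}, "t2": {"p1", "p2"}, "t3": {"p2"}, "t4": {"p3"}}

def in_conflict(a, b):
    return bool(pre[a] & pre[b])

ts = sorted(pre)
sets_ = [frozenset(c)
         for r in range(1, len(ts) + 1)
         for c in combinations(ts, r)
         if all(in_conflict(a, b) for a, b in combinations(c, 2))]
maximal = [s for s in sets_ if not any(s < t for t in sets_)]
print(sorted(sorted(m) for m in maximal))
# [['t1', 't2'], ['t2', 't3'], ['t4']]
```

Note that t1 and t3 each conflict with t2 but not with each other, so {t1, t2, t3} is not a conflict set; the exponential cost of this brute force is what the paper's locality-based incremental algorithm is designed to avoid.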

  15. Pseudo-random number generator for the Sigma 5 computer

    Science.gov (United States)

    Carroll, S. N.

    1983-01-01

    A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
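The construction described, the largest conveniently representable prime as modulus with a primitive root as multiplier, is exactly the scheme of the well-known Park-Miller "minimal standard" generator, sketched below with modulus 2^31 − 1 and multiplier 16807. The Sigma 5 program's actual constants may differ; this is an illustration of the form.

```python
# Lehmer-style linear congruential generator: x <- (A * x) mod M with
# M the Mersenne prime 2**31 - 1 and A = 16807, a primitive root mod M.
M = 2**31 - 1       # largest prime fitting a 31-bit word
A = 16807           # primitive root modulo M

def lcg(seed):
    while True:
        seed = (A * seed) % M
        yield seed

g = lcg(1)
x = 0
for _ in range(10000):
    x = next(g)
print(x)  # 1043618065, the published check value after 10,000 steps
```

Because A is a primitive root, the sequence cycles through all of 1..M−1 before repeating; Park and Miller published the 10,000th value from seed 1 precisely so implementations like this one can be verified.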

  16. Review of Random Phase Encoding in Volume Holographic Storage

    Directory of Open Access Journals (Sweden)

    Wei-Chia Su

    2012-09-01

    Full Text Available Random phase encoding is a unique technique for volume hologram which can be applied to various applications such as holographic multiplexing storage, image encryption, and optical sensing. In this review article, we first review and discuss diffraction selectivity of random phase encoding in volume holograms, which is the most important parameter related to multiplexing capacity of volume holographic storage. We then review an image encryption system based on random phase encoding. The alignment of phase key for decryption of the encoded image stored in holographic memory is analyzed and discussed. In the latter part of the review, an all-optical sensing system implemented by random phase encoding and holographic interconnection is presented.

  17. Managing salinity in Upper Colorado River Basin streams: Selecting catchments for sediment control efforts using watershed characteristics and random forests models

    Science.gov (United States)

    Tillman, Fred; Anning, David W.; Heilman, Julian A.; Buto, Susan G.; Miller, Matthew P.

    2018-01-01

    Elevated concentrations of dissolved-solids (salinity) including calcium, sodium, sulfate, and chloride, among others, in the Colorado River cause substantial problems for its water users. Previous efforts to reduce dissolved solids in upper Colorado River basin (UCRB) streams often focused on reducing suspended-sediment transport to streams, but few studies have investigated the relationship between suspended sediment and salinity, or evaluated which watershed characteristics might be associated with this relationship. Are there catchment properties that may help in identifying areas where control of suspended sediment will also reduce salinity transport to streams? A random forests classification analysis was performed on topographic, climate, land cover, geology, rock chemistry, soil, and hydrologic information in 163 UCRB catchments. Two random forests models were developed in this study: one for exploring stream and catchment characteristics associated with stream sites where dissolved solids increase with increasing suspended-sediment concentration, and the other for predicting where these sites are located in unmonitored reaches. Results of variable importance from the exploratory random forests models indicate that no simple source, geochemical process, or transport mechanism can easily explain the relationship between dissolved solids and suspended sediment concentrations at UCRB monitoring sites. Among the most important watershed characteristics in both models were measures of soil hydraulic conductivity, soil erodibility, minimum catchment elevation, catchment area, and the silt component of soil in the catchment. Predictions at key locations in the basin were combined with observations from selected monitoring sites, and presented in map-form to give a complete understanding of where catchment sediment control practices would also benefit control of dissolved solids in streams.

  18. Randomizer for High Data Rates

    Science.gov (United States)

    Garon, Howard; Sank, Victor J.

    2018-01-01

    NASA as well as a number of other space agencies now recognize that the current recommended CCSDS randomizer used for telemetry (TM) is too short. When multiple applications of the PN8 Maximal Length Sequence (MLS) are required in order to fully cover a channel access data unit (CADU), spectral problems in the form of elevated spurious discretes (spurs) appear. Originally the randomizer was called a bit transition generator (BTG) precisely because it was thought that its primary value was to insure sufficient bit transitions to allow the bit/symbol synchronizer to lock and remain locked. We, NASA, have shown that the old BTG concept is a limited view of the real value of the randomizer sequence and that the randomizer also aids in signal acquisition as well as minimizing the potential for false decoder lock. Under the guidelines we considered here there are multiple maximal length sequences under GF(2) which appear attractive in this application. Although there may be mitigating reasons why another MLS sequence could be selected, one sequence in particular possesses a combination of desired properties which offsets it from the others.
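The maximal length sequences in question are generated by linear feedback shift registers over GF(2). The sketch below uses an 8-bit Fibonacci LFSR with taps (8, 6, 5, 4), one known maximal-length tap set, to show the defining period-255 property; the actual CCSDS PN8 randomizer polynomial is fixed by the standard and is not reproduced here.

```python
# 8-bit Fibonacci LFSR with taps (8, 6, 5, 4): a maximal-length
# sequence visits all 255 nonzero states before repeating. This is an
# illustrative tap set, not the CCSDS PN8 polynomial.
def lfsr8(state=0x01):
    while True:
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 4)) & 1
        state = (state >> 1) | (bit << 7)
        yield state

g = lfsr8()
states = [next(g) for _ in range(255)]
print(len(set(states)))   # 255 distinct nonzero states: full period
```

An 8-bit register gives a period of only 255 bits, which is the heart of the concern above: covering a long CADU requires repeating the short sequence many times, and that repetition is what produces the elevated spectral spurs.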

  19. Feature Selection with the Boruta Package

    Directory of Open Access Journals (Sweden)

    Miron B. Kursa

    2010-10-01

    Full Text Available This article describes the R package Boruta, which implements a novel feature selection algorithm for finding all relevant variables. The algorithm is designed as a wrapper around a random forest classification algorithm. It iteratively removes the features that a statistical test shows to be less relevant than random probes. The Boruta package provides a convenient interface to the algorithm. A short description of the algorithm and examples of its application are presented.
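    The shadow-feature principle behind Boruta can be sketched outside R as well. The following is a minimal, simplified Python sketch of the same idea (permuted "shadow" copies serve as random probes); it omits Boruta's formal statistical test and "tentative" category, and the dataset and round count are invented:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    # Toy data: 5 informative features followed by 5 pure-noise features.
    X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                               n_redundant=0, shuffle=False, random_state=0)

    hits = []
    for _ in range(20):  # repeated rounds, as in Boruta's iterative test
        shadows = rng.permuted(X, axis=0)       # permute each column: random probes
        Xext = np.hstack([X, shadows])
        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xext, y)
        imp = rf.feature_importances_
        threshold = imp[X.shape[1]:].max()      # best shadow importance this round
        hits.append(imp[:X.shape[1]] > threshold)  # a "hit" beats every shadow

    hit_rate = np.mean(hits, axis=0)
    selected = np.flatnonzero(hit_rate > 0.5)   # features that usually win
    print("selected feature indices:", selected)
    ```

    Features that repeatedly outperform the best shadow are kept as "relevant"; in Boruta proper, a binomial test on the hit counts replaces the simple majority rule used here.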

  20. A Monte Carlo study of adsorption of random copolymers on random surfaces

    CERN Document Server

    Moghaddam, M S

    2003-01-01

    We study the adsorption problem of a random copolymer on a random surface in which a self-avoiding walk in three dimensions interacts with a plane defining a half-space to which the walk is confined. Each vertex of the walk is randomly labelled A with probability p_p or B with probability 1 - p_p, and only vertices labelled A are attracted to the surface plane. Each lattice site on the plane is also labelled either A with probability p_s or B with probability 1 - p_s, and only lattice sites labelled A interact with the walk. We study two variations of this model: in the first case the A-vertices of the walk interact only with the A-sites on the surface. In the second case the constraint of selective binding is removed; that is, any contact between the walk and the surface that involves an A-labelling, either from the surface or from the walk, is counted as a visit to the surface. The system is quenched in both cases, i.e. the labellings of the walk and of the surface are fixed as thermodynam...

  1. Universal Prevention for Anxiety and Depressive Symptoms in Children: A Meta-analysis of Randomized and Cluster-Randomized Trials.

    Science.gov (United States)

    Ahlen, Johan; Lenhard, Fabian; Ghaderi, Ata

    2015-12-01

    Although under-diagnosed, anxiety and depression are among the most prevalent psychiatric disorders in children and adolescents, leading to severe impairment, increased risk of future psychiatric problems, and a high economic burden to society. Universal prevention may be a potent way to address these widespread problems. There are several benefits to universal relative to targeted interventions because there is limited knowledge as to how to screen for anxiety and depression in the general population. Earlier meta-analyses of the prevention of depression and anxiety symptoms among children suffer from methodological inadequacies such as combining universal, selective, and indicated interventions in the same analyses, and comparing cluster-randomized trials with randomized trials without any correction for clustering effects. The present meta-analysis attempted to determine the effectiveness of universal interventions to prevent anxiety and depressive symptoms after correcting for clustering effects. A systematic search of randomized studies in PsycINFO, the Cochrane Library, and Google Scholar resulted in 30 eligible studies meeting inclusion criteria, namely peer-reviewed, randomized or cluster-randomized trials of universal interventions for anxiety and depressive symptoms in school-aged children. Sixty-three percent of the studies reported outcome data regarding anxiety and 87% reported outcome data regarding depression. Seventy percent of the studies used randomization at the cluster level. There were small but significant effects regarding anxiety (.13) and depressive (.11) symptoms as measured at immediate posttest. At follow-up, which ranged from 3 to 48 months, effects were significantly larger than zero regarding depressive (.07) but not anxiety (.11) symptoms. There was no significant moderation effect of the following pre-selected variables: the primary aim of the intervention (anxiety or depression), deliverer of the intervention, gender distribution

  2. Prevalence of at-risk genotypes for genotoxic effects decreases with age in a randomly selected population in Flanders: a cross sectional study

    Directory of Open Access Journals (Sweden)

    van Delft Joost HM

    2011-10-01

    Full Text Available Abstract Background We hypothesized that in Flanders (Belgium), the prevalence of at-risk genotypes for genotoxic effects decreases with age due to morbidity and mortality resulting from chronic diseases. Rather than polymorphisms in single genes, the interaction of multiple genetic polymorphisms in low penetrance genes involved in genotoxic effects might be of relevance. Methods Genotyping was performed on 399 randomly selected adults (aged 50-65) and on 442 randomly selected adolescents. Based on their involvement in processes relevant to genotoxicity, 28 low penetrance polymorphisms affecting the phenotype in 19 genes were selected (xenobiotic metabolism, oxidative stress defense and DNA repair; respectively 13, 6 and 9 polymorphisms). Polymorphisms which, based on available literature, could not clearly be categorized a priori as leading to an 'increased risk' or a 'protective effect' were excluded. Results The mean number of risk alleles for all investigated polymorphisms was found to be lower in the 'elderly' (17.0 ± 2.9) than the 'adolescent' (17.6 ± 3.1) subpopulation (P = 0.002). These results were not affected by gender or smoking. The prevalence of a high (> 17 = median) number of risk alleles was less frequent in the 'elderly' (40.6%) than the 'adolescent' (51.4%) subpopulation (P = 0.002). In particular for phase II enzymes, the mean number of risk alleles was lower in the 'elderly' (4.3 ± 1.6) than the 'adolescent' age group (4.8 ± 1.9), and a high (> 4 = median) number of risk alleles was less frequent in the 'elderly' (41.3%) than the 'adolescent' subpopulation (56.3%). The prevalence of a high (> 8 = median) number of risk alleles for DNA repair enzyme-coding genes was lower in the 'elderly' (37.3%) than the 'adolescent' subpopulation (45.6%, P = 0.017). Conclusions These observations are consistent with the hypothesis that, in Flanders, the prevalence of at-risk alleles in genes involved in genotoxic effects decreases with age, suggesting that persons carrying a higher number of

  3. Effects of prey abundance, distribution, visual contrast and morphology on selection by a pelagic piscivore

    Science.gov (United States)

    Hansen, Adam G.; Beauchamp, David A.

    2014-01-01

    Most predators eat only a subset of possible prey. However, studies evaluating diet selection rarely measure prey availability in a manner that accounts for temporal–spatial overlap with predators, the sensory mechanisms employed to detect prey, and constraints on prey capture. We evaluated the diet selection of cutthroat trout (Oncorhynchus clarkii) feeding on a diverse planktivore assemblage in Lake Washington to test the hypothesis that the diet selection of piscivores would reflect random (opportunistic) as opposed to non-random (targeted) feeding, after accounting for predator–prey overlap, visual detection and capture constraints. Diets of cutthroat trout were sampled in autumn 2005, when the abundance of transparent, age-0 longfin smelt (Spirinchus thaleichthys) was low, and 2006, when the abundance of smelt was nearly seven times higher. Diet selection was evaluated separately using depth-integrated and depth-specific (accounting for predator–prey overlap) prey abundance. The abundance of different prey was then adjusted for differences in detectability and vulnerability to predation to see whether these factors could explain diet selection. In 2005, cutthroat trout fed non-randomly, selecting against the smaller, transparent age-0 longfin smelt but for the larger age-1 longfin smelt. After adjusting prey abundance for visual detection and capture, cutthroat trout fed randomly. In 2006, depth-integrated and depth-specific abundance explained the diets of cutthroat trout well, indicating random feeding. Feeding became non-random after adjusting for visual detection and capture. Cutthroat trout selected strongly for age-0 longfin smelt, but against similar sized threespine stickleback (Gasterosteus aculeatus) and larger age-1 longfin smelt in 2006. Overlap with juvenile sockeye salmon (O. nerka) was minimal in both years, and sockeye salmon were rare in the diets of cutthroat trout. The direction of the shift between random and non-random selection

  4. Why the null matters: statistical tests, random walks and evolution.

    Science.gov (United States)

    Sheets, H D; Mitchell, C E

    2001-01-01

    A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test' in contrast, operates on the sequencing of steps, rather than on excursion. Applications of these tests to computer generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests show that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
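    The shared excursion logic of these tests can be illustrated with a generic Monte Carlo sketch (this is not the scaled-maximum, LRI, or runs-test implementation itself; series lengths, trend strength, and simulation counts are arbitrary): compare an observed series' maximum excursion against a null distribution of unbiased random walks.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def max_excursion(series):
        # Largest absolute displacement from the starting value.
        return np.max(np.abs(series - series[0]))

    # Null distribution: maximum excursion of an unbiased random walk.
    n_steps, n_sims = 100, 2000
    null = np.array([max_excursion(np.cumsum(rng.normal(size=n_steps)))
                     for _ in range(n_sims)])

    # "Directional selection": a random walk plus a linear trend.
    trend = np.cumsum(rng.normal(size=n_steps)) + 0.5 * np.arange(n_steps)
    # "Stabilizing selection": noise around a constant optimum.
    stasis = rng.normal(size=n_steps)

    p_more = np.mean(null >= max_excursion(trend))   # small: more change than a walk
    p_less = np.mean(null <= max_excursion(stasis))  # small: less change than a walk
    print(p_more, p_less)
    ```

    A trending series makes far larger excursions than the random-walk null (interpreted as directional selection), while a stationary series makes far smaller ones (interpreted as stabilizing selection); the paper's point is that noise degrades the first comparison much more than the second.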

  5. Plaque retention by self-ligating vs elastomeric orthodontic brackets: quantitative comparison of oral bacteria and detection with adenosine triphosphate-driven bioluminescence.

    NARCIS (Netherlands)

    Pellegrini, P.; Sauerwein, R.W.; Finlayson, T.; McLeod, J.; Covell, D.A.; Maier, T.; Machida, C.A.

    2009-01-01

    INTRODUCTION: Enamel decalcification is a common problem in orthodontics. The objectives of this randomized clinical study were to enumerate and compare plaque bacteria surrounding 2 bracket types, self-ligating (SL) vs elastomeric ligating (E), and to determine whether adenosine triphosphate

  6. Natural Selection as an Emergent Process: Instructional Implications

    Science.gov (United States)

    Cooper, Robert A.

    2017-01-01

    Student reasoning about cases of natural selection is often plagued by errors that stem from miscategorising selection as a direct, causal process, misunderstanding the role of randomness, and from the intuitive ideas of intentionality, teleology and essentialism. The common thread throughout many of these reasoning errors is a failure to apply…

  7. Conversion of the random amplified polymorphic DNA (RAPD ...

    African Journals Online (AJOL)

    Conversion of the random amplified polymorphic DNA (RAPD) marker UBC#116 linked to Fusarium crown and root rot resistance gene (Frl) into a co-dominant sequence characterized amplified region (SCAR) marker for marker-assisted selection of tomato.

  8. Random matrices and random difference equations

    International Nuclear Information System (INIS)

    Uppuluri, V.R.R.

    1975-01-01

    Mathematical models leading to products of random matrices and random difference equations are discussed. A one-compartment model with random behavior is introduced, and it is shown how the average concentration in the discrete time model converges to the exponential function. This is of relevance to understanding how radioactivity gets trapped in bone structure in blood–bone systems. The ideas are then generalized to two-compartment models and mammillary systems, where products of random matrices appear in a natural way. The appearance of products of random matrices in applications in demography and control theory is considered. Then random sequences motivated from the following problems are studied: constant pulsing and random decay models, random pulsing and constant decay models, and random pulsing and random decay models

  9. Effect of a Counseling Session Bolstered by Text Messaging on Self-Selected Health Behaviors in College Students: A Preliminary Randomized Controlled Trial.

    Science.gov (United States)

    Sandrick, Janice; Tracy, Doreen; Eliasson, Arn; Roth, Ashley; Bartel, Jeffrey; Simko, Melanie; Bowman, Tracy; Harouse-Bell, Karen; Kashani, Mariam; Vernalis, Marina

    2017-05-17

    The college experience is often the first time when young adults live independently and make their own lifestyle choices. These choices affect dietary behaviors, exercise habits, techniques to deal with stress, and decisions on sleep time, all of which direct the trajectory of future health. There is a need for effective strategies that will encourage healthy lifestyle choices in young adults attending college. This preliminary randomized controlled trial tested the effect of coaching and text messages (short message service, SMS) on self-selected health behaviors in the domains of diet, exercise, stress, and sleep. A second analysis measured the ripple effect of the intervention on health behaviors not specifically selected as a goal by participants. Full-time students aged 18-30 years were recruited by word of mouth and campuswide advertisements (flyers, posters, mailings, university website) at a small university in western Pennsylvania from January to May 2015. Exclusions included pregnancy, eating disorders, chronic medical diagnoses, and prescription medications other than birth control. Of 60 participants, 30 were randomized to receive a single face-to-face meeting with a health coach to review results of behavioral questionnaires and to set a health behavior goal for the 8-week study period. The face-to-face meeting was followed by SMS text messages designed to encourage achievement of the behavioral goal. A total of 30 control subjects underwent the same health and behavioral assessments at intake and program end but did not receive coaching or SMS text messages. The texting app showed that 87.31% (2187/2505) of messages were viewed by intervention participants. Furthermore, 28 of the 30 intervention participants and all 30 control participants provided outcome data. Among intervention participants, 22 of 30 (73%) showed improvement in health behavior goal attainment, with the whole group (n=30) showing a mean improvement of 88% (95% CI 39-136). Mean

  10. A novel dendritic cell-based direct ex vivo assay for detection and enumeration of circulating antigen-specific human T cells.

    Science.gov (United States)

    Carrio, Roberto; Zhang, Ge; Drake, Donald R; Schanen, Brian C

    2018-05-07

    Although a variety of assays have been used to examine T cell responses in vitro, standardized ex vivo detection of antigen-specific CD4+ T cells from human circulatory PBMCs remains constrained by low-dimensional characterization outputs and the need for polyclonal, mitogen-induced expansion methods to generate detectable response signals. To overcome these limitations, we developed a novel methodology utilizing antigen-pulsed autologous human dendritic target cells in a rapid and sensitive assay to accurately enumerate antigen-specific CD4+ T cell precursor frequency by multiparametric flow cytometry. With this approach, we demonstrate the ability to reproducibly quantitate poly-functional T cell responses following both primary and recall antigenic stimulation. Furthermore, this approach enables more comprehensive phenotypic profiling of circulating antigen-specific CD4+ T cells, providing valuable insights into the pre-existing polarization of antigen-specific T cells in humans. Combined, this approach permits sensitive and detailed ex vivo detection of antigen-specific CD4+ T cells, delivering an important tool for advancing vaccine, immuno-oncology and other therapeutic studies.

  11. Sampling large random knots in a confined space

    International Nuclear Information System (INIS)

    Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M

    2007-01-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way in sampling large (prime) knots, which can be useful in various applications

  12. Sampling large random knots in a confined space

    Science.gov (United States)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way in sampling large (prime) knots, which can be useful in various applications.

  13. Sampling large random knots in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)

    2007-09-28

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way in sampling large (prime) knots, which can be useful in various applications.
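    The O(n^2) growth of diagram crossings can be checked with a small simulation. This sketch assumes the simplest reading of the model (vertices uniform in the unit cube, diagram obtained by projecting to the xy-plane, vertices in general position) and counts crossings between non-adjacent edges:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def crossings(n):
        """Project a uniform random polygon (n vertices in the unit cube) to the
        xy-plane and count crossings between non-adjacent edges."""
        P = rng.uniform(size=(n, 3))[:, :2]          # projection keeps only x, y
        E = [(P[i], P[(i + 1) % n]) for i in range(n)]

        def cross(o, a, b):
            return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

        def intersects(p1, p2, q1, q2):
            # Proper crossing: endpoints of each segment strictly straddle the other.
            return (cross(p1, p2, q1) * cross(p1, p2, q2) < 0 and
                    cross(q1, q2, p1) * cross(q1, q2, p2) < 0)

        c = 0
        for i in range(n):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:            # these edges share vertex 0
                    continue
                c += intersects(*E[i], *E[j])
        return c

    print(crossings(50), crossings(100))
    ```

    Since the number of non-adjacent edge pairs grows like n^2 and each pair crosses with roughly constant probability, the crossing count grows quadratically, consistent with the paper's O(n^2) claim.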

  14. Yersinia enterocolitica in slaughter pig tonsils: enumeration and detection by enrichment versus direct plating culture.

    Science.gov (United States)

    Van Damme, Inge; Habib, Ihab; De Zutter, Lieven

    2010-02-01

    Tonsil samples from 139 slaughter pigs were examined for the presence of pathogenic Yersinia enterocolitica by enrichment procedures based on the standard method ISO 10273:2003. In addition, samples were tested by direct plating method to evaluate its efficiency compared to the enrichment culture methods and to quantify the level of contamination in porcine tonsils. In total, 52 samples (37.4%) were positive for pathogenic Y. enterocolitica, all belonging to bioserotype 4/O:3. Fifty out of the 52 positive samples (96.2%) were detected by direct plating. Enumeration showed an average concentration of 4.5 log10 CFU g^-1 and 4.4 log10 CFU g^-1 tonsil on Salmonella-Shigella-desoxycholate-calcium chloride (SSDC) and cefsulodin-irgasan-novobiocin (CIN) agar plates, respectively. The enrichment procedures recommended by the ISO 10273:2003 method were not optimal for the isolation of pathogenic Y. enterocolitica from pig tonsils: two days enrichment in irgasan-ticarcillin-potassium chlorate (ITC) broth resulted in an isolation rate of 84.6%, while 5 days enrichment in peptone-sorbitol-bile (PSB) broth recovered only 59.6% of positive samples. Reducing the enrichment time in PSB from 5 to 2 days resulted in a significantly higher recovery rate (94.2%) and might serve as an appropriate enrichment protocol for the isolation of pathogenic Y. enterocolitica from pig tonsils. Compared to enrichment culture methods, results based on direct plating can be obtained in a shorter time course and provide quantitative data that might be needed for further risk assessment studies.

  15. Bridging Emergent Attributes and Darwinian Principles in Teaching Natural Selection

    Science.gov (United States)

    Xu, Dongchen; Chi, Michelene T. H.

    2016-01-01

    Students often have misconceptions about natural selection as they misuse a direct causal schema to explain the process. Natural selection is in fact an emergent process where random interactions lead to changes in a population. The misconceptions stem from students' lack of emergent schema for natural selection. In order to help students…

  16. Selection of representative calibration sample sets for near-infrared reflectance spectroscopy to predict nitrogen concentration in grasses

    DEFF Research Database (Denmark)

    Shetty, Nisha; Rinnan, Åsmund; Gislum, René

    2012-01-01

    ) algorithm were used and compared. Both the Puchwein and CADEX methods provide a calibration set equally distributed in space, and both require a minimum of prior knowledge. The samples were also selected randomly using complete random, cultivar random (year fixed), year random (cultivar fixed......) and interaction (cultivar × year fixed) random procedures to see the influence of different factors on sample selection. Puchwein's method performed best with the lowest RMSEP, followed by CADEX, interaction random, year random, cultivar random and complete random. Out of 118 samples of the complete calibration set... effectively enhance the cost-effectiveness of NIR spectral analysis by reducing the number of analyzed samples in the calibration set by more than 80%, which substantially reduces the effort of laboratory analyses with no significant loss in prediction accuracy....
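    CADEX (the Kennard-Stone algorithm) can be sketched in a few lines. This is a generic illustration on synthetic data, not the authors' implementation; the 118-sample matrix and the subset size are stand-ins:

    ```python
    import numpy as np

    def kennard_stone(X, k):
        """CADEX / Kennard-Stone: greedily pick k mutually distant samples."""
        X = np.asarray(X, dtype=float)
        # Start from the sample farthest from the data mean.
        chosen = [int(np.argmax(np.linalg.norm(X - X.mean(axis=0), axis=1)))]
        # d[i] = distance from sample i to its nearest already-chosen sample.
        d = np.linalg.norm(X - X[chosen[0]], axis=1)
        while len(chosen) < k:
            nxt = int(np.argmax(d))          # farthest from the current selection
            chosen.append(nxt)
            d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
        return chosen

    rng = np.random.default_rng(1)
    scores = rng.normal(size=(118, 3))   # invented stand-in for 118 spectra scores
    calibration = kennard_stone(scores, 24)
    print(calibration)
    ```

    Because each new sample maximizes the distance to the nearest already-chosen sample, the selected calibration set is spread evenly across the data space, which is the property the abstract credits to both Puchwein and CADEX.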

  17. Selective mutism.

    Science.gov (United States)

    Hua, Alexandra; Major, Nili

    2016-02-01

    Selective mutism is a disorder in which an individual fails to speak in certain social situations though speaks normally in other settings. Most commonly, this disorder initially manifests when children fail to speak in school. Selective mutism results in significant social and academic impairment in those affected by it. This review will summarize the current understanding of selective mutism with regard to diagnosis, epidemiology, cause, prognosis, and treatment. Studies over the past 20 years have consistently demonstrated a strong relationship between selective mutism and anxiety, most notably social phobia. These findings have led to the recent reclassification of selective mutism as an anxiety disorder in the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition. In addition to anxiety, several other factors have been implicated in the development of selective mutism, including communication delays and immigration/bilingualism, adding to the complexity of the disorder. In the past few years, several randomized studies have supported the efficacy of psychosocial interventions based on a graduated exposure to situations requiring verbal communication. Less data are available regarding the use of pharmacologic treatment, though there are some studies that suggest a potential benefit. Selective mutism is a disorder that typically emerges in early childhood and is currently conceptualized as an anxiety disorder. The development of selective mutism appears to result from the interplay of a variety of genetic, temperamental, environmental, and developmental factors. Although little has been published about selective mutism in the general pediatric literature, pediatric clinicians are in a position to play an important role in the early diagnosis and treatment of this debilitating condition.

  18. Statistical properties of random clique networks

    Science.gov (United States)

    Ding, Yi-Min; Meng, Jun; Fan, Jing-Fang; Ye, Fang-Fu; Chen, Xiao-Song

    2017-10-01

    In this paper, a random clique network model to mimic the large clustering coefficient and the modular structure that exist in many real complex networks, such as social networks, artificial networks, and protein interaction networks, is introduced by combining the random selection rule of the Erdős and Rényi (ER) model and the concept of cliques. We find that random clique networks having a small average degree differ from the ER network in that they have a large clustering coefficient and a power law clustering spectrum, while networks having a high average degree have similar properties as the ER model. In addition, we find that the relation between the clustering coefficient and the average degree shows a non-monotonic behavior and that the degree distributions can be fit by multiple Poisson curves; we explain the origin of such novel behaviors and degree distributions.
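    A minimal sketch of such a construction (the node, clique, and size parameters below are arbitrary): repeatedly pick a set of nodes at random, ER-style, and wire each picked set into a clique. Even at low density, the result has a clustering coefficient far above that of an ER graph of comparable density:

    ```python
    import itertools
    import random

    import networkx as nx

    random.seed(0)
    n_nodes, n_cliques, clique_size = 200, 120, 4   # illustrative values only

    G = nx.Graph()
    G.add_nodes_from(range(n_nodes))
    for _ in range(n_cliques):
        # ER-style random selection of members, then wire them into a clique.
        members = random.sample(range(n_nodes), clique_size)
        G.add_edges_from(itertools.combinations(members, 2))

    density = nx.density(G)
    print(f"density {density:.3f}, clustering {nx.average_clustering(G):.3f}")
    ```

    In an ER graph the expected clustering coefficient equals the edge density; here the clique wiring pushes clustering well above density, reproducing the small-average-degree regime the paper describes.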

  19. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Feature Selection for Chemical Sensor Arrays Using Mutual Information

    Science.gov (United States)

    Wang, X. Rosalind; Lizier, Joseph T.; Nowotny, Thomas; Berna, Amalia Z.; Prokopenko, Mikhail; Trowell, Stephen C.

    2014-01-01

    We address the problem of feature selection for classifying a diverse set of chemicals using an array of metal oxide sensors. Our aim is to evaluate a filter approach to feature selection with reference to previous work, which used a wrapper approach on the same data set, and established best features and upper bounds on classification performance. We selected feature sets that exhibit the maximal mutual information with the identity of the chemicals. The selected features closely match those found to perform well in the previous study using a wrapper approach to conduct an exhaustive search of all permitted feature combinations. By comparing the classification performance of support vector machines (using features selected by mutual information) with the performance observed in the previous study, we found that while our approach does not always give the maximum possible classification performance, it always selects features that achieve classification performance approaching the optimum obtained by exhaustive search. We performed further classification using the selected feature set with some common classifiers and found that, for the selected features, Bayesian Networks gave the best performance. Finally, we compared the observed classification performances with the performance of classifiers using randomly selected features. We found that the selected features consistently outperformed randomly selected features for all tested classifiers. The mutual information filter approach is therefore a computationally efficient method for selecting near optimal features for chemical sensor arrays. PMID:24595058
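    The filter idea (rank features by mutual information with the class label, then classify on the top-ranked subset and compare against randomly selected features) can be sketched as follows; the dataset is synthetic, standing in for the sensor-array data, and Gaussian naive Bayes stands in for the classifiers compared in the paper:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    # Synthetic stand-in for sensor-array data: 4 informative of 20 features.
    X, y = make_classification(n_samples=400, n_features=20, n_informative=4,
                               n_redundant=0, n_clusters_per_class=1,
                               shuffle=False, random_state=0)

    mi = mutual_info_classif(X, y, random_state=0)  # filter: MI with class identity
    top = np.argsort(mi)[::-1][:4]                  # keep the 4 highest-MI features

    rng = np.random.default_rng(0)
    rand = rng.choice(X.shape[1], size=4, replace=False)  # random baseline subset

    clf = GaussianNB()
    acc_top = cross_val_score(clf, X[:, top], y, cv=5).mean()
    acc_rand = cross_val_score(clf, X[:, rand], y, cv=5).mean()
    print(f"MI-selected: {acc_top:.2f}, random: {acc_rand:.2f}")
    ```

    Unlike the wrapper approach, no classifier is trained during selection itself, which is what makes the filter computationally cheap; the final comparison against a random subset mirrors the paper's last experiment.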

  1. OCCURRENCE OF Campylobacter sp IN BROILER FLOCKS AND CORRESPONDING CARCASSES OCORRÊNCIA DE Campylobacter sp EM LOTES DE FRANGOS DE CORTE E NAS CARCAÇAS CORRESPONDENTES

    OpenAIRE

    Hamilton Luiz de Souza Moraes; Carlos Tadeu Pippi Salle; Laura Beatriz Rodrigues; Luciana Ruschel dos Santos; Suzete Lora Kuana; Vladimir Pinheiro do Nascimento

    2008-01-01

    The aim of the present study was to assess the dissemination and levels of Campylobacter contamination in broiler flocks and related carcasses. Twenty-two flocks aged 3 weeks or older were assessed, and 110 cecal droppings and 96 carcasses (38 carcasses after defeathering and 58 after the last chilling operation) were enumerated. Bolton selective enrichment broth was used for enumeration of the organism. Additionally, the carcasses wer...

  2. The Effect of Speed Alterations on Tempo Note Selection.

    Science.gov (United States)

    Madsen, Clifford K.; And Others

    1986-01-01

    Investigated the tempo note preferences of 100 randomly selected college-level musicians using familiar orchestral music as stimuli. Subjects heard selections at increased, decreased, and unaltered tempi. Results showed musicians were not accurate in estimating original tempo and showed consistent preference for faster than actual tempo.…

  3. Random walk on random walks

    NARCIS (Netherlands)

    Hilário, M.; Hollander, den W.Th.F.; Sidoravicius, V.; Soares dos Santos, R.; Teixeira, A.

    2014-01-01

    In this paper we study a random walk in a one-dimensional dynamic random environment consisting of a collection of independent particles performing simple symmetric random walks in a Poisson equilibrium with density ρ ∈ (0,∞). At each step the random walk performs a nearest-neighbour jump, moving to

  4. Occurrence of faecal contamination in households along the US-Mexico border.

    Science.gov (United States)

    Carrasco, L; Mena, K D; Mota, L C; Ortiz, M; Behravesh, C B; Gibbs, S G; Bristol, J R; Mayberry, L; Cardenas, V M

    2008-06-01

    The study aim was to determine the presence of total and faecal coliforms on kitchen surfaces, in tap water and on the hands of caregivers in households on both sides of the US-Mexico border. Samples were collected in 135 randomly selected households in Ciudad Juarez, Mexico, and El Paso, Texas. Different surfaces throughout the kitchen and head of households' hands were sampled using sterile cotton swabs moistened in D/E neutralizing solution. Sponge/dishcloth and drinking water samples were also obtained. Total and faecal coliforms were enumerated on m-Endo LES and mFC respectively. Total coliforms and Escherichia coli in drinking water samples were enumerated in accordance with the Quanti-Tray method. Sponge/dishcloth samples were the most commonly contaminated kitchen sites, followed by countertops and cutting boards. We recovered faecal coliforms from 14% of the hands of child caregivers, and this indicator was moderately associated with self-reported failure to wash hands after using the toilet (OR = 3.2; 95% CI: 0.9, 11.1). Hand washing should continue to be emphasized, and additional interventions should be directed to specific kitchen areas, such as sponges/dishcloths, tables/countertops and cutting boards. There is a need for additional interventions regarding kitchen sanitation.

  5. Biased random key genetic algorithm with insertion and gender selection for capacitated vehicle routing problem with time windows

    Science.gov (United States)

    Rochman, Auliya Noor; Prasetyo, Hari; Nugroho, Munajat Tri

    2017-06-01

    The Vehicle Routing Problem (VRP) often occurs when manufacturers need to distribute their product to customers or outlets. The distribution process is typically restricted by the capacity of the vehicle and the working hours at the distributor. This type of VRP is also known as the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). A Biased Random Key Genetic Algorithm (BRKGA) was designed and coded in MATLAB to solve a CVRPTW case of soft drink distribution. The standard BRKGA was then modified by applying chromosome insertion into the initial population and defining chromosome gender for parents undergoing the crossover operation. The performance of the established algorithms was then compared to that of a heuristic procedure for the same soft drink distribution case. Several findings are revealed: (1) the total distribution cost of BRKGA with insertion (BRKGA-I) yields a cost saving of 39% compared to the total cost of the heuristic method; (2) BRKGA with gender selection (BRKGA-GS) could further improve on the heuristic method, although BRKGA-GS tends to yield worse results than the standard BRKGA.
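
    The random-key encoding that the abstract relies on can be illustrated in a few lines. The decoder and biased crossover below are a generic BRKGA sketch, not the authors' MATLAB implementation; the greedy capacity-splitting rule and all names are assumptions:

```python
import random

def decode(keys, demands, capacity):
    """Decode a random-key chromosome into capacitated routes:
    sort customers by their key, then fill vehicles greedily."""
    order = sorted(range(len(keys)), key=lambda i: keys[i])
    routes, current, load = [], [], 0
    for c in order:
        if load + demands[c] > capacity:       # vehicle full: start a new route
            routes.append(current)
            current, load = [], 0
        current.append(c)
        load += demands[c]
    if current:
        routes.append(current)
    return routes

def biased_crossover(elite, non_elite, rho=0.7, rng=random.random):
    """BRKGA crossover: each gene is inherited from the elite parent
    with probability rho, biasing offspring toward good solutions."""
    return [e if rng() < rho else n for e, n in zip(elite, non_elite)]
```

    A chromosome is just a vector of keys in [0, 1); sorting the keys yields a customer sequence, so standard genetic operators can never produce an infeasible permutation.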

  6. Enumeration and identification of gram negative bacteria present in soil underlying urban waste-sites in southwestern Nigeria.

    Science.gov (United States)

    Achudume, A C; Olawale, J T

    2010-09-01

    Samples of soils underlying wastes were collected from four sites representing four demographic regions of a medium-sized town in southwestern Nigeria. Standard methods and reference strains of isolated bacteria were employed for identification. Evaluation of the enzymatic and biochemical reactions showed that all isolated and identified microbes were non-fermenting heterotrophic bacteria (HTB). For example, Klebsiella pneumoniae may be involved in wound infections, particularly following bowel surgery. Similarly, Pseudomonas aeruginosa can produce serious nosocomial infections if it gains access to the body through wounds or intravenous lines. From the 15 culture plates, 88 colonies with various characteristics were enumerated. They differed in viscosity and color. The bacterial species were identified by percent positive reactions, while oxidative and sugar fermentation tests revealed various characteristics among the isolated strains. All of the isolates were negative for citrate utilization, gelatin liquefaction, nitrate reduction, methyl red and Voges-Proskauer tests, motility and hydrogen sulphide production. The quantity of HTB present in an area serves as an index of the general sanitary conditions of that area. The presence of a large number of HTB in an ecological area may be considered a liability, as it can enhance the spread of diseases and on a larger scale may enable epidemics to arise. Therefore, there is a need for control of waste sites by recovery and regular germicidal sanitation.

  7. Numerical and experimental characterization of solid-state micropore-based cytometer for detection and enumeration of biological cells.

    Science.gov (United States)

    Guo, Jinhong; Chen, Liang; Ai, Ye; Cheng, Yuanbing; Li, Chang Ming; Kang, Yuejun; Wang, Zhiming

    2015-03-01

    Portable diagnostic devices have emerged as important tools in various biomedical applications since they can provide an effective solution for low-cost and rapid clinical diagnosis. In this paper, we present a micropore-based resistive cytometer for the detection and enumeration of biological cells. The proposed device was fabricated on a silicon wafer by a standard microelectromechanical system processing technology, which enables mass production of the proposed chip. The working principle of this cytometer is based upon a bias potential modulated pulse, originating from the biological particle's physical blockage of the micropore. Polystyrene particles of different sizes (7, 10, and 16 μm) were used to test and calibrate the proposed device. A finite element simulation was developed to predict the bias potential modulated pulse (peak amplitude vs. pulse bandwidth), which can provide critical insight into the design of this microfluidic flow cytometer. Furthermore, HeLa cells (a tumor cell line) spiked into a suspension of blood cells, including red blood cells and white blood cells, were used to assess the performance for detecting and counting tumor cells. The proposed microfluidic flow cytometer is able to provide a promising platform to address the current unmet need for point-of-care clinical diagnosis. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Nonlinear Pricing with Random Participation

    OpenAIRE

    Jean-Charles Rochet; Lars A. Stole

    2002-01-01

    The canonical selection contracting programme takes the agent's participation decision as deterministic and finds the optimal contract, typically satisfying this constraint for the worst type. Upon weakening this assumption of known reservation values by introducing independent randomness into the agents' outside options, we find that some of the received wisdom from mechanism design and nonlinear pricing is not robust and the richer model which allows for stochastic participation affords a m...

  9. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method…

  10. Comparison of New and Traditional Culture-Dependent Media for Enumerating Foodborne Yeasts and Molds.

    Science.gov (United States)

    Beuchat, Larry R; Mann, David A

    2016-01-01

    Fifty-six foods and food ingredients were analyzed for populations of naturally occurring yeasts and molds using Petrifilm rapid yeast and mold (RYM) count plates, Petrifilm yeast and mold (YM) count plates, dichloran rose bengal chloramphenicol (DRBC) agar plates, acidified potato dextrose agar (APDA) plates, and dichloran 18% glycerol (DG18) agar plates. Colonies were counted after incubating plates for 48, 72, and 120 h at 25°C. Of 56 foods in which either yeasts or molds were detected on at least one medium incubated for 120 h, neither yeasts nor molds were detected in 55.4, 73.2, 21.4, 19.6, and 71.4% of foods plated on the five respective media and incubated for 48 h; 10.7, 14.3, 3.6, 1.8, and 19.6% of foods were negative after 72 h, and 3.6, 1.8, 0, 0, and 0% of foods were negative after 120 h. Considering all enumeration media, correlation coefficients were 0.03 to 0.97 at 48 h of incubation; these values increased to 0.75 to 0.99 at 120 h. Coefficients of variation for total yeasts and molds were as high as 30.0, 30.8, and 27.2% at 48, 72, and 120 h, respectively. The general order of performance was DRBC = APDA > RYM Petrifilm > YM Petrifilm ≥ DG18 when plates were incubated for 48 h, DRBC > APDA > RYM Petrifilm > YM Petrifilm ≥ DG18 when plates were incubated for 72 h, and DRBC > APDA > RYM Petrifilm > YM Petrifilm > DG18 when plates were incubated for 120 h. Differences in performance among media are attributed to the diversity of yeasts and molds likely to be present in test foods and differences in nutrient, pH, and water activity requirements for resuscitation of stressed cells and colony development.

  11. Impact of selective genotyping in the training population on accuracy and bias of genomic selection.

    Science.gov (United States)

    Zhao, Yusheng; Gowda, Manje; Longin, Friedrich H; Würschum, Tobias; Ranc, Nicolas; Reif, Jochen C

    2012-08-01

    Estimating marker effects based on routinely generated phenotypic data of breeding programs is a cost-effective strategy to implement genomic selection. Truncation selection in breeding populations, however, could have a strong impact on the accuracy to predict genomic breeding values. The main objective of our study was to investigate the influence of phenotypic selection on the accuracy and bias of genomic selection. We used experimental data of 788 testcross progenies from an elite maize breeding program. The testcross progenies were evaluated in unreplicated field trials in ten environments and fingerprinted with 857 SNP markers. Random regression best linear unbiased prediction method was used in combination with fivefold cross-validation based on genotypic sampling. We observed a substantial loss in the accuracy to predict genomic breeding values in unidirectional selected populations. In contrast, estimating marker effects based on bidirectional selected populations led to only a marginal decrease in the prediction accuracy of genomic breeding values. We concluded that bidirectional selection is a valuable approach to efficiently implement genomic selection in applied plant breeding programs.

  12. Algebraic methods in random matrices and enumerative geometry

    CERN Document Server

    Eynard, Bertrand

    2008-01-01

    We review the method of symplectic invariants recently introduced to solve matrix model loop equations, and further extended beyond the context of matrix models. For any given spectral curve, one defines a sequence of differential forms and a sequence of complex numbers Fg. We recall the definition of the invariants Fg and explain their main properties, in particular symplectic invariance, integrability, and modularity. We then give several examples of applications, in particular matrix models, enumeration of discrete surfaces (maps), algebraic geometry and topological strings, and non-intersecting Brownian motions.

  13. Noncontextuality with Marginal Selectivity in Reconstructing Mental Architectures

    Directory of Open Access Journals (Sweden)

    Ru Zhang

    2015-06-01

    Full Text Available We present a general theory of series-parallel mental architectures with selectively influenced stochastically non-independent components. A mental architecture is a hypothetical network of processes aimed at performing a task, of which we only observe the overall time it takes under variable parameters of the task. It is usually assumed that the network contains several processes selectively influenced by different experimental factors, and then the question is asked as to how these processes are arranged within the network, e.g., whether they are concurrent or sequential. One way of doing this is to consider the distribution functions for the overall processing time and compute certain linear combinations thereof (interaction contrasts). The theory of selective influences in psychology can be viewed as a special application of the interdisciplinary theory of (non)contextuality having its origins and main applications in quantum theory. In particular, lack of contextuality is equivalent to the existence of a hidden random entity of which all the random variables in play are functions. Consequently, for any given value of this common random entity, the processing times and their compositions (minima, maxima, or sums) become deterministic quantities. These quantities, in turn, can be treated as random variables with (shifted) Heaviside distribution functions, for which one can easily compute various linear combinations across different treatments, including interaction contrasts. This mathematical fact leads to a simple method, more general than the previously used ones, to investigate and characterize the interaction contrast for different types of series-parallel architectures.
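
    The interaction contrast the abstract refers to is a double difference of distribution functions across a 2×2 factorial manipulation. A minimal sketch using empirical CDFs (the function names are ours, and the exponential stage durations in the usage example are purely illustrative):

```python
import numpy as np

def ecdf(samples, t):
    """Empirical distribution function of `samples`, evaluated at points t."""
    samples = np.sort(samples)
    return np.searchsorted(samples, t, side="right") / len(samples)

def interaction_contrast(T11, T12, T21, T22, t):
    """I(t) = F11(t) - F12(t) - F21(t) + F22(t): the double difference of
    overall-time distribution functions across the four factor combinations."""
    return ecdf(T11, t) - ecdf(T12, t) - ecdf(T21, t) + ecdf(T22, t)
```

    For a serial architecture T = A + B, with factor 1 prolonging A and factor 2 prolonging B, the interaction contrast of the mean times is exactly zero while I(t) changes sign in t; parallel compositions produce contrasts of a single sign, which is what lets the contrast discriminate between architectures.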

  14. Attitudes towards smoking restrictions and tobacco advertisement bans in Georgia.

    Science.gov (United States)

    Bakhturidze, George D; Mittelmark, Maurice B; Aarø, Leif E; Peikrishvili, Nana T

    2013-11-25

    This study aims to provide data on the level of public support for restricting smoking in public places and banning tobacco advertisements. A nationally representative multistage sampling design was used, with sampling strata defined by region (sampling quotas proportional to size) and substrata defined by urban/rural and mountainous/lowland settlement, within which census enumeration districts were randomly sampled, within which households were randomly sampled, within which a randomly selected respondent was interviewed. The setting was the country of Georgia, population 4.7 million, located in the Caucasus region of Eurasia. One household member aged between 13 and 70 was selected as interviewee. In households with more than one age-eligible person, selection was carried out at random. Of 1588 persons selected, 14 refused to participate and interviews were conducted with 915 women and 659 men. Respondents were interviewed about their level of agreement with eight possible smoking restrictions/bans, used to calculate a single dichotomous (agree/do not agree) opinion indicator. The level of agreement with restrictions was analysed in bivariate and multivariate analyses by age, gender, education, income and tobacco use status. Overall, 84.9% of respondents indicated support for smoking restrictions and tobacco advertisement bans. In all demographic segments, including tobacco users, the majority of respondents indicated agreement with restrictions, ranging from a low of 51% in the 13-25 age group to a high of 98% in the 56-70 age group. Logistic regression with all demographic variables entered showed that agreement with restrictions increased with age, and was significantly higher among never smokers as compared to daily smokers. Georgian public opinion is normatively supportive of more stringent tobacco-control measures in the form of smoking restrictions and tobacco advertisement bans.

  15. Patterns of survival and volatile metabolites of selected Lactobacillus strains during long-term incubation in milk.

    Science.gov (United States)

    Łaniewska-Trokenheim, Łucja; Olszewska, Magdalena; Miks-Krajnik, Marta; Zadernowska, Anna

    2010-08-01

    The focus of this study was to monitor the survival of populations and the volatile compound profiles of selected Lactobacillus strains during long-term incubation in milk. The enumeration of cells was determined by both the Direct Epifluorescent Filter Technique using carboxyfluorescein diacetate (CFDA) staining and the plate method. Volatile compounds were analysed by the gas-chromatography technique. All strains exhibited good survival in cultured milks, but Lactobacillus crispatus L800 was the only strain with comparable growth and viability in milk, assessed by plate and epifluorescence methods. The significant differences in cell numbers between plate and microscopic counts were obtained for L. acidophilus strains. The investigated strains exhibited different metabolic profiles. Depending on the strain used, 3 to 8 compounds were produced. The strains produced significantly higher concentrations of acetic acid, compared to other volatiles. Lactobacillus strains differed from one another in number and contents of the volatile compounds.

  16. DNA-based random number generation in security circuitry.

    Science.gov (United States)

    Gearheart, Christy M; Arazi, Benjamin; Rouchka, Eric C

    2010-06-01

    DNA-based circuit design is an area of research in which traditional silicon-based technologies are replaced by naturally occurring phenomena taken from biochemistry and molecular biology. This research focuses on further developing DNA-based methodologies to mimic digital data manipulation. While exhibiting fundamental principles, this work was done in conjunction with the vision that DNA-based circuitry, when the technology matures, will form the basis for a tamper-proof security module, revolutionizing the meaning and concept of tamper-proofing and possibly preventing it altogether based on accurate scientific observations. A paramount part of such a solution would be self-generation of random numbers. A novel prototype schema employs solid phase synthesis of oligonucleotides for random construction of DNA sequences; temporary storage and retrieval is achieved through plasmid vectors. A discussion of how to evaluate sequence randomness is included, as well as how these techniques are applied to a simulation of the random number generation circuitry. Simulation results show generated sequences successfully pass three selected NIST random number generation tests specified for security applications.
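
    For context, the simplest of the NIST SP 800-22 tests of the kind mentioned above is the frequency (monobit) test; the abstract does not name the three tests actually used, so the sketch below is illustrative only:

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test. Maps bits to +/-1, sums
    them into S_n, and returns the p-value erfc(|S_n| / sqrt(2 * n));
    sequences with p >= 0.01 pass at the standard significance level."""
    n = len(bits)
    s_n = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s_n) / math.sqrt(2 * n))
```

    A perfectly balanced sequence yields p = 1.0, while a constant sequence fails decisively, so the test rejects gross biases in the ones density but says nothing about ordering, which is why it is always combined with further tests.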

  17. Multistage Selection and the Financing of New Ventures

    OpenAIRE

    Jonathan T. Eckhardt; Scott Shane; Frédéric Delmar

    2006-01-01

    Using a random sample of 221 new Swedish ventures initiated in 1998, we examine why some new ventures are more likely than others to successfully be awarded capital from external sources. We examine venture financing as a staged selection process in which two sequential selection events systematically winnow the population of ventures and influence which ventures receive financing. For a venture to receive external financing its founders must first select it as a candidate for external fundin...

  18. Rural Women\\'s Preference For Selected Programmes Of The ...

    African Journals Online (AJOL)

    The study focused on the rural women's preference for selected programmes of the National Special Programme for Food Security (NSPFS) in Imo State, Nigeria. Data was collected with the aid of structured interview from 150 randomly selected women in the study area. Results from the study showed that respondents ...

  19. Airflow obstruction in young adults in Canada

    DEFF Research Database (Denmark)

    Al-Hazmi, Manal; Wooldrage, Kate; Anthonisen, Nicholas R.

    2007-01-01

    OBJECTIVE: Airflow obstruction is relatively uncommon in young adults, and may indicate potential for the development of progressive disease. The objective of the present study was to enumerate and characterize airflow obstruction in a random sample of Canadians aged 20 to 44 years. SETTING: The ...

  20. A New Random Walk for Replica Detection in WSNs

    Science.gov (United States)

    Aalsalem, Mohammed Y.; Saad, N. M.; Hossain, Md. Shohrab; Atiquzzaman, Mohammed; Khan, Muhammad Khurram

    2016-01-01

    Wireless Sensor Networks (WSNs) are vulnerable to Node Replication attacks or Clone attacks. Among all the existing clone detection protocols in WSNs, RAWL shows the most promising results by employing Simple Random Walk (SRW). More recently, RAND outperforms RAWL by incorporating Network Division with SRW. Both RAND and RAWL have used SRW for random selection of witness nodes which is problematic because of frequently revisiting the previously passed nodes that leads to longer delays, high expenditures of energy with lower probability that witness nodes intersect. To circumvent this problem, we propose to employ a new kind of constrained random walk, namely Single Stage Memory Random Walk and present a distributed technique called SSRWND (Single Stage Memory Random Walk with Network Division). In SSRWND, single stage memory random walk is combined with network division aiming to decrease the communication and memory costs while keeping the detection probability higher. Through intensive simulations it is verified that SSRWND guarantees higher witness node security with moderate communication and memory overheads. SSRWND is expedient for security oriented application fields of WSNs like military and medical. PMID:27409082
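
    A single stage memory random walk simply remembers the node visited immediately before and avoids stepping straight back to it. A minimal sketch on an adjacency-list graph (the function name and the fallback rule at dead ends are our assumptions, and the network-division stage of SSRWND is omitted):

```python
import random

def ss_memory_walk(adj, start, steps, rng=None):
    """Random walk with one step of memory: pick a uniform random
    neighbour, excluding the previously visited node whenever another
    neighbour exists. This reduces immediate backtracking compared
    with a simple random walk."""
    rng = rng or random.Random(0)
    path, prev, cur = [start], None, start
    for _ in range(steps):
        choices = [v for v in adj[cur] if v != prev] or adj[cur]  # fall back at dead ends
        prev, cur = cur, rng.choice(choices)
        path.append(cur)
    return path
```

    On a path graph the walk is forced forward, whereas a simple random walk would backtrack with probability 1/2 at every interior node; this is the mechanism behind the shorter delays and lower energy cost claimed above.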

  2. The concentration of heavy metals: zinc, cadmium, lead, copper, mercury, iron and calcium in head hair of a randomly selected sample of Kenyan people

    International Nuclear Information System (INIS)

    Wandiga, S.O.; Jumba, I.O.

    1982-01-01

    An intercomparative analysis of the concentration of heavy metals (zinc, cadmium, lead, copper, mercury, iron and calcium) in head hair of a randomly selected sample of Kenyan people, using the techniques of atomic absorption spectrophotometry (AAS) and differential pulse anodic stripping voltammetry (DPAS), has been undertaken. The percent relative standard deviation for each sample analysed using either of the techniques shows good sensitivity and correlation between the techniques. The DPAS was found to be slightly more sensitive than the AAS instrument used. The recalculated body burden ratios of Cd to Zn and Pb to Fe reveal no unusual health impairment symptoms and suggest a relatively clean environment in Kenya. (author)

  3. A computational model of selection by consequences.

    OpenAIRE

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied o...

  4. Portfolio Selection with Jumps under Regime Switching

    Directory of Open Access Journals (Sweden)

    Lin Zhao

    2010-01-01

    Full Text Available We investigate a continuous-time version of the mean-variance portfolio selection model with jumps under regime switching. The portfolio selection is proposed and analyzed for a market consisting of one bank account and multiple stocks. The random regime switching is assumed to be independent of the underlying Brownian motion and jump processes. A Markov chain modulated diffusion formulation is employed to model the problem.

  5. Music in the cath lab: who should select it?

    Science.gov (United States)

    Goertz, Wolfram; Dominick, Klaus; Heussen, Nicole; vom Dahl, Juergen

    2011-05-01

    The ALMUT study aims to evaluate the anxiolytic effects of different music styles and of no music in 200 patients undergoing cardiac catheterization, and to assess whether it makes a difference if patients select one of these therapies or are randomized to one of them. The anxiolytic and analgesic effects of music have been described in previous trials. Some authors have suggested evaluating whether patient-selected music is more effective in reducing anxiety and stress levels than music selected by the physician. After randomization, 100 patients (group A) were allowed to choose between classical music, relaxing modern music, smooth jazz, and no music. One hundred patients (group B) were randomized directly to one of these therapies (n = 25 each). Complete data were available for 197 patients (65 ± 10 years; 134 male). Using the State-Trait Anxiety Inventory (STAI), all patients in group B who listened to music showed a significantly higher decrease of their anxiety level (STAI-State difference pre-post of 16.8 ± 10.2) compared to group A (13.3 ± 11.1; p = 0.0176). Patients without music (6.2 ± 6.7) had a significantly weaker reduction of anxiety compared to all music listeners (14.9 ± 10.7). These findings on the anxiolytic effects of music in the cath lab support previous reports. Surprisingly, the hypothesis that the patient's choice of preferred music might yield higher benefits than a randomized assignment could be dismissed.

  6. On the foundations of the random lattice approach to quantum gravity

    International Nuclear Information System (INIS)

    Levin, A.; Morozov, A.

    1990-01-01

    We discuss a problem which can arise in the identification of conventional 2D quantum gravity, involving the sum over Riemann surfaces, with the results of the lattice approach, based on the enumeration of the Feynman graphs of matrix models. A potential difficulty is related to the (hypothetical) fact that the arithmetic curves are badly distributed in the moduli spaces for high enough genera (at least for g ≥ 17). (orig.)

  7. A comparison of cryopreservation methods: Slow-cooling vs. rapid-cooling based on cell viability, oxidative stress, apoptosis, and CD34+ enumeration of human umbilical cord blood mononucleated cells

    Directory of Open Access Journals (Sweden)

    Sandra Ferry

    2011-09-01

    Full Text Available Abstract Background The finding of human umbilical cord blood as one of the most likely sources of hematopoietic stem cells offers a less invasive alternative for the need of hematopoietic stem cell transplantation. Due to the once-in-a-lifetime chance of collecting it, an optimum cryopreservation method that can preserve the life and function of the cells contained is critically needed. Methods Until now, slow-cooling has been the routine method of cryopreservation; however, rapid-cooling offers a simple, efficient, and harmless method for preserving the life and function of the desired cells. Therefore, this study was conducted to compare the effectiveness of slow- and rapid-cooling to preserve umbilical cord blood mononucleated cells suspected of containing hematopoietic stem cells. The parameters used in this study were differences in cell viability, malondialdehyde content, and apoptosis level. The identification of hematopoietic stem cells themselves was carried out by enumerating CD34+ cells in a flow cytometer. Results Our results showed that mononucleated cell viability after rapid-cooling (91.9% was significantly higher than that after slow-cooling (75.5%, with a p value = 0.003. Interestingly, the malondialdehyde level in the mononucleated cell population after rapid-cooling (56.45 μM was also significantly higher than that after slow-cooling (33.25 μM, while the difference in apoptosis level between the two methods was not statistically significant (p value = 0.138. However, CD34+ enumeration was much higher in the population that underwent slow-cooling (23.32 cell/μl than in the one that underwent rapid-cooling (2.47 cell/μl, with a p value = 0.001. Conclusions Rapid-cooling is a potential cryopreservation method to be used to preserve the umbilical cord blood of mononucleated cells, although further optimization of the number of CD34+ cells after rapid-cooling is critically needed.

  8. Analyzing the behavior and reliability of voting systems comprising tri-state units using enumerated simulation

    International Nuclear Information System (INIS)

    Yacoub, Sherif

    2003-01-01

    Voting is a common technique used in combining results from peer experts, for multiple purposes, and in a variety of domains. In distributed decision-making systems, voting mechanisms are used to obtain a decision by incorporating the opinion of multiple units. Voting systems have many applications in fault tolerant systems, mutual exclusion in distributed systems, and replicated databases. We are specifically interested in voting systems as used in decision-making applications. In this paper, we describe a synthetic experimental procedure to study the behavior of a variety of voting system configurations using a simulator to: analyze the state of each expert, apply a voting mechanism, and analyze the voting results. We introduce an enumerated-simulation approach and compare it to existing mathematical approaches. The paper studies the following behaviors of a voting system: (1) the reliability of the voting system, R; (2) the probability of reaching a consensus, Pc; (3) the certainty index, T; and (4) the confidence index, C. The configuration parameters controlling the analysis are: (1) the number of participating experts, N; (2) the possible output states of an expert; and (3) the probability distribution of each expert's states. We illustrate the application of this approach to a voting system that consists of N units, each of which has three states: correct (success), wrong (failed), and abstain (did not produce an output). The final output of the decision-making (voting) system is correct if a consensus is reached on a correct unit output, abstain if all units abstain from voting, and wrong otherwise. We show that, using the proposed approach, we can easily conduct studies to reveal several behaviors of a decision-making system with tri-state experts.

  9. Combined impact of negative lifestyle factors on cardiovascular risk in children: a randomized prospective study

    OpenAIRE

    Meyer, Ursina; Schindler, Christian; Bloesch, Tamara; Schmocker, Eliane; Zahner, Lukas; Puder, Jardena J; Kriemler, Susi

    2014-01-01

    PURPOSE: Negative lifestyle factors are known to be associated with increased cardiovascular risk (CVR) in children, but research on their combined impact on a general population of children is sparse. Therefore, we aimed to quantify the combined impact of easily assessable negative lifestyle factors on the CVR scores of randomly selected children after 4 years. METHODS: Of the 540 randomly selected 6- to 13-year-old children, 502 children participated in a baseline health assessment, and ...

  10. Brain Tumor Segmentation Based on Random Forest

    Directory of Open Access Journals (Sweden)

    László Lefkovits

    2016-09-01

    Full Text Available In this article we present a discriminative model for tumor detection from multimodal MR images. The main part of the model is built around the random forest (RF classifier. We created an optimization algorithm able to select the important features for reducing the dimensionality of data. This method is also used to find out the training parameters used in the learning phase. The algorithm is based on random feature properties for evaluating the importance of the variable, the evolution of learning errors and the proximities between instances. The detection performances obtained have been compared with the most recent systems, offering similar results.
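The mechanics the abstract relies on — bootstrap resampling, a random feature choice per tree, majority voting, and permutation-based variable importance — can be sketched compactly. The following is a minimal stdlib-only illustration using one-feature decision stumps as the "trees"; the dataset and all names are hypothetical, and this is not the authors' optimization algorithm:

```python
import random

def train_stump(X, y, feat):
    # exhaustively pick the threshold on one feature that best splits the labels
    def majority(labels):
        return max(set(labels), key=labels.count) if labels else 0
    best = None
    for t in sorted(set(row[feat] for row in X)):
        left = [yi for row, yi in zip(X, y) if row[feat] <= t]
        right = [yi for row, yi in zip(X, y) if row[feat] > t]
        pl, pr = majority(left), majority(right)
        acc = sum(yi == (pl if row[feat] <= t else pr)
                  for row, yi in zip(X, y)) / len(y)
        if best is None or acc > best[0]:
            best = (acc, t, pl, pr)
    return (feat,) + best[1:]

def predict_stump(stump, row):
    feat, t, pl, pr = stump
    return pl if row[feat] <= t else pr

def train_forest(X, y, n_trees, rng):
    forest = []
    for _ in range(n_trees):
        boot = [rng.randrange(len(X)) for _ in range(len(X))]   # bootstrap sample
        feat = rng.randrange(len(X[0]))                         # random feature choice
        forest.append(train_stump([X[i] for i in boot], [y[i] for i in boot], feat))
    return forest

def predict_forest(forest, row):
    votes = [predict_stump(s, row) for s in forest]
    return max(set(votes), key=votes.count)                     # majority vote

def permutation_importance(forest, X, y, feat, rng):
    # importance = accuracy drop when one feature's values are shuffled
    def accuracy(rows):
        return sum(predict_forest(forest, r) == yi
                   for r, yi in zip(rows, y)) / len(y)
    shuffled = [row[feat] for row in X]
    rng.shuffle(shuffled)
    Xp = [list(row) for row in X]
    for row, v in zip(Xp, shuffled):
        row[feat] = v
    return accuracy(X) - accuracy(Xp)

rng = random.Random(3)
X = [[rng.random(), rng.random()] for _ in range(60)]   # feature 1 is pure noise
y = [1 if row[0] > 0.5 else 0 for row in X]
forest = train_forest(X, y, n_trees=30, rng=random.Random(4))
imp0 = permutation_importance(forest, X, y, 0, random.Random(5))
imp1 = permutation_importance(forest, X, y, 1, random.Random(5))
```

Shuffling the informative feature should hurt accuracy more than shuffling the noise feature — the kind of signal a feature-selection step exploits to reduce dimensionality.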

  11. Exploring pseudo- and chaotic random Monte Carlo simulations

    Science.gov (United States)

    Blais, J. A. Rod; Zhang, Zhan

    2011-07-01

    Computer simulations are an increasingly important area of geoscience research and development. At the core of stochastic or Monte Carlo simulations are the random number sequences that are assumed to be distributed with specific characteristics. Computer-generated random numbers, uniformly distributed on (0, 1), can be very different depending on the selection of pseudo-random number (PRN) or chaotic random number (CRN) generators. In the evaluation of some definite integrals, the resulting error variances can even be of different orders of magnitude. Furthermore, practical techniques for variance reduction such as importance sampling and stratified sampling can be applied in most Monte Carlo simulations and significantly improve the results. A comparative analysis of these strategies has been carried out for computational applications in planar and spatial contexts. Based on these experiments, and on some practical examples of geodetic direct and inverse problems, conclusions and recommendations concerning their performance and general applicability are included.
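The stratified-sampling claim can be checked directly: splitting the unit interval into n equal strata and drawing one point per stratum leaves the estimator unbiased but sharply reduces its variance. A minimal stdlib sketch (the integrand and sample sizes are illustrative choices, not those of the paper):

```python
import random

def plain_mc(f, n, rng):
    # crude Monte Carlo: n iid uniform samples on (0, 1)
    return sum(f(rng.random()) for _ in range(n)) / n

def stratified_mc(f, n, rng):
    # one uniform sample inside each of the n equal-width strata
    return sum(f((i + rng.random()) / n) for i in range(n)) / n

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

f = lambda x: x * x                      # exact integral over [0, 1] is 1/3
rng = random.Random(42)
plain = [plain_mc(f, 100, rng) for _ in range(200)]
strat = [stratified_mc(f, 100, rng) for _ in range(200)]
```

With the same budget of 100 evaluations per estimate, the stratified estimates cluster far more tightly around 1/3, illustrating the error-variance gap the abstract describes.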

  12. Fixation probability in a two-locus intersexual selection model.

    Science.gov (United States)

    Durand, Guillermo; Lessard, Sabin

    2016-06-01

    We study a two-locus model of intersexual selection in a finite haploid population reproducing according to a discrete-time Moran model with a trait locus expressed in males and a preference locus expressed in females. We show that the probability of ultimate fixation of a single mutant allele for a male ornament introduced at random at the trait locus given any initial frequency state at the preference locus is increased by weak intersexual selection and recombination, weak or strong. Moreover, this probability exceeds the initial frequency of the mutant allele even in the case of a costly male ornament if intersexual selection is not too weak. On the other hand, the probability of ultimate fixation of a single mutant allele for a female preference towards a male ornament introduced at random at the preference locus is increased by weak intersexual selection and weak recombination if the female preference is not costly, and is strong enough in the case of a costly male ornament. The analysis relies on an extension of the ancestral recombination-selection graph for samples of haplotypes to take into account events of intersexual selection, while the symbolic calculation of the fixation probabilities is made possible in a reasonable time by an optimizing algorithm. Copyright © 2016 Elsevier Inc. All rights reserved.
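The discrete-time Moran dynamics underlying the model are easy to simulate in the simplest case. The sketch below is a single-locus reduction (no preference locus, no recombination) — an assumption made for illustration, not the authors' two-locus ancestral recombination-selection graph. Neutral theory predicts a fixation probability of 1/N for a single mutant, and a selective advantage raises it above that baseline:

```python
import random

def fixation_probability(N, s=0.0, trials=2000, seed=1):
    # Moran model: per step, one fitness-weighted birth and one uniform death;
    # a single mutant with selective advantage s starts at count 1
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        k = 1                                   # current mutant count
        while 0 < k < N:
            p_birth_mut = k * (1 + s) / (k * (1 + s) + (N - k))
            birth_mut = rng.random() < p_birth_mut
            death_mut = rng.random() < k / N
            if birth_mut and not death_mut:
                k += 1
            elif death_mut and not birth_mut:
                k -= 1
        fixed += (k == N)
    return fixed / trials

p_neutral = fixation_probability(10)            # theory: 1/N = 0.1
p_selected = fixation_probability(10, s=0.2)    # selection raises fixation odds
```

The paper's statement that fixation probability can exceed the initial mutant frequency corresponds here to p_selected rising above 1/N.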

  13. Microbiological detection of probiotic microorganisms in fermented milk products

    Directory of Open Access Journals (Sweden)

    Radka Burdychová

    2007-01-01

    Full Text Available A number of health benefits have been claimed for probiotic bacteria such as Lactobacillus acidophilus, Bifidobacterium spp. and Lactobacillus rhamnosus. Because of the potential health benefits, these organisms are increasingly incorporated into dairy foods. However, to deliver health benefits, the concentration of probiotics has to be at least 10^6 CFU/g of a product. To assess whether the required quantity of probiotic bacteria is present, it is important to have a working method for selective enumeration of these probiotic bacteria. Five bacteriological media were evaluated to assess their suitability to selectively enumerate Streptococcus thermophilus, Lactobacillus rhamnosus, Lactobacillus acidophilus and Bifidobacterium spp. The media evaluated included Streptococcus thermophilus agar, pH-modified MRS agar, MRS-vancomycin agar and BSM (Bifidus Selective Medium) agar under different culture conditions. Seven selected fermented milk products with probiotic cultures were analyzed for their bacterial populations using the described selective bacteriological media and culture conditions. All milk products contained the probiotic microorganisms claimed to be present in the declared quantity (10^6–10^7 CFU/g).

  14. Impact of Low and High Doses of Marbofloxacin on the Selection of Resistant Enterobacteriaceae in the Commensal Gut Flora of Young Cattle: Discussion of Data from 2 Study Populations.

    Science.gov (United States)

    Lhermie, Guillaume; Dupouy, Véronique; El Garch, Farid; Ravinet, Nadine; Toutain, Pierre-Louis; Bousquet-Mélou, Alain; Seegers, Henri; Assié, Sébastien

    2017-03-01

    In the context of the requested decrease of antimicrobial use in veterinary medicine, our objective was to assess the impact of two doses of marbofloxacin administered to young bulls (YBs) and veal calves (VCs) treated for bovine respiratory disease on the total population of Enterobacteriaceae in gut flora and on the emergence of resistant Enterobacteriaceae. In two independent experiments, 48 YBs from 6 commercial farms and 33 VCs, previously colostrum-deprived and exposed to cefquinome, were randomly assigned to one of three groups: LOW, HIGH, and Control. In the LOW and HIGH groups, animals received a single injection of 2 and 10 mg/kg marbofloxacin, respectively. Feces were sampled before treatment and at several times after treatment. Enumeration of total and resistant Enterobacteriaceae was performed by plating dilutions of fecal samples on MacConkey agar plates supplemented or not with quinolone. In YBs, marbofloxacin treatment was associated with a transient decrease in total Enterobacteriaceae counts between day (D)1 and D3 after treatment. Total Enterobacteriaceae counts returned to baseline between D5 and D7 in all groups. None of the 48 YBs harbored marbofloxacin-resistant Enterobacteriaceae before treatment. After treatment, 1 out of 20 YBs from the Control group and 1 out of 14 YBs from the HIGH group exhibited marbofloxacin-resistant Enterobacteriaceae. In VCs, the rate of fluoroquinolone-resistant Enterobacteriaceae significantly increased after both low and high doses of marbofloxacin. However, the effect was similar for the two doses, which was probably related to the high level of resistant Enterobacteriaceae present before treatment. Our results suggest that a single treatment with 2 or 10 mg/kg marbofloxacin exerts a moderate selective pressure on commensal Enterobacteriaceae in YBs and in VCs. A fivefold decrease of the marbofloxacin regimen did not affect the selection of resistance among commensal bacteria.

  15. How random is a random vector?

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2015-01-01

    Over 80 years ago Samuel Wilks proposed that the “generalized variance” of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the “Wilks standard deviation”, the square root of the generalized variance, is indeed the standard deviation of a random vector. We further establish that the “uncorrelation index”, a derivative of the Wilks standard deviation, is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: “randomness measures” and “independence indices” of random vectors. In turn, these general notions give rise to “randomness diagrams”: tangible planar visualizations that answer the question: How random is a random vector? The notion of “independence indices” yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vectors empirical data.
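The two quantities are straightforward to compute in the bivariate case: the generalized variance is the determinant of the covariance matrix, and the Wilks standard deviation is its square root. A small stdlib sketch (the datasets are made up for illustration):

```python
import math

def covariance_matrix(data):
    # data: list of (x, y) observations; population covariances
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    cxx = sum((x - mx) ** 2 for x, _ in data) / n
    cyy = sum((y - my) ** 2 for _, y in data) / n
    cxy = sum((x - mx) * (y - my) for x, y in data) / n
    return [[cxx, cxy], [cxy, cyy]]

def wilks_sd(cov):
    # generalized variance = det(cov); Wilks standard deviation = sqrt(det)
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    return math.sqrt(det)

perfectly_dependent = [(i, 2 * i) for i in range(5)]       # y = 2x exactly
spread = [(0, 0), (1, 0), (0, 1), (1, 1)]                  # uncorrelated corners
sd_dependent = wilks_sd(covariance_matrix(perfectly_dependent))
sd_spread = wilks_sd(covariance_matrix(spread))
```

Fully dependent components collapse the determinant to zero — the vector is "less random" than its marginal variances alone would suggest, which is exactly the intuition the randomness diagrams formalize.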

  16. How random is a random vector?

    Science.gov (United States)

    Eliazar, Iddo

    2015-12-01

    Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation", the square root of the generalized variance, is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index", a derivative of the Wilks standard deviation, is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams": tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vectors empirical data.

  17. Evaluation of Petrifilm™ method for enumerating aerobic bacteria in Crottin goat's cheese Evaluación de PetrifilmTM para la enumeración de bacterias aerobias en queso de cabra Crottin

    Directory of Open Access Journals (Sweden)

    G.B. de Sousa

    2005-12-01

    Full Text Available The Petrifilm™ Aerobic Count Plate (ACP), developed by 3M laboratories, is a ready-to-use culture medium system for the enumeration of aerobic bacteria in food. Petrifilm™ has been compared with standard methods in several different food products with satisfactory results. However, many studies showed that bacterial counts on Petrifilm™ were significantly lower than those obtained with conventional methods in fermented food. The purpose of this study was to compare the Petrifilm™ method for enumerating aerobic bacteria with a conventional method (PCA) in Crottin goat's cheese. Thirty samples were used for the colony count. The mean count and standard deviation were 7.18 ± 1.17 log CFU/g on PCA and 7.11 ± 1.05 log CFU/g on Petrifilm™. Statistical analysis revealed no significant difference between the two methods (t = 1.33, P = 0.193). The Pearson correlation coefficient (0.971, P = 0.0001) indicated a strong linear relationship between Petrifilm™ and the standard method. The results showed that Petrifilm™ is a suitable and convenient alternative to the standard method for the enumeration of aerobic flora in goat soft cheese.

  18. A selective and differential medium for Vibrio harveyi.

    OpenAIRE

    Harris, L; Owens, L; Smith, S

    1996-01-01

    A new medium, termed Vibrio harveyi agar, has been developed for the isolation and enumeration of V. harveyi. It is possible to differentiate V. harveyi colonies from the colonies of strains representing 15 other Vibrio species with this medium. This medium has been shown to inhibit the growth of two strains of marine Pseudomonas spp. and two strains of marine Flavobacterium spp. but to allow the growth of Photobacterium strains. Colonies displaying typical V. harveyi morphology were isolated...

  19. Aphid Identification and Counting Based on Smartphone and Machine Vision

    Directory of Open Access Journals (Sweden)

    Suo Xuesong

    2017-01-01

    Full Text Available Exact enumeration of aphids before an outbreak can provide a basis for precision spraying. This paper presents counting software that runs on smartphones for real-time enumeration of aphids. As a first step, images of the yellow sticky board used to trap the insects are segmented from the complex background using the GrabCut method; the images are then normalized by a perspective transformation. The second step is pretreatment of the images: aphids are segmented using the Otsu threshold method after the effect of random illumination is eliminated by the single-image difference method. The last step is aphid recognition and counting according to the area feature of aphids, after extracting aphid contours with a contour detection method. Experimental results prove that the effect of random illumination can be effectively eliminated by the single-image difference method. The counting accuracy is above 95% in the greenhouse and reaches 92.5% outdoors. The counting software designed in this paper can thus realize exact enumeration of aphids under complicated illumination and can be widely applied, providing a basis for precision spraying through effective detection of insects.
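Otsu's method, used here to segment the aphids, picks the gray-level threshold that maximizes the between-class variance of the histogram. A self-contained sketch on synthetic pixel values (in a real pipeline, OpenCV's `cv2.threshold` with the `THRESH_OTSU` flag would be the practical choice):

```python
def otsu_threshold(pixels, levels=256):
    # build the gray-level histogram
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(levels):
        w_bg += hist[t]                 # background = pixels <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        between = w_bg * w_fg * (m_bg - m_fg) ** 2   # between-class variance
        if between > best_var:
            best_var, best_t = between, t
    return best_t

dark = [18, 20, 22] * 30        # background mode (synthetic values)
bright = [200, 205, 210] * 30   # aphid mode (synthetic values)
t = otsu_threshold(dark + bright)
```

On a cleanly bimodal histogram the returned threshold falls between the two modes, so thresholding at `t` separates aphids from board.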

  20. An Evaluation of the Use of Simulated Annealing to Optimize Thinning Rates for Single Even-Aged Stands

    Directory of Open Access Journals (Sweden)

    Kai Moriguchi

    2015-01-01

    Full Text Available We evaluated the potential of simulated annealing as a reliable method for optimizing thinning rates for single even-aged stands. Four types of yield models were used as benchmark models to examine the algorithm’s versatility. Thinning rate, which was constrained to 0–50% every 5 years at stand ages of 10–45 years, was optimized to maximize the net present value for one fixed rotation term (50 years. The best parameters for the simulated annealing were chosen from 113 patterns, using the mean of the net present value from 39 runs to ensure the best performance. We compared the solutions with those from coarse full enumeration to evaluate the method’s reliability and with 39 runs of random search to evaluate its efficiency. In contrast to random search, the best run of simulated annealing for each of the four yield models resulted in a better solution than coarse full enumeration. However, variations in the objective function for two yield models obtained with simulated annealing were significantly larger than those of random search. In conclusion, simulated annealing with optimized parameters is more efficient for optimizing thinning rates than random search. However, it is necessary to execute multiple runs to obtain reliable solutions.
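The annealing loop itself is compact. The sketch below anneals eight thinning rates (0–50% at ages 10–45) against a deliberately toy growth-and-discount objective — the yield model, cooling schedule, and all parameters are invented for illustration and are not the benchmark models of the study:

```python
import math
import random

AGES = list(range(10, 50, 5))            # decision ages 10, 15, ..., 45

def npv(rates, r=0.03):
    # toy stand model (hypothetical numbers): 100 m3/ha initial stock,
    # 7% annual volume growth, revenue proportional to volume removed
    stock, value = 100.0, 0.0
    for age, frac in zip(AGES, rates):
        stock *= 1.07 ** 5               # five years of growth
        cut = stock * frac
        stock -= cut
        value += cut / (1 + r) ** age    # discounted thinning revenue
    stock *= 1.07 ** 5                   # grow to rotation age 50
    value += stock / (1 + r) ** 50       # discounted final-harvest revenue
    return value

def anneal(seed=0, steps=3000, t0=5.0, cool=0.999):
    rng = random.Random(seed)
    cur = [rng.uniform(0.0, 0.5) for _ in AGES]
    cur_v = start_v = npv(cur)
    best, best_v = cur[:], cur_v
    temp = t0
    for _ in range(steps):
        cand = cur[:]
        i = rng.randrange(len(cand))
        cand[i] = min(0.5, max(0.0, cand[i] + rng.gauss(0.0, 0.1)))
        v = npv(cand)
        # accept improvements always, deteriorations with Boltzmann probability
        if v > cur_v or rng.random() < math.exp((v - cur_v) / temp):
            cur, cur_v = cand, v
            if v > best_v:
                best, best_v = cand[:], v
        temp *= cool
    return best, best_v, start_v

best, best_v, start_v = anneal()
```

As the paper observes, single runs vary with the random seed; in practice one executes multiple runs and keeps the best solution.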

  1. Randomized random walk on a random walk

    International Nuclear Information System (INIS)

    Lee, P.A.

    1983-06-01

    This paper discusses generalizations of the model introduced by Kehr and Kunter of the random walk of a particle on a one-dimensional chain which in turn has been constructed by a random walk procedure. The superimposed random walk is randomised in time according to the occurrences of a stochastic point process. The probability of finding the particle in a particular position at a certain instant is obtained explicitly in the transform domain. It is found that the asymptotic behaviour for large time of the mean-square displacement of the particle depends critically on the assumed structure of the basic random walk, giving a diffusion-like term for an asymmetric walk or a square root law if the walk is symmetric. Many results are obtained in closed form for the Poisson process case, and these agree with those given previously by Kehr and Kunter. (author)
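The model is simple to simulate: first build a one-dimensional chain by a random walk, then let a particle random-walk along the chain's indices and track its displacement in real space. The stdlib sketch below (all parameters are illustrative) exhibits the qualitative behavior for a symmetric base walk — the mean-square displacement grows like the square root of time rather than linearly:

```python
import random

def msd_rw_on_rw(n_chain=801, t_max=100, walkers=400, seed=7):
    # ensemble-averaged mean-square displacement of a walker on a walked chain
    rng = random.Random(seed)
    msd = [0.0] * (t_max + 1)
    for _ in range(walkers):
        # base chain: site positions generated by a symmetric random walk
        chain = [0]
        for _ in range(n_chain - 1):
            chain.append(chain[-1] + rng.choice((-1, 1)))
        idx = n_chain // 2                 # particle starts at the middle site
        origin = chain[idx]
        for t in range(1, t_max + 1):
            idx += rng.choice((-1, 1))     # superimposed walk on chain indices
            msd[t] += (chain[idx] - origin) ** 2
    return [m / walkers for m in msd]

msd = msd_rw_on_rw()
```

Because chain increments are independent ±1 steps, the expected squared real-space displacement equals the expected absolute index displacement, which grows like √t — the square-root law the paper derives for the symmetric case.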

  2. 77 FR 2606 - Pipeline Safety: Random Drug Testing Rate

    Science.gov (United States)

    2012-01-18

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2012-0004] Pipeline Safety: Random Drug Testing Rate AGENCY: Pipeline and Hazardous Materials... pipelines and operators of liquefied natural gas facilities must select and test a percentage of covered...

  3. 75 FR 9018 - Pipeline Safety: Random Drug Testing Rate

    Science.gov (United States)

    2010-02-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2010-0034] Pipeline Safety: Random Drug Testing Rate AGENCY: Pipeline and Hazardous Materials... pipelines and operators of liquefied natural gas facilities must select and test a percentage of covered...

  4. Influence of Maximum Inbreeding Avoidance under BLUP EBV Selection on Pinzgau Population Diversity

    Directory of Open Access Journals (Sweden)

    Radovan Kasarda

    2011-05-01

    Full Text Available We evaluated the effect of mating (random vs. maximum avoidance of inbreeding) under a BLUP EBV selection strategy. The existing population structure was analyzed by Monte Carlo stochastic simulation with the aim of minimizing the increase of inbreeding. Maximum avoidance of inbreeding under BLUP selection resulted in an increase of inbreeding comparable to random mating over an average of 10 generations of development. After 10 generations of simulated mating, the observed increase was ΔF = 6.51% (2 sires), 5.20% (3 sires), 3.22% (4 sires) and 2.94% (5 sires). As the number of selected sires increased, the increase of inbreeding decreased. With 4 or 5 sires, the increase of inbreeding was comparable to random mating with phenotypic selection. To preserve genetic diversity and prevent population loss, it is important to minimize the increase of inbreeding in small populations. The classical approach is based on balancing the ratio of sires and dams in the mating program. In contrast, most commercial populations use a small number of sires at a high mating ratio.
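The dependence of inbreeding rate on the number of sires follows the classical effective-population-size formula Ne = 4·Nm·Nf/(Nm + Nf), with an expected ΔF = 1/(2·Ne) per generation. The sketch below illustrates that trend with an assumed dam count; it is the textbook approximation, not the paper's Monte Carlo simulation:

```python
def expected_delta_f(n_sires, n_dams):
    # Wright's effective population size for unequal numbers of sires and dams
    ne = 4.0 * n_sires * n_dams / (n_sires + n_dams)
    return 1.0 / (2.0 * ne)          # expected inbreeding rate per generation

# hypothetical herd: 100 dams served by 2-5 sires
rates = {m: expected_delta_f(m, 100) for m in (2, 3, 4, 5)}
```

Per generation the expected rate drops from about 6.4% with 2 sires to about 2.6% with 5, mirroring the ordering the abstract reports over its 10-generation horizon.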

  5. Adoption of selected innovations in rice production and their effect ...

    African Journals Online (AJOL)

    Adoption of selected innovations in rice production and their effect on farmers living standard in Bauchi local government area, Bauchi state, Nigeria. ... International Journal of Natural and Applied Sciences ... Simple random sampling technique was used for the selection of 82 rice growers from these villages. The data ...

  6. RARtool: A MATLAB Software Package for Designing Response-Adaptive Randomized Clinical Trials with Time-to-Event Outcomes.

    Science.gov (United States)

    Ryeznik, Yevgen; Sverdlov, Oleksandr; Wong, Weng Kee

    2015-08-01

    Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool, a user interface software package developed in MATLAB for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.

  7. Response to family selection and genetic parameters in Japanese quail selected for four week breast weight

    DEFF Research Database (Denmark)

    Khaldari, Majid; Yeganeh, Hassan Mehrabani; Pakdel, Abbas

    2011-01-01

    An experiment was conducted to investigate the effect of short-term selection for 4-week breast weight (4wk BRW) and to estimate genetic parameters of body weight and carcass traits. A selection (S) line and a control (C) line were randomly selected from a base population. Data were collected over...... was 0.35±0.06. There was a significant difference between lines for BW and carcass weights, but not for carcass percent components (P...carcass and leg weights were 0.46, 0.41 and 0.47, and 13.2, 16.2, 4.4%, respectively....... The genetic correlations of BRW with BW, carcass, leg, and back weights were 0.85, 0.88 and 0.72, respectively. Selection for 4wk BRW improved the feed conversion ratio (FCR) by about 0.19 units over the selection period. Inbreeding caused an insignificant decline in the means of some traits. Results from...

  8. 40 CFR 205.57-2 - Test vehicle sample selection.

    Science.gov (United States)

    2010-07-01

    ... pursuant to a test request in accordance with this subpart will be selected in the manner specified in the... then using a table of random numbers to select the number of vehicles as specified in paragraph (c) of... with the designated AQL are contained in Appendix I, Table II. (c) The appropriate batch sample size...

  9. Record statistics of financial time series and geometric random walks.

    Science.gov (United States)

    Sabir, Behlool; Santhanam, M S

    2014-09-01

    The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the record statistics of select stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The record statistics of geometric random walk series are in good agreement with those obtained from empirical stock data.
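Record events and their ages are cheap to extract from a series: a record occurs whenever a new running maximum is set, and a record's age is the time until the next record (or the end of the series). A stdlib sketch with an illustrative geometric-walk generator (the drift and volatility parameters are arbitrary, not fitted to stock data):

```python
import random

def records(series):
    # indices at which a new running maximum (a record) is set
    idx, best = [], float("-inf")
    for i, x in enumerate(series):
        if x > best:
            best = x
            idx.append(i)
    return idx

def record_ages(series):
    # age of each record = gap until the next record (last one runs to the end)
    idx = records(series) + [len(series)]
    return [b - a for a, b in zip(idx, idx[1:])]

def geometric_walk(n, sigma=0.01, seed=3):
    # multiplicative random walk, a standard toy model for price series
    rng = random.Random(seed)
    p, out = 1.0, []
    for _ in range(n):
        p *= 1.0 + rng.gauss(0.0, sigma)
        out.append(p)
    return out

ages = record_ages(geometric_walk(1000))
```

For an iid series the expected number of records grows only like ln n; the broad power-law age distribution measured in the paper is a signature of the walk's correlations.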

  10. Mating schemes for optimum contribution selection with constrained rates of inbreeding

    NARCIS (Netherlands)

    Sonesson, A.K.; Meuwissen, T.H.E.

    2000-01-01

    The effect of non-random mating on genetic response was compared for populations with discrete generations. Mating followed a selection step where the average coancestry of selected animals was constrained, while genetic response was maximised. Minimum coancestry (MC), Minimum coancestry with a

  11. Vast Portfolio Selection with Gross-exposure Constraints.

    Science.gov (United States)

    Fan, Jianqing; Zhang, Jingjin; Yu, Ke

    2012-01-01

    We introduce large portfolio selection using gross-exposure constraints. We show that with a gross-exposure constraint, the empirically selected optimal portfolios based on estimated covariance matrices have performance similar to the theoretical optimal ones, and there is no error-accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. The applications to portfolio selection, tracking, and improvements are also addressed. The utility of our new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and the 600 stocks randomly selected from Russell 3000.

  12. Novel β-lactamase-random peptide fusion libraries for phage display selection of cancer cell-targeting agents suitable for enzyme prodrug therapy

    Science.gov (United States)

    Shukla, Girja S.; Krag, David N.

    2010-01-01

    Novel phage-displayed random linear dodecapeptide (X12) and cysteine-constrained decapeptide (CX10C) libraries constructed in fusion to the amino-terminus of P99 β-lactamase molecules were used for identifying β-lactamase-linked cancer cell-specific ligands. The size and quality of both libraries were comparable to the standards of other reported phage display systems. Using the single-round panning method based on phage DNA recovery, we identified several β-lactamase fusion peptides that specifically bind to live human breast cancer MDA-MB-361 cells. The β-lactamase fusion to the peptides helped in conducting the enzyme activity-based clone normalization and cell-binding screening in a very time- and cost-efficient manner. The methods were suitable for 96-well readout as well as microscopic imaging. The success of the biopanning was indicated by the presence of ~40% cancer cell-specific clones among recovered phages. One of the binding clones appeared multiple times. The cancer cell-binding fusion peptides also shared several significant motifs. This opens a new way of preparing and selecting phage display libraries. The cancer cell-specific β-lactamase-linked affinity reagents selected from these libraries can be used for any application that requires a reporter for tracking the ligand molecules. Furthermore, these affinity reagents have also a potential for their direct use in the targeted enzyme prodrug therapy of cancer. PMID:19751096

  13. Malaria parasitemia amongst pregnant women attending selected ...

    African Journals Online (AJOL)

    A cross-sectional study to determine malaria parasitemia amongst 300 randomly selected pregnant women attending government and private healthcare facilities in Rivers State was carried out. Blood samples were obtained through venous procedure and the presence or absence of Plasmodium was determined ...

  14. Polyatomic Trilobite Rydberg Molecules in a Dense Random Gas.

    Science.gov (United States)

    Luukko, Perttu J J; Rost, Jan-Michael

    2017-11-17

    Trilobites are exotic giant dimers with enormous dipole moments. They consist of a Rydberg atom and a distant ground-state atom bound together by short-range electron-neutral attraction. We show that highly polar, polyatomic trilobite states unexpectedly persist and thrive in a dense ultracold gas of randomly positioned atoms. This is caused by perturbation-induced quantum scarring and the localization of electron density on randomly occurring atom clusters. At certain densities these states also mix with an s state, overcoming selection rules that hinder the photoassociation of ordinary trilobites.

  15. Randomized clinical trials in dentistry: Risks of bias, risks of random errors, reporting quality, and methodologic quality over the years 1955-2013.

    Directory of Open Access Journals (Sweden)

    Humam Saltaji

    Full Text Available To examine the risks of bias, risks of random errors, reporting quality, and methodological quality of randomized clinical trials of oral health interventions and the development of these aspects over time. We included 540 randomized clinical trials from 64 selected systematic reviews. We extracted, in duplicate, details from each of the selected randomized clinical trials with respect to publication and trial characteristics, reporting and methodologic characteristics, and Cochrane risk of bias domains. We analyzed data using logistic regression and Chi-square statistics. Sequence generation was assessed to be inadequate (at unclear or high risk of bias) in 68% (n = 367) of the trials, while allocation concealment was inadequate in the majority of trials (n = 464; 85.9%). Blinding of participants and blinding of the outcome assessment were judged to be inadequate in 28.5% (n = 154) and 40.5% (n = 219) of the trials, respectively. A sample size calculation before the initiation of the study was not performed/reported in 79.1% (n = 427) of the trials, while the sample size was assessed as adequate in only 17.6% (n = 95) of the trials. Two thirds of the trials were not described as double blinded (n = 358; 66.3%), while the method of blinding was appropriate in 53% (n = 286) of the trials. We identified a significant decrease over time (1955-2013) in the proportion of trials assessed as having inadequately addressed methodological quality items (P < 0.05) in 30 out of the 40 quality criteria, or as being inadequate (at high or unclear risk of bias) in five domains of the Cochrane risk of bias tool: sequence generation, allocation concealment, incomplete outcome data, other sources of bias, and overall risk of bias. The risks of bias, risks of random errors, reporting quality, and methodological quality of randomized clinical trials of oral health interventions have improved over time; however, further efforts that contribute to the development of more stringent

  16. Random broadcast on random geometric graphs

    Energy Technology Data Exchange (ETDEWEB)

    Bradonjic, Milan [Los Alamos National Laboratory; Elsasser, Robert [UNIV OF PADERBORN; Friedrich, Tobias [ICSI/BERKELEY; Sauerwald, Tomas [ICSI/BERKELEY

    2009-01-01

    In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs when, with high probability: (i) the RGG is connected, or (ii) there exists a giant component in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
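The push model takes only a few lines to simulate, which also makes the Θ(diam) intuition tangible on small graphs: information must cross the diameter, and randomness adds only lower-order delay. A stdlib sketch on a 12-node cycle (the graph and seed are arbitrary choices for illustration, not an RGG):

```python
import random

def push_broadcast_rounds(adj, start=0, rng=None):
    # push model: each round, every informed node informs one uniformly
    # random neighbor; returns the number of rounds until all are informed
    rng = rng or random.Random(0)
    informed = {start}
    rounds = 0
    while len(informed) < len(adj):
        rounds += 1
        for u in list(informed):        # snapshot: new nodes push next round
            informed.add(rng.choice(adj[u]))
    return rounds

n = 12
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rounds = push_broadcast_rounds(cycle, rng=random.Random(1))
```

On the cycle the informed frontier can advance at most one node per direction per round, so at least ⌈(n-1)/2⌉ = 6 rounds are needed; the random neighbor choices typically cost only a constant factor more.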

  17. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
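The iterative "most environmentally dissimilar next site" idea can be approximated by greedy farthest-point selection in the normalized factor space. This stdlib sketch is a stand-in for illustration — the study's actual dissimilarity criterion comes from maximum entropy (Maxent) model runs, not raw euclidean distance, and the candidate sites below are hypothetical:

```python
def normalize(points):
    # scale each environmental factor to [0, 1] so units are comparable
    dims = len(points[0])
    mins = [min(p[d] for p in points) for d in range(dims)]
    maxs = [max(p[d] for p in points) for d in range(dims)]
    return [tuple((p[d] - mins[d]) / ((maxs[d] - mins[d]) or 1.0)
                  for d in range(dims))
            for p in points]

def select_dissimilar(points, k):
    pts = normalize(points)
    chosen = [0]                       # seed the selection with the first site
    while len(chosen) < k:
        # next site = the one farthest from every already-chosen site
        def dist_to_chosen(i):
            return min(sum((pts[i][d] - pts[j][d]) ** 2
                           for d in range(len(pts[i]))) ** 0.5
                       for j in chosen)
        nxt = max((i for i in range(len(pts)) if i not in chosen),
                  key=dist_to_chosen)
        chosen.append(nxt)
    return chosen

# hypothetical candidate sites described by two environmental factors
sites = [(0.0, 0.0), (0.1, 0.0), (10.0, 10.0), (10.0, 10.1), (5.0, 5.0)]
picked = select_dissimilar(sites, 3)
```

Each iteration adds the site least like anything already sampled, so a handful of sites spans most of the environmental envelope — the 80%-with-eight-sites effect the abstract reports.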

  18. Random ancestor trees

    International Nuclear Information System (INIS)

    Ben-Naim, E; Krapivsky, P L

    2010-01-01

    We investigate a network growth model in which the genealogy controls the evolution. In this model, a new node selects a random target node and links either to this target node, or to its parent, or to its grandparent, etc.; all nodes from the target node to its most ancient ancestor are equiprobable destinations. The emerging random ancestor tree is very shallow: the fraction g_n of nodes at distance n from the root decreases super-exponentially with n, g_n = e^(−1)/(n − 1)!. We find that a macroscopic hub at the root coexists with highly connected nodes at higher generations. The maximal degree of a node at the nth generation grows algebraically as N^(1/β_n), where N is the system size. We obtain the series of nontrivial exponents, which are roots of transcendental equations: β_1 ≈ 1.351746, β_2 ≈ 1.682201, etc. As a consequence, the fraction p_k of nodes with degree k has an algebraic tail, p_k ∼ k^(−γ), with γ = β_1 + 1 = 2.351746.
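
    The growth rule is simple to simulate. The sketch below (an illustration, not the authors' code) grows a random ancestor tree and checks the depth profile against g_n = e^(−1)/(n − 1)!, for which g_1 should approach 1/e:

```python
import math
import random

def grow_ancestor_tree(n_nodes, seed=0):
    """Grow a random ancestor tree: each new node picks a uniform random
    target and then attaches, equiprobably, to any node on the path from
    the target up to the root. Returns the list of node depths (root = 0)."""
    rng = random.Random(seed)
    parent = [0]   # parent[i]; the root is its own parent
    depth = [0]
    for i in range(1, n_nodes):
        target = rng.randrange(i)
        # The path from target to root has depth[target] + 1 nodes; walking
        # up a uniform number of steps picks one of them equiprobably.
        steps = rng.randrange(depth[target] + 1)
        anc = target
        for _ in range(steps):
            anc = parent[anc]
        parent.append(anc)
        depth.append(depth[anc] + 1)
    return depth

depths = grow_ancestor_tree(100_000)
g1 = depths.count(1) / len(depths)
print(g1, math.exp(-1))  # empirical fraction at depth 1 vs. 1/e
```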

  19. Towards a pro-health food-selection model for gatekeepers in ...

    African Journals Online (AJOL)

    The purpose of this study was to develop a pro-health food selection model for gatekeepers of Bulawayo high-density suburbs in Zimbabwe. Gatekeepers in five suburbs constituted the study population from which a sample of 250 subjects was randomly selected. Of the total respondents (N= 182), 167 had their own ...

  20. Random practice - one of the factors of the motor learning process

    Directory of Open Access Journals (Sweden)

    Petr Valach

    2012-01-01

    BACKGROUND: An important concept in acquiring motor skills is random practice (contextual interference, CI). The effect of contextual interference is explained by the memory having to work more intensively, so random practice yields better retention of motor skills than blocked practice. Only active recall of a motor skill gives it practical value for appropriate use in the future. OBJECTIVE: The aim of this research was to determine the difference in how motor skills in sport gymnastics are acquired and retained using two different teaching methods, blocked and random practice. METHODS: Blocked and random practice of three selected gymnastics tasks were applied in two groups of physical education students (blocked practice: group BP; random practice: group RP) over two months, in one session a week (80 trials in total). At the end of the experiment and 6 months later (retention tests), the groups were tested on the selected gymnastics skills. RESULTS: No significant differences in the level of gymnastics skills were found between the BP and RP groups at the end of the experiment. However, the retention tests showed a significantly higher level of gymnastics skills in the RP group compared with the BP group. CONCLUSION: The results confirmed that retention of gymnastics skills following random practice was significantly higher than following blocked practice.

  1. The influence of selection on the evolutionary distance estimated from the base changes observed between homologous nucleotide sequences.

    Science.gov (United States)

    Otsuka, J; Kawai, Y; Sugaya, N

    2001-11-21

    In most studies of molecular evolution, the nucleotide base at a site is assumed to change at an apparent rate under functional constraint, and the comparison of base changes between homologous genes is thought to yield an evolutionary distance corresponding to the site-averaged change rate multiplied by the divergence time. However, this view has not been very successful in estimating the divergence times of species, and mostly results in the construction of tree topologies without a time-scale. In the present paper, this problem is investigated theoretically by considering that observed base changes are the result of comparing the survivors, through selection, of mutated bases. In the case of weak selection, the time course of base changes due to mutation and selection can be obtained analytically, leading to a theoretical equation showing how selection influences the evolutionary distance estimated from the enumeration of base changes. This result provides a new method for estimating the divergence time more accurately from the observed base changes by evaluating both the strength of selection and the mutation rate. The validity of this method is verified by analysing the base changes observed at the third codon positions of amino acid residues with four-fold codon degeneracy in the protein genes of mammalian mitochondria; i.e., the ratios of estimated divergence times are fairly consistent with the series of fossil records of mammals. This analysis also suggests that the mutation rates in mitochondrial genomes are almost the same in different lineages of mammals, and that the lineage-specific base-change rates indicated previously are due to selection, probably arising from the preference of transfer RNAs for particular codons.
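
    The paper's corrected distance accounts for selection, but the baseline idea, estimating a distance from the enumerated fraction p of differing sites, is the classic Jukes-Cantor correction. A quick illustration of that baseline (not the authors' selection-adjusted formula):

```python
import math

def jukes_cantor_distance(p):
    """Evolutionary distance (expected substitutions per site) from the
    observed fraction p of differing bases, assuming equal change rates
    among the four bases and no selection."""
    if p >= 0.75:
        raise ValueError("saturation: p must be < 3/4")
    return -0.75 * math.log(1 - 4 * p / 3)

# 30% observed differences imply roughly 0.38 substitutions per site:
# repeated hits at the same site make the true distance exceed p.
print(round(jukes_cantor_distance(0.30), 4))
```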

  2. Random ensemble learning for EEG classification.

    Science.gov (United States)

    Hosseini, Mohammad-Parsa; Pompili, Dario; Elisevich, Kost; Soltanian-Zadeh, Hamid

    2018-01-01

    Real-time detection of seizure activity in epilepsy patients is critical in averting seizure activity and improving patients' quality of life. Accurate evaluation, presurgical assessment, seizure prevention, and emergency alerts all depend on the rapid detection of seizure onset. A new method of feature selection and classification for rapid and precise seizure detection is discussed wherein informative components of electroencephalogram (EEG)-derived data are extracted and an automatic method is presented using infinite independent component analysis (I-ICA) to select independent features. The feature space is divided into subspaces via random selection and multichannel support vector machines (SVMs) are used to classify these subspaces. The result of each classifier is then combined by majority voting to establish the final output. In addition, a random subspace ensemble using a combination of SVM, multilayer perceptron (MLP) neural network and an extended k-nearest neighbors (k-NN), called extended nearest neighbor (ENN), is developed for the EEG and electrocorticography (ECoG) big data problem. To evaluate the solution, a benchmark ECoG dataset of eight patients with temporal and extratemporal epilepsy was processed in a distributed computing framework configured as a multitier cloud-computing architecture. Using leave-one-out cross-validation, the accuracy, sensitivity, specificity, and both false positive and false negative ratios of the proposed method were found to be 0.97, 0.98, 0.96, 0.04, and 0.02, respectively. Application of the solution to cases under investigation with ECoG has also been carried out to demonstrate its utility. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Phage display peptide libraries: deviations from randomness and correctives

    Science.gov (United States)

    Ryvkin, Arie; Ashkenazy, Haim; Weiss-Ottolenghi, Yael; Piller, Chen; Pupko, Tal; Gershoni, Jonathan M

    2018-01-01

    Abstract Peptide-expressing phage display libraries are widely used for the interrogation of antibodies. Affinity selected peptides are then analyzed to discover epitope mimetics, or are subjected to computational algorithms for epitope prediction. A critical assumption for these applications is the random representation of amino acids in the initial naïve peptide library. In a previous study, we implemented next generation sequencing to evaluate a naïve library and discovered severe deviations from randomness in UAG codon over-representation as well as in high G phosphoramidite abundance causing amino acid distribution biases. In this study, we demonstrate that the UAG over-representation can be attributed to the burden imposed on the phage upon the assembly of the recombinant Protein 8 subunits. This was corrected by constructing the libraries using supE44-containing bacteria which suppress the UAG driven abortive termination. We also demonstrate that the overabundance of G stems from variant synthesis-efficiency and can be corrected using compensating oligonucleotide-mixtures calibrated by mass spectroscopy. Construction of libraries implementing these correctives results in markedly improved libraries that display random distribution of amino acids, thus ensuring that enriched peptides obtained in biopanning represent a genuine selection event, a fundamental assumption for phage display applications. PMID:29420788

  4. Topics in random walks in random environment

    International Nuclear Information System (INIS)

    Sznitman, A.-S.

    2004-01-01

    Over the last twenty-five years random motions in random media have been intensively investigated and some new general methods and paradigms have by now emerged. Random walks in random environment constitute one of the canonical models of the field. However in dimension bigger than one they are still poorly understood and many of the basic issues remain to this day unresolved. The present series of lectures attempt to give an account of the progresses which have been made over the last few years, especially in the study of multi-dimensional random walks in random environment with ballistic behavior. (author)

  5. A randomized controlled trial of an electronic informed consent process.

    Science.gov (United States)

    Rothwell, Erin; Wong, Bob; Rose, Nancy C; Anderson, Rebecca; Fedor, Beth; Stark, Louisa A; Botkin, Jeffrey R

    2014-12-01

    A pilot study assessed an electronic informed consent model within a randomized controlled trial (RCT). Participants who were recruited for the parent RCT project were randomly selected and randomized to either an electronic consent group (n = 32) or a simplified paper-based consent group (n = 30). Participants in the electronic consent group reported significantly higher understanding of the purpose of the study, of the alternatives to participation, and of whom to contact with questions or concerns about the study. However, participants in the paper-based control group reported higher mean scores on some survey items. This research suggests that an electronic informed consent presentation may improve participant understanding of some aspects of a research study. © The Author(s) 2014.

  6. Balancing treatment allocations by clinician or center in randomized trials allows unacceptable levels of treatment prediction.

    Science.gov (United States)

    Hills, Robert K; Gray, Richard; Wheatley, Keith

    2009-08-01

    Randomized controlled trials are the standard method for comparing treatments because they avoid the selection bias that might arise if clinicians were free to choose which treatment a patient would receive. In practice, allocation of treatments in randomized controlled trials is often not wholly random with various 'pseudo-randomization' methods, such as minimization or balanced blocks, used to ensure good balance between treatments within potentially important prognostic or predictive subgroups. These methods avoid selection bias so long as full concealment of the next treatment allocation is maintained. There is concern, however, that pseudo-random methods may allow clinicians to predict future treatment allocations from previous allocation history, particularly if allocations are balanced by clinician or center. We investigate here to what extent treatment prediction is possible. Using computer simulations of minimization and balanced block randomizations, the success rates of various prediction strategies were investigated for varying numbers of stratification variables, including the patient's clinician. Prediction rates for minimization and balanced block randomization typically exceed 60% when clinician is included as a stratification variable and, under certain circumstances, can exceed 80%. Increasing the number of clinicians and other stratification variables did not greatly reduce the prediction rates. Without clinician as a stratification variable, prediction rates are poor unless few clinicians participate. Prediction rates are unacceptably high when allocations are balanced by clinician or by center. This could easily lead to selection bias that might suggest spurious, or mask real, treatment effects. Unless treatment is blinded, randomization should not be balanced by clinician (or by center), and clinician-center effects should be allowed for instead by retrospectively stratified analyses. © 2009 Blackwell Publishing Asia Pty Ltd and Chinese
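
    The kind of simulation described can be sketched in a few lines. The toy below (our illustration, not the authors' code) runs deterministic minimization balanced by clinician only and measures how often a "guess the lagging arm" strategy predicts the next allocation:

```python
import random

def prediction_rate(n_patients=10_000, n_clinicians=10, seed=0):
    """Deterministic minimization balanced by clinician only: each patient
    goes to the arm with fewer previous patients for that clinician (ties
    broken at random). The adversary predicts the lagging arm, guessing
    at random on ties."""
    rng = random.Random(seed)
    counts = [[0, 0] for _ in range(n_clinicians)]  # per-clinician arm counts
    correct = 0
    for _ in range(n_patients):
        c = rng.randrange(n_clinicians)
        a, b = counts[c]
        guess = 0 if a < b else 1 if b < a else rng.randrange(2)
        alloc = 0 if a < b else 1 if b < a else rng.randrange(2)
        correct += guess == alloc
        counts[c][alloc] += 1
    return correct / n_patients

print(prediction_rate())  # well above the 50% of simple randomization
```

    Per clinician the state alternates between balance and imbalance, so roughly half the allocations are fully predictable and the rest are coin flips, giving a prediction rate near 75%, consistent with the 60-80% rates reported above.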

  7. An improved procedure for detection and enumeration of walrus signatures in airborne thermal imagery

    Science.gov (United States)

    Burn, Douglas M.; Udevitz, Mark S.; Speckman, Suzann G.; Benter, R. Bradley

    2009-01-01

    In recent years, application of remote sensing to marine mammal surveys has been a promising area of investigation for wildlife managers and researchers. In April 2006, the United States and Russia conducted an aerial survey of Pacific walrus (Odobenus rosmarus divergens) using thermal infrared sensors to detect groups of animals resting on pack ice in the Bering Sea. The goal of this survey was to estimate the size of the Pacific walrus population. An initial analysis of the U.S. data using previously-established methods resulted in lower detectability of walrus groups in the imagery and higher variability in calibration models than was expected based on pilot studies. This paper describes an improved procedure for detection and enumeration of walrus groups in airborne thermal imagery. Thermal images were first subdivided into smaller 200 x 200 pixel "tiles." We calculated three statistics to represent characteristics of walrus signatures from the temperature histogram for each tile. Tiles that exhibited one or more of these characteristics were examined further to determine if walrus signatures were present. We used cluster analysis on tiles that contained walrus signatures to determine which pixels belonged to each group. We then calculated a thermal index value for each walrus group in the imagery and used generalized linear models to estimate detection functions (the probability of a group having a positive index value) and calibration functions (the size of a group as a function of its index value) based on counts from matched digital aerial photographs. The new method described here improved our ability to detect walrus groups at both 2 m and 4 m spatial resolution. In addition, the resulting calibration models have lower variance than the original method. We anticipate that the use of this new procedure will greatly improve the quality of the population estimate derived from these data.
This procedure may also have broader applicability to thermal infrared

  8. Immigration And Self-Selection

    OpenAIRE

    George J. Borjas

    1988-01-01

    Self-selection plays a dominant role in determining the size and composition of immigrant flows. The United States competes with other potential host countries in the "immigration market". Host countries vary in their "offers" of economic opportunities and also differ in the way they ration entry through their immigration policies. Potential immigrants compare the various opportunities and are non-randomly sorted by the immigration market among the various host countries. This paper presents ...

  9. Robust portfolio selection based on asymmetric measures of variability of stock returns

    Science.gov (United States)

    Chen, Wei; Tan, Shaohua

    2009-10-01

    This paper addresses a new uncertainty set--interval random uncertainty set for robust optimization. The form of interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.

  10. Convergence analysis for Latin-hypercube lattice-sample selection strategies for 3D correlated random hydraulic-conductivity fields

    OpenAIRE

    Simuta-Champo, R.; Herrera-Zamarrón, G. S.

    2010-01-01

    The Monte Carlo technique provides a natural method for evaluating uncertainties. The uncertainty is represented by a probability distribution or by related quantities such as statistical moments. When the groundwater flow and transport governing equations are solved and the hydraulic conductivity field is treated as a random spatial function, the hydraulic head, velocities and concentrations also become random spatial functions. When that is the case, for the stochastic simulation of groundw...

  11. Understanding perspectives on sex-selection in India: an intersectional study

    OpenAIRE

    Sonya Davey, BA; Manisha Sharma, PhD MFA

    2014-01-01

    Background: Sex-selective abortion results in fewer girls than boys in India (914 girls:1000 boys). To understand perspectives about who is responsible for sex-selective abortion, our aim was to focus on narratives of vastly diverse stakeholders in Indian society. Methods: The qualitative study was undertaken in urban sectors of six northwestern Indian states. Ethnographic unstructured, conversation-style interviews with randomly selected participants were held for an unbiased study. To ca...

  12. Vast Portfolio Selection with Gross-exposure Constraints*

    Science.gov (United States)

    Fan, Jianqing; Zhang, Jingjin; Yu, Ke

    2012-01-01

    We introduce the large portfolio selection using gross-exposure constraints. We show that with gross-exposure constraint the empirically selected optimal portfolios based on estimated covariance matrices have similar performance to the theoretical optimal ones and there is no error accumulation effect from estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. The applications to portfolio selection, tracking, and improvements are also addressed. The utility of our new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and the 600 stocks randomly selected from Russell 3000. PMID:23293404

  13. An explicit semantic relatedness measure based on random walk

    Directory of Open Access Journals (Sweden)

    HU Sihui

    2016-10-01

    The calculation of semantic relatedness over an open-domain knowledge network is a significant issue. In this paper, a pheromone strategy is drawn from the thought of the ant colony algorithm and is integrated into the random walk, which is taken as the basic framework for calculating the semantic relatedness degree. The pheromone distribution is taken as a criterion for determining the tightness of semantic relatedness. A method of calculating the semantic relatedness degree based on random walk is proposed, and the exploration process of calculating the semantic relatedness degree is presented in an explicit way. The method mainly contains a Path Selection Model (PSM) and a Semantic Relatedness Computing Model (SRCM). PSM is used to simulate the path selection of ants and pheromone release. SRCM is used to calculate the semantic relatedness by utilizing the information returned by ants. The results indicate that the method can complete the semantic relatedness calculation in linear complexity and extends the feasible strategies for semantic relatedness calculation.
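
    As a rough analogue of random-walk-based relatedness (without the pheromone layer described above), a random walk with restart from a source node assigns more stationary mass to semantically closer nodes. A minimal sketch on a toy knowledge graph of our own invention:

```python
def walk_relatedness(adj, source, restart=0.15, iters=100):
    """Random walk with restart: repeatedly spread probability mass along
    edges, teleporting back to the source with probability `restart`.
    The converged mass at a node serves as its relatedness to the source."""
    p = {v: 1.0 if v == source else 0.0 for v in adj}
    for _ in range(iters):
        q = {v: 0.0 for v in adj}
        for v in adj:
            for w in adj[v]:
                q[w] += (1 - restart) * p[v] / len(adj[v])
        q[source] += restart
        p = q
    return p

# Toy graph: "dog" is close to "mammal"; "car" is several hops away.
adj = {
    "dog": ["mammal"], "mammal": ["animal", "dog"],
    "animal": ["mammal", "thing"], "thing": ["animal", "car"], "car": ["thing"],
}
rel = walk_relatedness(adj, "dog")
```

    Each sweep touches every edge once, so the cost per iteration is linear in the graph size, in line with the linear complexity claimed above.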

  14. Prediction of plant promoters based on hexamers and random triplet pair analysis

    Directory of Open Access Journals (Sweden)

    Noman Nasimul

    2011-06-01

    Background: With an increasing number of plant genome sequences, it has become important to develop a robust computational method for detecting plant promoters. Although a wide variety of programs are currently available, their prediction accuracy still requires further improvement. The limitations of these methods can be addressed by selecting appropriate features for distinguishing promoters and non-promoters. Methods: In this study, we proposed two feature selection approaches based on hexamer sequences: the Frequency Distribution Analyzed Feature Selection Algorithm (FDAFSA) and the Random Triplet Pair Feature Selecting Genetic Algorithm (RTPFSGA). In FDAFSA, adjacent triplet pairs (hexamer sequences) were selected based on the difference in hexamer frequencies between promoters and non-promoters. In RTPFSGA, random triplet pairs (RTPs) were selected by exploiting a genetic algorithm that distinguishes frequencies of non-adjacent triplet pairs between promoters and non-promoters. Then, a support vector machine (SVM), a nonlinear machine-learning algorithm, was used to classify promoters and non-promoters by combining these two feature selection approaches. We referred to this novel algorithm as PromoBot. Results: Promoter sequences were collected from the PlantProm database. Non-promoter sequences were collected from plant mRNA, rRNA, and tRNA of PlantGDB and plant miRNA of miRBase. Then, in order to validate the proposed algorithm, we applied a 5-fold cross-validation test. Training data sets were used to select features based on FDAFSA and RTPFSGA, and these features were used to train the SVM. We achieved 89% sensitivity and 86% specificity. Conclusions: We compared our PromoBot algorithm to five other algorithms. It was found that the sensitivity and specificity of PromoBot performed as well as (or even better than) the algorithms tested. These results show that the two proposed feature selection methods based on hexamer frequencies
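
    The frequency-based selection idea behind FDAFSA can be illustrated in simplified form with toy sequences: count hexamer frequencies in each class and keep the hexamers with the largest promoter/non-promoter difference. This is a sketch of the principle, not the published algorithm:

```python
from collections import Counter

def hexamer_freqs(seqs):
    """Relative frequencies of overlapping 6-mers across a set of sequences."""
    counts = Counter(s[i:i + 6] for s in seqs for i in range(len(s) - 5))
    total = sum(counts.values())
    return {h: c / total for h, c in counts.items()}

def select_hexamers(promoters, non_promoters, k):
    """Keep the k hexamers whose frequencies differ most between classes
    (a simplified analogue of frequency-distribution feature selection)."""
    fp, fn = hexamer_freqs(promoters), hexamer_freqs(non_promoters)
    diff = {h: abs(fp.get(h, 0.0) - fn.get(h, 0.0)) for h in set(fp) | set(fn)}
    return sorted(diff, key=diff.get, reverse=True)[:k]

# Toy data: TATAAT-rich "promoters" vs. a GC-rich background.
promoters = ["GCTATAATGC", "TTTATAATCG"]
background = ["GCGCGCGCGC", "CGCGGCCGCG"]
features = select_hexamers(promoters, background, 3)
```

    The selected hexamer frequencies would then serve as input features to a classifier such as an SVM.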

  15. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to the advancement in sensor technology, the growing volume of large medical image data makes it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. But in the meantime, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value. In practice, they are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  16. Fragmentation of random trees

    International Nuclear Information System (INIS)

    Kalay, Z; Ben-Naim, E

    2015-01-01

    We study fragmentation of a random recursive tree into a forest by repeated removal of nodes. The initial tree consists of N nodes and it is generated by sequential addition of nodes, with each new node attaching to a randomly-selected existing node. As nodes are removed from the tree, one at a time, the tree dissolves into an ensemble of separate trees, namely, a forest. We study statistical properties of trees and nodes in this heterogeneous forest, and find that the fraction of remaining nodes m characterizes the system in the limit N→∞. We obtain analytically the size density φ_s of trees of size s. The size density has a power-law tail φ_s ∼ s^(−α) with exponent α = 1 + 1/m. Therefore, the tail becomes steeper as further nodes are removed, and the fragmentation process is unusual in that the exponent α increases continuously with time. We also extend our analysis to the case where nodes are added as well as removed, and obtain the asymptotic size density for growing trees. (paper)

  17. Unwilling or Unable to Cheat? Evidence from a Randomized Tax Audit Experiment in Denmark

    DEFF Research Database (Denmark)

    Kleven, Henrik Jacobsen; Knudsen, Martin B.; Kreiner, Claus Thustrup

    2010-01-01

    This paper analyzes a randomized tax enforcement experiment in Denmark. In the base year, a stratified and representative sample of over 40,000 individual income tax filers was selected for the experiment. Half of the tax filers were randomly selected to be thoroughly audited, while the rest were deliberately not audited. The following year, "threat-of-audit" letters were randomly assigned and sent to tax filers in both groups. Using comprehensive administrative tax data, we present four main findings. First, we find that the tax evasion rate is very small (0.3%) for income subject to third… impact on tax evasion, but that this effect is small in comparison to avoidance responses. Third, we find that prior audits substantially increase self-reported income, implying that individuals update their beliefs about detection probability based on experiencing an audit. Fourth, threat-of-audit…

  18. Familial versus mass selection in small populations

    Directory of Open Access Journals (Sweden)

    Couvet Denis

    2003-07-01

    We used diffusion approximations and a Markov-chain approach to investigate the consequences of familial selection on the viability of small populations, both in the short and in the long term. The outcome of familial selection was compared to the case of a random-mating population under mass selection. In small populations, the higher effective size associated with familial selection resulted in higher fitness for slightly deleterious and/or highly recessive alleles. Conversely, because familial selection leads to a lower rate of directional selection, a lower fitness was observed for more detrimental genes that are not highly recessive, and with high population sizes. However, in the long term, the genetic load was almost identical for both mass and familial selection in populations of up to 200 individuals. In terms of mean time to extinction, familial selection did not have any negative effect, at least for small populations (N ≤ 50). Overall, familial selection could be proposed for use in management programs of small populations, since it increases genetic variability and short-term viability without impairing overall persistence times.

  19. [Intel random number generator-based true random number generator].

    Science.gov (United States)

    Huang, Feng; Shen, Hong

    2004-09-01

    To establish a true random number generator based on certain Intel chips, random numbers were acquired by programming in Microsoft Visual C++ 6.0 via register reads from the random number generator (RNG) unit of an Intel 815 chipset-based computer with the Intel Security Driver (ISD). We tested the generator with 500 random numbers using the NIST FIPS 140-1 and chi-square (χ2) R-squared tests, and the results showed that the random numbers it generated satisfied the demands of independence and uniform distribution. We also statistically compared the random numbers generated by the Intel RNG-based true random number generator with those from a random number table, using the same amount of 7500 random numbers in the same value domain, which showed that the SD, SE and CV of the Intel RNG-based random number generator were less than those of the random number table. A u test of the two CVs revealed no significant difference between the two methods. The Intel RNG-based random number generator can produce high-quality random numbers with good independence and uniform distribution, and solves some problems of random number tables in the acquisition of random numbers.
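
    A chi-square uniformity check of the kind mentioned is easy to reproduce. The sketch below uses Python's seeded PRNG as a stand-in for the hardware RNG (we have no Intel 815 register access here); it bins byte values and compares the statistic to its expectation:

```python
import random

def chi_square_uniformity(values, n_bins=256):
    """Pearson chi-square statistic for uniformity over n_bins categories.
    For uniform input the statistic is close to its mean, n_bins - 1."""
    counts = [0] * n_bins
    for v in values:
        counts[v] += 1
    expected = len(values) / n_bins
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(42)  # stand-in source; a hardware RNG would be read instead
sample = [rng.randrange(256) for _ in range(25_600)]
stat = chi_square_uniformity(sample)
print(stat)  # should be near 255 (the degrees of freedom) for a good generator
```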

  20. Mobile access to virtual randomization for investigator-initiated trials.

    Science.gov (United States)

    Deserno, Thomas M; Keszei, András P

    2017-08-01

    Background/aims Randomization is indispensable in clinical trials in order to provide unbiased treatment allocation and a valid statistical inference. Improper handling of allocation lists can be avoided using central systems, for example, human-based services. However, central systems are unaffordable for investigator-initiated trials and might be inaccessible from some places, where study subjects need allocations. We propose mobile access to virtual randomization, where the randomization lists are non-existent and the appropriate allocation is computed on demand. Methods The core of the system architecture is an electronic data capture system or a clinical trial management system, which is extended by an R interface connecting the R server using the Java R Interface. Mobile devices communicate via the representational state transfer web services. Furthermore, a simple web-based setup allows configuring the appropriate statistics by non-statisticians. Our comprehensive R script supports simple randomization, restricted randomization using a random allocation rule, block randomization, and stratified randomization for un-blinded, single-blinded, and double-blinded trials. For each trial, the electronic data capture system or the clinical trial management system stores the randomization parameters and the subject assignments. Results Apps are provided for iOS and Android and subjects are randomized using smartphones. After logging onto the system, the user selects the trial and the subject, and the allocation number and treatment arm are displayed instantaneously and stored in the core system. So far, 156 subjects have been allocated from mobile devices serving five investigator-initiated trials. Conclusion Transforming pre-printed allocation lists into virtual ones ensures the correct conduct of trials and guarantees a strictly sequential processing in all trial sites. 
Covering 88% of all randomization models that are used in recent trials, virtual randomization
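
    The randomization models listed, for example permuted-block randomization, are straightforward to compute on demand, which is the essence of a "virtual" allocation list. A minimal sketch (our illustration, not the authors' R implementation):

```python
import random

def block_randomization(n_subjects, arms=("A", "B"), block_size=4, seed=0):
    """Permuted-block randomization: allocations are generated block by
    block, each block containing every arm equally often in random order."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_subjects:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_subjects]

# Any subject's arm is computable on demand from (seed, index): no
# pre-printed allocation list has to exist anywhere.
alloc = block_randomization(12)
```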

  1. Ray tracing method for simulation of laser beam interaction with random packings of powders

    Science.gov (United States)

    Kovalev, O. B.; Kovaleva, I. O.; Belyaev, V. V.

    2018-03-01

    Selective laser sintering is a rapid-manufacturing technology in which a free-form solid object is created by selectively fusing successive layers of powder with a laser. The motivation of this study is the currently insufficient understanding of the processes and phenomena of selective laser melting of powders, whose time scales differ by orders of magnitude. To construct random packings of mono- and polydisperse solid spheres, a generation algorithm based on the discrete element method is used. A numerical ray-tracing method is proposed to simulate the interaction of laser radiation with a random bulk packing of spherical particles and to predict the optical properties of the granular layer, namely the extinction and absorption coefficients, as functions of the optical properties of the powder material.

  2. Antenna Selection for Full-Duplex MIMO Two-Way Communication Systems

    KAUST Repository

    Wilson-Nunn, Daniel; Chaaban, Anas; Sezgin, Aydin; Alouini, Mohamed-Slim

    2017-01-01

    Antenna selection for full-duplex communication between two nodes, each equipped with a predefined number of antennae and transmit/receive chains, is studied. Selection algorithms are proposed based on magnitude, orthogonality, and determinant criteria. The algorithms are compared to optimal selection obtained by exhaustive search as well as random selection, and are shown to yield performance fairly close to optimal at a much lower complexity. Performance comparison for a Rayleigh fading symmetric channel reveals that selecting a single transmit antenna is best at low signal-to-noise ratio (SNR), while selecting an equal number of transmit and receive antennae is best at high SNR.
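    As a concrete illustration of the magnitude criterion (the orthogonality and determinant criteria score subsets differently), a norm-based transmit-antenna selection can be sketched as follows; the function name and matrix layout are assumptions for illustration, not the paper's algorithm.

```python
def select_tx_antennas(H, n_tx):
    # Magnitude-based antenna selection: rank the columns of the channel
    # matrix H (one column per transmit antenna) by squared norm and keep
    # the n_tx strongest -- a cheap proxy for exhaustive search.
    norms = [(sum(abs(h) ** 2 for h in col), j) for j, col in enumerate(zip(*H))]
    return sorted(j for _, j in sorted(norms, reverse=True)[:n_tx])
```

    Exhaustive search over all antenna subsets is optimal but combinatorial; ranking by per-antenna magnitude is the low-complexity end of the trade-off the abstract describes.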

  4. A Collective Study on Modeling and Simulation of Resistive Random Access Memory

    Science.gov (United States)

    Panda, Debashis; Sahu, Paritosh Piyush; Tseng, Tseung Yuen

    2018-01-01

    In this work, we provide a comprehensive discussion of the various models proposed for the design and description of resistive random access memory (RRAM); being a nascent technology, RRAM is heavily reliant on accurate models for developing efficient working designs and standardizing its implementation across devices. This review provides detailed information regarding the various physical methodologies considered for developing models of RRAM devices. It covers all the important models reported to date and elucidates their features and limitations. Various additional effects and anomalies arising from memristive systems have been addressed, and the solutions provided by the models to these problems have been shown as well. All the fundamental concepts of RRAM model development, such as device operation, switching dynamics, and current-voltage relationships, are covered in detail in this work. Popular models proposed by Chua, HP Labs, Yakopcic, TEAM, Stanford/ASU, Ielmini, Berco-Tseng, and many others have been compared and analyzed extensively on various parameters. The workings and implementations of window functions such as Joglekar, Biolek, and Prodromakis have been presented and compared as well. New well-defined modeling concepts have been discussed which increase the applicability and accuracy of the models. The use of these concepts brings forth several improvements in the existing models, which have been enumerated in this work. Following the template presented, highly accurate models can be developed that will greatly help future model developers and the modeling community.

  5. Randomization tests

    CERN Document Server

    Edgington, Eugene

    2007-01-01

    Statistical Tests That Do Not Require Random Sampling Randomization Tests Numerical Examples Randomization Tests and Nonrandom Samples The Prevalence of Nonrandom Samples in Experiments The Irrelevance of Random Samples for the Typical Experiment Generalizing from Nonrandom Samples Intelligibility Respect for the Validity of Randomization Tests Versatility Practicality Precursors of Randomization Tests Other Applications of Permutation Tests Questions and Exercises Notes References Randomized Experiments Unique Benefits of Experiments Experimentation without Mani
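    The core procedure this book is devoted to can be sketched in a few lines: a two-sample randomization test on the difference of means, using resampling rather than full enumeration (the add-one correction and the names are illustrative choices, not the book's notation).

```python
import random

def randomization_test(group_a, group_b, n_resamples=10_000, seed=0):
    # Under the null hypothesis the group labels are exchangeable: repeatedly
    # re-randomize the pooled observations into two groups of the original
    # sizes and count how often the relabelled |mean difference| is at least
    # as extreme as the observed one.
    rng = random.Random(seed)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        a, b = pooled[:n_a], pooled[n_a:]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed - 1e-12:
            hits += 1
    return (hits + 1) / (n_resamples + 1)  # never exactly zero
```

    Note that the validity of this p-value rests on the random assignment of treatments, not on random sampling from a population, which is precisely the book's central argument.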

  6. Randomized Oversampling for Generalized Multiscale Finite Element Methods

    KAUST Repository

    Calo, Victor M.

    2016-03-23

    In this paper, we develop efficient multiscale methods for flows in heterogeneous media. We use the generalized multiscale finite element (GMsFEM) framework. GMsFEM approximates the solution space locally using a few multiscale basis functions. This approximation selects an appropriate snapshot space and a local spectral decomposition, e.g., the use of oversampled regions, in order to achieve an efficient model reduction. However, the successful construction of snapshot spaces may be costly if too many local problems need to be solved in order to obtain these spaces. We use a moderate quantity of local solutions (or snapshot vectors) with random boundary conditions on oversampled regions with zero forcing to deliver an efficient methodology. Motivated by the randomized algorithm presented in [P. G. Martinsson, V. Rokhlin, and M. Tygert, A Randomized Algorithm for the approximation of Matrices, YALEU/DCS/TR-1361, Yale University, 2006], we consider a snapshot space which consists of harmonic extensions of random boundary conditions defined in a domain larger than the target region. Furthermore, we perform an eigenvalue decomposition in this small space. We study the application of randomized sampling for GMsFEM in conjunction with adaptivity, where local multiscale spaces are adaptively enriched. Convergence analysis is provided. We present representative numerical results to validate the method proposed.

  7. Enumeration and rapid identification of yeasts during extraction processes of extra virgin olive oil in Tuscany.

    Science.gov (United States)

    Mari, Eleonora; Guerrini, Simona; Granchi, Lisa; Vincenzini, Massimo

    2016-06-01

    The aim of this study was to evaluate the occurrence of yeast populations during different olive oil extraction processes, carried out in three consecutive years in Tuscany (Italy), by analysing crushed pastes, kneaded pastes, oil from the decanter and pomaces. The results showed yeast concentrations ranging between 10³ and 10⁵ CFU/g or CFU/mL. Seventeen dominant yeast species were identified by random amplified polymorphic DNA with primer M13, and their identification was confirmed by restriction fragment length polymorphism of the ribosomal internal transcribed spacer and by sequencing rRNA genes. The isolation frequencies of each species in the collected samples indicated that the occurrence of the various yeast species in the olive oil extraction process depended not only on the yeasts contaminating the olives but also on the yeasts colonizing the oil extraction plant. In fact, eleven dominant yeast species were detected from the washed olives, but only three of them were also found in oil samples at significant isolation frequency. On the contrary, the most abundant species in oil samples, Yamadazyma terventina, did not occur in washed olive samples. These findings suggest a phenomenon of contamination of the oil extraction plant that selects some yeast species that could affect the quality of olive oil.

  8. Using a Calendar and Explanatory Instructions to Aid Within-Household Selection in Mail Surveys

    Science.gov (United States)

    Stange, Mathew; Smyth, Jolene D.; Olson, Kristen

    2016-01-01

    Although researchers can easily select probability samples of addresses using the U.S. Postal Service's Delivery Sequence File, randomly selecting respondents within households for surveys remains challenging. Researchers often place within-household selection instructions, such as the next or last birthday methods, in survey cover letters to…

  9. Enumerating bone marrow blasts from nonerythroid cellularity improves outcome prediction in myelodysplastic syndromes and permits a better definition of the intermediate risk category of the Revised International Prognostic Scoring System (IPSS-R).

    Science.gov (United States)

    Calvo, Xavier; Arenillas, Leonor; Luño, Elisa; Senent, Leonor; Arnan, Montserrat; Ramos, Fernando; Pedro, Carme; Tormo, Mar; Montoro, Julia; Díez-Campelo, María; Blanco, María Laura; Arrizabalaga, Beatriz; Xicoy, Blanca; Bonanad, Santiago; Jerez, Andrés; Nomdedeu, Meritxell; Ferrer, Ana; Sanz, Guillermo F; Florensa, Lourdes

    2017-07-01

    The Revised International Prognostic Scoring System (IPSS-R) has been recognized as the score with the best outcome prediction capability in MDS, but this brought new concerns about the accurate prognostication of patients classified into the intermediate risk category. The correct enumeration of blasts is essential in prognostication of MDS. Recent data evidenced that considering blasts from nonerythroid cellularity (NECs) improves outcome prediction in the context of IPSS and WHO classification. We assessed the percentage of blasts from total nucleated cells (TNCs) and NECs in 3924 MDS patients from the GESMD, 498 of whom were MDS with erythroid predominance (MDS-E). We assessed if calculating IPSS-R by enumerating blasts from NECs improves prognostication of MDS. Twenty-four percent of patients classified into the intermediate category were reclassified into higher-risk categories and showed shorter overall survival (OS) and time to AML evolution than those who remained into the intermediate one. Likewise, a better distribution of patients was observed, since lower-risk patients showed longer survivals than previously whereas higher-risk ones maintained the outcome expected in this poor prognostic group (median OS < 20 months). Furthermore, our approach was particularly useful for detecting patients at risk of dying with AML. Regarding MDS-E, 51% patients classified into the intermediate category were reclassified into higher-risk ones and showed shorter OS and time to AML. In this subgroup of MDS, IPSS-R was capable of splitting our series in five groups with significant differences in OS only when blasts were assessed from NECs. In conclusion, our easy-applicable approach improves prognostic assessment of MDS patients. © 2017 Wiley Periodicals, Inc.

  10. Mucositis reduction by selective elimination of oral flora in irradiated cancers of the head and neck: a placebo-controlled double-blind randomized study

    International Nuclear Information System (INIS)

    Wijers, Oda B.; Levendag, Peter C.; Harms, Erik; Gan-Teng, A.M.; Schmitz, Paul I.M.; Hendriks, W.D.H.; Wilms, Erik B.; Est, Henri van der; Visch, Leo L.

    2001-01-01

    Purpose: The aim of the study was to test the hypothesis that aerobic Gram-negative bacteria (AGNB) play a crucial role in the pathogenesis of radiation-induced mucositis; consequently, selective elimination of these bacteria from the oral flora should result in a reduction of the mucositis. Methods and Materials: Head-and-neck cancer patients, when scheduled for treatment by external beam radiation therapy (EBRT), were randomized for prophylactic treatment with an oral paste containing either a placebo or a combination of the antibiotics polymyxin E, tobramycin, and amphotericin B (PTA group). Weekly, the objective and subjective mucositis scores and microbiologic counts of the oral flora were noted. The primary study endpoint was the mucositis grade after 3 weeks of EBRT. Results: Seventy-seven patients were evaluable. No statistically significant difference in the objective and subjective mucositis scores was observed between the two study arms (p=0.33). The percentage of patients with positive cultures of AGNB was significantly reduced in the PTA group (p=0.01). However, complete eradication of AGNB was not achieved. Conclusions: Selective elimination of AGNB from the oral flora did not result in a reduction of radiation-induced mucositis and therefore does not support the hypothesis that these bacteria play a crucial role in the pathogenesis of mucositis.

  11. Selected papers on noise and stochastic processes

    CERN Document Server

    1954-01-01

    Six classic papers on stochastic processes, selected to meet the needs of physicists, applied mathematicians, and engineers. Contents: 1. Chandrasekhar, S.: Stochastic Problems in Physics and Astronomy. 2. Uhlenbeck, G. E. and Ornstein, L. S.: On the Theory of the Brownian Motion. 3. Ming Chen Wang and Uhlenbeck, G. E.: On the Theory of the Brownian Motion II. 4. Rice, S. O.: Mathematical Analysis of Random Noise. 5. Kac, Mark: Random Walk and the Theory of Brownian Motion. 6. Doob, J. L.: The Brownian Movement and Stochastic Equations. Unabridged republication of the Dover reprint (1954). Pre

  12. Early prevention of antisocial personality: long-term follow-up of two randomized controlled trials comparing indicated and selective approaches.

    Science.gov (United States)

    Scott, Stephen; Briskman, Jackie; O'Connor, Thomas G

    2014-06-01

    Antisocial personality is a common adult problem that imposes a major public health burden, but for which there is no effective treatment. Affected individuals exhibit persistent antisocial behavior and pervasive antisocial character traits, such as irritability, manipulativeness, and lack of remorse. Prevention of antisocial personality in childhood has been advocated, but evidence for effective interventions is lacking. The authors conducted two follow-up studies of randomized trials of group parent training. One involved 120 clinic-referred 3- to 7-year-olds with severe antisocial behavior for whom treatment was indicated, 93 of whom were reassessed between ages 10 and 17. The other involved 109 high-risk 4- to 6-year-olds with elevated antisocial behavior who were selectively screened from the community, 90 of whom were reassessed between ages 9 and 13. The primary psychiatric outcome measures were the two elements of antisocial personality, namely, antisocial behavior (assessed by a diagnostic interview) and antisocial character traits (assessed by a questionnaire). Also assessed were reading achievement (an important domain of youth functioning at work) and parent-adolescent relationship quality. In the indicated sample, both elements of antisocial personality were improved in the early intervention group at long-term follow-up compared with the control group (antisocial behavior: odds ratio of oppositional defiant disorder=0.20, 95% CI=0.06, 0.69; antisocial character traits: B=-4.41, 95% CI=-1.12, -8.64). Additionally, reading ability improved (B=9.18, 95% CI=0.58, 18.0). Parental expressed emotion was warmer (B=0.86, 95% CI=0.20, 1.41) and supervision was closer (B=-0.43, 95% CI=-0.11, -0.75), but direct observation of parenting showed no differences. Teacher-rated and self-rated antisocial behavior were unchanged. In contrast, in the selective high-risk sample, early intervention was not associated with improved long-term outcomes. 

  13. Using histograms to introduce randomization in the generation of ensembles of decision trees

    Science.gov (United States)

    Kamath, Chandrika; Cantu-Paz, Erick; Littau, David

    2005-02-22

    A system for decision tree ensembles that includes a module to read the data, a module to create a histogram, a module to evaluate a potential split according to some criterion using the histogram, a module to select a split point randomly in an interval around the best split, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method includes the steps of reading the data, creating a histogram, evaluating a potential split according to some criterion using the histogram, selecting a split point randomly in an interval around the best split, splitting the data, and combining multiple decision trees in ensembles.
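    A minimal sketch of the histogram-driven split step described in this patent: candidate thresholds are histogram bin edges, each edge is scored, and the final threshold is drawn at random from an interval around the best edge, which is what diversifies the trees in the ensemble. The misclassification-count criterion and all names here are illustrative stand-ins, not the patent's modules.

```python
import random

def histogram_split_point(values, labels, n_bins=10, seed=1):
    # Build a fixed-width histogram over one feature, score each bin edge as
    # a candidate split, then draw the threshold uniformly from an interval
    # (one bin wide here) around the best-scoring edge.
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    edges = [lo + i * width for i in range(1, n_bins)]

    def misclassified(threshold):
        err = 0
        for side in ([l for v, l in zip(values, labels) if v <= threshold],
                     [l for v, l in zip(values, labels) if v > threshold]):
            if side:
                majority = max(set(side), key=side.count)
                err += sum(1 for l in side if l != majority)
        return err

    best = min(edges, key=misclassified)
    rng = random.Random(seed)
    return rng.uniform(best - width / 2, best + width / 2)
```

    Scoring only bin edges instead of every sorted data value is what makes the histogram approach cheap; the random jitter around the best edge supplies the randomization without materially hurting split quality.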

  14. Study of Randomness in AES Ciphertexts Produced by Randomly Generated S-Boxes and S-Boxes with Various Modulus and Additive Constant Polynomials

    Science.gov (United States)

    Das, Suman; Sadique Uz Zaman, J. K. M.; Ghosh, Ranjan

    2016-06-01

    In the Advanced Encryption Standard (AES), the standard S-Box is conventionally generated using a particular irreducible polynomial {11B} in GF(2⁸) as the modulus and a particular additive constant polynomial {63} in GF(2), though it can be generated with many other polynomials. In this paper, it is shown that secure AES S-Boxes can be generated using other selected modulus and additive polynomials, and even randomly, using a PRNG such as BBS. A comparative study has been made of the randomness of the corresponding AES ciphertexts, using the NIST Test Suite coded for this paper. It was found that, besides the standard choice, other moduli and additive constants are also able to generate equally random or more random ciphertexts; the same is true for random S-Boxes. As these new types of S-Boxes are user-defined, and hence unknown to an attacker, they help to prevent linear and differential cryptanalysis. Moreover, they act as additional key inputs to AES, thus increasing the key space.
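    The first and simplest test in the NIST suite used for such ciphertext comparisons is the frequency (monobit) test; a minimal implementation of that one test is shown below (the full SP 800-22 suite contains many more tests; p < 0.01 is the suite's conventional evidence of non-randomness).

```python
import math

def monobit_p_value(bits):
    # NIST SP 800-22 frequency (monobit) test: map bits to +/-1, sum, and
    # compare the normalized sum against a standard normal distribution
    # via the complementary error function.
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2.0 * n))
```

    A heavily biased bit stream yields a vanishing p-value, while a balanced one passes, which is the sense in which the suite ranks ciphertexts produced by the different S-Boxes.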

  15. Atomic structure calculations using the relativistic random phase approximation

    International Nuclear Information System (INIS)

    Cheng, K.T.; Johnson, W.R.

    1981-01-01

    A brief review is given of the relativistic random phase approximation (RRPA) applied to atomic transition problems. Selected examples of RRPA calculations on discrete excitations and photoionization are given to illustrate the need for relativistic many-body theories in dealing with atomic processes where both relativity and correlation are important.

  16. Students' level of skillfulness and use of the internet in selected ...

    African Journals Online (AJOL)

    The study examined the level of skillfulness and the use of the Internet for learning among secondary school students in Lagos State, Nigeria. The descriptive survey research method was adopted for the study. A sample of 450 students was randomly selected from three secondary schools. One intact arm was selected from ...

  17. Identification and DNA fingerprinting of Legionella strains by randomly amplified polymorphic DNA analysis.

    OpenAIRE

    Bansal, N S; McDonell, F

    1997-01-01

    The randomly amplified polymorphic DNA (RAPD) technique was used to develop a fingerprinting (typing) and identification protocol for Legionella strains. Twenty decamer random oligonucleotide primers were screened for their discriminatory abilities, and two candidate primers were selected. By using a combination of these primers, RAPD analysis allowed differentiation between all the species, between the serogroups, and further differentiation between subtypes of the same ...

  18. 10-Year Mortality Outcome of a Routine Invasive Strategy Versus a Selective Invasive Strategy in Non-ST-Segment Elevation Acute Coronary Syndrome: The British Heart Foundation RITA-3 Randomized Trial.

    Science.gov (United States)

    Henderson, Robert A; Jarvis, Christopher; Clayton, Tim; Pocock, Stuart J; Fox, Keith A A

    2015-08-04

    The RITA-3 (Third Randomised Intervention Treatment of Angina) trial compared outcomes of a routine early invasive strategy (coronary arteriography and myocardial revascularization, as clinically indicated) to those of a selective invasive strategy (coronary arteriography for recurrent ischemia only) in patients with non-ST-segment elevation acute coronary syndrome (NSTEACS). At a median of 5 years' follow-up, the routine invasive strategy was associated with a 24% reduction in the odds of all-cause mortality. This study reports 10-year follow-up outcomes of the randomized cohort to determine the impact of a routine invasive strategy on longer-term mortality. We randomized 1,810 patients with NSTEACS to receive routine invasive or selective invasive strategies. All randomized patients had annual follow-up visits up to 5 years, and mortality was documented thereafter using data from the Office of National Statistics. Over 10 years, there were no differences in mortality between the 2 groups (all-cause deaths in 225 [25.1%] vs. 232 patients [25.4%]: p = 0.94; and cardiovascular deaths in 135 [15.1%] vs. 147 patients [16.1%]: p = 0.65 in the routine invasive and selective invasive groups, respectively). Multivariate analysis identified several independent predictors of 10-year mortality: age, previous myocardial infarction, heart failure, smoking status, diabetes, heart rate, and ST-segment depression. A modified post-discharge Global Registry of Acute Coronary Events (GRACE) score was used to calculate an individual risk score for each patient and to form low-risk, medium-risk, and high-risk groups. Risk of death within 10 years varied markedly, from 14.4% in the low-risk group to 56.2% in the high-risk group. This mortality trend did not depend on the assigned treatment strategy. The advantage of reduced mortality with the routine early invasive strategy seen at 5 years was attenuated during later follow-up, with no evidence of a difference in outcome at 10 years.

  19. Comparative Study Of Two Non-Selective Cyclooxygenase ...

    African Journals Online (AJOL)

    The comparative study of the effects of two non-selective cyclooxygenase inhibitors, ibuprofen and paracetamol, on maternal and neonatal growth was conducted using 15 Sprague Dawley rats, with mean body weights ranging between 165 and 179 g. The rats were separated at random into three groups (A, B and C).

  20. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Risk-Controlled Multiobjective Portfolio Selection Problem Using a Principle of Compromise

    Directory of Open Access Journals (Sweden)

    Takashi Hasuike

    2014-01-01

    This paper proposes a multiobjective portfolio selection problem that combines the most probable random distribution, derived from current market data, with additional random distributions for boom and recession, under risk-control parameters determined by the investor. The current market data and information include not only historical data but also interpretations of economists' oral and linguistic information, and hence booms and recessions are often driven by these nonnumeric data. Investors therefore need to consider several situations, from the most probable condition to boom and recession, and to limit, in each situation, the risk of returns falling below the target. Furthermore, it is generally difficult to specify the random distributions of these cases exactly. Therefore, a robust approach to portfolio selection that uses only the mean values and variances of securities is proposed as a multiobjective programming problem. In addition, an exact algorithm is developed to obtain an explicit optimal portfolio using a principle of compromise.

  2. Role of selective interaction in wealth distribution

    International Nuclear Information System (INIS)

    Gupta, A.K.

    2005-08-01

    In our simplified description, 'money' is wealth. A kinetic theory model of money is investigated in which two agents interact (trade) selectively and exchange a random amount of money between them while the total money of all agents is kept constant. The probability distribution of individual money (P(m) vs. m) is seen to be influenced by certain modes of selective interaction. The distributions shift away from the Boltzmann-Gibbs-like exponential distribution, and in some cases distributions emerge with power-law tails, known as Pareto's law (P(m) ∝ m^(-(1+α))). (author)
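    The baseline, non-selective version of such a kinetic exchange model is easy to simulate; the sketch below reproduces the Boltzmann-Gibbs regime the paper takes as its starting point (a selective-interaction rule would replace the uniform random pairing; all names are illustrative).

```python
import random

def kinetic_exchange(n_agents=500, steps=200_000, seed=7):
    # Each step: pick two distinct agents at random and redistribute a random
    # fraction of their combined money between them. Total money is conserved
    # by construction; the stationary P(m) of this non-selective variant is
    # the Boltzmann-Gibbs exponential distribution.
    rng = random.Random(seed)
    money = [1.0] * n_agents
    for _ in range(steps):
        i = rng.randrange(n_agents)
        j = rng.randrange(n_agents)
        if i == j:
            continue
        total = money[i] + money[j]
        eps = rng.random()
        money[i], money[j] = eps * total, (1.0 - eps) * total
    return money
```

    Restricting which pairs may trade (the selective interactions studied above) is what shifts the stationary distribution away from this exponential form, in some cases toward Pareto tails.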

  3. Wide brick tunnel randomization - an unequal allocation procedure that limits the imbalance in treatment totals.

    Science.gov (United States)

    Kuznetsova, Olga M; Tymofyeyev, Yevgen

    2014-04-30

    In open-label studies, partial predictability of permuted block randomization provides potential for selection bias. To lessen the selection bias in two-arm studies with equal allocation, a number of allocation procedures that limit the imbalance in treatment totals at a pre-specified level but do not require the exact balance at the ends of the blocks were developed. In studies with unequal allocation, however, the task of designing a randomization procedure that sets a pre-specified limit on imbalance in group totals is not resolved. Existing allocation procedures either do not preserve the allocation ratio at every allocation or do not include all allocation sequences that comply with the pre-specified imbalance threshold. Kuznetsova and Tymofyeyev described the brick tunnel randomization for studies with unequal allocation that preserves the allocation ratio at every step and, in the two-arm case, includes all sequences that satisfy the smallest possible imbalance threshold. This article introduces wide brick tunnel randomization for studies with unequal allocation that allows all allocation sequences with imbalance not exceeding any pre-specified threshold while preserving the allocation ratio at every step. In open-label studies, allowing a larger imbalance in treatment totals lowers selection bias because of the predictability of treatment assignments. The applications of the technique in two-arm and multi-arm open-label studies with unequal allocation are described. Copyright © 2013 John Wiley & Sons, Ltd.
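    For intuition, the simplest member of this family of imbalance-capped procedures is the big-stick design for 1:1 allocation; this is not the authors' brick tunnel procedure, which generalizes the idea to unequal ratios, but it shows the mechanism of a pre-specified imbalance threshold.

```python
import random

def big_stick_sequence(n, cap=2, seed=3):
    # Big-stick design: assignments are fair coin flips except when the
    # running imbalance n_A - n_B reaches the pre-specified cap, at which
    # point the lagging arm is forced, so |n_A - n_B| never exceeds the cap.
    rng = random.Random(seed)
    imbalance = 0  # n_A - n_B so far
    sequence = []
    for _ in range(n):
        if imbalance >= cap:
            arm = "B"
        elif imbalance <= -cap:
            arm = "A"
        else:
            arm = rng.choice("AB")
        imbalance += 1 if arm == "A" else -1
        sequence.append(arm)
    return sequence
```

    The trade-off the abstract describes is visible here: a larger cap makes assignments less predictable (lower selection bias) at the cost of larger interim imbalance in treatment totals.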

  4. Cardiac resynchronization therapy : advances in optimal patient selection

    NARCIS (Netherlands)

    Bleeker, Gabe Berend

    2007-01-01

    Despite the impressive results of cardiac resynchronization therapy (CRT) in recent large randomized trials, a considerable number of patients fail to improve following CRT implantation when the established CRT selection criteria are used (NYHA class III-IV heart failure, LV ejection fraction ≤35% and QRS

  5. Random a-adic groups and random net fractals

    Energy Technology Data Exchange (ETDEWEB)

    Li Yin [Department of Mathematics, Nanjing University, Nanjing 210093 (China)], E-mail: Lyjerry7788@hotmail.com; Su Weiyi [Department of Mathematics, Nanjing University, Nanjing 210093 (China)], E-mail: suqiu@nju.edu.cn

    2008-08-15

    Based on random a-adic groups, this paper investigates the relationship between the existence conditions for a positive flow in a random network and the estimation of the Hausdorff dimension of a proper random net fractal. Subsequently, we describe some particular random fractals to which our results can be applied. Finally, the Mauldin and Williams theorem is shown to be a very important example of a random Cantor set, with applications in physics as shown in E-infinity theory.

  6. Non-compact random generalized games and random quasi-variational inequalities

    OpenAIRE

    Yuan, Xian-Zhi

    1994-01-01

    In this paper, existence theorems of random maximal elements, random equilibria for the random one-person game and random generalized game with a countable number of players are given as applications of random fixed point theorems. By employing existence theorems of random generalized games, we deduce the existence of solutions for non-compact random quasi-variational inequalities. These in turn are used to establish several existence theorems of noncompact generalized random ...

  7. Implementation of client versus care-provider strategies to improve external cephalic version rates: a cluster randomized controlled trial

    NARCIS (Netherlands)

    Vlemmix, Floortje; Rosman, Ageeth N.; Rijnders, Marlies E.; Beuckens, Antje; Opmeer, Brent C.; Mol, Ben W. J.; Kok, Marjolein; Fleuren, Margot A. H.

    2015-01-01

    To determine the effectiveness of a client or care-provider strategy to improve the implementation of external cephalic version. Cluster randomized controlled trial. Twenty-five clusters; hospitals and their referring midwifery practices randomly selected in the Netherlands. Singleton breech

  8. Implementation of client versus care-provider strategies to improve external cephalic version rates: a cluster randomized controlled trial

    NARCIS (Netherlands)

    Vlemmix, F.; Rosman, A.N.; Rijnders, M.E.; Beuckens, A.; Opmeer, B.C.; Mol, B.W.J.; Kok, M.; Fleuren, M.A.H.

    2015-01-01

    Objective: To determine the effectiveness of a client or care-provider strategy to improve the implementation of external cephalic version. Design: Cluster randomized controlled trial. Setting: Twenty-five clusters; hospitals and their referring midwifery practices randomly selected in the Netherlands.

  9. A note on mate allocation for dominance handling in genomic selection

    Directory of Open Access Journals (Sweden)

    Toro Miguel A

    2010-08-01

    Estimation of non-additive genetic effects in animal breeding is important because it increases the accuracy of breeding value prediction and the value of mate allocation procedures. With the advent of genomic selection, these ideas should be revisited. The objective of this study was to quantify the efficiency of including dominance effects and practising mate allocation under a whole-genome evaluation scenario. Four strategies of selection, carried out during five generations, were compared by simulation techniques. In the first scenario (MS), individuals were selected based on their own phenotypic information. In the second (GSA), they were selected based on the prediction generated by the Bayes A method of whole-genome evaluation under an additive model. In the third (GSD), the model was expanded to include dominance effects. These three scenarios used random mating to construct future generations, whereas in the fourth (GSD + MA), matings were optimized by simulated annealing. The advantage of GSD over GSA ranges from 9 to 14% of the expected response and, in addition, using mate allocation (GSD + MA) provides an additional response ranging from 6% to 22%. However, mate selection can improve the expected genetic response over random mating only in the first generation of selection. Furthermore, the efficiency of genomic selection is eroded after a few generations of selection; thus, a continued collection of phenotypic data and re-evaluation will be required.

  10. QUANTITATIVE GENETICS OF MORPHOLOGICAL DIFFERENTIATION IN PEROMYSCUS. II. ANALYSIS OF SELECTION AND DRIFT.

    Science.gov (United States)

    Lofsvold, David

    1988-01-01

    The hypothesis that the morphological divergence of local populations of Peromyscus is due to random genetic drift was evaluated by testing the proportionality of the among-locality covariance matrix, L, and the additive genetic covariance matrix, G. Overall, significant proportionality of L̂ and Ĝ was not observed, indicating the evolutionary divergence of local populations does not result from random genetic drift. The forces of selection needed to differentiate three taxa of Peromyscus were reconstructed to examine the divergence of species and subspecies. The selection gradients obtained illustrate the inadequacy of univariate analyses of selection by finding that some characters evolve in the direction opposite to the force of selection acting directly on them. A retrospective selection index was constructed using the estimated selection gradients, and truncation selection on this index was used to estimate the minimum selective mortality per generation required to produce the observed change. On any of the time scales used, the proportion of the population that would need to be culled was quite low, the greatest being of the same order of magnitude as the selective intensities observed in extant natural populations. Thus, entirely plausible intensities of directional natural selection can produce species-level differences in a period of time too short to be resolved in the fossil record. © 1988 The Society for the Study of Evolution.
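
    The retrospective reconstruction of selection rests on Lande's equation, Δz̄ = Gβ, solved for the net selection gradient β. A minimal sketch for two traits; the G matrix and divergence vector are hypothetical numbers chosen for illustration, not values from the study:

```python
# Hypothetical 2-trait example: additive genetic covariance matrix G and the
# observed divergence in trait means, dz, between two taxa.
G = [[1.0, 0.8],
     [0.8, 1.0]]
dz = [0.5, 0.1]

# Lande's retrospective equation dz = G * beta, solved for the net selection
# gradient beta = G^{-1} * dz (2x2 case solved by hand via Cramer's rule).
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
beta = [(G[1][1] * dz[0] - G[0][1] * dz[1]) / det,
        (G[0][0] * dz[1] - G[1][0] * dz[0]) / det]
print(beta)
```

    With these numbers trait 2 diverged upward (dz = 0.1) even though the direct selection gradient on it is negative; the genetic correlation with trait 1 dragged it along. This is exactly the situation the abstract notes a univariate analysis would misread.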

  11. Rural Women's Response To Selected Crop Production ...

    African Journals Online (AJOL)

    The study centered on rural women's response to selected crop production technologies in Imo State with a view to making policy recommendations. Structured questionnaire and interview schedule were administered through the assistance of extension agents to 258 randomly sampled rural women farmers from the three ...

  12. How random are random numbers generated using photons?

    International Nuclear Information System (INIS)

    Solis, Aldo; Angulo Martínez, Alí M; Ramírez Alarcón, Roberto; Cruz Ramírez, Hector; U’Ren, Alfred B; Hirsch, Jorge G

    2015-01-01

    Randomness is fundamental in quantum theory, with many philosophical and practical implications. In this paper we discuss the concept of algorithmic randomness, which provides a quantitative method to assess the Borel normality of a given sequence of numbers, a necessary condition for it to be considered random. We use Borel normality as a tool to investigate the randomness of ten sequences of bits generated from the differences between detection times of photon pairs generated by spontaneous parametric downconversion. These sequences are shown to fulfil the randomness criteria without difficulties. As deviations from Borel normality for photon-generated random number sequences have been reported in previous work, a strategy to understand these diverging findings is outlined. (paper)
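
    A simplified reading of the Borel-normality criterion can be sketched as follows. This is an illustrative version (non-overlapping k-bit blocks, a single deviation bound of sqrt(log2(n)/n)), not the exact formulation used in the paper:

```python
import itertools
import math
import random
from collections import Counter

def borel_normal(bits, max_k=3):
    """Simplified Borel-normality check: for each k = 1..max_k, every k-bit
    pattern, counted over non-overlapping blocks, must occur with a
    frequency within sqrt(log2(n)/n) of the ideal 2**-k."""
    n = len(bits)
    bound = math.sqrt(math.log2(n) / n)
    for k in range(1, max_k + 1):
        m = n // k
        counts = Counter(tuple(bits[i * k:(i + 1) * k]) for i in range(m))
        for pattern in itertools.product((0, 1), repeat=k):
            if abs(counts[pattern] / m - 2.0 ** -k) > bound:
                return False
    return True

rng = random.Random(0)
sample = [rng.randint(0, 1) for _ in range(1 << 16)]
print(borel_normal(sample))               # a seeded PRNG sample
print(borel_normal([0, 1] * (1 << 15)))   # periodic: every 2-bit block is 01
```

    The periodic sequence passes at k = 1 (exactly half its bits are ones) but fails at k = 2, since the block 01 occurs with frequency 1 instead of 1/4; this is why Borel normality is only a necessary condition checked over several block lengths.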

  13. NAVAIR Portable Source Initiative (NPSI) Standard for Material Properties Reference Database (MPRD) V2.2

    Science.gov (United States)

    2012-09-26

    ... of a material to conduct electricity. p-electrical_resistivity (electrical resistivity, ohm-m): the property of a material that resists the flow of electrical current. p-magnetic_susceptibility (magnetic susceptibility): the degree to which a ... Element symbols enumerated: Zr, Nb, Mo, Tc, Ru, Rh, Pd, Ag, ...

  14. Optimized bioregenerative space diet selection with crew choice

    Science.gov (United States)

    Vicens, Carrie; Wang, Carolyn; Olabi, Ammar; Jackson, Peter; Hunter, Jean

    2003-01-01

    Previous studies on optimization of crew diets have not accounted for choice. A diet selection model with crew choice was developed. Scenario analyses were conducted to assess the feasibility and cost of certain crew preferences, such as preferences for numerous-desserts, high-salt, and high-acceptability foods. For comparison purposes, a no-choice and a random-choice scenario were considered. The model was found to be feasible in terms of food variety and overall costs. The numerous-desserts, high-acceptability, and random-choice scenarios all resulted in feasible solutions costing between 13.2 and 17.3 kg ESM/person-day. Only the high-sodium scenario yielded an infeasible solution. This occurred when the foods highest in salt content were selected for the crew-choice portion of the diet. This infeasibility can be avoided by limiting the total sodium content in the crew-choice portion of the diet. Cost savings were found by reducing food variety in scenarios where the preference bias strongly affected nutritional content.
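
    The flavour of such a diet-selection model, trading equivalent system mass (ESM) against nutrient constraints, can be shown with a deliberately tiny brute-force version. The menu items, nutrient numbers, and constraint values are invented for illustration and are not from the study:

```python
import itertools

# Invented menu: (name, ESM kg/day, protein g/day, sodium mg/day)
menu = [
    ("wheat",   2.0, 30, 100),
    ("soy",     3.0, 40, 150),
    ("potato",  1.5, 10,  50),
    ("dessert", 4.0,  5, 200),
    ("greens",  2.5,  8,  30),
]

def best_diet(menu, n_items, min_protein, max_sodium):
    """Cheapest (lowest-ESM) combination of n_items foods meeting a
    protein floor and a sodium ceiling; None if no combination does."""
    best = None
    for combo in itertools.combinations(menu, n_items):
        protein = sum(f[2] for f in combo)
        sodium = sum(f[3] for f in combo)
        if protein >= min_protein and sodium <= max_sodium:
            esm = sum(f[1] for f in combo)
            if best is None or esm < best[0]:
                best = (esm, [f[0] for f in combo])
    return best

print(best_diet(menu, 3, min_protein=60, max_sodium=400))
print(best_diet(menu, 3, min_protein=60, max_sodium=200))  # infeasible: None
```

    Tightening the sodium constraint makes the toy problem infeasible, mirroring how the high-sodium preference scenario in the study yielded no feasible solution until sodium in the crew-choice portion was limited.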

  15. Holographic memories with encryption-selectable function

    Science.gov (United States)

    Su, Wei-Chia; Lee, Xuan-Hao

    2006-03-01

    Volume holographic storage has received increasing attention owing to its potentially high storage capacity and access rate. Meanwhile, encrypted holographic memory using the random phase encoding technique is attractive to the optical community because of the growing demand for protection of information. In this paper, encryption-selectable holographic storage algorithms in LiNbO3 using angular multiplexing are proposed and demonstrated. Encryption-selectable holographic memory is an advanced concept in secure storage for content protection: it offers the flexibility to encrypt the data, or not, during the recording process. In our system design, the choice between encrypted and non-encrypted storage is switched between a random phase pattern and a uniform phase pattern. Based on a 90-degree geometry, the input patterns, both encrypted and non-encrypted, are stored via angular multiplexing with reference plane waves at different incident angles. An image is optionally encrypted by sliding a ground glass into one of the recording waves, or removing it, in each exposure. The ground glass is the encryption key; it is also the key that allows an authorized user to decrypt the encrypted information.

  16. Comparison between paricalcitol and active non-selective vitamin D receptor activator for secondary hyperparathyroidism in chronic kidney disease: a systematic review and meta-analysis of randomized controlled trials.

    Science.gov (United States)

    Cai, Panpan; Tang, Xiaohong; Qin, Wei; Ji, Ling; Li, Zi

    2016-04-01

    The goal of this systematic review is to evaluate the efficacy and safety of paricalcitol versus active non-selective vitamin D receptor activators (VDRAs) for secondary hyperparathyroidism (SHPT) management in chronic kidney disease (CKD) patients. PubMed, EMBASE, Cochrane Central Register of Controlled Trials (CENTRAL), clinicaltrials.gov (inception to September 2015), and the ASN Web site were searched for relevant studies. A meta-analysis of randomized controlled trials (RCTs) and quasi-RCTs that assessed the effects and adverse events of paricalcitol and active non-selective VDRAs in adult CKD patients with SHPT was performed using Review Manager 5.2. A total of 10 trials involving 734 patients were identified for this review. The quality of included trials was limited, and very few trials reported all-cause mortality or cardiovascular calcification, without any differences between the two groups. Compared with active non-selective VDRAs, paricalcitol showed no significant difference in either PTH reduction (MD -7.78, 95% CI -28.59-13.03, P = 0.46) or the proportion of patients who achieved the target reduction of PTH (OR 1.27, 95% CI 0.87-1.85, P = 0.22). In addition, no statistical differences were found in terms of serum calcium, episodes of hypercalcemia, serum phosphorus, calcium × phosphorus products, and bone metabolism index. Current evidence is insufficient to show that paricalcitol is superior to active non-selective VDRAs in lowering PTH or reducing the burden of mineral loading. Further trials are required to prove the tissue-selective effect of paricalcitol and to overcome the limitations of current research.
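
    Pooled mean differences of this kind come from inverse-variance weighting of the per-study estimates. A minimal fixed-effect sketch; the per-trial mean differences and standard errors below are hypothetical, and the review itself used Review Manager 5.2:

```python
import math

def pooled_md(studies):
    """Fixed-effect inverse-variance pooling of per-study mean differences.
    studies: iterable of (mean_difference, standard_error) pairs.
    Returns (pooled MD, 95% CI lower bound, 95% CI upper bound)."""
    weights = [1.0 / se ** 2 for _, se in studies]
    md = sum(w * m for w, (m, _) in zip(weights, studies)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return md, md - 1.96 * se, md + 1.96 * se

# Hypothetical per-trial PTH-reduction differences (pg/ml) and their SEs
print(pooled_md([(-10.0, 8.0), (-5.0, 12.0), (2.0, 15.0)]))
```

    With these invented inputs the pooled confidence interval spans zero, the same qualitative picture as the review's nonsignificant MD for PTH reduction.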

  17. Implementation of a validated HACCP system for the control of microbiological contamination of pig carcasses at a small abattoir

    Science.gov (United States)

    Bryant, Jeffrey; Brereton, Donald A.; Gill, Colin O.

    2003-01-01

    To guide the implementation of a Hazard Analysis Critical Control Point (HACCP) system at a small abattoir, the microbiological conditions of pig carcasses at various stages of processing were assessed by enumerating total aerobes, coliforms, and Escherichia coli in samples collected from randomly selected sites on the carcasses. Those data indicated that carcasses were contaminated with bacteria mainly during dehairing and operations on the head. When carcasses were pasteurized after head removal, the numbers of total aerobes on dressed carcasses were reduced by about 1 order of magnitude, and the numbers of coliforms and E. coli were reduced by more than 2 orders of magnitude. Implementation of an HACCP system on the basis of the microbiological data gave cooled carcasses with mean numbers of total aerobes < 100/cm2, and mean numbers of coliforms and E. coli of about 1 per 1000 cm2. PMID:12619556
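
    The "orders of magnitude" reductions are log10 ratios of mean counts. A minimal sketch, with pre- and post-pasteurization counts invented to be consistent in spirit with the abstract:

```python
import math

def log_reduction(before_cfu, after_cfu):
    """Orders-of-magnitude reduction between two mean counts."""
    return math.log10(before_cfu / after_cfu)

# Invented pre/post-pasteurization mean counts (CFU/cm^2)
print(log_reduction(1000.0, 100.0))  # 1.0 order of magnitude
print(log_reduction(50.0, 0.1))      # more than 2 orders of magnitude
```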

  18. Application of quantitative real-time PCR compared to filtration methods for the enumeration of Escherichia coli in surface waters within Vietnam.

    Science.gov (United States)

    Vital, Pierangeli G; Van Ha, Nguyen Thi; Tuyet, Le Thi Hong; Widmer, Kenneth W

    2017-02-01

    Surface water samples were collected from the Saigon River, rural and suburban canals, and urban runoff canals in Ho Chi Minh City, Vietnam, and were processed to enumerate Escherichia coli. Quantification was done by membrane filtration and by quantitative real-time polymerase chain reaction (PCR). In the dry season, mean E. coli counts for river/suburban canals and urban canals were log 2.8 and log 3.7 colony-forming units (CFU)/100 ml, respectively, by membrane filtration, versus log 2.4 and log 2.8 by Taqman quantitative real-time PCR. In the wet season, membrane filtration gave mean counts of log 3.7 and log 4.1 CFU/100 ml for river/suburban canals and urban canals, respectively, versus log 3 and log 2 by quantitative real-time PCR; for the urban canal samples, the wet-season PCR counts were significantly lower than those determined by the conventional culture method. These results show that while quantitative real-time PCR can be used to determine levels of fecal indicator bacteria in surface waters, it has some limitations and may be affected by sources of runoff, based on the surveyed samples.

  19. Correlated randomness and switching phenomena

    Science.gov (United States)

    Stanley, H. E.; Buldyrev, S. V.; Franzese, G.; Havlin, S.; Mallamace, F.; Kumar, P.; Plerou, V.; Preis, T.

    2010-08-01

    One challenge of biology, medicine, and economics is that the systems treated by these serious scientific disciplines have no perfect metronome in time and no perfect spatial architecture, crystalline or otherwise. Nonetheless, as if by magic, out of nothing but randomness one finds remarkably fine-tuned processes in time and remarkably fine-tuned structures in space. Further, many of these processes and structures have the remarkable feature of “switching” from one behavior to another as if by magic. The past century has, philosophically, been concerned with placing aside the human tendency to see the universe as a fine-tuned machine. Here we will address the challenge of uncovering how, through randomness (albeit, as we shall see, strongly correlated randomness), one can arrive at some of the many spatial and temporal patterns in biology, medicine, and economics and even begin to characterize the switching phenomena that enable a system to pass from one state to another. Inspired by principles developed by A. Nihat Berker and scores of other statistical physicists in recent years, we discuss some applications of correlated randomness to understand switching phenomena in various fields. Specifically, we present evidence from experiments and from computer simulations supporting the hypothesis that water’s anomalies are related to a switching point (which is not unlike the “tipping point” immortalized by Malcolm Gladwell), and that the bubbles in economic phenomena that occur on all scales are not “outliers” (another Gladwell immortalization). Though more speculative, we support the idea of disease as arising from some kind of yet-to-be-understood complex switching phenomenon, by discussing data on selected examples, including heart disease and Alzheimer disease.

  20. A randomized controlled trial investigating the use of a predictive nomogram for the selection of the FSH starting dose in IVF/ICSI cycles.

    Science.gov (United States)

    Allegra, Adolfo; Marino, Angelo; Volpes, Aldo; Coffaro, Francesco; Scaglione, Piero; Gullo, Salvatore; La Marca, Antonio

    2017-04-01

    The number of oocytes retrieved is a relevant intermediate outcome in women undergoing IVF/intracytoplasmic sperm injection (ICSI). This trial compared the efficiency of the selection of the FSH starting dose according to a nomogram based on multiple biomarkers (age, day 3 FSH, anti-Müllerian hormone) versus an age-based strategy. The primary outcome measure was the proportion of women with an optimal number of retrieved oocytes defined as 8-14. At their first IVF/ICSI cycle, 191 patients underwent a long gonadotrophin-releasing hormone agonist protocol and were randomized to receive a starting dose of recombinant (human) FSH, based on their age (150 IU if ≤35 years, 225 IU if >35 years) or based on the nomogram. Optimal response was observed in 58/92 patients (63%) in the nomogram group and in 42/99 (42%) in the control group (+21%, 95% CI = 0.07 to 0.35, P = 0.0037). No significant differences were found in the clinical pregnancy rate or the number of embryos cryopreserved per patient. The study showed that the FSH starting dose selected according to ovarian reserve is associated with an increase in the proportion of patients with an optimal response: large trials are recommended to investigate any possible effect on the live-birth rate. Copyright © 2017 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
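
    The reported effect (+21%, 95% CI 0.07 to 0.35) can be sanity-checked from the counts given in the abstract (58/92 versus 42/99) with a Wald interval for a difference of proportions. The trial may have used a different interval method, but the numbers agree to rounding:

```python
import math

def risk_difference(x1, n1, x2, n2):
    """Difference of two proportions with a Wald 95% confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, d - 1.96 * se, d + 1.96 * se

# 58/92 optimal responders with the nomogram vs 42/99 with age-based dosing
print(risk_difference(58, 92, 42, 99))  # about (0.206, 0.068, 0.345)
```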