WorldWideScience

Sample records for large set size

  1. Determining an Estimate of an Equivalence Relation for Moderate and Large Sized Sets

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2017-01-01

Full Text Available This paper presents two approaches to determining estimates of an equivalence relation on the basis of pairwise comparisons with random errors. Obtaining such an estimate requires the solution of a discrete programming problem which minimizes the sum of the differences between the form of the relation and the comparisons. The problem is NP-hard and can be solved with the use of exact algorithms only for sets of moderate size, i.e. about 50 elements. In the case of larger sets, i.e. at least 200 comparisons for each element, it is necessary to apply heuristic algorithms. The paper presents results (a statistical preprocessing) which enable us to determine the optimal or a near-optimal solution at acceptable computational cost. They include: the development of a statistical procedure producing comparisons with low probabilities of errors, and a heuristic algorithm based on such comparisons. The proposed approach guarantees the applicability of such estimators for any set size. (original abstract)
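The discrete programme described above can be illustrated on a toy scale: enumerate every partition (equivalence relation) of a small set and pick the one that disagrees with the fewest pairwise comparisons. This is a minimal stdlib sketch of the exact approach only, feasible for tiny sets since the number of partitions grows as the Bell numbers, which is exactly why the abstract turns to heuristics for larger sets; the function names are illustrative, not from the paper.

```python
from itertools import combinations

def partitions(elems):
    """Enumerate all partitions (equivalence relations) of a small set."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        # put `first` into each existing block, or into a new block of its own
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield [[first]] + part

def disagreements(part, same):
    """Count pairwise comparisons contradicting the partition `part`.
    same[(i, j)] is 1 if the (possibly erroneous) comparison declared
    i and j equivalent, 0 otherwise."""
    block_of = {e: b for b, blk in enumerate(part) for e in blk}
    elems = sorted(block_of)
    return sum((block_of[i] == block_of[j]) != bool(same[(i, j)])
               for i, j in combinations(elems, 2))

def estimate_relation(n, same):
    """Exact estimator: the partition minimizing total disagreement."""
    return min(partitions(list(range(n))), key=lambda p: disagreements(p, same))
```

With error-free comparisons the minimizer is the true relation; with noisy comparisons it is the closest consistent relation in the disagreement metric.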

  2. Set size and culture influence children's attention to number.

    Science.gov (United States)

    Cantrell, Lisa; Kuwabara, Megumi; Smith, Linda B

    2015-03-01

    Much research evidences a system in adults and young children for approximately representing quantity. Here we provide evidence that the bias to attend to discrete quantity versus other dimensions may be mediated by set size and culture. Preschool-age English-speaking children in the United States and Japanese-speaking children in Japan were tested in a match-to-sample task where number was pitted against cumulative surface area in both large and small numerical set comparisons. Results showed that children from both cultures were biased to attend to the number of items for small sets. Large set responses also showed a general attention to number when ratio difficulty was easy. However, relative to the responses for small sets, attention to number decreased for both groups; moreover, both U.S. and Japanese children showed a significant bias to attend to total amount for difficult numerical ratio distances, although Japanese children shifted attention to total area at relatively smaller set sizes than U.S. children. These results add to our growing understanding of how quantity is represented and how such representation is influenced by context--both cultural and perceptual. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Looking at large data sets using binned data plots

    Energy Technology Data Exchange (ETDEWEB)

    Carr, D.B.

    1990-04-01

This report addresses the monumental challenge of developing exploratory analysis methods for large data sets. The goals of the report are to increase awareness of large data set problems and to contribute simple graphical methods that address some of the problems. The graphical methods focus on two- and three-dimensional data and common tasks such as finding outliers and tail structure, assessing central structure and comparing central structures. The methods handle large sample size problems through binning, incorporate information from statistical models and adapt image processing algorithms. Examples demonstrate the application of the methods to a variety of publicly available large data sets. The most novel application addresses the "too many plots to examine" problem by using cognostics, computer guiding diagnostics, to prioritize plots. The particular application prioritizes views of computational fluid dynamics solution sets on the fly. That is, as each time step of a solution set is generated on a parallel processor, the cognostics algorithms assess virtual plots based on the previous time step. Work in such areas is in its infancy and the examples suggest numerous challenges that remain. 35 refs., 15 figs.
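The core of a binned data plot is to aggregate points into a grid of counts and plot the counts rather than the raw points. The report uses more refined binnings (e.g. hexagonal); the rectangular grid below is an assumed, simplified stand-in sketched with the stdlib only.

```python
def bin2d(xs, ys, nx, ny):
    """Aggregate (x, y) points into an nx-by-ny grid of counts.
    For very large samples, these counts (not the raw points) are
    what gets rendered, keeping the plot readable and cheap."""
    xmin, xmax = min(xs), max(xs)
    ymin, ymax = min(ys), max(ys)
    counts = [[0] * nx for _ in range(ny)]
    for x, y in zip(xs, ys):
        # map each point to a bin index, clamping the right/top edge
        i = min(int((x - xmin) / (xmax - xmin) * nx), nx - 1)
        j = min(int((y - ymin) / (ymax - ymin) * ny), ny - 1)
        counts[j][i] += 1
    return counts
```

A million points reduce to nx*ny integers, after which outlier bins (low counts far from the central structure) and tail structure can be read directly off the grid.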

  4. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
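The fallacy is easy to reproduce numerically: with a very large sample, a trivially small true effect yields an "extremely significant" p-value while the standardized effect size stays negligible. A stdlib-only sketch (the z-test and Cohen's d formulas are standard; the data are simulated, not from the article):

```python
import math
import random

random.seed(42)

def two_sample_z(a, b):
    """Two-sided z-test for a difference in means (large samples),
    plus Cohen's d as a standardized effect size."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    p = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value
    d = (ma - mb) / math.sqrt((va + vb) / 2)      # Cohen's d
    return p, d

# A trivially small true effect (0.03 SD) with a very large sample:
n = 100_000
a = [random.gauss(0.03, 1.0) for _ in range(n)]
b = [random.gauss(0.00, 1.0) for _ in range(n)]
p, d = two_sample_z(a, b)
# p is tiny ("highly significant"), yet d is far below even a "small" effect (0.2)
```

Reporting d alongside p, as the article recommends, makes the practical triviality of the result immediately visible.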

  5. Parallel clustering algorithm for large-scale biological data sets.

    Science.gov (United States)

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

Recent explosion of biological data brings a great challenge for the traditional clustering algorithms. With increasing scale of data sets, much larger memory and longer runtime are required for the cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, the time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose constructing procedure takes long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix constructing procedure and the affinity propagation algorithm. The memory-shared architecture is used to construct the similarity matrix, and the distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate scheme for data partitioning and reduction is designed in our method in order to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies.
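For reference, the message-passing updates that affinity propagation iterates (responsibilities and availabilities, Frey and Dueck's formulation) can be sketched serially with the stdlib; this is the plain algorithm the paper parallelizes, shown here on 1-D points with negative squared distance as the similarity, not the paper's parallel implementation.

```python
def affinity_propagation(points, preference, damping=0.5, iters=200):
    """Plain (serial) affinity propagation on 1-D points."""
    n = len(points)
    s = [[-(points[i] - points[k]) ** 2 for k in range(n)] for i in range(n)]
    for k in range(n):
        s[k][k] = preference            # self-similarity controls cluster count
    r = [[0.0] * n for _ in range(n)]   # responsibilities r(i, k)
    a = [[0.0] * n for _ in range(n)]   # availabilities a(i, k)
    for _ in range(iters):
        for i in range(n):              # r(i,k) = s(i,k) - max_{k'!=k}(a + s)
            vals = [a[i][k] + s[i][k] for k in range(n)]
            m1 = max(vals)
            k1 = vals.index(m1)
            m2 = max(v for k, v in enumerate(vals) if k != k1)
            for k in range(n):
                best = m2 if k == k1 else m1
                r[i][k] = damping * r[i][k] + (1 - damping) * (s[i][k] - best)
        for k in range(n):              # a(i,k) from positive responsibilities
            pos = [max(0.0, r[i][k]) for i in range(n)]
            tot = sum(pos)
            for i in range(n):
                if i == k:
                    new = tot - pos[k]
                else:
                    new = min(0.0, r[k][k] + tot - pos[k] - pos[i])
                a[i][k] = damping * a[i][k] + (1 - damping) * new
    # each point's exemplar maximizes availability + responsibility
    return [max(range(n), key=lambda k: a[i][k] + r[i][k]) for i in range(n)]

labels = affinity_propagation([0.0, 0.1, 0.2, 10.0, 10.1, 10.2], preference=-0.5)
# two well-separated groups -> each group converges on its own exemplar
```

The O(n^2) similarity matrix and the O(n^2) messages per iteration are exactly the costs that motivate the shared-memory and distributed architectures of the paper.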

  6. Information overload or search-amplified risk? Set size and order effects on decisions from experience.

    Science.gov (United States)

    Hills, Thomas T; Noguchi, Takao; Gibbert, Michael

    2013-10-01

    How do changes in choice-set size influence information search and subsequent decisions? Moreover, does information overload influence information processing with larger choice sets? We investigated these questions by letting people freely explore sets of gambles before choosing one of them, with the choice sets either increasing or decreasing in number for each participant (from two to 32 gambles). Set size influenced information search, with participants taking more samples overall, but sampling a smaller proportion of gambles and taking fewer samples per gamble, when set sizes were larger. The order of choice sets also influenced search, with participants sampling from more gambles and taking more samples overall if they started with smaller as opposed to larger choice sets. Inconsistent with information overload, information processing appeared consistent across set sizes and choice order conditions, reliably favoring gambles with higher sample means. Despite the lack of evidence for information overload, changes in information search did lead to systematic changes in choice: People who started with smaller choice sets were more likely to choose gambles with the highest expected values, but only for small set sizes. For large set sizes, the increase in total samples increased the likelihood of encountering rare events at the same time that the reduction in samples per gamble amplified the effect of these rare events when they occurred-what we call search-amplified risk. This led to riskier choices for individuals whose choices most closely followed the sample mean.

  7. Classification of large-sized hyperspectral imagery using fast machine learning algorithms

    Science.gov (United States)

    Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira

    2017-07-01

We present a framework of fast machine learning algorithms in the context of large-sized hyperspectral image classification from the theoretical to a practical viewpoint. In particular, we assess the performance of random forest (RF), rotation forest (RoF), and extreme learning machine (ELM), and the ensembles of RF and ELM. These classifiers are applied to two large-sized hyperspectral images and compared to support vector machines. To give a quantitative analysis, we pay attention to comparing these methods when working with high input dimensions and a limited/sufficient training set. Moreover, other important issues such as the computational cost and robustness against noise are also discussed.

  8. Word length, set size, and lexical factors: Re-examining what causes the word length effect.

    Science.gov (United States)

    Guitard, Dominic; Gabel, Andrew J; Saint-Aubin, Jean; Surprenant, Aimée M; Neath, Ian

    2018-04-19

The word length effect, better recall of lists of short (fewer syllables) than long (more syllables) words, has been termed a benchmark effect of working memory. Despite this, experiments on the word length effect can yield quite different results depending on set size and stimulus properties. Seven experiments are reported that address these two issues. Experiment 1 replicated the finding of a preserved word length effect under concurrent articulation for large stimulus sets, which contrasts with the abolition of the word length effect by concurrent articulation for small stimulus sets. Experiment 2, however, demonstrated that when the short and long words are equated on more dimensions, concurrent articulation abolishes the word length effect for large stimulus sets. Experiment 3 shows a standard word length effect when output time is equated, but Experiments 4-6 show no word length effect when short and long words are equated on increasingly more dimensions that previous demonstrations have overlooked. Finally, Experiment 7 compared recall of small- and large-neighborhood words that were equated on all the dimensions used in Experiment 6 (except for those directly related to neighborhood size), and a neighborhood size effect was still observed. We conclude that lexical factors, rather than word length per se, are better predictors of when the word length effect will occur. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. Iterative dictionary construction for compression of large DNA data sets.

    Science.gov (United States)

    Kuruppu, Shanika; Beresford-Smith, Bryan; Conway, Thomas; Zobel, Justin

    2012-01-01

    Genomic repositories increasingly include individual as well as reference sequences, which tend to share long identical and near-identical strings of nucleotides. However, the sequential processing used by most compression algorithms, and the volumes of data involved, mean that these long-range repetitions are not detected. An order-insensitive, disk-based dictionary construction method can detect this repeated content and use it to compress collections of sequences. We explore a dictionary construction method that improves repeat identification in large DNA data sets. Our adaptation, COMRAD, of an existing disk-based method identifies exact repeated content in collections of sequences with similarities within and across the set of input sequences. COMRAD compresses the data over multiple passes, which is an expensive process, but allows COMRAD to compress large data sets within reasonable time and space. COMRAD allows for random access to individual sequences and subsequences without decompressing the whole data set. COMRAD has no competitor in terms of the size of data sets that it can compress (extending to many hundreds of gigabytes) and, even for smaller data sets, the results are competitive compared to alternatives; as an example, 39 S. cerevisiae genomes compressed to 0.25 bits per base.
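COMRAD itself is a multi-pass, disk-based method; the kernel of its first stage, finding exact repeated content across a collection of sequences, can be illustrated with an in-memory k-mer dictionary. This is a toy sketch of the idea only (function name and in-memory index are assumptions, not COMRAD's actual data structures):

```python
from collections import defaultdict

def shared_repeats(seqs, k):
    """Index every k-mer in a collection of sequences and return those
    that occur more than once, within or across sequences. Repeated
    k-mers are the seeds a dictionary compressor replaces with
    references to a single stored copy."""
    index = defaultdict(list)
    for sid, seq in enumerate(seqs):
        for pos in range(len(seq) - k + 1):
            index[seq[pos:pos + k]].append((sid, pos))
    return {kmer: hits for kmer, hits in index.items() if len(hits) > 1}
```

The disk-based, order-insensitive construction in COMRAD exists precisely because an index like this does not fit in memory for collections of hundreds of gigabytes.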

  10. Large-scale ocean connectivity and planktonic body size

    KAUST Repository

Villarino, Ernesto; Watson, James R.; Jönsson, Bror; Gasol, Josep M.; Salazar, Guillem; Acinas, Silvia G.; Estrada, Marta; Massana, Ramón; Logares, Ramiro; Giner, Caterina R.; Pernice, Massimo C.; Olivar, M. Pilar; Citores, Leire; Corell, Jon; Rodríguez-Ezpeleta, Naiara; Acuña, José Luis; Molina-Ramírez, Axayacatl; González-Gordillo, J. Ignacio; Cózar, Andrés; Martí, Elisa; Cuesta, José A.; Agusti, Susana; Fraile-Nuez, Eugenio; Duarte, Carlos M.; Irigoien, Xabier; Chust, Guillem

    2018-01-01

    Global patterns of planktonic diversity are mainly determined by the dispersal of propagules with ocean currents. However, the role that abundance and body size play in determining spatial patterns of diversity remains unclear. Here we analyse spatial community structure - β-diversity - for several planktonic and nektonic organisms from prokaryotes to small mesopelagic fishes collected during the Malaspina 2010 Expedition. β-diversity was compared to surface ocean transit times derived from a global circulation model, revealing a significant negative relationship that is stronger than environmental differences. Estimated dispersal scales for different groups show a negative correlation with body size, where less abundant large-bodied communities have significantly shorter dispersal scales and larger species spatial turnover rates than more abundant small-bodied plankton. Our results confirm that the dispersal scale of planktonic and micro-nektonic organisms is determined by local abundance, which scales with body size, ultimately setting global spatial patterns of diversity.

  11. Large-scale ocean connectivity and planktonic body size

    KAUST Repository

    Villarino, Ernesto

    2018-01-04

    Global patterns of planktonic diversity are mainly determined by the dispersal of propagules with ocean currents. However, the role that abundance and body size play in determining spatial patterns of diversity remains unclear. Here we analyse spatial community structure - β-diversity - for several planktonic and nektonic organisms from prokaryotes to small mesopelagic fishes collected during the Malaspina 2010 Expedition. β-diversity was compared to surface ocean transit times derived from a global circulation model, revealing a significant negative relationship that is stronger than environmental differences. Estimated dispersal scales for different groups show a negative correlation with body size, where less abundant large-bodied communities have significantly shorter dispersal scales and larger species spatial turnover rates than more abundant small-bodied plankton. Our results confirm that the dispersal scale of planktonic and micro-nektonic organisms is determined by local abundance, which scales with body size, ultimately setting global spatial patterns of diversity.

  12. Large Data Set Mining

    NARCIS (Netherlands)

    Leemans, I.B.; Broomhall, Susan

    2017-01-01

    Digital emotion research has yet to make history. Until now large data set mining has not been a very active field of research in early modern emotion studies. This is indeed surprising since first, the early modern field has such rich, copyright-free, digitized data sets and second, emotion studies

  13. Large Time Asymptotics for a Continuous Coagulation-Fragmentation Model with Degenerate Size-Dependent Diffusion

    KAUST Repository

    Desvillettes, Laurent

    2010-01-01

    We study a continuous coagulation-fragmentation model with constant kernels for reacting polymers (see [M. Aizenman and T. Bak, Comm. Math. Phys., 65 (1979), pp. 203-230]). The polymers are set to diffuse within a smooth bounded one-dimensional domain with no-flux boundary conditions. In particular, we consider size-dependent diffusion coefficients, which may degenerate for small and large cluster-sizes. We prove that the entropy-entropy dissipation method applies directly in this inhomogeneous setting. We first show the necessary basic a priori estimates in dimension one, and second we show faster-than-polynomial convergence toward global equilibria for diffusion coefficients which vanish not faster than linearly for large sizes. This extends the previous results of [J.A. Carrillo, L. Desvillettes, and K. Fellner, Comm. Math. Phys., 278 (2008), pp. 433-451], which assumes that the diffusion coefficients are bounded below. © 2009 Society for Industrial and Applied Mathematics.
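For reference, the Aizenman-Bak model with constant kernels that the abstract builds on can be written as follows, where f = f(t, x, y) is the concentration of polymers of size y at position x; the precise degeneracy assumptions on the diffusion coefficient a(y) are those stated in the paper and are only indicated here.

```latex
\partial_t f - a(y)\,\partial_{xx} f
  = \underbrace{\int_0^y f(y')\, f(y-y')\,dy'
      - 2 f(y) \int_0^\infty f(y')\,dy'}_{\text{coagulation}}
  + \underbrace{2\int_y^\infty f(y')\,dy' - y\, f(y)}_{\text{fragmentation}},
\qquad
H(f) = \int_\Omega \int_0^\infty \bigl(f \ln f - f\bigr)\,dy\,dx .
```

The entropy-entropy dissipation method mentioned in the abstract controls the decay of H(f) minus its equilibrium value, which is what yields the faster-than-polynomial convergence for diffusion coefficients that do not vanish too fast for large sizes.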

  14. Large-sized seaweed monitoring based on MODIS

    Science.gov (United States)

    Ma, Long; Li, Ying; Lan, Guo-xin; Li, Chuan-long

    2009-10-01

In recent years, large-sized seaweed, such as Ulva lactuca, has bloomed frequently in coastal waters in China, threatening the marine eco-environment. In order to take effective measures, it is important to conduct operational surveillance. A case of large-sized seaweed blooming (i.e. Enteromorpha) that occurred in June 2008 in the sea near Qingdao is studied. Seaweed blooming is dynamically monitored using the Moderate Resolution Imaging Spectroradiometer (MODIS). After analyzing the imaging spectral characteristics of Enteromorpha, MODIS bands 1 and 2 are used to create a band ratio algorithm for detecting and mapping large-sized seaweed blooming. In addition, chlorophyll-α concentration is inverted based on an empirical model developed using MODIS. Chlorophyll-α concentration maps are derived using multitemporal MODIS data, and chlorophyll-α concentration change is analyzed. Results show that the presented methods are useful for capturing the dynamic distribution and growth of large-sized seaweed, and can support contingency planning.
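The abstract says only that MODIS bands 1 (red) and 2 (near-infrared) feed a band-ratio algorithm; the exact index is not given. As an assumed stand-in, an NDVI-like normalized difference separates floating vegetation (NIR reflectance above red) from water:

```python
def detect_seaweed(band1, band2, threshold=0.0):
    """Flag likely seaweed pixels from MODIS band 1 (red) and band 2 (NIR)
    reflectances. The NDVI-like index below is an illustrative assumption,
    not the paper's exact formula."""
    flags = []
    for b1, b2 in zip(band1, band2):
        ndvi = (b2 - b1) / (b2 + b1) if (b2 + b1) else 0.0
        flags.append(ndvi > threshold)   # vegetation reflects NIR strongly
    return flags
```

Applied per pixel over a time series of scenes, such a mask yields the dynamic distribution maps the abstract describes.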

  15. Simultaneous identification of long similar substrings in large sets of sequences

    Directory of Open Access Journals (Sweden)

    Wittig Burghardt

    2007-05-01

Full Text Available Abstract Background Sequence comparison faces new challenges today, with many complete genomes and large libraries of transcripts known. Gene annotation pipelines match these sequences in order to identify genes and their alternative splice forms. However, the software currently available cannot simultaneously compare sets of sequences as large as necessary, especially if errors must be considered. Results We therefore present a new algorithm for the identification of almost perfectly matching substrings in very large sets of sequences. Its implementation, called ClustDB, is considerably faster and can handle 16 times more data than VMATCH, the most memory efficient exact program known today. ClustDB simultaneously generates large sets of exactly matching substrings of a given minimum length as seeds for a novel method of match extension with errors. It generates alignments of maximum length with at most a given number of errors within each overlapping window of a given size. Such alignments are not optimal in the usual sense but faster to calculate and often more appropriate than traditional alignments for genomic sequence comparisons, EST and full-length cDNA matching, and genomic sequence assembly. The method is used to check the overlaps and to reveal possible assembly errors for 1377 Medicago truncatula BAC-size sequences published at http://www.medicago.org/genome/assembly_table.php?chr=1. Conclusion The program ClustDB proves that window alignment is an efficient way to find long sequence sections of homogeneous alignment quality, as expected in case of random errors, and to detect systematic errors resulting from sequence contaminations. Such inserts are systematically overlooked in long alignments controlled by only tuning penalties for mismatches and gaps. ClustDB is freely available for academic use.

  16. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan

    2011-10-10

    Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n 3) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.
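The two-part decomposition described in this abstract can be written out explicitly. With a set of knots S = {s*_1, ..., s*_m}, c(s, S) the vector of covariances between s and the knots, and K_taper a compactly supported taper function (notation assumed here, following the abstract's description):

```latex
C_{\mathrm{fs}}(s, s')
 = \underbrace{c(s, S)^{\top}\, C(S, S)^{-1}\, c(S, s')}_{\text{reduced rank: large-scale part}}
 \; + \;
 \underbrace{\Bigl[\, C(s, s') - c(s, S)^{\top} C(S, S)^{-1} c(S, s') \,\Bigr]\,
   K_{\mathrm{taper}}\bigl(\lVert s - s' \rVert\bigr)}_{\text{tapered residual: small-scale part}} .
```

The first term is low rank (m × m inversion) and the second is sparse (the taper zeroes distant pairs), which is what makes likelihood evaluation and prediction computationally feasible for large n.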

  17. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan; Huang, Jianhua Z.

    2011-01-01

    Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n 3) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.

  18. Large litter sizes

    DEFF Research Database (Denmark)

    Sandøe, Peter; Rutherford, K.M.D.; Berg, Peer

    2012-01-01

    This paper presents some key results and conclusions from a review (Rutherford et al. 2011) undertaken regarding the ethical and welfare implications of breeding for large litter size in the domestic pig and about different ways of dealing with these implications. Focus is primarily on the direct...... possible to achieve a drop in relative piglet mortality and the related welfare problems. However, there will be a growing problem with the need to use foster or nurse sows which may have negative effects on both sows and piglets. This gives rise to new challenges for management....

  19. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. ?? 2005 Elsevier Ltd. All rights reserved.

  20. Metastrategies in large-scale bargaining settings

    NARCIS (Netherlands)

    Hennes, D.; Jong, S. de; Tuyls, K.; Gal, Y.

    2015-01-01

    This article presents novel methods for representing and analyzing a special class of multiagent bargaining settings that feature multiple players, large action spaces, and a relationship among players' goals, tasks, and resources. We show how to reduce these interactions to a set of bilateral

  1. Multivariate modeling of complications with data driven variable selection: Guarding against overfitting and effects of data set size

    International Nuclear Information System (INIS)

    Schaaf, Arjen van der; Xu Chengjian; Luijk, Peter van; Veld, Aart A. van’t; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    Purpose: Multivariate modeling of complications after radiotherapy is frequently used in conjunction with data driven variable selection. This study quantifies the risk of overfitting in a data driven modeling method using bootstrapping for data with typical clinical characteristics, and estimates the minimum amount of data needed to obtain models with relatively high predictive power. Materials and methods: To facilitate repeated modeling and cross-validation with independent datasets for the assessment of true predictive power, a method was developed to generate simulated data with statistical properties similar to real clinical data sets. Characteristics of three clinical data sets from radiotherapy treatment of head and neck cancer patients were used to simulate data with set sizes between 50 and 1000 patients. A logistic regression method using bootstrapping and forward variable selection was used for complication modeling, resulting for each simulated data set in a selected number of variables and an estimated predictive power. The true optimal number of variables and true predictive power were calculated using cross-validation with very large independent data sets. Results: For all simulated data set sizes the number of variables selected by the bootstrapping method was on average close to the true optimal number of variables, but showed considerable spread. Bootstrapping is more accurate in selecting the optimal number of variables than the AIC and BIC alternatives, but this did not translate into a significant difference of the true predictive power. The true predictive power asymptotically converged toward a maximum predictive power for large data sets, and the estimated predictive power converged toward the true predictive power. More than half of the potential predictive power is gained after approximately 200 samples. 
Our simulations demonstrated severe overfitting (a predictive power lower than that of predicting 50% probability) in a number of small
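The overfitting risk quantified above comes from letting the data choose the variables. A much simplified stdlib illustration (not the paper's bootstrapped logistic regression): on pure noise, exhaustively selecting the best single feature, threshold and direction produces an impressive apparent accuracy that collapses to chance on independent data.

```python
import random

random.seed(7)

def noise(n, p):
    """Pure-noise data: features carry no information about the label."""
    X = [[random.random() for _ in range(p)] for _ in range(n)]
    y = [random.randint(0, 1) for _ in range(n)]
    return X, y

def rule_acc(X, y, j, t, sign):
    """Accuracy of the rule: predict 1 iff (x[j] > t) == sign."""
    hits = sum(((x[j] > t) == sign) == bool(lab) for x, lab in zip(X, y))
    return hits / len(y)

Xtr, ytr = noise(30, 20)     # small training set, many candidate variables
Xte, yte = noise(1000, 20)   # large independent test set

# data-driven selection: pick the best feature/threshold/direction on training data
best = (0.0, 0, 0.0, True)
for j in range(20):
    for t in (x[j] for x in Xtr):
        for sign in (True, False):
            acc = rule_acc(Xtr, ytr, j, t, sign)
            if acc > best[0]:
                best = (acc, j, t, sign)

train_acc, j, t, sign = best
test_acc = rule_acc(Xte, yte, j, t, sign)
# train_acc looks impressive on pure noise; test_acc falls back to ~0.5
```

This gap between apparent and true predictive power is exactly what the paper's cross-validation with large independent data sets measures, and why it shrinks as the training set grows.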

  2. A hybrid adaptive large neighborhood search algorithm applied to a lot-sizing problem

    DEFF Research Database (Denmark)

    Muller, Laurent Flindt; Spoorendonk, Simon

This paper presents a hybrid of a general heuristic framework that has been successfully applied to vehicle routing problems and a general purpose MIP solver. The framework uses local search and an adaptive procedure which chooses between a set of large neighborhoods to be searched. A mixed integer...... of a solution and to investigate the feasibility of elements in such a neighborhood. The hybrid heuristic framework is applied to the multi-item capacitated lot sizing problem with dynamic lot sizes, where experiments have been conducted on a series of instances from the literature. On average the heuristic...

  3. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    Science.gov (United States)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
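The "single large object placement problem" that generates each pattern is, for one stock length, an unbounded knapsack: maximize the total dual value of the pieces cut from the stock. This is the standard column-generation pricing step; the sketch below is an illustrative stdlib implementation under that reading, not the paper's exact procedure.

```python
def best_pattern(stock_len, piece_lens, duals):
    """Generate one cutting pattern for a stock of length stock_len by
    unbounded-knapsack DP: maximize the summed dual values of the pieces."""
    best = [0.0] * (stock_len + 1)
    choice = [None] * (stock_len + 1)
    for cap in range(1, stock_len + 1):
        best[cap] = best[cap - 1]          # option: leave one unit as waste
        choice[cap] = choice[cap - 1]
        for i, (l, v) in enumerate(zip(piece_lens, duals)):
            if l <= cap and best[cap - l] + v > best[cap]:
                best[cap] = best[cap - l] + v
                choice[cap] = (i, cap - l)  # piece i, remaining capacity
    # recover the pattern (piece counts) by walking the choices back
    pattern = [0] * len(piece_lens)
    cap = stock_len
    while choice[cap] is not None:
        i, cap = choice[cap]
        pattern[i] += 1
    return best[stock_len], pattern
```

In the PSG's first stage this pricing step runs once per stock size per column-generation iteration, each call contributing one pattern to the pattern set.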

  4. An alternative method for determining particle-size distribution of forest road aggregate and soil with large-sized particles

    Science.gov (United States)

    Hakjun Rhee; Randy B. Foltz; James L. Fridley; Finn Krogstad; Deborah S. Page-Dumroese

    2014-01-01

    Measurement of particle-size distribution (PSD) of soil with large-sized particles (e.g., 25.4 mm diameter) requires a large sample and numerous particle-size analyses (PSAs). A new method is needed that would reduce time, effort, and cost for PSAs of the soil and aggregate material with large-sized particles. We evaluated a nested method for sampling and PSA by...

  5. Large size space construction for space exploitation

    Science.gov (United States)

    Kondyurin, Alexey

    2016-07-01

Space exploitation is impossible without large space structures. We need to make sufficiently large volumes of pressurized protective frames for crew, passengers, space processing equipment, etc. We have to be unlimited in space. Currently the size and mass of space constructions are limited by the capacity of the launch vehicle, which limits our future in the exploitation of space by humans and in the development of space industry. Large-size space constructions can be made using the curing technology of fiber-filled composites with a reactionable matrix applied directly in free space. For curing, the fabric impregnated with a liquid matrix (prepreg) is prepared in terrestrial conditions and shipped in a container to orbit. In due time the prepreg is unfolded by inflating. After the polymerization reaction, the durable construction can be fitted out with air, apparatus and life support systems. Our experimental studies of the curing processes in a simulated free space environment showed that the curing of composites in free space is possible, so large-size space constructions can be developed. Projects for a space station, Moon base, Mars base, mining station, interplanetary spaceship, telecommunication station, space observatory, space factory, antenna dish, radiation shield and solar sail are proposed and overviewed. The study was supported by the Humboldt Foundation, ESA (contract 17083/03/NL/SFe), the NASA stratospheric balloon program and RFBR grants (05-08-18277, 12-08-00970 and 14-08-96011).

  6. Large and abundant flowers increase indirect costs of corollas: a study of coflowering sympatric Mediterranean species of contrasting flower size.

    Science.gov (United States)

    Teixido, Alberto L; Valladares, Fernando

    2013-09-01

    Large floral displays receive more pollinator visits but involve higher production and maintenance costs. This can result in indirect costs which may negatively affect functions like reproductive output. In this study, we explored the relationship between floral display and indirect costs in two pairs of coflowering sympatric Mediterranean Cistus of contrasting flower size. We hypothesized that: (1) corolla production entails direct costs in dry mass, N and P, (2) corollas entail significant indirect costs in terms of fruit set and seed production, (3) indirect costs increase with floral display, (4) indirect costs are greater in larger-flowered sympatric species, and (5) local climatic conditions influence indirect costs. We compared fruit set and seed production of petal-removed flowers and unmanipulated control flowers and evaluated the influence of mean flower number and mean flower size on relative fruit and seed gain of petal-removed and control flowers. Fruit set and seed production were significantly higher in petal-removed flowers in all the studied species. A positive relationship was found between relative fruit gain and mean individual flower size within species. In one pair of species, fruit gain was higher in the large-flowered species, as was the correlation between fruit gain and mean number of open flowers. In the other pair, the correlation between fruit gain and mean flower size was also higher in the large-flowered species. These results reveal that Mediterranean environments impose significant constraints on floral display, counteracting advantages of large flowers from the pollination point of view with increased indirect costs of such flowers.

  7. A scalable method for identifying frequent subtrees in sets of large phylogenetic trees.

    Science.gov (United States)

    Ramu, Avinash; Kahveci, Tamer; Burleigh, J Gordon

    2012-10-03

    We consider the problem of finding the maximum frequent agreement subtrees (MFASTs) in a collection of phylogenetic trees. Existing methods for this problem often do not scale beyond datasets with around 100 taxa. Our goal is to address this problem for datasets with over a thousand taxa and hundreds of trees. We develop a heuristic solution that aims to find MFASTs in sets of many, large phylogenetic trees. Our method works in multiple phases. In the first phase, it identifies small candidate subtrees from the set of input trees which serve as the seeds of larger subtrees. In the second phase, it combines these small seeds to build larger candidate MFASTs. In the final phase, it performs a post-processing step that ensures that we find a frequent agreement subtree that is not contained in a larger frequent agreement subtree. We demonstrate that this heuristic can easily handle data sets with 1000 taxa, greatly extending the estimation of MFASTs beyond current methods. Although this heuristic does not guarantee to find all MFASTs or the largest MFAST, it found the MFAST in all of our synthetic datasets where we could verify the correctness of the result. It also performed well on large empirical data sets. Its performance is robust to the number and size of the input trees. Overall, this method provides a simple and fast way to identify strongly supported subtrees within large phylogenetic hypotheses.
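The phased seed-and-combine strategy can be illustrated with a toy analogue of the first phase: counting how often each clade (the leaf set below an internal node) recurs across the input trees. This is only a sketch of the frequency-counting idea, not the MFAST algorithm itself; the nested-tuple tree encoding and helper names are invented for illustration.

```python
def clades(tree):
    """Collect the leaf set of every internal node of a nested-tuple tree."""
    out = set()

    def walk(node):
        if isinstance(node, tuple):            # internal node
            leaves = frozenset()
            for child in node:
                leaves |= walk(child)
            out.add(leaves)
            return leaves
        return frozenset([node])               # leaf label

    walk(tree)
    return out

def frequent_clades(trees, min_support):
    """Keep clades that appear in at least min_support of the trees."""
    counts = {}
    for t in trees:
        for c in clades(t):
            counts[c] = counts.get(c, 0) + 1
    return {c for c, n in counts.items() if n >= min_support}

# Three small phylogenies over the same taxa, written as nested tuples.
trees = [
    (("A", "B"), ("C", ("D", "E"))),
    (("A", "B"), (("C", "D"), "E")),
    ((("A", "B"), "C"), ("D", "E")),
]
freq = frequent_clades(trees, min_support=3)
print(frozenset({"A", "B"}) in freq)   # the (A,B) clade occurs in all three trees
```

In the real method, such frequent small pieces serve as seeds that are then combined into larger candidate agreement subtrees and post-processed for maximality.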

  8. Technical trends of large-size photomasks for flat panel displays

    Science.gov (United States)

    Yoshida, Koichiro

    2017-06-01

    Flat panel displays (FPDs) are today one of the main components of information-technology devices. From the 1990s to the 2000s, liquid crystal displays (LCDs) and plasma displays were the mainstream FPDs. In the mid-2000s, demand for plasma displays declined and organic light-emitting diodes (OLEDs) entered the FPD market. Today the major FPD technologies are LCDs and OLEDs; for mobile devices in particular, the penetration of OLEDs is remarkable. In FPD panel production, photolithography is the key technology, just as in LSI fabrication. Photomasks for FPDs are used not only as the original masters of circuit patterns, but also as tools to form other functional structures of FPDs. They are called "large-size photomasks" (LSPMs), since their most remarkable feature is size, which reaches over one meter square and over 100 kg. In this report, we discuss three LSPM technical topics in the context of the technical transition and trends of FPDs: the upsizing of LSPMs, the challenge of higher-resolution patterning, and the "multi-tone mask" for "half-tone exposure".

  9. Using SETS to find minimal cut sets in large fault trees

    International Nuclear Information System (INIS)

    Worrell, R.B.; Stack, D.W.

    1978-01-01

    An efficient algebraic algorithm for finding the minimal cut sets of a large fault tree was defined, and a new procedure implementing the algorithm was added to the Set Equation Transformation System (SETS). The algorithm includes the identification and separate processing of independent subtrees, the coalescing of consecutive gates of the same kind, the creation of additional independent subtrees, and the derivation of the fault-tree stem equation in stages. The computer time required to determine the minimal cut sets using these techniques is shown to be substantially less than the time required when they are not employed: for one example, the execution time was reduced from 7,686 seconds to 7 seconds when all of the techniques were applied.
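The abstract does not reproduce the SETS algebra itself, but the underlying task of reducing an AND/OR fault tree to minimal cut sets can be sketched with a standard MOCUS-style top-down expansion (a simpler procedure than the one described; the gate names and example tree are hypothetical).

```python
def minimal_cut_sets(gates, top):
    """Expand the top gate of a fault tree into its minimal cut sets.

    `gates` maps a gate name to ("AND" | "OR", [inputs]); any name not in
    `gates` is treated as a basic event.
    """
    def expand(name):
        if name not in gates:
            return [frozenset([name])]
        kind, inputs = gates[name]
        child_sets = [expand(i) for i in inputs]
        if kind == "OR":                        # union of the children's cut sets
            return [cs for sets in child_sets for cs in sets]
        result = [frozenset()]                  # AND: cross-product of children
        for sets in child_sets:
            result = [a | b for a in result for b in sets]
        return result

    cuts = expand(top)
    # Minimization: drop any cut set that strictly contains another.
    return {c for c in cuts if not any(o < c for o in cuts)}

tree = {
    "TOP": ("OR", ["G1", "E3"]),
    "G1":  ("AND", ["E1", "G2"]),
    "G2":  ("OR", ["E2", "E3"]),
}
print(sorted(sorted(c) for c in minimal_cut_sets(tree, "TOP")))
# → [['E1', 'E2'], ['E3']]  ({E1, E3} is absorbed by the smaller cut set {E3})
```

The SETS techniques listed in the abstract (independent subtrees, gate coalescing, staged stem-equation derivation) are precisely ways of keeping this combinatorial expansion tractable for large trees.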

  10. Analyzing ROC curves using the effective set-size model

    Science.gov (United States)

    Samuelson, Frank W.; Abbey, Craig K.; He, Xin

    2018-03-01

    The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical
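The model's ROC curve follows directly from the max-of-M* rule stated above: on a noise image all M* location responses are N(0,1), while on a signal image one location is shifted by d'. A minimal sketch under those standard signal-detection assumptions (not the authors' ordinal-regression fitting code):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def roc_point(threshold, d_prime, m_star):
    """One ROC operating point under a max-of-M* decision rule.

    The image rating is the max of M* location responses: all N(0,1) on a
    noise image; one location shifted to N(d', 1) on a signal image.
    """
    fpf = 1.0 - phi(threshold) ** m_star
    tpf = 1.0 - phi(threshold) ** (m_star - 1) * phi(threshold - d_prime)
    return fpf, tpf

# A larger M* (more searchable locations) pushes the max of the noise
# responses up, so the same threshold yields more false positives.
fpf2, _ = roc_point(1.5, d_prime=2.0, m_star=2)
fpf8, _ = roc_point(1.5, d_prime=2.0, m_star=8)
print(fpf2 < fpf8)   # True
```

Sweeping the threshold traces the full ROC curve, which is how the two parameters (d', M*) separate diagnostic detectability from search/location uncertainty.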

  11. Effects of Group Size on Students Mathematics Achievement in Small Group Settings

    Science.gov (United States)

    Enu, Justice; Danso, Paul Amoah; Awortwe, Peter K.

    2015-01-01

    An ideal group size is hard to obtain in small group settings; hence there are groups with more members than others. The purpose of the study was to find out whether group size has any effects on students' mathematics achievement in small group settings. Two third year classes of the 2011/2012 academic year were selected from two schools in the…

  12. Experimental study on propagation properties of large size TEM antennas

    International Nuclear Information System (INIS)

    Zhang Guowei; Wang Haiyang; Chen Weiqing; Wang Wei; Zhu Xiangqin; Xie Linshen

    2014-01-01

    The propagation properties of large-size TEM antennas were studied experimentally. The antenna measures 60 m × 20 m × 10 m and its characteristic impedance is 120 Ω. A dielectric foil switch was designed to integrate compactly with the TEM antenna; it generates a double-exponential waveform with an amplitude of 10 kV and a rise time of 1.2 ns. The radiated field distribution was measured, the relationships between rise time/amplitude and distance were obtained, and the propagation properties of large-size TEM antennas were summarized. (authors)

  13. Processing and properties of large-sized ceramic slabs

    Energy Technology Data Exchange (ETDEWEB)

    Raimondo, M.; Dondi, M.; Zanelli, C.; Guarini, G.; Gozzi, A.; Marani, F.; Fossa, L.

    2010-07-01

    Large-sized ceramic slabs with dimensions up to 360 × 120 cm² and thickness down to 2 mm are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability over a rather wide window of firing schedules. The phase composition and compact microstructure of the fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performance, and mechanical and tribological properties conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestows on the slab a certain degree of flexibility, which is enhanced in ceramic-fiberglass composites. These outstanding properties make large-sized slabs suitable for novel applications: building and construction (new floorings laid without dismantling the previous paving, ventilated facades, tunnel coverings, insulating panelling), indoor furniture (table tops, doors), and supports for photovoltaic ceramic panels. (Author) 24 refs.

  14. Processing and properties of large-sized ceramic slabs

    International Nuclear Information System (INIS)

    Raimondo, M.; Dondi, M.; Zanelli, C.; Guarini, G.; Gozzi, A.; Marani, F.; Fossa, L.

    2010-01-01

    Large-sized ceramic slabs with dimensions up to 360 × 120 cm² and thickness down to 2 mm are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability over a rather wide window of firing schedules. The phase composition and compact microstructure of the fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performance, and mechanical and tribological properties conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestows on the slab a certain degree of flexibility, which is enhanced in ceramic-fiberglass composites. These outstanding properties make large-sized slabs suitable for novel applications: building and construction (new floorings laid without dismantling the previous paving, ventilated facades, tunnel coverings, insulating panelling), indoor furniture (table tops, doors), and supports for photovoltaic ceramic panels. (Author) 24 refs.

  15. Technological Aspects of Creating Large-size Optical Telescopes

    Directory of Open Access Journals (Sweden)

    V. V. Sychev

    2015-01-01

    The concept behind a telescope depends, first of all, on the choice of optical scheme, which must form optical radiation and images with minimum loss of energy and information, and on a design that meets requirements for strength, stiffness, and stabilization under real operating conditions. The concept of creating large-size telescopes therefore necessarily involves the methods and means of adaptive optics. Successful development of large-size optical telescopes depends in many respects on the technological capability to realize scientific and engineering ideas; all developers pursue the same aim of increasing the amount of information gathered by enlarging the telescope's main mirror diameter. The article analyses the adaptive telescope designs developed in our country and, using the domestic ACT-25 telescope as an example, considers the creation of large-size optical telescopes from a technological standpoint. It also describes the features of the telescope-creation concept that allow marginally possible characteristics to be reached so as to ensure the maximum amount of information. The article compares a wide range of large-size telescope projects and shows that the domestic project to create the adaptive ACT-25 super-telescope surpasses its foreign counterparts, and that there is no sense in implementing the Euro50 (50 m) and OWL (100 m) projects. The material gives a clear understanding of the role of technological aspects in the development of such complicated opto-electronic complexes as large-size optical telescopes. The technological assessment criteria proposed in the article, namely the specific information content of the telescope, its specific mass, and its specific cost, allow weaknesses in a project to be revealed and define the reserve for further improvement of the telescope. The analysis of results and their judgment have shown that improvement of optical large-size telescopes in terms of their maximum

  16. Ssecrett and neuroTrace: Interactive visualization and analysis tools for large-scale neuroscience data sets

    KAUST Repository

    Jeong, Wonki; Beyer, Johanna; Hadwiger, Markus; Blue, Rusty; Law, Charles; Vázquez Reina, Amelio; Reid, Rollie Clay; Lichtman, Jeff W M D; Pfister, Hanspeter

    2010-01-01

    Recent advances in optical and electron microscopy let scientists acquire extremely high-resolution images for neuroscience research. Data sets imaged with modern electron microscopes can range between tens of terabytes to about one petabyte. These large data sizes and the high complexity of the underlying neural structures make it very challenging to handle the data at reasonably interactive rates. To provide neuroscientists flexible, interactive tools, the authors introduce Ssecrett and NeuroTrace, two tools they designed for interactive exploration and analysis of large-scale optical- and electron-microscopy images to reconstruct complex neural circuits of the mammalian nervous system. © 2010 IEEE.

  17. Ssecrett and neuroTrace: Interactive visualization and analysis tools for large-scale neuroscience data sets

    KAUST Repository

    Jeong, Wonki

    2010-05-01

    Recent advances in optical and electron microscopy let scientists acquire extremely high-resolution images for neuroscience research. Data sets imaged with modern electron microscopes can range between tens of terabytes to about one petabyte. These large data sizes and the high complexity of the underlying neural structures make it very challenging to handle the data at reasonably interactive rates. To provide neuroscientists flexible, interactive tools, the authors introduce Ssecrett and NeuroTrace, two tools they designed for interactive exploration and analysis of large-scale optical- and electron-microscopy images to reconstruct complex neural circuits of the mammalian nervous system. © 2010 IEEE.

  18. Reducing Information Overload in Large Seismic Data Sets

    Energy Technology Data Exchange (ETDEWEB)

    HAMPTON,JEFFERY W.; YOUNG,CHRISTOPHER J.; MERCHANT,BION J.; CARR,DORTHE B.; AGUILAR-CHANG,JULIO

    2000-08-02

    Event catalogs for seismic data can become very large. Furthermore, as researchers collect multiple catalogs and reconcile them into a single catalog that is stored in a relational database, the reconciled set becomes even larger. The sheer number of these events makes searching for relevant events to compare with events of interest problematic. Information overload in this form can lead to the data sets being under-utilized and/or used incorrectly or inconsistently. Thus, efforts have been initiated to research techniques and strategies for helping researchers to make better use of large data sets. In this paper, the authors present their efforts to do so in two ways: (1) the Event Search Engine, which is a waveform correlation tool, and (2) some content analysis tools, which are a combination of custom-built and commercial off-the-shelf tools for accessing, managing, and querying seismic data stored in a relational database. The current Event Search Engine is based on a hierarchical clustering tool known as the dendrogram tool, which is written as a MatSeis graphical user interface. The dendrogram tool allows the user to build dendrogram diagrams for a set of waveforms by controlling phase windowing, down-sampling, filtering, enveloping, and the clustering method (e.g. single linkage, complete linkage, flexible method). It also allows the clustering to be based on two or more stations simultaneously, which is important to bridge gaps in the sparsely recorded event sets anticipated in such a large reconciled event set. Current efforts are focusing on tools to help the researcher winnow the clusters defined using the dendrogram tool down to the minimum optimal identification set. This will become critical as the number of reference events in the reconciled event set continually grows. The dendrogram tool is part of the MatSeis analysis package, which is available on the Nuclear Explosion Monitoring Research and Engineering Program Web Site. As part of the research
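The dendrogram tool itself ships with MatSeis, but the core idea it implements (clustering waveforms by correlation distance with single linkage) can be sketched generically. The distance measure and helper names below are illustrative, not the tool's actual API.

```python
import math

def corr_distance(a, b):
    """1 - normalized cross-correlation at zero lag (a crude waveform distance)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return 1.0 - num / den

def single_linkage(waveforms):
    """Repeatedly merge the two closest clusters; return the merge order."""
    clusters = {i: [i] for i in range(len(waveforms))}

    def dist(c1, c2):
        # Single linkage: distance between clusters is their closest pair.
        return min(corr_distance(waveforms[i], waveforms[j])
                   for i in clusters[c1] for j in clusters[c2])

    merges = []
    while len(clusters) > 1:
        a, b = min(((a, b) for a in clusters for b in clusters if a < b),
                   key=lambda p: dist(*p))
        merges.append((sorted(clusters[a] + clusters[b]), dist(a, b)))
        clusters[a] = clusters[a] + clusters.pop(b)
    return merges

# Two near-identical "events" and one dissimilar one: the similar pair merges first.
w = [[0, 1, 2, 1, 0], [0, 1, 2, 1.1, 0], [2, 0, 1, 0, 2]]
print(single_linkage(w)[0][0])   # [0, 1]
```

The merge heights recorded here are exactly what a dendrogram diagram plots; cutting the tree at a chosen height yields the event clusters.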

  19. Spatial occupancy models for large data sets

    Science.gov (United States)

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

    Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108,000 km²) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.

  20. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors, with an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm … applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter resulting two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein data set. The suggested methodology is fairly general …
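The simplest member of the ABC family conveys the idea behind such ABC-MCMC schemes: draw parameters from the prior, simulate data from the model, and keep draws whose summary statistic lands close to the observed one. Below is a toy rejection-sampling sketch with an invented Gaussian forward model; the paper's SDE model and MCMC coupling are not reproduced here.

```python
import random
import statistics

def simulate(theta, n=200, noise=0.5, rng=random):
    """Toy forward model: observations are theta plus Gaussian measurement error."""
    return [theta + rng.gauss(0.0, noise) for _ in range(n)]

def abc_rejection(observed, prior_draw, n_draws=2000, eps=0.05, rng=random):
    """Keep prior draws whose simulated summary statistic (here: the mean)
    lands within eps of the observed one — plain ABC rejection."""
    s_obs = statistics.mean(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        s_sim = statistics.mean(simulate(theta, rng=rng))
        if abs(s_sim - s_obs) < eps:
            accepted.append(theta)
    return accepted

rng = random.Random(1)
data = simulate(theta=1.0, rng=rng)                       # "observed" data
post = abc_rejection(data, prior_draw=lambda r: r.uniform(-5, 5), rng=rng)
print(abs(statistics.mean(post) - 1.0) < 0.3)             # concentrates near the truth
```

Rejection ABC wastes most prior draws; wrapping the same accept/reject check inside an MCMC proposal chain, as the paper does, keeps the sampler near high-acceptance regions and is what makes ABC feasible for large data sets.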

  1. Visual exposure to large and small portion sizes and perceptions of portion size normality: Three experimental studies.

    Science.gov (United States)

    Robinson, Eric; Oldham, Melissa; Cuckson, Imogen; Brunstrom, Jeffrey M; Rogers, Peter J; Hardman, Charlotte A

    2016-03-01

    Portion sizes of many foods have increased in recent times. In three studies we examined the effect that repeated visual exposure to larger versus smaller food portion sizes has on perceptions of what constitutes a normal-sized food portion and measures of portion size selection. In studies 1 and 2 participants were visually exposed to images of large or small portions of spaghetti bolognese, before making evaluations about an image of an intermediate sized portion of the same food. In study 3 participants were exposed to images of large or small portions of a snack food before selecting a portion size of snack food to consume. Across the three studies, visual exposure to larger as opposed to smaller portion sizes resulted in participants considering a normal portion of food to be larger than a reference intermediate sized portion. In studies 1 and 2 visual exposure to larger portion sizes also increased the size of self-reported ideal meal size. In study 3 visual exposure to larger portion sizes of a snack food did not affect how much of that food participants subsequently served themselves and ate. Visual exposure to larger portion sizes may adjust visual perceptions of what constitutes a 'normal' sized portion. However, we did not find evidence that visual exposure to larger portions altered snack food intake. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. The influence of spatial grain size on the suitability of the higher-taxon approach in continental priority-setting

    DEFF Research Database (Denmark)

    Larsen, Frank Wugt; Rahbek, Carsten

    2005-01-01

    The higher-taxon approach may provide a pragmatic surrogate for the rapid identification of priority areas for conservation. To date, no continent-wide study has examined the use of higher-taxon data to identify complementarity-based networks of priority areas, nor has the influence of spatial grain size been assessed. We used data obtained from 939 sub-Saharan mammals to analyse the performance of higher-taxon data for continental priority-setting and to assess the influence of spatial grain size in terms of the size of selection units (1° × 1°, 2° × 2° and 4° × 4° latitudinal grid cells). While family-based priority areas perform … as effectively as species-based priority areas, genus-based areas perform considerably less effectively than species-based areas for the 1° and 2° grain sizes. Thus, our results favour the higher-taxon approach for continental priority-setting only when large grain sizes (≥ 4°) are used.
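Complementarity-based priority-setting of the kind evaluated here is typically driven by a greedy set-cover heuristic: repeatedly pick the selection unit that adds the most not-yet-represented taxa. A minimal sketch with hypothetical grid cells and genera (not the authors' selection software):

```python
def greedy_complementarity(areas, targets):
    """Pick areas one at a time, each adding the most uncovered taxa,
    until every target taxon is represented (greedy set cover)."""
    uncovered = set(targets)
    chosen = []
    while uncovered:
        best = max(areas, key=lambda a: len(areas[a] & uncovered))
        if not areas[best] & uncovered:
            break                      # remaining taxa occur in no area
        chosen.append(best)
        uncovered -= areas[best]
    return chosen

# Hypothetical 1-degree grid cells with the genera recorded in each.
cells = {
    "c1": {"Panthera", "Loxodonta", "Papio"},
    "c2": {"Panthera", "Giraffa"},
    "c3": {"Giraffa", "Okapia"},
}
order = greedy_complementarity(
    cells, {"Panthera", "Loxodonta", "Papio", "Giraffa", "Okapia"})
print(order)   # ['c1', 'c3'] covers all five genera
```

Running the same procedure at species, genus, or family level (and at different grid-cell grain sizes) is what allows the resulting networks to be compared, as in the study above.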

  3. Synthesis of mesoporous carbon nanoparticles with large and tunable pore sizes

    Science.gov (United States)

    Liu, Chao; Yu, Meihua; Li, Yang; Li, Jiansheng; Wang, Jing; Yu, Chengzhong; Wang, Lianjun

    2015-07-01

    Mesoporous carbon nanoparticles (MCNs) with large and adjustable pores have been synthesized by using poly(ethylene oxide)-b-polystyrene (PEO-b-PS) as a template and resorcinol-formaldehyde (RF) as a carbon precursor. The resulting MCNs possess small diameters (100-126 nm) and high BET surface areas (up to 646 m² g⁻¹). By using home-designed block copolymers, the pore size of MCNs can be tuned in the range of 13-32 nm. Importantly, the pore size of 32 nm is the largest among the MCNs prepared by the soft-templating route. The formation mechanism and structure evolution of MCNs were studied by TEM and DLS measurements, based on which a soft-templating/sphere-packing mechanism was proposed. Because of the large pores and small particle sizes, the resulting MCNs were excellent nano-carriers to deliver biomolecules into cancer cells, and were further demonstrated to have negligible toxicity. It is anticipated that this carbon material with large pores and small particle sizes may have excellent potential in drug/gene delivery.

  4. Improving small RNA-seq by using a synthetic spike-in set for size-range quality control together with a set for data normalization.

    Science.gov (United States)

    Locati, Mauro D; Terpstra, Inez; de Leeuw, Wim C; Kuzak, Mateusz; Rauwerda, Han; Ensink, Wim A; van Leeuwen, Selina; Nehrdich, Ulrike; Spaink, Herman P; Jonker, Martijs J; Breit, Timo M; Dekker, Rob J

    2015-08-18

    There is an increasing interest in complementing RNA-seq experiments with small-RNA (sRNA) expression data to obtain a comprehensive view of a transcriptome. Currently, two main experimental challenges concerning sRNA-seq exist: how to check the size distribution of isolated sRNAs, given the sensitive size-selection steps in the protocol; and how to normalize data between samples, given the low complexity of sRNA types. We here present two separate sets of synthetic RNA spike-ins for monitoring size-selection and for performing data normalization in sRNA-seq. The size-range quality control (SRQC) spike-in set, consisting of 11 oligoribonucleotides (10-70 nucleotides), was tested by intentionally altering the size-selection protocol and verified via several comparative experiments. We demonstrate that the SRQC set is useful to reproducibly track down biases in the size-selection in sRNA-seq. The external reference for data-normalization (ERDN) spike-in set, consisting of 19 oligoribonucleotides, was developed for sample-to-sample normalization in differential-expression analysis of sRNA-seq data. Testing and applying the ERDN set showed that it can reproducibly detect differential expression over a dynamic range of 2^18. Hence, biological variation in sRNA composition and content between samples is preserved while technical variation is effectively minimized. Together, both spike-in sets can significantly improve the technical reproducibility of sRNA-seq. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
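Sample-to-sample normalization against a spike-in set like ERDN can be sketched as a median-of-ratios calculation: scale each sample so that its spike-in counts line up with a reference sample's. The function and example counts below are illustrative, not the authors' pipeline.

```python
from statistics import median

def spike_in_size_factors(spike_counts):
    """Per-sample scaling factors from external spike-in counts: the median,
    across spike-ins, of each sample's count ratio to a reference sample."""
    reference = spike_counts[0]
    factors = []
    for sample in spike_counts:
        ratios = [s / r for s, r in zip(sample, reference) if r > 0]
        factors.append(median(ratios))
    return factors

# Rows = samples, columns = counts for the same spike-in oligos in each sample.
spikes = [[100, 200, 50], [200, 400, 100], [50, 100, 25]]
factors = spike_in_size_factors(spikes)

# Dividing every count in a sample by its factor puts samples on a common scale.
normalized = [[c / f for c in row] for row, f in zip(spikes, factors)]
print(factors)          # [1.0, 2.0, 0.5]
print(normalized[1])    # [100.0, 200.0, 50.0]
```

Because the scaling is anchored to external spike-ins rather than to the biological reads, library-wide shifts in sRNA composition between samples are preserved, which is the point of the ERDN design.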

  5. Visual exposure to large and small portion sizes and perceptions of portion size normality: Three experimental studies

    OpenAIRE

    Robinson, Eric; Oldham, Melissa; Cuckson, Imogen; Brunstrom, Jeffrey M.; Rogers, Peter J.; Hardman, Charlotte A.

    2016-01-01

    Portion sizes of many foods have increased in recent times. In three studies we examined the effect that repeated visual exposure to larger versus smaller food portion sizes has on perceptions of what constitutes a normal-sized food portion and measures of portion size selection. In studies 1 and 2 participants were visually exposed to images of large or small portions of spaghetti bolognese, before making evaluations about an image of an intermediate sized portion of the same food. In study ...

  6. Calculations of safe collimator settings and β^{*} at the CERN Large Hadron Collider

    Directory of Open Access Journals (Sweden)

    R. Bruce

    2015-06-01

    The first run of the Large Hadron Collider (LHC) at CERN was very successful and resulted in important physics discoveries. One way of increasing the luminosity in a collider, which gave a very significant contribution to the LHC performance in the first run and can be used even if the beam intensity cannot be increased, is to decrease the transverse beam size at the interaction points by reducing the optical function β*. However, when doing so, the beam becomes larger in the final focusing system, which could expose its aperture to beam losses. For the LHC, which is designed to store beams with a total energy of 362 MJ, this is critical, since the loss of even a small fraction of the beam could cause a magnet quench or even damage. Therefore, the machine aperture has to be protected by the collimation system. The settings of the collimators constrain the maximum beam size that can be tolerated and therefore impose a lower limit on β*. In this paper, we present calculations to determine safe collimator settings and the resulting limit on β*, based on available aperture and operational stability of the machine. Our model was used to determine the LHC configurations in 2011 and 2012 and it was found that β* could be decreased significantly compared to the conservative model used in 2010. The gain in luminosity resulting from the decreased margins between collimators was more than a factor 2, and a further contribution from the use of realistic aperture estimates based on measurements was almost as large. This has played an essential role in the rapid and successful accumulation of experimental data in the LHC.
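The constraint chain described above runs through the local beam size: in the ultra-relativistic limit σ = sqrt(β·ε_N/γ), so the large β reached in the final focusing system when β* is squeezed inflates σ and eats into the margin between the collimator jaws and the aperture. A back-of-the-envelope sketch with nominal LHC-like numbers (illustrative values only, not the settings computed in the paper):

```python
import math

def beam_sigma(beta_m, eps_norm_m, gamma_rel):
    """Transverse RMS beam size: sigma = sqrt(beta * eps_norm / gamma_rel),
    taking the relativistic beta factor as ~1."""
    return math.sqrt(beta_m * eps_norm_m / gamma_rel)

def halfgap_in_sigmas(halfgap_m, beta_m, eps_norm_m, gamma_rel):
    """Express a collimator half-gap in units of the local beam sigma."""
    return halfgap_m / beam_sigma(beta_m, eps_norm_m, gamma_rel)

# Illustrative inputs: 3.75 um normalized emittance, 7 TeV protons
# (gamma ~ 7461), beta = 150 m at the collimator, 2 mm half-gap.
sigma = beam_sigma(150.0, 3.75e-6, 7461.0)
print(round(sigma * 1e3, 3), "mm")
print(round(halfgap_in_sigmas(2e-3, 150.0, 3.75e-6, 7461.0), 1), "sigma")
```

Expressing gaps in units of σ is why collimator settings translate directly into a minimum tolerable β*: shrinking β* raises β (and hence σ) in the triplets until the protected aperture margin is used up.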

  7. Calculations of safe collimator settings and β* at the CERN Large Hadron Collider

    Science.gov (United States)

    Bruce, R.; Assmann, R. W.; Redaelli, S.

    2015-06-01

    The first run of the Large Hadron Collider (LHC) at CERN was very successful and resulted in important physics discoveries. One way of increasing the luminosity in a collider, which gave a very significant contribution to the LHC performance in the first run and can be used even if the beam intensity cannot be increased, is to decrease the transverse beam size at the interaction points by reducing the optical function β*. However, when doing so, the beam becomes larger in the final focusing system, which could expose its aperture to beam losses. For the LHC, which is designed to store beams with a total energy of 362 MJ, this is critical, since the loss of even a small fraction of the beam could cause a magnet quench or even damage. Therefore, the machine aperture has to be protected by the collimation system. The settings of the collimators constrain the maximum beam size that can be tolerated and therefore impose a lower limit on β*. In this paper, we present calculations to determine safe collimator settings and the resulting limit on β*, based on available aperture and operational stability of the machine. Our model was used to determine the LHC configurations in 2011 and 2012 and it was found that β* could be decreased significantly compared to the conservative model used in 2010. The gain in luminosity resulting from the decreased margins between collimators was more than a factor 2, and a further contribution from the use of realistic aperture estimates based on measurements was almost as large. This has played an essential role in the rapid and successful accumulation of experimental data in the LHC.

  8. Large Sets in Boolean and Non-Boolean Groups and Topology

    Directory of Open Access Journals (Sweden)

    Ol’ga V. Sipacheva

    2017-10-01

    Full Text Available Various notions of large sets in groups, including the classical notions of thick, syndetic, and piecewise syndetic sets and the new notion of vast sets in groups, are studied with emphasis on the interplay between such sets in Boolean groups. Natural topologies closely related to vast sets are considered; as a byproduct, interesting relations between vast sets and ultrafilters are revealed.

  9. Reliable pipeline repair system for very large pipe size

    Energy Technology Data Exchange (ETDEWEB)

    Charalambides, John N.; Sousa, Alexandre Barreto de [Oceaneering International, Inc., Houston, TX (United States)

    2004-07-01

    The oil and gas industry worldwide has mainly depended on the long-term reliability of rigid pipelines to ensure the transportation of hydrocarbons, crude oil, gas, fuel, etc. Many other methods are also utilized onshore and offshore (e.g. flexible lines, FPSO's, etc.), but when it comes to the underwater transportation of very high volumes of oil and gas, the industry commonly uses large size rigid pipelines (i.e. steel pipes). Oil and gas operators have learned to depend on the long-lasting integrity of these very large pipelines, and many times they forget or disregard that even steel pipelines degrade over time and, more often than that, are also susceptible to various forms of damage (minor or major, environmental or external, etc.). Over recent years the industry has recognized the need to implement an 'emergency repair plan' to account for such unforeseen events, and oil and gas operators have become 'smarter' by being 'pro-active' in order to ensure 'flow assurance'. When we consider very large diameter steel pipelines such as 42'' and 48'' nominal pipe size (NPS), the industry worldwide does not provide 'ready-made', 'off-the-shelf' repair hardware that can be easily shipped to the offshore location to effect a major repair within acceptable time frames and avoid substantial profit losses due to 'down-time' in production. The typical time required to establish a solid repair system for large pipe diameters can be as long as six or more months (depending on the availability of raw materials). This paper will present in detail the Emergency Pipeline Repair Systems (EPRS) that Oceaneering successfully designed, manufactured, tested and provided to two major oil and gas operators, located in two different continents (Gulf of Mexico, U.S.A. and Arabian Gulf, U.A.E.), for two different very large pipe sizes (42'' and 48'' Nominal Pipe Sizes

  10. Multidimensional scaling for large genomic data sets

    Directory of Open Access Journals (Sweden)

    Lu Henry

    2008-04-01

    Full Text Available Abstract Background Multi-dimensional scaling (MDS) aims to represent high dimensional data in a low dimensional space with preservation of the similarities between data points. This reduction in dimensionality is crucial for analyzing and revealing the genuine structure hidden in the data. For noisy data, dimension reduction can effectively reduce the effect of noise on the embedded structure. For large data sets, dimension reduction can effectively reduce information retrieval complexity. Thus, MDS techniques are used in many applications of data mining and gene network research. However, although there have been a number of studies that applied MDS techniques to genomics research, the number of analyzed data points was restricted by the high computational complexity of MDS. In general, a non-metric MDS method is faster than a metric MDS, but it does not preserve the true relationships. The computational complexity of most metric MDS methods is over O(N^2), so that it is difficult to process a data set with a large number of genes N, such as in the case of whole genome microarray data. Results We developed a new rapid metric MDS method with a low computational complexity, making metric MDS applicable for large data sets. Computer simulation showed that the new method of split-and-combine MDS (SC-MDS) is fast, accurate and efficient. Our empirical studies using microarray data on the yeast cell cycle showed that the performance of K-means in the reduced dimensional space is similar to or slightly better than that of K-means in the original space, but about three times faster to obtain the clustering results. Our clustering results using SC-MDS are more stable than those in the original space. Hence, the proposed SC-MDS is useful for analyzing whole genome data. Conclusion Our new method reduces the computational complexity from O(N^3) to O(N) when the dimension of the feature space is far less than the number of genes N, and it successfully

  11. Higher albedos and size distribution of large transneptunian objects

    Science.gov (United States)

    Lykawka, Patryk Sofia; Mukai, Tadashi

    2005-11-01

    Transneptunian objects (TNOs) orbit beyond Neptune and offer important clues about the formation of our solar system. Although observations have been increasing the number of discovered TNOs and improving their orbital elements, very little is known about elementary physical properties such as sizes, albedos and compositions. Due to TNOs' large distances (>40 AU) and observational limitations, reliable physical information can be obtained only from brighter objects (supposedly larger bodies). According to the size and albedo measurements available, it is evident that the traditionally assumed albedo p=0.04 cannot hold for all TNOs, especially those with absolute magnitudes H⩽5.5. That is, the largest TNOs possess higher albedos (generally >0.04) that strongly appear to increase as a function of size. Using a compilation of published data, we derived empirical relations which provide estimates of diameters and albedos as a function of absolute magnitude. The calculations result in more accurate size/albedo estimates for TNOs with H⩽5.5 than simply assuming p=0.04. Nevertheless, considering the low statistics, the value p=0.04 remains convenient for H>5.5 non-binary TNOs as a group. We also discuss the physical processes (e.g., collisions, intrinsic activity and the presence of tenuous atmospheres) responsible for the increase of albedo among large bodies. Currently, all big TNOs (>700 km) would be capable of sustaining thin atmospheres or icy frosts composed of CH4, CO or N2 even for body bulk densities as low as 0.5 g cm^-3. A size-dependent albedo has important consequences for the TNO size distribution, cumulative luminosity function and total mass estimations. According to our analysis, the latter can be reduced by up to 50% if higher albedos are common among large bodies. Lastly, by analyzing orbital properties of classical TNOs (42 AU ... bodies. For both populations, distinct absolute magnitude distributions are maximized for an inclination threshold

  12. Spin-torque oscillation in large size nano-magnet with perpendicular magnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Linqiang, E-mail: LL6UK@virginia.edu [Department of Physics, University of Virginia, Charlottesville, VA 22904 (United States); Kabir, Mehdi [Department of Electrical & Computer Engineering, University of Virginia, Charlottesville, VA 22904 (United States); Dao, Nam; Kittiwatanakul, Salinporn [Department of Materials Science & Engineering, University of Virginia, Charlottesville, VA 22904 (United States); Cyberey, Michael [Department of Electrical Engineering, University of Virginia, Charlottesville, VA 22904 (United States); Wolf, Stuart A. [Department of Physics, University of Virginia, Charlottesville, VA 22904 (United States); Department of Materials Science & Engineering, University of Virginia, Charlottesville, VA 22904 (United States); Institute of Defense Analyses, Alexandria, VA 22311 (United States); Stan, Mircea [Department of Electrical & Computer Engineering, University of Virginia, Charlottesville, VA 22904 (United States); Lu, Jiwei [Department of Materials Science & Engineering, University of Virginia, Charlottesville, VA 22904 (United States)

    2017-06-15

    Highlights: • A 500 nm nano-pillar device was fabricated by photolithography techniques. • A magnetic hybrid structure was achieved with perpendicular magnetic fields. • Spin torque switching and oscillation were demonstrated in the large-sized device. • Micromagnetic simulations accurately reproduced the experimental results. • Simulations demonstrated the synchronization of magnetic inhomogeneities. - Abstract: DC current induced magnetization reversal and magnetization oscillation were observed in 500 nm large size Co90Fe10/Cu/Ni80Fe20 pillars. A perpendicular external field enhanced the coercive field separation between the reference layer (Co90Fe10) and free layer (Ni80Fe20) in the pseudo spin valve, allowing a large window of external magnetic field for exploring the free-layer reversal. A magnetic hybrid structure was achieved for the study of spin torque oscillation by applying a perpendicular field >3 kOe. The magnetization precession was manifested in terms of the multiple peaks on the differential resistance curves. Depending on the bias current and applied field, the regions of magnetic switching and magnetization precession on a dynamical stability diagram have been discussed in detail. Micromagnetic simulations are shown to be in good agreement with experimental results and provide insight into the synchronization of inhomogeneities in large-sized devices. The ability to manipulate spin dynamics in large size devices could prove useful for increasing the output power of spin-transfer nano-oscillators (STNOs).

  13. Operational Aspects of Dealing with the Large BaBar Data Set

    Energy Technology Data Exchange (ETDEWEB)

    Trunov, Artem G

    2003-06-13

    To date, the BaBar experiment has stored over 0.7 PB of data in an Objectivity/DB database. Approximately half this data set comprises simulated data, of which more than 70% has been produced at more than 20 collaborating institutes outside of SLAC. Managing such a large data set and providing access to the physicists in a timely manner is a challenging and complex problem. We describe the operational aspects of managing this large distributed data set, including importing and exporting data between geographically spread BaBar collaborators, as well as problems common to dealing with such large data sets.

  14. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    Science.gov (United States)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide key solutions to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by the graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of the MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, the MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for practical applicability in large-scale real-world systems: (1) small set size, (2) minimal network information required for the construction scheme, (3) fast and easy computational implementation, and (4) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
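The property this entry relies on — that any maximal independent set is automatically a dominating set, since every node is either in the set or adjacent to it — can be sketched in a few lines. This is a generic greedy construction on a toy graph, not the authors' selection scheme; the low-degree-first ordering and the example adjacency list are assumptions for illustration.

```python
# Sketch: a greedily built maximal independent set (MIS) is also a
# dominating set. Graph and ordering heuristic are hypothetical.

def greedy_mis(adj):
    """Greedy maximal independent set over an adjacency dict."""
    # Visiting low-degree nodes first tends to give larger sets; any
    # order still yields a maximal (hence dominating) independent set.
    order = sorted(adj, key=lambda v: len(adj[v]))
    mis, blocked = set(), set()
    for v in order:
        if v not in blocked:
            mis.add(v)
            blocked.add(v)          # v and its neighbours are now covered
            blocked.update(adj[v])
    return mis

def is_dominating(adj, s):
    return all(v in s or any(u in s for u in adj[v]) for v in adj)

# Toy star-plus-path graph (hypothetical example).
adj = {
    0: {1, 2, 3, 4},
    1: {0}, 2: {0}, 3: {0, 5}, 4: {0},
    5: {3},
}
mis = greedy_mis(adj)
assert is_dominating(adj, mis)
```

Maximality gives domination for free: any node left out of the set was blocked, i.e. is adjacent to a member.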

  15. Improving a Lecture-Size Molecular Model Set by Repurposing Used Whiteboard Markers

    Science.gov (United States)

    Dragojlovic, Veljko

    2015-01-01

    Preparation of an inexpensive model set from whiteboard markers and either an HGS molecular model set or atoms made of wood is described. The model set is relatively easy to prepare and is sufficiently large to be suitable as an instructor set for use in lectures.

  16. How large a training set is needed to develop a classifier for microarray data?

    Science.gov (United States)

    Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M

    2008-01-01

    A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.
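As a rough illustration of how the three quantities named in the abstract interact, the sketch below uses a generic Bonferroni-corrected two-sample power calculation; it is not the authors' model, and the function name, default error rates, and the prevalence adjustment are assumptions for illustration.

```python
# Hedged illustration: standardized fold change (delta), class prevalence,
# and number of genes driving a training-set size estimate. This is a
# generic power calculation, NOT the exact model from the paper.
import math
from statistics import NormalDist

def training_set_size(delta, prevalence, n_genes, alpha=0.05, power=0.9):
    """Rough total sample size for detecting a standardized fold change
    `delta` in one of `n_genes` features (assumed defaults)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / (2 * n_genes))  # Bonferroni-corrected two-sided
    z_beta = z(power)
    n_per_class = 2 * (z_alpha + z_beta) ** 2 / delta ** 2  # balanced case
    # Unbalanced classes inflate the total: scale by 1 / (4 p (1 - p)).
    total = 2 * n_per_class / (4 * prevalence * (1 - prevalence))
    return math.ceil(total)

# e.g. delta = 1 SD, 30% prevalence, 20,000 genes:
print(training_set_size(1.0, 0.3, 20_000))
```

Note how the multiplicity correction ties sample size to the number of genes, and how skewed prevalence raises the total — the qualitative behaviour the abstract describes.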

  17. Estimated spatial requirements of the medium- to large-sized ...

    African Journals Online (AJOL)

    Conservation planning in the Cape Floristic Region (CFR) of South Africa, a recognised world plant diversity hotspot, required information on the estimated spatial requirements of selected medium- to large-sized mammals within each of 102 Broad Habitat Units (BHUs) delineated according to key biophysical parameters.

  18. MiniWall Tool for Analyzing CFD and Wind Tunnel Large Data Sets

    Science.gov (United States)

    Schuh, Michael J.; Melton, John E.; Stremel, Paul M.

    2017-01-01

    It is challenging to review and assimilate large data sets created by Computational Fluid Dynamics (CFD) simulations and wind tunnel tests. Over the past 10 years, NASA Ames Research Center has developed and refined a software tool dubbed the MiniWall to increase productivity in reviewing and understanding large CFD-generated data sets. Under the recent NASA ERA project, the application of the tool expanded to enable rapid comparison of experimental and computational data. The MiniWall software is browser based so that it runs on any computer or device that can display a web page. It can also be used remotely and securely by using web server software such as the Apache HTTP server. The MiniWall software has recently been rewritten and enhanced to make it even easier for analysts to review large data sets and extract knowledge and understanding from these data sets. This paper describes the MiniWall software and demonstrates how the different features are used to review and assimilate large data sets.

  19. DESIGN AND DEVELOPMENT OF A LARGE SIZE NON-TRACKING SOLAR COOKER

    Directory of Open Access Journals (Sweden)

    N. M. NAHAR

    2009-09-01

    Full Text Available A large size novel non-tracking solar cooker has been designed, developed and tested. The cooker has been designed in such a way that the width to length ratio for the reflector and glass window is about 4, so that maximum radiation falls on the glass window. This helps eliminate the hourly azimuthal tracking towards the Sun that is required in a simple hot box solar cooker, whose reflector width to length ratio is 1. It has been found that stagnation temperatures were 118.5 °C and 108 °C in the large size non-tracking solar cooker and the hot box solar cooker respectively. Cooking takes about 2 h for soft food and 3 h for hard food. The cooker is capable of cooking 4.0 kg of food at a time. The efficiency of the large size non-tracking solar cooker has been found to be 27.5%. The cooker saves 5175 MJ of energy per year. The cost of the cooker is Rs. 10000.00 (1.0 US$ = Rs. 50.50). The payback period has been calculated by considering 10% annual interest, 5% maintenance cost and 5% inflation in fuel prices and maintenance cost. The payback period is shortest, i.e. 1.58 yr, with respect to electricity and longest, i.e. 4.89 yr, with respect to kerosene. The payback periods are in increasing order with respect to fuel: electricity, coal, firewood, liquid petroleum gas, and kerosene. The short payback periods suggest that the use of the large size non-tracking solar cooker is economical.

  20. Data Programming: Creating Large Training Sets, Quickly

    Science.gov (United States)

    Ratner, Alexander; De Sa, Christopher; Wu, Sen; Selsam, Daniel; Ré, Christopher

    2018-01-01

    Large labeled training sets are the critical building blocks of supervised learning methods and are key enablers of deep learning techniques. For some applications, creating labeled training sets is the most time-consuming and expensive part of applying machine learning. We therefore propose a paradigm for the programmatic creation of training sets called data programming in which users express weak supervision strategies or domain heuristics as labeling functions, which are programs that label subsets of the data, but that are noisy and may conflict. We show that by explicitly representing this training set labeling process as a generative model, we can “denoise” the generated training set, and establish theoretically that we can recover the parameters of these generative models in a handful of settings. We then show how to modify a discriminative loss function to make it noise-aware, and demonstrate our method over a range of discriminative models including logistic regression and LSTMs. Experimentally, on the 2014 TAC-KBP Slot Filling challenge, we show that data programming would have led to a new winning score, and also show that applying data programming to an LSTM model leads to a TAC-KBP score almost 6 F1 points over a state-of-the-art LSTM baseline (and into second place in the competition). Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable. PMID:29872252
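A minimal sketch of the labeling-function idea described above: users write noisy heuristics, and a combiner resolves their overlapping votes. Here a simple majority vote stands in for the generative model the paper actually fits, and the spam task and labeling functions are invented for illustration.

```python
# Sketch of data programming: noisy labeling functions (LFs) plus a
# combiner. A majority vote stands in for the paper's generative model;
# the spam-detection task and LFs below are hypothetical.

ABSTAIN, HAM, SPAM = 0, -1, 1

def lf_keyword(text):    # fires on an obvious spam phrase
    return SPAM if "free money" in text.lower() else ABSTAIN

def lf_shouting(text):   # all-caps messages look spammy
    return SPAM if text.isupper() else ABSTAIN

def lf_greeting(text):   # personal greetings look legitimate
    return HAM if text.lower().startswith("hi ") else ABSTAIN

LFS = [lf_keyword, lf_shouting, lf_greeting]

def weak_label(text):
    """Combine LF votes; abstentions contribute nothing."""
    score = sum(lf(text) for lf in LFS)
    return SPAM if score > 0 else HAM if score < 0 else ABSTAIN

print(weak_label("FREE MONEY NOW"))   # both spam LFs fire
print(weak_label("Hi Bob, lunch?"))   # only the greeting LF fires
```

The paper's contribution is replacing this naive vote with a learned generative model that estimates each LF's accuracy and correlations, then training a noise-aware discriminative model on the denoised labels.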

  1. Combining the role of convenience and consideration set size in explaining fish consumption in Norway.

    Science.gov (United States)

    Rortveit, Asbjorn Warvik; Olsen, Svein Ottar

    2009-04-01

    The purpose of this study is to explore how convenience orientation, perceived product inconvenience and consideration set size are related to attitudes towards fish and fish consumption. The authors present a structural equation model (SEM) based on the integration of two previous studies. The results of a SEM analysis using Lisrel 8.72 on data from a Norwegian consumer survey (n=1630) suggest that convenience orientation and perceived product inconvenience have a negative effect on both consideration set size and consumption frequency. Attitude towards fish has the greatest impact on consumption frequency. The results also indicate that perceived product inconvenience is a key variable since it has a significant impact on attitude, and on consideration set size and consumption frequency. Further, the analyses confirm earlier findings suggesting that the effect of convenience orientation on consumption is partially mediated through perceived product inconvenience. The study also confirms earlier findings suggesting that the consideration set size affects consumption frequency. Practical implications drawn from this research are that the seafood industry would benefit from developing and positioning products that change beliefs about fish as an inconvenient product. Future research for other food categories should be done to enhance the external validity.

  2. The gradient boosting algorithm and random boosting for genome-assisted evaluation in large data sets.

    Science.gov (United States)

    González-Recio, O; Jiménez-Montero, J A; Alenda, R

    2013-01-01

    In the next few years, with the advent of high-density single nucleotide polymorphism (SNP) arrays and genome sequencing, genomic evaluation methods will need to deal with a large number of genetic variants and an increasing sample size. The boosting algorithm is a machine-learning technique that may alleviate the drawbacks of dealing with such large data sets. This algorithm combines different predictors in a sequential manner with some shrinkage on them; each predictor is applied consecutively to the residuals from the committee formed by the previous ones to form a final prediction based on a subset of covariates. Here, a detailed description is provided and examples using a toy data set are included. A modification of the algorithm called "random boosting" was proposed to increase predictive ability and decrease computation time of genome-assisted evaluation in large data sets. Random boosting uses a random selection of markers to add a subsequent weak learner to the predictive model. These modifications were applied to a real data set composed of 1,797 bulls genotyped for 39,714 SNP. Deregressed proofs of 4 yield traits and 1 type trait from January 2009 routine evaluations were used as dependent variables. A 2-fold cross-validation scenario was implemented. Sires born before 2005 were used as a training sample (1,576 and 1,562 for production and type traits, respectively), whereas younger sires were used as a testing sample to evaluate predictive ability of the algorithm on yet-to-be-observed phenotypes. Comparison with the original algorithm was provided. The predictive ability of the algorithm was measured as Pearson correlations between observed and predicted responses. Further, estimated bias was computed as the average difference between observed and predicted phenotypes. 
The results showed that the modified boosting algorithm could be run in 1% of the time used by the original algorithm, with negligible differences in accuracy.
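The random-boosting idea — each round fitting a weak learner to the current residuals using only a random subset of markers, with shrinkage — can be sketched as follows. The single-feature least-squares learner, toy genotypes, and tuning values are assumptions for illustration, not the algorithm as implemented in the paper.

```python
# Hedged sketch of "random boosting": each round picks a random marker
# subset, fits the best single-feature least-squares slope to the current
# residuals, and adds it with shrinkage. Toy data; not the paper's code.
import random

def fit_random_boosting(X, y, n_rounds=50, subset=2, shrink=0.1, seed=0):
    rng = random.Random(seed)
    p = len(X[0])
    resid = list(y)
    model = []                                # (feature, shrunken slope)
    for _ in range(n_rounds):
        feats = rng.sample(range(p), subset)  # random marker subset
        best = None
        for j in feats:
            xj = [row[j] for row in X]
            sxx = sum(v * v for v in xj) or 1e-12
            slope = sum(v * r for v, r in zip(xj, resid)) / sxx
            sse = sum((r - slope * v) ** 2 for v, r in zip(xj, resid))
            if best is None or sse < best[0]:
                best = (sse, j, slope)
        _, j, slope = best
        model.append((j, slope * shrink))
        resid = [r - slope * shrink * row[j] for row, r in zip(X, resid)]
    return model

def predict(model, row):
    return sum(w * row[j] for j, w in model)

# Toy 0/1/2 allele counts; the phenotype is driven mainly by marker 0.
X = [[0, 1, 2], [1, 0, 0], [2, 2, 1], [2, 0, 2], [0, 2, 0]]
y = [0.1, 1.0, 2.2, 2.1, 0.0]
model = fit_random_boosting(X, y)
```

Restricting each round to a marker subset is what cuts the per-iteration cost on high-density SNP panels while the sequential residual fitting preserves the boosting behaviour.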

  3. Accelerated EM-based clustering of large data sets

    NARCIS (Netherlands)

    Verbeek, J.J.; Nunnink, J.R.J.; Vlassis, N.

    2006-01-01

    Motivated by the poor performance (linear complexity) of the EM algorithm in clustering large data sets, and inspired by the successful accelerated versions of related algorithms like k-means, we derive an accelerated variant of the EM algorithm for Gaussian mixtures that: (1) offers speedups that
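A hedged sketch of the kind of speedup described: rather than computing responsibilities for every point, the E-step runs once per cell of cached sufficient statistics (count and mean). The 1D unit-variance mixture, the sorted-block partition, and the toy data are illustrative simplifications, not the authors' algorithm.

```python
# Sketch of cell-based accelerated EM: E-steps over cached per-cell
# statistics instead of per point. 1D Gaussian mixture with fixed unit
# variance and two components; partition and data are hypothetical.
import math

def accelerated_em(points, cell_size=5, iters=30):
    points = sorted(points)
    # Cache sufficient statistics per cell: (count, mean).
    cells = [(len(c), sum(c) / len(c))
             for c in (points[i:i + cell_size]
                       for i in range(0, len(points), cell_size))]
    mus = [points[0], points[-1]]     # crude two-component initialisation
    weights = [0.5, 0.5]
    k = len(mus)
    for _ in range(iters):
        resp_sums = [0.0] * k
        resp_means = [0.0] * k
        for n, m in cells:            # one E-step per cell, not per point
            dens = [w * math.exp(-0.5 * (m - mu) ** 2)
                    for w, mu in zip(weights, mus)]
            tot = sum(dens) or 1e-300
            for j in range(k):
                r = n * dens[j] / tot
                resp_sums[j] += r
                resp_means[j] += r * m
        total = sum(resp_sums)        # M-step from aggregated statistics
        mus = [resp_means[j] / resp_sums[j] for j in range(k)]
        weights = [resp_sums[j] / total for j in range(k)]
    return mus, weights

data = [0.1, -0.2, 0.3, 0.0, -0.1, 4.9, 5.2, 5.0, 5.1, 4.8]
mus, weights = accelerated_em(data)
```

With C cells the E-step costs O(kC) instead of O(kN) per iteration, which is the source of the speedup when C ≪ N.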

  4. Interlayer catalytic exfoliation realizing scalable production of large-size pristine few-layer graphene

    OpenAIRE

    Geng, Xiumei; Guo, Yufen; Li, Dongfang; Li, Weiwei; Zhu, Chao; Wei, Xiangfei; Chen, Mingliang; Gao, Song; Qiu, Shengqiang; Gong, Youpin; Wu, Liqiong; Long, Mingsheng; Sun, Mengtao; Pan, Gebo; Liu, Liwei

    2013-01-01

    Mass production of reduced graphene oxide and graphene nanoplatelets has recently been achieved. However, a great challenge still remains in realizing large-quantity and high-quality production of large-size thin few-layer graphene (FLG). Here, we create a novel route to solve the issue by employing one-time-only interlayer catalytic exfoliation (ICE) of salt-intercalated graphite. The typical FLG with a large lateral size of tens of microns and a thickness less than 2 nm have been obtained b...

  5. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.

  6. Determinants of Awareness, Consideration, and Choice Set Size in University Choice.

    Science.gov (United States)

    Dawes, Philip L.; Brown, Jennifer

    2002-01-01

    Developed and tested a model of students' university "brand" choice using five individual-level variables (ethnic group, age, gender, number of parents going to university, and academic ability) and one situational variable (duration of search) to explain variation in the sizes of awareness, consideration, and choice decision sets. (EV)

  7. Determination of size-specific exposure settings in dental cone-beam CT

    International Nuclear Information System (INIS)

    Pauwels, Ruben; Jacobs, Reinhilde; Bogaerts, Ria; Bosmans, Hilde; Panmekiate, Soontra

    2017-01-01

    To estimate the possible reduction of tube output as a function of head size in dental cone-beam computed tomography (CBCT). A 16 cm PMMA phantom, containing a central and six peripheral columns filled with PMMA, was used to represent an average adult male head. The phantom was scanned using CBCT, with 0-6 peripheral columns having been removed in order to simulate varying head sizes. For five kV settings (70-90 kV), the mAs required to reach a predetermined image noise level was determined, and corresponding radiation doses were derived. Results were expressed as a function of head size, age, and gender, based on growth reference charts. The use of 90 kV consistently resulted in the largest relative dose reduction. A potential mAs reduction ranging from 7 % to 50 % was seen for the different simulated head sizes, showing an exponential relation between head size and mAs. An optimized exposure protocol based on head circumference or age/gender is proposed. A considerable dose reduction, through reduction of the mAs rather than the kV, is possible for small-sized patients in CBCT, including children and females. Size-specific exposure protocols should be clinically implemented. (orig.)
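To make the reported exponential head-size/mAs relation concrete, a size-specific exposure chart might be computed as below. The reference values and the exponent are invented placeholders for illustration, not the paper's fitted coefficients.

```python
# Hedged sketch of a size-specific exposure chart. The abstract reports an
# exponential relation between head size and the mAs needed for constant
# image noise; ref_diam_cm, ref_mas, and k below are invented placeholders.
import math

def required_mas(head_diam_cm, ref_diam_cm=16.0, ref_mas=100.0, k=0.12):
    """mAs scaled exponentially from a reference head size (assumed model)."""
    return ref_mas * math.exp(k * (head_diam_cm - ref_diam_cm))

for d in (12, 14, 16):
    print(f"{d} cm head: {required_mas(d):.0f} mAs")
```

The smaller the head, the larger the relative mAs reduction, consistent with the 7-50 % range reported for the simulated sizes.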

  8. Determination of size-specific exposure settings in dental cone-beam CT

    Energy Technology Data Exchange (ETDEWEB)

    Pauwels, Ruben [Chulalongkorn University, Department of Radiology, Faculty of Dentistry, Patumwan, Bangkok (Thailand); University of Leuven, OMFS-IMPATH Research Group, Department of Imaging and Pathology, Biomedical Sciences Group, Leuven (Belgium); Jacobs, Reinhilde [University of Leuven, OMFS-IMPATH Research Group, Department of Imaging and Pathology, Biomedical Sciences Group, Leuven (Belgium); Bogaerts, Ria [University of Leuven, Laboratory of Experimental Radiotherapy, Department of Oncology, Biomedical Sciences Group, Leuven (Belgium); Bosmans, Hilde [University of Leuven, Medical Physics and Quality Assessment, Department of Imaging and Pathology, Biomedical Sciences Group, Leuven (Belgium); Panmekiate, Soontra [Chulalongkorn University, Department of Radiology, Faculty of Dentistry, Patumwan, Bangkok (Thailand)

    2017-01-15

    To estimate the possible reduction of tube output as a function of head size in dental cone-beam computed tomography (CBCT). A 16 cm PMMA phantom, containing a central and six peripheral columns filled with PMMA, was used to represent an average adult male head. The phantom was scanned using CBCT, with 0-6 peripheral columns having been removed in order to simulate varying head sizes. For five kV settings (70-90 kV), the mAs required to reach a predetermined image noise level was determined, and corresponding radiation doses were derived. Results were expressed as a function of head size, age, and gender, based on growth reference charts. The use of 90 kV consistently resulted in the largest relative dose reduction. A potential mAs reduction ranging from 7 % to 50 % was seen for the different simulated head sizes, showing an exponential relation between head size and mAs. An optimized exposure protocol based on head circumference or age/gender is proposed. A considerable dose reduction, through reduction of the mAs rather than the kV, is possible for small-sized patients in CBCT, including children and females. Size-specific exposure protocols should be clinically implemented. (orig.)

  9. Large- and small-size advantages in sneaking behaviour in the dusky frillgoby Bathygobius fuscus

    Science.gov (United States)

    Takegaki, Takeshi; Kaneko, Takashi; Matsumoto, Yukio

    2012-04-01

    Sneaking tactic, a male alternative reproductive tactic involving sperm competition, is generally adopted by small individuals because of its inconspicuousness. However, large size has an advantage when competition occurs between sneakers for fertilization of eggs. Here, we suggest that both large- and small-size advantages of sneaker males are present within the same species. Large sneaker males of the dusky frillgoby Bathygobius fuscus showed a high success rate in intruding into spawning nests because of their advantage in competition among sneaker males in keeping a suitable position to sneak, whereas small sneakers had few chances to sneak. However, small sneaker males were able to stay in the nests longer than large sneaker males when they succeeded in sneak intrusion. This suggests the possibility of an increase in their paternity. The findings of these size-specific behavioural advantages may be important in considering the evolution of size-related reproductive traits.

  10. Auditory proactive interference in monkeys: the roles of stimulus set size and intertrial interval.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2013-09-01

    We conducted two experiments to examine the influences of stimulus set size (the number of stimuli that are used throughout the session) and intertrial interval (ITI, the elapsed time between trials) in auditory short-term memory in monkeys. We used an auditory delayed matching-to-sample task wherein the animals had to indicate whether two sounds separated by a 5-s retention interval were the same (match trials) or different (nonmatch trials). In Experiment 1, we randomly assigned stimulus set sizes of 2, 4, 8, 16, 32, 64, or 192 (trial-unique) for each session of 128 trials. Consistent with previous visual studies, overall accuracy was consistently lower when smaller stimulus set sizes were used. Further analyses revealed that these effects were primarily caused by an increase in incorrect "same" responses on nonmatch trials. In Experiment 2, we held the stimulus set size constant at four for each session and alternately set the ITI at 5, 10, or 20 s. Overall accuracy improved when the ITI was increased from 5 to 10 s, but it was the same across the 10- and 20-s conditions. As in Experiment 1, the overall decrease in accuracy during the 5-s condition was caused by a greater number of false "match" responses on nonmatch trials. Taken together, Experiments 1 and 2 showed that auditory short-term memory in monkeys is highly susceptible to proactive interference caused by stimulus repetition. Additional analyses of the data from Experiment 1 suggested that monkeys may make same-different judgments on the basis of a familiarity criterion that is adjusted by error-related feedback.

  11. Phased array inspection of large size forged steel parts

    Science.gov (United States)

    Dupont-Marillia, Frederic; Jahazi, Mohammad; Belanger, Pierre

    2018-04-01

    High strength forged steel requires uncompromising quality to warrant advanced performance in numerous critical applications. Ultrasonic inspection is commonly used in nondestructive testing to detect cracks and other defects. In steel blocks of relatively small dimensions (at least two directions not exceeding a few centimetres), phased array inspection is a trusted method for generating images of the inside of the blocks and thus identifying and sizing defects. However, casting of large size forged ingots introduces variations in mechanical parameters such as grain size, Young's modulus, Poisson's ratio, and chemical composition. These heterogeneities affect wave propagation and, consequently, the reliability of ultrasonic inspection and the imaging capabilities for these blocks. In this context, a custom phased array transducer designed for a 40-ton bainitic forged ingot was investigated. Following a previous study that provided local mechanical parameters for a similar block, two-dimensional simulations were performed to compute the optimal transducer parameters, including the pitch, width and number of elements. It appeared that, depending on the number of elements, backwall reconstruction can generate high amplitude artefacts: the large dimensions of the simulated block introduce numerous constructive interferences from backwall reflections, which may lead to significant artefacts. To increase image quality, the reconstruction algorithm was adapted; promising results were observed and compared with the scattering cone filter method available in the CIVA software.

  12. The reconstruction of choice value in the brain: a look into the size of consideration sets and their affective consequences.

    Science.gov (United States)

    Kim, Hye-Young; Shin, Yeonsoon; Han, Sanghoon

    2014-04-01

    It has been proposed that choice utility exhibits an inverted U-shape as a function of the number of options in the choice set. However, most researchers have so far only focused on the "physically extant" number of options in the set while disregarding the more important psychological factor, the "subjective" number of options worth considering, that is, the size of the consideration set. To explore this previously ignored aspect, we examined how variations in the size of a consideration set can produce different affective consequences after making choices and investigated the underlying neural mechanism using fMRI. After rating their preferences for art posters, participants made a choice from a presented set and then reported on their level of satisfaction with their choice and the level of difficulty experienced in choosing it. Our behavioral results demonstrated that an enlarged assortment set can lead to greater choice satisfaction only when increases in both consideration set size and preference contrast are involved. Moreover, choice difficulty is determined by the size of an individual's consideration set rather than by the size of the assortment set, and it decreases linearly as a function of the level of contrast among alternatives. The neuroimaging analysis of choice-making revealed that subjective consideration set size was encoded in the striatum, the dACC, and the insula. In addition, the striatum also represented variations in choice satisfaction resulting from alterations in the size of consideration sets, whereas a common neural specificity for choice difficulty and consideration set size was shown in the dACC. These results have theoretical and practical importance in that this is one of the first studies investigating the influence of the psychological attributes of choice sets on the value-based decision-making process.

  13. Shortest triplet clustering: reconstructing large phylogenies using representative sets

    Directory of Open Access Journals (Sweden)

    Sy Vinh Le

    2005-04-01

    Full Text Available Abstract Background Understanding the evolutionary relationships among species based on their genetic information is one of the primary objectives in phylogenetic analysis. Reconstructing phylogenies for large data sets is still a challenging task in Bioinformatics. Results We propose a new distance-based clustering method, the shortest triplet clustering algorithm (STC), to reconstruct phylogenies. The main idea is the introduction of a natural definition of so-called k-representative sets. Based on k-representative sets, shortest triplets are reconstructed and serve as building blocks for the STC algorithm to agglomerate sequences for tree reconstruction in O(n²) time for n sequences. Simulations show that STC gives better topological accuracy than other tested methods that also build a first starting tree. STC appears to be a very good method to start the tree reconstruction. However, all tested methods give similar results if balanced nearest neighbor interchange (BNNI) is applied as a post-processing step. BNNI leads to an improvement in all instances. The program is available at http://www.bi.uni-duesseldorf.de/software/stc/. Conclusion The results demonstrate that the new approach efficiently reconstructs phylogenies for large data sets. We found that BNNI boosts the topological accuracy of all methods including STC; therefore, one should use BNNI as a post-processing step to obtain better topological accuracy.
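The STC algorithm itself is not reproduced in this record. As a generic illustration of the distance-based agglomerative tree building it belongs to, here is a minimal UPGMA (average-linkage) sketch — not the authors' k-representative-set method:

```python
def upgma(dist, names):
    """Plain UPGMA agglomeration on a symmetric distance matrix.

    Returns the tree as nested tuples. Generic illustration of
    distance-based clustering, not the STC algorithm of the paper.
    """
    # clusters maps a cluster label (leaf name or nested tuple) to its leaf count
    clusters = {name: 1 for name in names}
    # d holds average-linkage distances between current clusters
    d = {(a, b): dist[i][j]
         for i, a in enumerate(names)
         for j, b in enumerate(names) if i < j}
    while len(clusters) > 1:
        a, b = min(d, key=d.get)                  # closest pair of clusters
        na, nb = clusters.pop(a), clusters.pop(b)
        merged = (a, b)
        for c in clusters:                        # update distances to the new cluster
            dac = d.pop((a, c)) if (a, c) in d else d.pop((c, a))
            dbc = d.pop((b, c)) if (b, c) in d else d.pop((c, b))
            d[(merged, c)] = (na * dac + nb * dbc) / (na + nb)
        del d[(a, b)]
        clusters[merged] = na + nb
    return next(iter(clusters))
```

For four taxa where A/B and C/D are mutually closest, the result is the expected two-cherry topology.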

  14. Large- and small-size advantages in sneaking behaviour in the dusky frillgoby Bathygobius fuscus

    OpenAIRE

    Takegaki, Takeshi; Kaneko, Takashi; Matsumoto, Yukio

    2012-01-01

    The sneaking tactic, a male alternative reproductive tactic involving sperm competition, is generally adopted by small individuals because of its inconspicuousness. However, large size is an advantage when competition occurs between sneakers for fertilization of eggs. Here, we suggest that both large- and small-size advantages of sneaker males are present within the same species. Large sneaker males of the dusky frillgoby Bathygobius fuscus showed a high success rate in intruding into spawning n...

  15. The welfare implications of large litter size in the domestic pig I

    DEFF Research Database (Denmark)

    Rutherford, K.M.D; Baxter, E.M.; D'Eath, R.B.

    2013-01-01

    Increasing litter size has long been a goal of pig breeders and producers, and may have implications for pig (Sus scrofa domesticus) welfare. This paper reviews the scientific evidence on biological factors affecting sow and piglet welfare in relation to large litter size. It is concluded that, i...

  16. Computational scalability of large size image dissemination

    Science.gov (United States)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of image pyramid building needed for dissemination of very large image data. The sources of large images include high resolution microscopes and telescopes, remote sensing and airborne imaging, and high resolution scanners. The term 'large' is understood from a user perspective, meaning either larger than a display size or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150MB or about 5000x8000 pixels, with the total number around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th century (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.

  17. Damage threshold from large retinal spot size repetitive-pulse laser exposures.

    Science.gov (United States)

    Lund, Brian J; Lund, David J; Edsall, Peter R

    2014-10-01

    The retinal damage thresholds for large spot size, multiple-pulse exposures to a Q-switched, frequency doubled Nd:YAG laser (532 nm wavelength, 7 ns pulses) have been measured for 100 μm and 500 μm retinal irradiance diameters. The ED50, expressed as energy per pulse, varies only weakly with the number of pulses, n, for these extended spot sizes. The previously reported threshold for a multiple-pulse exposure for a 900 μm retinal spot size also shows the same weak dependence on the number of pulses. The multiple-pulse ED50 for an extended spot-size exposure does not follow the n dependence exhibited by small spot size exposures produced by a collimated beam. Curves derived by using probability-summation models provide a better fit to the data.
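The probability-summation models mentioned above can be sketched as follows. The probit (log-normal) single-pulse dose-response form and the slope value are illustrative assumptions, not the paper's fitted parameters:

```python
import math

def probit_p(dose, ed50, slope):
    """Single-pulse damage probability from an assumed log-normal (probit)
    dose-response with the given ED50 and probit slope."""
    z = slope * math.log(dose / ed50)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def multi_pulse_ed50(n, ed50_1, slope, lo=1e-6, hi=1e6):
    """Per-pulse ED50 for an n-pulse train under probability summation:
    P_n(Q) = 1 - (1 - P_1(Q))**n, solved for P_n = 0.5 by bisection in log-dose."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)  # geometric midpoint = bisection in log space
        p = 1.0 - (1.0 - probit_p(mid, ed50_1, slope)) ** n
        if p < 0.5:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)
```

With this model the per-pulse ED50 decreases as n grows, at a rate set by the probit slope rather than by a fixed power of n.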

  18. Folding and unfolding of large-size shell construction for application in Earth orbit

    Science.gov (United States)

    Kondyurin, Alexey; Pestrenina, Irena; Pestrenin, Valery; Rusakov, Sergey

    2016-07-01

    A future exploration of space requires a technology of large modules for biological, technological, logistic and other applications in Earth orbits [1-3]. This report describes the possibility of using large-sized shell structures deployable in space. The structure is delivered to orbit in the spaceship container, with the shell folded for transportation. The shell material is either rigid plastic or a multilayer prepreg comprising rigid reinforcements (such as reinforcing fibers). The unfolding process (bringing a construction to the unfolded state by loading with internal pressure) needs to be considered in the presence of both stretching and bending deformations. An analysis of the deployment conditions (the minimum internal pressure bringing a construction from the folded state to the unfolded state) of large laminated CFRP shell structures is formulated in this report. Solution of this mechanics of deformable solids (MDS) problem of the shell structure is based on the following assumptions: the shell is made of components whose median surface has a reamer; in the relaxed state of a separate structural element (neither stressed nor deformed), its median surface coincides with its reamer (this assumption allows the relaxed state of the structure to be chosen correctly); structural elements are joined (sewn together) by a seam that does not resist rotation around the tangent to the seam line. Ways of folding large shell structures whose median surface has a reamer are suggested. Unfolding of cylindrical, conical (full and truncated cones), and large-size composite shells (cylinder-cones, cones-cones) is considered. These results show that the unfolding pressure of such large-size structures (0.01-0.2 atm.) is comparable to the deploying pressure of pneumatic parts (0.001-0.1 atm.) [3]. It would be possible to extend this approach to investigate the unfolding process of large-sized shells with a ruled median surface or with non-developable surfaces. This research was

  19. Accuracy of the photogrammetric measuring system for large size elements

    Directory of Open Access Journals (Sweden)

    M. Grzelka

    2011-04-01

    Full Text Available The aim of this paper is to present methods of estimating, and guidelines for verifying, the accuracy of optical photogrammetric measuring systems used for measurement of large size elements. Measuring systems applied to measure workpieces of a large size, which often reach more than 10000 mm, require the use of appropriate standards. Those standards provided by the manufacturer of photogrammetric systems are certified and are inspected annually. To make sure that these systems work properly, a special standard, VDI/VDE 2634 "Optical 3D measuring systems. Imaging systems with point-by-point probing", was developed. According to the recommendations described in this standard, research on the accuracy of the photogrammetric measuring system was conducted using K class gauge blocks dedicated to calibrating and testing the accuracy of classic CMMs. The paper presents results of research estimating the actual error of indication for size measurement MPEE for the photogrammetric coordinate measuring system TRITOP.

  20. Large exon size does not limit splicing in vivo.

    Science.gov (United States)

    Chen, I T; Chasin, L A

    1994-03-01

    Exon sizes in vertebrate genes are, with a few exceptions, limited to less than 300 bases. It has been proposed that this limitation may derive from the exon definition model of splice site recognition. In this model, a downstream donor site enhances splicing at the upstream acceptor site of the same exon. This enhancement may require contact between factors bound to each end of the exon; an exon size limitation would promote such contact. To test the idea that proximity was required for exon definition, we inserted random DNA fragments from Escherichia coli into a central exon in a three-exon dihydrofolate reductase minigene and tested whether the expanded exons were efficiently spliced. DNA from a plasmid library of expanded minigenes was used to transfect a CHO cell deletion mutant lacking the dhfr locus. PCR analysis of DNA isolated from the pooled stable cotransfectant populations displayed a range of DNA insert sizes from 50 to 1,500 nucleotides. A parallel analysis of the RNA from this population by reverse transcription followed by PCR showed a similar size distribution. Central exons as large as 1,400 bases could be spliced into mRNA. We also tested individual plasmid clones containing exon inserts of defined sizes. The largest exon included in mRNA was 1,200 bases in length, well above the 300-base limit implied by the survey of naturally occurring exons. We conclude that a limitation in exon size is not part of the exon definition mechanism.

  1. Body size evolution in an old insect order: No evidence for Cope's Rule in spite of fitness benefits of large size.

    Science.gov (United States)

    Waller, John T; Svensson, Erik I

    2017-09-01

    We integrate field data and phylogenetic comparative analyses to investigate causes of body size evolution and stasis in an old insect order: odonates ("dragonflies and damselflies"). Fossil evidence for "Cope's Rule" in odonates is weak or nonexistent since the last major extinction event 65 million years ago, yet selection studies show consistent positive selection for increased body size among adults. In particular, we find that large males in natural populations of the banded demoiselle (Calopteryx splendens) over several generations have consistent fitness benefits both in terms of survival and mating success. Additionally, there was no evidence for stabilizing or conflicting selection between fitness components within the adult life-stage. This lack of stabilizing selection during the adult life-stage was independently supported by a literature survey on different male and female fitness components from several odonate species. We did detect several significant body size shifts among extant taxa using comparative methods and a large new molecular phylogeny for odonates. We suggest that the lack of Cope's rule in odonates results from conflicting selection between fitness advantages of large adult size and costs of long larval development. We also discuss competing explanations for body size stasis in this insect group. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  2. Size Reduction Techniques for Large Scale Permanent Magnet Generators in Wind Turbines

    Science.gov (United States)

    Khazdozian, Helena; Hadimani, Ravi; Jiles, David

    2015-03-01

    Increased wind penetration is necessary to reduce U.S. dependence on fossil fuels, combat climate change and increase national energy security. The U.S. Department of Energy has recommended large scale and offshore wind turbines to achieve 20% wind electricity generation by 2030. Currently, geared doubly-fed induction generators (DFIGs) are typically employed in the drivetrain for conversion of mechanical to electrical energy. Yet gearboxes account for the greatest downtime of wind turbines, decreasing reliability and contributing to loss of profit. Direct drive permanent magnet generators (PMGs) offer a reliable alternative to DFIGs by eliminating the gearbox. However, PMGs scale up in size and weight much more rapidly than DFIGs as rated power is increased, presenting significant challenges for large scale wind turbine application. Thus, size reduction techniques are needed for the viability of PMGs in large scale wind turbines. Two size reduction techniques are presented. It is demonstrated that a 25% size reduction of a 10MW PMG is possible with a high remanence theoretical permanent magnet. Additionally, the use of a Halbach cylinder in an outer rotor PMG is investigated to focus magnetic flux over the rotor surface in order to increase torque. This work was supported by the National Science Foundation under Grant No. 1069283 and a Barbara and James Palmer Endowment at Iowa State University.
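For background on the flux-focusing idea, the textbook idealized dipole (k = 2) Halbach cylinder produces a uniform bore field B = Br·ln(r_outer/r_inner). A one-line sketch, with arbitrary example values for remanence and radii (not the paper's generator geometry):

```python
import math

def halbach_bore_field(br, r_outer, r_inner):
    """Uniform bore field (T) of an idealized, continuously magnetized
    dipole (k = 2) Halbach cylinder: B = Br * ln(r_outer / r_inner).
    Example values below are illustrative, not from the paper."""
    return br * math.log(r_outer / r_inner)

field = halbach_bore_field(1.2, 0.2, 0.1)  # Br = 1.2 T, radii 0.2 m / 0.1 m
```

The logarithmic dependence shows why a Halbach arrangement can concentrate flux without increasing magnet volume proportionally.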

  3. Explicit Constructions and Bounds for Batch Codes with Restricted Size of Reconstruction Sets

    OpenAIRE

    Thomas, Eldho K.; Skachek, Vitaly

    2017-01-01

    Linear batch codes and codes for private information retrieval (PIR) with a query size $t$ and a restricted size $r$ of the reconstruction sets are studied. New bounds on the parameters of such codes are derived for small values of $t$ or of $r$ by providing corresponding constructions. By building on the ideas of Cadambe and Mazumdar, a new bound in a recursive form is derived for batch codes and PIR codes.

  4. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix of water quality variables (p = 22) belonging to a small data set comprising 55 samples (stations) from which water samples were collected. Because in ecology and environmental sciences data sets are invariably small, owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis with Ward's method, which does not require any stringent distributional assumptions.
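The bootstrap-PCA procedure described can be sketched roughly as follows; the random matrix stands in for the real 55 × 22 water-quality data, which is not reproduced in the record:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 55-station x 22-variable water-quality matrix.
X = rng.normal(size=(55, 22))

def pca_eigvals(data):
    """Eigenvalues of the correlation matrix, largest first (classic PCA)."""
    r = np.corrcoef(data, rowvar=False)
    return np.sort(np.linalg.eigvalsh(r))[::-1]

def bootstrap_eigvals(data, n=20, n_boot=100):
    """Draw n_boot bootstrap samples of n stations (with replacement),
    as in the study, and collect the eigenvalue spectrum of each."""
    out = []
    for _ in range(n_boot):
        idx = rng.integers(0, data.shape[0], size=n)
        out.append(pca_eigvals(data[idx]))
    return np.array(out)  # shape (n_boot, p)

ev = bootstrap_eigvals(X, n=30)
```

Repeating this for N = 20, 30, 40 and 50 and comparing the spreads of the leading eigenvalues mirrors the study's design.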

  5. Management of a Large Qualitative Data Set: Establishing Trustworthiness of the Data

    Directory of Open Access Journals (Sweden)

    Debbie Elizabeth White RN, PhD

    2012-07-01

    Full Text Available Health services research is multifaceted and impacted by the multiple contexts and stakeholders involved. Hence, large data sets are necessary to fully understand the complex phenomena (e.g., scope of nursing practice being studied. The management of these large data sets can lead to numerous challenges in establishing trustworthiness of the study. This article reports on strategies utilized in data collection and analysis of a large qualitative study to establish trustworthiness. Specific strategies undertaken by the research team included training of interviewers and coders, variation in participant recruitment, consistency in data collection, completion of data cleaning, development of a conceptual framework for analysis, consistency in coding through regular communication and meetings between coders and key research team members, use of N6™ software to organize data, and creation of a comprehensive audit trail with internal and external audits. Finally, we make eight recommendations that will help ensure rigour for studies with large qualitative data sets: organization of the study by a single person; thorough documentation of the data collection and analysis process; attention to timelines; the use of an iterative process for data collection and analysis; internal and external audits; regular communication among the research team; adequate resources for timely completion; and time for reflection and diversion. Following these steps will enable researchers to complete a rigorous, qualitative research study when faced with large data sets to answer complex health services research questions.

  6. Gastro-oesophageal reflux in large-sized, deep-chested versus small-sized, barrel-chested dogs undergoing spinal surgery in sternal recumbency.

    Science.gov (United States)

    Anagnostou, Tilemahos L; Kazakos, George M; Savvas, Ioannis; Kostakis, Charalampos; Papadopoulou, Paraskevi

    2017-01-01

    The aim of this study was to investigate whether gastro-oesophageal reflux (GOR) occurs more frequently in large-sized, deep-chested dogs undergoing spinal surgery in sternal recumbency than in small-sized, barrel-chested dogs. Prospective, cohort study. Nineteen small-sized, barrel-chested dogs (group B) and 26 large-sized, deep-chested dogs (group D). All animals were premedicated with intramuscular acepromazine (0.05 mg kg⁻¹) and pethidine (3 mg kg⁻¹). Anaesthesia was induced with intravenous sodium thiopental and maintained with halothane in oxygen. Lower oesophageal pH was monitored continuously after induction of anaesthesia. Gastro-oesophageal reflux was considered to have occurred whenever pH values > 7.5 or < 4 were recorded. If GOR was detected during anaesthesia, measures were taken to avoid aspiration of gastric contents into the lungs and to prevent the development of oesophagitis or oesophageal stricture. The frequency of GOR during anaesthesia was significantly higher in group D (6/26 dogs; 23.07%) than in group B (0/19 dogs; 0%) (p = 0.032). Signs indicative of aspiration pneumonia, oesophagitis or oesophageal stricture were not reported in any of the GOR cases. In large-sized, deep-chested dogs undergoing spinal surgery in sternal recumbency, it would seem prudent to consider measures aimed at preventing GOR and its potentially devastating consequences (oesophagitis/oesophageal stricture, aspiration pneumonia). Copyright © 2016 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and Analgesia. Published by Elsevier Ltd. All rights reserved.
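The reported group difference (6/26 vs 0/19, p = 0.032) is consistent with a two-sided Fisher's exact test; the record does not name the test, so this is an assumption. A short hand-rolled check using the hypergeometric distribution:

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test: sum the hypergeometric probabilities
    of all tables no more likely than the observed one."""
    (a, b), (c, d) = table
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)

    def p_of(k):  # P(k of the col1 'events' fall in the first group)
        return comb(row1, k) * comb(n - row1, col1 - k) / denom

    p_obs = p_of(a)
    lo, hi = max(0, col1 - (n - row1)), min(col1, row1)
    return sum(p_of(k) for k in range(lo, hi + 1) if p_of(k) <= p_obs + 1e-12)

# 6 of 26 deep-chested dogs vs 0 of 19 barrel-chested dogs with reflux
p = fisher_exact_two_sided([[6, 20], [0, 19]])
```

The computed p reproduces the reported value to three decimals, which supports the assumption.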

  7. Fission gas release during post irradiation annealing of large grain size fuels from Hinkley point B

    International Nuclear Information System (INIS)

    Killeen, J.C.

    1997-01-01

    A series of post-irradiation anneals has been carried out on fuel taken from an experimental stringer from Hinkley Point B AGR. The stringer was part of an experimental programme in the reactor to study the effect of large grain size fuel. Three differing fuel types were present in separate pins in the stringer. One variant of large grain size fuel had been prepared by using an MgO dopant during fuel manufacture, a second by high temperature sintering of standard fuel, and the third was a reference, 12μm grain size fuel. Both large grain size variants had similar grain sizes of around 35μm. The present experiments took fuel samples from highly rated pins from the stringer, with local burn-up in excess of 25GWd/tU, and annealed these at temperatures of up to 1535 deg. C under reducing conditions to allow a comparison of fission gas behaviour at high release levels. The results demonstrate the beneficial effect of large grain size on the release rate of ⁸⁵Kr following interlinkage. At low temperatures and release rates there was no difference between the fuel types, but at temperatures in excess of 1400 deg. C the release rate was found to be inversely dependent on the fuel grain size. The experiments showed some differences between the doped and undoped large grain size fuel in that the former became interlinked at a lower temperature, releasing fission gas at an increased rate at this temperature. At higher temperatures the grain size effect was dominant. The temperature dependence of fission gas release was determined over a narrow range of temperature and found to be similar for all three types and for both pre-interlinkage and post-interlinkage releases; the difference between the release rates is then seen to be controlled by grain size. (author). 4 refs, 7 figs, 3 tabs

  8. Fission gas release during post irradiation annealing of large grain size fuels from Hinkley point B

    Energy Technology Data Exchange (ETDEWEB)

    Killeen, J C [Nuclear Electric plc, Barnwood (United Kingdom)

    1997-08-01

    A series of post-irradiation anneals has been carried out on fuel taken from an experimental stringer from Hinkley Point B AGR. The stringer was part of an experimental programme in the reactor to study the effect of large grain size fuel. Three differing fuel types were present in separate pins in the stringer. One variant of large grain size fuel had been prepared by using an MgO dopant during fuel manufacture, a second by high temperature sintering of standard fuel, and the third was a reference, 12{mu}m grain size fuel. Both large grain size variants had similar grain sizes of around 35{mu}m. The present experiments took fuel samples from highly rated pins from the stringer, with local burn-up in excess of 25GWd/tU, and annealed these at temperatures of up to 1535 deg. C under reducing conditions to allow a comparison of fission gas behaviour at high release levels. The results demonstrate the beneficial effect of large grain size on the release rate of {sup 85}Kr following interlinkage. At low temperatures and release rates there was no difference between the fuel types, but at temperatures in excess of 1400 deg. C the release rate was found to be inversely dependent on the fuel grain size. The experiments showed some differences between the doped and undoped large grain size fuel in that the former became interlinked at a lower temperature, releasing fission gas at an increased rate at this temperature. At higher temperatures the grain size effect was dominant. The temperature dependence of fission gas release was determined over a narrow range of temperature and found to be similar for all three types and for both pre-interlinkage and post-interlinkage releases; the difference between the release rates is then seen to be controlled by grain size. (author). 4 refs, 7 figs, 3 tabs.

  9. Small-size pedestrian detection in large scene based on fast R-CNN

    Science.gov (United States)

    Wang, Shengke; Yang, Na; Duan, Lianghua; Liu, Lu; Dong, Junyu

    2018-04-01

    Pedestrian detection is a canonical sub-problem of object detection that has been in high demand in recent years. Although recent deep learning object detectors such as Fast/Faster R-CNN have shown excellent performance for general object detection, they have had limited success for small size pedestrian detection in large-view scenes. We find that the insufficient resolution of feature maps leads to unsatisfactory accuracy when handling small instances. In this paper, we investigate issues involving Fast R-CNN for pedestrian detection. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection based on Fast R-CNN, employing the DPM detector to generate proposals for accuracy, and training a Fast R-CNN style network to jointly optimize small size pedestrian detection, with skip connections concatenating features from different layers to address the coarseness of the feature maps. In our experiments, accuracy is improved for small size pedestrian detection in real large scenes.
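The skip-connection idea (combining coarse, semantically strong features with fine, high-resolution ones) can be illustrated with a toy array operation; this is a sketch of the general mechanism, not the paper's network:

```python
import numpy as np

def fuse_features(fine, coarse):
    """Concatenate a fine feature map with a 2x nearest-neighbour-upsampled
    coarse map along the channel axis -- the skip-connection idea used to
    restore spatial resolution for small instances. Shapes are (C, H, W),
    with the coarse map at half the spatial resolution of the fine map."""
    up = coarse.repeat(2, axis=1).repeat(2, axis=2)  # nearest-neighbour 2x upsample
    return np.concatenate([fine, up], axis=0)

fine = np.zeros((8, 16, 16))
coarse = np.ones((16, 8, 8))
fused = fuse_features(fine, coarse)  # channels stack: 8 + 16 = 24
```

In a real detector the upsampling would be learned (e.g. deconvolution) and the fused map fed to the region classifier.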

  10. Size matters: large objects capture attention in visual search.

    Science.gov (United States)

    Proulx, Michael J

    2010-12-23

    Can objects or events ever capture one's attention in a purely stimulus-driven manner? A recent review of the literature set out the criteria required to find stimulus-driven attentional capture independent of goal-directed influences, and concluded that no published study has satisfied those criteria. Here, visual search experiments assessed whether an irrelevantly large object can capture attention. Capture of attention by this static visual feature was found. The results suggest that a large object can indeed capture attention in a stimulus-driven manner, independent of display-wide features of the task that might encourage a goal-directed bias for large items. It is concluded that these results are either consistent with the stimulus-driven criteria published previously or, alternatively, consistent with a flexible, goal-directed mechanism of saliency detection.

  11. Fiber-chip edge coupler with large mode size for silicon photonic wire waveguides.

    Science.gov (United States)

    Papes, Martin; Cheben, Pavel; Benedikovic, Daniel; Schmid, Jens H; Pond, James; Halir, Robert; Ortega-Moñux, Alejandro; Wangüemert-Pérez, Gonzalo; Ye, Winnie N; Xu, Dan-Xia; Janz, Siegfried; Dado, Milan; Vašinek, Vladimír

    2016-03-07

    Fiber-chip edge couplers are extensively used in integrated optics for coupling of light between planar waveguide circuits and optical fibers. In this work, we report on a new fiber-chip edge coupler concept with large mode size for silicon photonic wire waveguides. The coupler allows direct coupling with conventional cleaved optical fibers with large mode size while circumventing the need for lensed fibers. The coupler is designed for the 220 nm silicon-on-insulator (SOI) platform. It exhibits an overall coupling efficiency exceeding 90%, as independently confirmed by 3D Finite-Difference Time-Domain (FDTD) and fully vectorial 3D Eigenmode Expansion (EME) calculations. We present two specific coupler designs, namely for a high numerical aperture single mode optical fiber with 6 µm mode field diameter (MFD) and a standard SMF-28 fiber with 10.4 µm MFD. An important advantage of our coupler concept is the ability to expand the mode at the chip edge without leading to high substrate leakage losses through the buried oxide (BOX), which in our design is set to 3 µm. This remarkable feature is achieved by implementing thin high-index Si₃N₄ layers in the SiO₂ upper cladding. The Si₃N₄ layers increase the effective refractive index of the upper cladding near the facet. The index is controlled along the taper by subwavelength refractive index engineering to facilitate adiabatic mode transformation to the silicon wire waveguide, while the Si-wire waveguide is inversely tapered along the coupler. The mode overlap optimization at the chip facet is carried out with a fully vectorial mode solver. The mode transformation along the coupler is studied using 3D-FDTD simulations and fully-vectorial 3D-EME calculations. The couplers are optimized for operation with transverse electric (TE) polarization and the operating wavelength is centered at 1.55 µm.

  12. Interlayer catalytic exfoliation realizing scalable production of large-size pristine few-layer graphene

    Science.gov (United States)

    Geng, Xiumei; Guo, Yufen; Li, Dongfang; Li, Weiwei; Zhu, Chao; Wei, Xiangfei; Chen, Mingliang; Gao, Song; Qiu, Shengqiang; Gong, Youpin; Wu, Liqiong; Long, Mingsheng; Sun, Mengtao; Pan, Gebo; Liu, Liwei

    2013-01-01

    Mass production of reduced graphene oxide and graphene nanoplatelets has recently been achieved. However, a great challenge remains in realizing large-quantity, high-quality production of large-size thin few-layer graphene (FLG). Here, we develop a novel route to address this issue by employing one-time-only interlayer catalytic exfoliation (ICE) of salt-intercalated graphite. Typical FLG with a large lateral size of tens of microns and a thickness of less than 2 nm has been obtained by mild and durative ICE. The high-quality graphene layers preserve intact basal crystal planes owing to avoidance of degradation reactions during both intercalation and ICE. Furthermore, we show that the high-quality FLG ensures remarkable lithium-storage stability (>1,000 cycles) and a large reversible specific capacity (>600 mAh g⁻¹). This simple and scalable technique for obtaining high-quality FLG offers considerable potential for realistic future applications.

  13. Semi-empirical formula for large pore-size estimation from o-Ps annihilation lifetime

    International Nuclear Information System (INIS)

    Nguyen Duc Thanh; Tran Quoc Dung; Luu Anh Tuyen; Khuong Thanh Tuan

    2007-01-01

    The o-Ps annihilation rate in large pores was investigated using a semi-classical approach. A semi-empirical formula that simply correlates the pore size with the o-Ps lifetime is proposed. The calculated results agree well with experiment for pore sizes ranging from a few angstroms to several tens of nanometers. (author)
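The record does not reproduce the formula itself. For orientation, below is a sketch of the classical Tao-Eldrup correlation for the small-pore regime, the model that semi-empirical large-pore formulas extend; the implementation and the conventional parameter ΔR = 1.66 Å are ours, not taken from this paper:

```python
import math

def o_ps_lifetime_ns(radius_angstrom, delta_r=1.66):
    """Classical Tao-Eldrup o-Ps lifetime (ns) in a spherical pore of radius R
    (in angstrom). Valid only for small pores (R below roughly 1 nm); for
    large pores the formula diverges instead of saturating near the vacuum
    o-Ps lifetime (~142 ns), which is what extended models correct."""
    r0 = radius_angstrom + delta_r  # pore radius plus electron-layer thickness
    x = radius_angstrom / r0
    rate = 2.0 * (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi))  # ns^-1
    return 1.0 / rate

print(o_ps_lifetime_ns(0.0))  # 0.5 ns: the fully overlapped (pick-off) limit
for r in (2.0, 3.0, 5.0):
    print(r, round(o_ps_lifetime_ns(r), 2))  # lifetime grows with pore radius
```

The lifetime increases monotonically with pore radius, which is why a measured o-Ps lifetime can be inverted to a pore-size estimate.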

  14. Large increase in nest size linked to climate change: an indicator of life history, senescence and condition.

    Science.gov (United States)

    Møller, Anders Pape; Nielsen, Jan Tøttrup

    2015-11-01

    Many animals build extravagant nests that exceed the size required for successful reproduction. Large nests may signal the parenting ability of nest builders suggesting that nests may have a signaling function. In particular, many raptors build very large nests for their body size. We studied nest size in the goshawk Accipiter gentilis, which is a top predator throughout most of the Nearctic. Both males and females build nests, and males provision their females and offspring with food. Nest volume in the goshawk is almost three-fold larger than predicted from their body size. Nest size in the goshawk is highly variable and may reach more than 600 kg for a bird that weighs ca. 1 kg. While 8.5% of nests fell down, smaller nests fell down more often than large nests. There was a hump-shaped relationship between nest volume and female age, with a decline in nest volume late in life, as expected for senescence. Clutch size increased with nest volume. Nest volume increased during 1977-2014 in an accelerating fashion, linked to increasing spring temperature during April, when goshawks build and start reproduction. These findings are consistent with nest size being a reliable signal of parental ability, with large nest size signaling superior parenting ability and senescence, and also indicating climate warming.

  15. Optimizing distance-based methods for large data sets

    Science.gov (United States)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have gained increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n²). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
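The abstract does not spell out the algorithm. One generic way to avoid O(n²) memory in distance-based statistics is to accumulate a fixed-size histogram of pairwise distances instead of materializing the full distance matrix; the sketch below illustrates only that idea, not the published algorithm:

```python
import numpy as np

def distance_histogram(points, bin_width, n_bins):
    """Histogram of all pairwise distances using O(n_bins) memory rather
    than an O(n^2) distance matrix (a sketch; the published method differs)."""
    counts = np.zeros(n_bins, dtype=np.int64)
    for i in range(len(points)):
        # distances from point i to all later points, one row at a time
        d = np.linalg.norm(points[i + 1:] - points[i], axis=1)
        idx = np.minimum((d / bin_width).astype(int), n_bins - 1)
        np.add.at(counts, idx, 1)
    return counts

rng = np.random.default_rng(0)
pts = rng.random((500, 2))
h = distance_histogram(pts, bin_width=0.1, n_bins=15)
assert h.sum() == 500 * 499 // 2  # every unordered pair counted exactly once
```

Running time is still quadratic here; the memory footprint, however, stays constant in n, which is the property the abstract highlights.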

  16. Statistical characteristics and stability index (SI) of large-sized landslide dams around the world

    International Nuclear Information System (INIS)

    Iqbal, J.; Dai, F.; Raja, I.A.

    2014-01-01

    In the last few decades, landslide dams have received greater attention from researchers, as they have caused losses of property and human lives. Over 261 large-sized landslide dams from different countries of the world, each with a volume greater than 1 × 10⁵ m³, have been reviewed for this study. The data collected for this study show that 58% of the catastrophic landslides were triggered by earthquakes and 21% by rainfall, revealing that earthquakes and rainfall are the two major triggers, together accounting for 75% of large-sized landslide dams. These landslides were most frequent during the last two decades (1990-2010) throughout the world. The mean landslide dam volume of the studied cases was 53.39 × 10 m with a mean dam height of 71.98 m, while the mean lake volume was found to be 156.62 × 10 m. Failure of these large landslide dams poses a severe threat to the property and people living downstream, hence immediate attention is required to deal with this problem. A stability index (SI) has been derived on the basis of 59 large-sized landslide dams (out of the 261 dams) with complete parametric information. (author)

  17. Comparison of Hounsfield units by changing in size of physical area and setting size of region of interest by using the CT phantom made with a 3D printer

    International Nuclear Information System (INIS)

    Seung, Youl Hun

    2015-01-01

    In this study, we observed changes in Hounsfield units (HU) caused by changes in the size of the physical area and in the set size of the region of interest (ROI), with a focus on kVp and mAs. Four-channel multi-detector computed tomography was used to obtain transverse axial images and HU values. A three-dimensional printer of the fused deposition modeling (FDM) type was used to produce the phantom. The phantom was designed as a cylinder containing symmetrically located circular holes of 33 mm, 24 mm, 19 mm, 16 mm and 9 mm, which were filled with a mixture of iodine contrast agent and distilled water. Images were acquired at 90 kVp, 120 kVp and 140 kVp and at 50 mAs, 100 mAs and 150 mAs, respectively. ImageJ was used to measure the HU of the ROIs in the acquired images. As a result, it was confirmed that kVp affects HU more than mAs. The results also suggest that the smaller the physical area, the lower the HU, even in a material of uniform density, and the smaller the set ROI size, the higher the HU. Therefore, setting the maximum ROI is the best way to keep the HU variation within 5 HU despite changes in the size of the physical area and the set size of the ROI.

  18. Comparison of Hounsfield units by changing in size of physical area and setting size of region of interest by using the CT phantom made with a 3D printer

    Energy Technology Data Exchange (ETDEWEB)

    Seung, Youl Hun [Dept. of Radiological Science, Cheongju University, Cheongju (Korea, Republic of)

    2015-12-15

    In this study, we observed changes in Hounsfield units (HU) caused by changes in the size of the physical area and in the set size of the region of interest (ROI), with a focus on kVp and mAs. Four-channel multi-detector computed tomography was used to obtain transverse axial images and HU values. A three-dimensional printer of the fused deposition modeling (FDM) type was used to produce the phantom. The phantom was designed as a cylinder containing symmetrically located circular holes of 33 mm, 24 mm, 19 mm, 16 mm and 9 mm, which were filled with a mixture of iodine contrast agent and distilled water. Images were acquired at 90 kVp, 120 kVp and 140 kVp and at 50 mAs, 100 mAs and 150 mAs, respectively. ImageJ was used to measure the HU of the ROIs in the acquired images. As a result, it was confirmed that kVp affects HU more than mAs. The results also suggest that the smaller the physical area, the lower the HU, even in a material of uniform density, and the smaller the set ROI size, the higher the HU. Therefore, setting the maximum ROI is the best way to keep the HU variation within 5 HU despite changes in the size of the physical area and the set size of the ROI.
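The measurement described above (mean HU within circular ROIs of different sizes, done interactively in ImageJ) can also be scripted. A minimal sketch on a synthetic image, not real CT data; the function name and values are illustrative:

```python
import numpy as np

def mean_in_circular_roi(image, center, radius):
    """Mean pixel value inside a circular ROI, the quantity ImageJ reports
    with Analyze > Measure for an oval selection."""
    yy, xx = np.indices(image.shape)
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return image[mask].mean()

# Synthetic slice: a bright 20-px-radius disc (a contrast-filled hole)
# on a darker uniform background.
img = np.full((101, 101), -10.0)
yy, xx = np.indices(img.shape)
img[(yy - 50) ** 2 + (xx - 50) ** 2 <= 20 ** 2] = 100.0

print(mean_in_circular_roi(img, (50, 50), 10))  # 100.0: ROI fully inside the disc
print(mean_in_circular_roi(img, (50, 50), 40))  # lower: background pixels included
```

The second call shows mechanically how an oversized ROI pulls the measured mean away from the material value, one of the ROI-size effects the study quantifies.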

  19. A comparison of workplace safety perceptions among financial decision-makers of medium- vs. large-size companies.

    Science.gov (United States)

    Huang, Yueng-Hsiang; Leamon, Tom B; Courtney, Theodore K; Chen, Peter Y; DeArmond, Sarah

    2011-01-01

    This study, through a random national survey in the U.S., explored how corporate financial decision-makers perceive important workplace safety issues as a function of the size of the company for which they worked (medium- vs. large-size companies). Telephone surveys were conducted with 404 U.S. corporate financial decision-makers: 203 from medium-size companies and 201 from large companies. Results showed that the patterns of responding for participants from medium- and large-size companies were somewhat similar. The top-rated safety priorities in resource allocation reported by participants from both groups were overexertion, repetitive motion, and bodily reaction. They believed that there were direct and indirect costs associated with workplace injuries and that for every dollar spent improving workplace safety, more than four dollars would be returned. They perceived the top benefits of an effective safety program to be predominantly financial in nature - increased productivity and reduced costs - and the safety modification participants mentioned most often was to have more/better safety-focused training. However, more participants from large- than medium-size companies reported that "falling on the same level" was the major cause of workers' compensation loss, which is in line with industry loss data. Participants from large companies were more likely to see their safety programs as better than those of other companies in their industries, and those of medium-size companies were more likely to mention that there were no improvements needed for their companies. Copyright © 2009 Elsevier Ltd. All rights reserved.

  20. Making sense of large data sets without annotations: analyzing age-related correlations from lung CT scans

    Science.gov (United States)

    Dicente Cid, Yashin; Mamonov, Artem; Beers, Andrew; Thomas, Armin; Kovalev, Vassili; Kalpathy-Cramer, Jayashree; Müller, Henning

    2017-03-01

    The analysis of large data sets can help to gain knowledge about specific organs or specific diseases, just as big data analysis does in many non-medical areas. This article aims to gain information from 3D volumes, i.e., the visual content of lung CT scans of a large number of patients. In the case of the described data set, only limited annotation is available on the patients, who were all part of an ongoing screening program; besides age and gender, no information on the patients or the findings was available for this work. This is a scenario that can occur regularly, as image data sets are produced and become available in increasingly large quantities, but manual annotations are often not available, and clinical data such as text reports are often harder to share. We extracted a set of visual features from 12,414 CT scans of 9,348 patients who had CT scans of the lung taken in the context of a national lung screening program in Belarus. Lung fields were segmented by two segmentation algorithms, and only cases where both algorithms were able to find the left and right lung and had a Dice coefficient above 0.95 were analyzed. This ensures that only segmentations of good quality were used to extract features of the lung. Patients ranged in age from 0 to 106 years. Data analysis shows that age can be predicted with fairly high accuracy for persons under 15 years. Relatively good results were also obtained between 30 and 65 years, where a steady trend is seen. For young adults and older people the results are not as good, as variability is very high in these groups. Several visualizations of the data show the evolution patterns of the lung texture, size and density with age. The experiments allow learning about the evolution of the lung, and the results show that even with limited metadata we can extract interesting information from large-scale visual data. These age-related changes (for example of the lung volume, the density histogram of the tissue) can also be

  1. BAND STRUCTURE OF NON-STOICHIOMETRIC LARGE-SIZED NANOCRYSTALLITES

    Directory of Open Access Journals (Sweden)

    I.V.Kityk

    2004-01-01

    The band structure of large-sized (from 20 to 35 nm) non-stoichiometric nanocrystallites (NC) of Si2-xCx (1.04 < x < 1.10) has been investigated using different band energy approaches and a modified Car-Parrinello molecular dynamics structure optimization of the NC interfaces. The non-stoichiometric excess of carbon favors the appearance of a thin, prevailingly carbon-containing layer (with a thickness of about 1 nm) covering the crystallites. As a consequence, one can observe a substantial structural reconstruction of the boundary SiC crystalline layers. The numerical modeling has shown that these NC can be considered as SiC reconstructed crystalline films with a thickness of about 2 nm covering the SiC crystallites. The observed data are considered within different one-electron band structure methods. It was shown that the nano-sized carbon sheet plays a key role in the modified band structure. An independent manifestation of the important role played by the reconstructed confined layers is the experimentally discovered excitonic-like resonances. Low-temperature absorption measurements confirm the existence of sharp absorption resonances originating from the reconstructed layers.

  2. A conceptual analysis of standard setting in large-scale assessments

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1994-01-01

    Elements of arbitrariness in the standard setting process are explored, and an alternative to the use of cut scores is presented. The first part of the paper analyzes the use of cut scores in large-scale assessments, discussing three different functions: (1) cut scores define the qualifications used

  3. Fatigue-crack propagation in gamma-based titanium aluminide alloys at large and small crack sizes

    International Nuclear Information System (INIS)

    Kruzic, J.J.; Campbell, J.P.; Ritchie, R.O.

    1999-01-01

    Most evaluations of the fracture and fatigue-crack propagation properties of γ + α₂ titanium aluminide alloys to date have been performed using standard large-crack samples, e.g., compact-tension specimens containing crack sizes on the order of tens of millimeters, i.e., large compared to microstructural dimensions. However, these alloys have been targeted for applications, such as blades in gas-turbine engines, where relevant crack sizes are much smaller. Here, fatigue-crack growth is compared for large (> 5 mm) and small (c ≅ 25-300 µm) cracks in a γ-TiAl based alloy, of composition Ti-47Al-2Nb-2Cr-0.2B (at.%), specifically for duplex (average grain size approximately 17 µm) and refined lamellar (average colony size ≅ 150 µm) microstructures. It is found that, whereas the lamellar microstructure displays far superior fracture toughness and fatigue-crack growth resistance in the presence of large cracks, in small-crack testing the duplex microstructure exhibits a better combination of properties. The reasons for such contrasting behavior are examined in terms of the intrinsic and extrinsic (i.e., crack bridging) contributions to cyclic crack advance.

  4. Visualization of diversity in large multivariate data sets.

    Science.gov (United States)

    Pham, Tuan; Hess, Rob; Ju, Crystal; Zhang, Eugene; Metoyer, Ronald

    2010-01-01

    Understanding the diversity of a set of multivariate objects is an important problem in many domains, including ecology, college admissions, investing, machine learning, and others. However, to date, very little work has been done to help users achieve this kind of understanding. Visual representation is especially appealing for this task because it offers the potential to allow users to efficiently observe the objects of interest in a direct and holistic way. Thus, in this paper, we attempt to formalize the problem of visualizing the diversity of a large (more than 1000 objects), multivariate (more than 5 attributes) data set as one worth deeper investigation by the information visualization community. In doing so, we contribute a precise definition of diversity, a set of requirements for diversity visualizations based on this definition, and a formal user study design intended to evaluate the capacity of a visual representation for communicating diversity information. Our primary contribution, however, is a visual representation, called the Diversity Map, for visualizing diversity. An evaluation of the Diversity Map using our study design shows that users can judge elements of diversity consistently and as or more accurately than when using the only other representation specifically designed to visualize diversity.

  5. Influence Factors of Sports Bra Evaluation and Design Based on Large Size

    Directory of Open Access Journals (Sweden)

    Zhang Lingxi

    2016-01-01

    The purpose of this paper was to find the main influence factors in sports bra evaluation through subjective assessment of different styles of commercial sports bras, and to summarize the design elements of sports bras for large sizes. Ten women of large size (>C80) were chosen to evaluate 9 different sports bras. The main influence factors were extracted by factor analysis and all the product samples were classified by Q-cluster analysis. The conclusions show that breast stability, wearing comfort and bust modelling are the three key factors in sports bra evaluation, and a classification-positioning model of sports bra products was established. The findings can provide a theoretical basis and guidance for the research and design of sports bras for both academia and sports and underwear enterprises, and also provide a useful reference for female customers.

  6. Preparation and provisional validation of a large size dried spike: Batch SAL-9931

    International Nuclear Information System (INIS)

    Jammet, G.; Zoigner, A.; Doubek, N.; Grabmueller, G.; Bagliano, G.

    1990-05-01

    To determine uranium and plutonium concentration using isotope dilution mass spectrometry, weighed aliquands of a synthetic mixture containing about 2 mg of Pu (with a ²³⁹Pu abundance of about 98%) and 40 mg of U (with a ²³⁵U enrichment of about 19%) have been prepared and verified by SAL, to be used to spike samples of concentrated spent fuel solutions with a high burn-up and a low ²³⁵U enrichment. The advantages of such a Large Size Dried (LSD) Spike have been pointed out elsewhere and proof of its usefulness in the field reported. Certified Reference Materials Pu-NBL-126, natural U-NBS-960 and 93% enriched U-NBL-116 were used to prepare a stock solution containing 1.8 mg/ml of Pu and 37.3 mg/ml of 19.4% enriched U. Before shipment to the reprocessing plant, aliquands of the stock solution are dried to give Large Size Dried Spikes which resist the shocks encountered during transportation, so that they can readily be recovered quantitatively at the plant. This paper describes the preparation and validation of a Large Size Dried Spike which is intended to be used as a common spike by the plant operator, the national inspectorate and the IAEA inspectorate. 6 refs, 7 tabs

  7. The research of the quantitative prediction of the deposits concentrated regions of the large and super-large sized mineral deposits in China

    International Nuclear Information System (INIS)

    Zhao Zhenyu; Wang Shicheng

    2003-01-01

    Using the general theory and methods of synthetic-information mineral resources prognosis, location and quantity predictions of large and super-large sized solid mineral deposits were developed for China at the 1:5,000,000 scale. The deposit concentrated region is the model unit and the anomaly concentrated region is the prediction unit. The synthetic-information mineral prognosis is carried out on a GIS platform. The technical route and working methods for locating large and super-large sized mineral resources are described, together with the basic principles for compiling the attribute tables of the predictor and response variables. For the prediction of resource quantity, locative and quantitative predictions are carried out using quantification theory III and corresponding characteristic analysis, respectively, and the two methods are compared. The results are important and helpful for resource prediction in the ten western provinces of China. (authors)

  8. New Sequences with Low Correlation and Large Family Size

    Science.gov (United States)

    Zeng, Fanxin

    In direct-sequence code-division multiple-access (DS-CDMA) communication systems and direct-sequence ultra wideband (DS-UWB) radios, sequences with low correlation and large family size are important for reducing multiple access interference (MAI) and accepting more active users, respectively. In this paper, a new collection of families of sequences of length p^n − 1, which includes three constructions, is proposed. The maximum number of cyclically distinct families without GMW sequences in each construction is φ(p^n − 1)/n · φ(p^m − 1)/m, where p is a prime number, n is an even number, and n = 2m; these sequences can be binary or polyphase depending upon the choice of the parameter p. In Construction I, there are p^n distinct sequences within each family and the new sequences have at most d+2 nontrivial periodic correlation values {−p^m − 1, −1, p^m − 1, 2p^m − 1, …, dp^m − 1}. In Construction II, the new sequences have large family size p^{2n} and possibly take nontrivial correlation values in {−p^m − 1, −1, p^m − 1, 2p^m − 1, …, (3d−4)p^m − 1}. In Construction III, the new sequences possess the largest family size p^{(d−1)n} and have at most 2d correlation levels {−p^m − 1, −1, p^m − 1, 2p^m − 1, …, (2d−2)p^m − 1}. The three constructions are near-optimal with respect to the Welch bound because their Welch ratios are moderate, WR ≈ d, WR ≈ 3d−4 and WR ≈ 2d−2, respectively. Each family in Constructions I, II and III contains a GMW sequence. In addition, Helleseth sequences and Niho sequences are special cases of Constructions I and III, and their restriction conditions on the integers m and n, p^m ≠ 2 (mod 3) and n ≡ 0 (mod 4), respectively, are removed in our sequences. Our sequences in Construction III also include the sequences with Niho-type decimation 3·2^m − 2. Finally, some open questions are pointed out and an example illustrating the performance of these sequences is given.
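The constructions themselves need finite-field machinery beyond this abstract, but the quantity they control, periodic (cyclic) correlation, is easy to illustrate. A hedged sketch using a length-7 m-sequence (our example, not one of the paper's families):

```python
import numpy as np

def periodic_correlation(a, b):
    """All periodic (cyclic) correlation values between two ±1 sequences of
    equal length; the peak of these values is what the Welch bound limits."""
    a, b = np.asarray(a), np.asarray(b)
    return np.array([int(np.dot(a, np.roll(b, -t))) for t in range(len(a))])

# A length-7 m-sequence in ±1 form has the ideal two-valued
# autocorrelation {N, -1}:
m_seq = np.array([1, 1, 1, -1, 1, -1, -1])
print(periodic_correlation(m_seq, m_seq))  # [ 7 -1 -1 -1 -1 -1 -1]
```

Families like those above trade a slightly larger set of correlation levels (e.g., {−p^m − 1, −1, p^m − 1, …}) for a much larger number of sequences, which is the engineering compromise the abstract describes.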

  9. Teaching the Assessment of Normality Using Large Easily-Generated Real Data Sets

    Science.gov (United States)

    Kulp, Christopher W.; Sprechini, Gene D.

    2016-01-01

    A classroom activity is presented, which can be used in teaching students statistics with an easily generated, large, real-world data set. The activity consists of analyzing a video recording of an object. The colour data of the recorded object can then be used as a data set to explore variation in the data using graphs including histograms,…
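The record does not include the analysis itself. As a sketch of the kind of numeric normality check students might run on such colour data (the data here are generated stand-ins, not from the activity):

```python
import numpy as np

def skewness(x):
    """Sample skewness: near 0 for symmetric (e.g., normal) data."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

rng = np.random.default_rng(1)
normal_like = rng.normal(loc=128, scale=10, size=5000)  # stand-in colour channel
skewed = rng.exponential(scale=10, size=5000)           # clearly non-normal

print(round(skewness(normal_like), 1))  # ≈ 0.0
print(round(skewness(skewed), 1))       # ≈ 2.0 (exponential skewness is 2)
```

In class this numeric check would sit alongside the graphical ones the activity uses (histograms, and optionally Q-Q plots or a formal test such as Shapiro-Wilk via `scipy.stats.shapiro`).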

  10. 13 CFR 121.412 - What are the size procedures for partial small business set-asides?

    Science.gov (United States)

    2010-01-01

    ... Requirements for Government Procurement § 121.412 What are the size procedures for partial small business set... portion of a procurement, and is not required to qualify as a small business for the unrestricted portion. ...

  11. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    Science.gov (United States)

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

    The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in the recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with low taxon coverage densities. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates with data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.
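The taxon coverage density used above is defined in the record as the proportion of taxon-by-gene positions with any data present. A minimal sketch of computing it from a presence/absence matrix (the matrix here is illustrative, not from the surveyed data sets):

```python
import numpy as np

def taxon_coverage_density(presence):
    """Proportion of taxon-by-gene cells with any data present,
    the coverage density discussed above."""
    presence = np.asarray(presence, dtype=bool)
    return presence.mean()

# 4 taxa x 3 genes; True = at least one site sampled for that taxon/gene pair.
matrix = [[True, True, False],
          [True, True, False],
          [True, True, True],
          [True, False, True]]
print(taxon_coverage_density(matrix))  # 0.75 (9 of 12 cells have data)
```

Low values of this density are exactly the condition under which the study finds large terraces.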

  12. Influences of large sets of environmental exposures on immune responses in healthy adult men.

    Science.gov (United States)

    Yi, Buqing; Rykova, Marina; Jäger, Gundula; Feuerecker, Matthias; Hörl, Marion; Matzel, Sandra; Ponomarev, Sergey; Vassilieva, Galina; Nichiporuk, Igor; Choukèr, Alexander

    2015-08-26

    Environmental factors have long been known to influence immune responses. In particular, clinical studies about the association between migration and increased risk of atopy/asthma have provided important information on the role of migration-associated large sets of environmental exposures in the development of allergic diseases. However, investigations of environmental effects on immune responses have mostly been limited to candidate environmental exposures, such as air pollution. The influences of large sets of environmental exposures on immune responses are still largely unknown. A simulated 520-d Mars mission provided an opportunity to investigate this topic. Six healthy males lived in a closed habitat simulating a spacecraft for 520 days. When they exited their "spacecraft" after the mission, the scenario was similar to that of migration, involving exposure to a new set of environmental pollutants and allergens. We measured multiple immune parameters with blood samples at chosen time points after the mission. At the early adaptation stage, highly enhanced cytokine responses were observed upon ex vivo antigen stimulation. For cell population frequencies, we found that the subjects displayed increased neutrophil frequencies. These results may presumably represent the immune changes that occur in healthy humans when migrating, indicating that large sets of environmental exposures may trigger aberrant immune activity.

  13. Genome size variation affects song attractiveness in grasshoppers: evidence for sexual selection against large genomes.

    Science.gov (United States)

    Schielzeth, Holger; Streitner, Corinna; Lampe, Ulrike; Franzke, Alexandra; Reinhold, Klaus

    2014-12-01

    Genome size is largely uncorrelated to organismal complexity and adaptive scenarios. Genetic drift as well as intragenomic conflict have been put forward to explain this observation. We here study the impact of genome size on sexual attractiveness in the bow-winged grasshopper Chorthippus biguttulus. Grasshoppers show particularly large variation in genome size due to the high prevalence of supernumerary chromosomes that are considered (mildly) selfish, as evidenced by non-Mendelian inheritance and fitness costs if present in high numbers. We ranked male grasshoppers by song characteristics that are known to affect female preferences in this species and scored genome sizes of attractive and unattractive individuals from the extremes of this distribution. We find that attractive singers have significantly smaller genomes, demonstrating that genome size is reflected in male courtship songs and that females prefer songs of males with small genomes. Such a genome size dependent mate preference effectively selects against selfish genetic elements that tend to increase genome size. The data therefore provide a novel example of how sexual selection can reinforce natural selection and can act as an agent in an intragenomic arms race. Furthermore, our findings indicate an underappreciated route of how choosy females could gain indirect benefits. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.

  14. Details Matter: Noise and Model Structure Set the Relationship between Cell Size and Cell Cycle Timing

    Directory of Open Access Journals (Sweden)

    Felix Barber

    2017-11-01

    Organisms across all domains of life regulate the size of their cells. However, the means by which this is done is poorly understood. We study two abstracted “molecular” models for size regulation: inhibitor dilution and initiator accumulation. We apply the models to two settings: bacteria like Escherichia coli, that grow fully before they set a division plane and divide into two equally sized cells, and cells that form a bud early in the cell division cycle, confine new growth to that bud, and divide at the connection between that bud and the mother cell, like the budding yeast Saccharomyces cerevisiae. In budding cells, delaying cell division until buds reach the same size as their mother leads to very weak size control, with average cell size and standard deviation of cell size increasing over time and saturating up to 100-fold higher than those values for cells that divide when the bud is still substantially smaller than its mother. In budding yeast, both inhibitor dilution and initiator accumulation models are consistent with the observation that the daughters of diploid cells add a constant volume before they divide. This “adder” behavior has also been observed in bacteria. We find that in bacteria an inhibitor dilution model produces adder correlations that are not robust to noise in the timing of DNA replication initiation or in the timing from initiation of DNA replication to cell division (the C+D period). In contrast, in bacteria an initiator accumulation model yields robust adder correlations in the regime where noise in the timing of DNA replication initiation is much greater than noise in the C+D period, as reported previously (Ho and Amir, 2015). In bacteria, division into two equally sized cells does not broaden the size distribution.
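The “adder” behavior mentioned above has a simple statistical signature: the volume added between birth and division is independent of birth size, so regressing division size on birth size gives a slope of ≈ 1 (a pure sizer gives ≈ 0, a timer with exponential growth gives ≈ 2). A minimal simulation sketch, with parameters of our choosing rather than the paper's model details:

```python
import numpy as np

# Adder sketch: each cycle the cell adds volume ~N(delta, noise) regardless
# of its birth size, then divides symmetrically (as in E. coli).
rng = np.random.default_rng(42)
delta, n = 1.0, 20000
birth = np.empty(n)
division = np.empty(n)
v = 1.0
for i in range(n):
    birth[i] = v
    v = v + delta + 0.1 * rng.standard_normal()  # added volume is size-independent
    division[i] = v
    v = v / 2.0                                   # split into two equal halves

slope = np.polyfit(birth, division, 1)[0]
print(round(slope, 2))  # ≈ 1.0, the adder signature
```

In the models above the question is which molecular mechanism (inhibitor dilution vs. initiator accumulation) produces this slope-1 correlation robustly in the presence of timing noise.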

  15. Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures

    Science.gov (United States)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.

    2016-12-01

    The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the-art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which offers several new features, among them an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.
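    As a point of reference for the optimizations described above, a plain serial version of Lloyd's k-means (the baseline that vectorized, manycore implementations accelerate) can be sketched as follows; all names are illustrative:

```python
import random

def kmeans(points, k, iters=25, seed=0):
    """Plain Lloyd's k-means for 2-D points. The nested distance loop in the
    assignment step is the hot spot that SIMD/manycore versions vectorize."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assignment step: nearest center per point
            nearest = min(range(k),
                          key=lambda i: (p[0] - centers[i][0]) ** 2
                                      + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):  # update step: new centroids
            if members:
                centers[i] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return centers

blob_a = [(0.1 * i, 0.1 * j) for i in range(5) for j in range(5)]
blob_b = [(10 + 0.1 * i, 10 + 0.1 * j) for i in range(5) for j in range(5)]
centers = sorted(kmeans(blob_a + blob_b, k=2))
```

    The distance evaluations are O(N·k) per iteration and carry no dependencies across points, which is what makes the assignment step amenable to wide SIMD lanes and manycore parallelism.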

  16. Settings and artefacts relevant for Doppler ultrasound in large vessel vasculitis

    DEFF Research Database (Denmark)

    Terslev, L; Diamantopoulos, A P; Døhn, U Møller

    2017-01-01

    Ultrasound is used increasingly for diagnosing large vessel vasculitis (LVV). The application of Doppler in LVV is very different from in arthritic conditions. This paper aims to explain the most important Doppler parameters, including spectral Doppler, and how the settings differ from those used...

  17. RESOURCE SAVING TECHNOLOGICAL PROCESS OF LARGE-SIZE DIE THERMAL TREATMENT

    Directory of Open Access Journals (Sweden)

    L. A. Glazkov

    2009-01-01

    Full Text Available The paper presents the development of a technological process for hardening large-size parts made of die steel. The proposed process applies a water-air mixture instead of the conventional hardening medium, industrial oil. While developing this new technological process it has been necessary to solve the following problems: reduction of thermal treatment duration, reduction of power resource expense (natural gas and mineral oil), elimination of fire danger and increase of process ecological efficiency. 

  18. Secondary data analysis of large data sets in urology: successes and errors to avoid.

    Science.gov (United States)

    Schlomer, Bruce J; Copp, Hillary L

    2014-03-01

    Secondary data analysis is the use of data collected for research by someone other than the investigator. In the last several years there has been a dramatic increase in the number of these studies being published in urological journals and presented at urological meetings, especially those involving secondary analysis of large administrative data sets. Along with this expansion, skepticism toward secondary data analysis studies has increased among many urologists. In this narrative review we describe the types of large data sets that are commonly used for secondary data analysis in urology and discuss the advantages and disadvantages of secondary data analysis. A literature search was performed to identify urological secondary data analysis studies published since 2008 using commonly used large data sets, and examples of high-quality studies published in high-impact journals are given. We outline an approach for performing a successful hypothesis- or goal-driven secondary data analysis study and highlight common errors to avoid. More than 350 secondary data analysis studies using large data sets have been published on urological topics since 2008, with likely many more presented at meetings but never published. Studies that were not hypothesis or goal driven have likely made up some of this output and probably contributed to the increased skepticism toward this type of research. However, many high-quality, hypothesis-driven studies addressing research questions that would have been difficult to investigate with other methods have been performed in the last few years. Secondary data analysis is a powerful tool that can address questions which could not be adequately studied by another method. Knowledge of the limitations of secondary data analysis and of the data sets used is critical for a successful study. There are also important errors to avoid when planning and performing a secondary data analysis study. 
Investigators and the urological community need to strive to use

  19. Optical and thermal performance of large-size parabolic-trough solar collectors from outdoor experiments: A test method and a case study

    International Nuclear Information System (INIS)

    Valenzuela, Loreto; López-Martín, Rafael; Zarza, Eduardo

    2014-01-01

    This article presents an outdoor test method to evaluate the optical and thermal performance of parabolic-trough collectors of large size (length ≥ 100 m), similar to those currently installed in solar thermal power plants. Optical performance in line-focus collectors is defined by three parameters: peak optical efficiency and the longitudinal and transversal incidence angle modifiers. In parabolic-troughs, the transversal incidence angle modifier is usually assumed equal to 1, so the incidence angle modifier reduces to the longitudinal one, a factor less than or equal to 1 that must be quantified. These measurements are performed by operating the collector at low fluid temperatures to reduce heat losses. Thermal performance is measured during tests at various operating temperatures, which are defined within the working temperature range of the solar field, and for the condition of maximum optical response. Heat losses are measured both from the experiments performed to measure the overall efficiency and from experiments in which the collector is operated so that the absorber pipes are not exposed to concentrated solar radiation. The set of parameters describing the performance of a large-size parabolic-trough collector has been measured following the test procedures proposed and explained in the article. - Highlights: • Outdoor test procedures for parabolic-trough solar collectors (PTCs) of large size working at high temperature are described. • Optical performance is measured at cold fluid temperature and thermal performance over the complete temperature range. • Experimental data obtained in the testing of a PTC prototype are explained

  20. Security Optimization for Distributed Applications Oriented on Very Large Data Sets

    Directory of Open Access Journals (Sweden)

    Mihai DOINEA

    2010-01-01

    Full Text Available The paper presents the main characteristics of applications that work with very large data sets and the related security issues. The first section addresses the optimization process and how it is approached when dealing with security. The second section describes the concept of managing very large data sets, while the third section identifies and classifies the related risks. Finally, a security optimization schema is presented together with a cost-efficiency analysis of its feasibility. Conclusions are drawn and future approaches are identified.

  1. Processing and properties of large-sized ceramic slabs

    Directory of Open Access Journals (Sweden)

    Fossa, L.

    2010-10-01

    Full Text Available Large-sized ceramic slabs – with dimensions up to 360x120 cm2 and thickness down to 2 mm – are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability in a rather wide window of firing schedules. The phase composition and compact microstructure of fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performances as well as mechanical and tribological properties conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestows on the slab a certain degree of flexibility, which is emphasized in ceramic-fiberglass composites. These outstanding performances make the large-sized slabs suitable for novel applications: building and construction (new floorings without dismantling the previous paving, ventilated façades, tunnel coverings, insulating panelling), indoor furniture (table tops, doors), and supports for photovoltaic ceramic panels.

    Large-format slabs, with dimensions of up to 360x120 cm and thicknesses below 2 mm, have been manufactured using innovative production methods, starting from porcelain stoneware compositions and employing wet ball milling, spray drying, slow-rate die-less pressing, fast single-stage drying and firing, and a finishing stage that includes bonding fiberglass to the ceramic support and grinding the final piece. ...

  2. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    Science.gov (United States)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively long simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution

  3. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    International Nuclear Information System (INIS)

    Dednam, W; Botha, A E

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively long simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution
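    The particle-counting route mentioned in both records above can be made concrete for a single species: for an open sub-volume V, the Kirkwood-Buff integral follows from the number fluctuations as G = V(⟨N²⟩ − ⟨N⟩²)/⟨N⟩² − V/⟨N⟩, which vanishes for an ideal gas, where the counts are Poisson distributed. A self-contained sketch (all parameters illustrative, not from the paper):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for moderate lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def kb_integral(counts, subvolume):
    """Single-species Kirkwood-Buff integral from number fluctuations in an
    open sub-volume: G = V*var(N)/<N>^2 - V/<N>. Ideal gas => G = 0."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return subvolume * var / mean ** 2 - subvolume / mean

rng = random.Random(2)
counts = [poisson(50.0, rng) for _ in range(20000)]
G = kb_integral(counts, subvolume=1.0)   # close to zero for Poisson counts
```

    In a real simulation the counts would come from sampling particle numbers in sub-volumes embedded in the larger cell, and the finite-size scaling step extrapolates G over a range of sub-volume sizes.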

  4. Generating a taxonomy of spatially cued attention for visual discrimination: Effects of judgment precision and set size on attention

    Science.gov (United States)

    Hetley, Richard; Dosher, Barbara Anne; Lu, Zhong-Lin

    2014-01-01

    Attention precues improve the performance of perceptual tasks in many but not all circumstances. These spatial attention effects may depend upon display set size or workload, and have been variously attributed to external noise filtering, stimulus enhancement, contrast gain, or response gain, or to uncertainty or other decision effects. In this study, we document systematically different effects of spatial attention in low- and high-precision judgments, with and without external noise, and in different set sizes in order to contribute to the development of a taxonomy of spatial attention. An elaborated perceptual template model (ePTM) provides an integrated account of a complex set of effects of spatial attention with just two attention factors: a set-size dependent exclusion or filtering of external noise and a narrowing of the perceptual template to focus on the signal stimulus. These results are related to the previous literature by classifying the judgment precision and presence of external noise masks in those experiments, suggesting a taxonomy of spatially cued attention in discrimination accuracy. PMID:24939234

  5. Generating a taxonomy of spatially cued attention for visual discrimination: effects of judgment precision and set size on attention.

    Science.gov (United States)

    Hetley, Richard; Dosher, Barbara Anne; Lu, Zhong-Lin

    2014-11-01

    Attention precues improve the performance of perceptual tasks in many but not all circumstances. These spatial attention effects may depend upon display set size or workload, and have been variously attributed to external noise filtering, stimulus enhancement, contrast gain, or response gain, or to uncertainty or other decision effects. In this study, we document systematically different effects of spatial attention in low- and high-precision judgments, with and without external noise, and in different set sizes in order to contribute to the development of a taxonomy of spatial attention. An elaborated perceptual template model (ePTM) provides an integrated account of a complex set of effects of spatial attention with just two attention factors: a set-size dependent exclusion or filtering of external noise and a narrowing of the perceptual template to focus on the signal stimulus. These results are related to the previous literature by classifying the judgment precision and presence of external noise masks in those experiments, suggesting a taxonomy of spatially cued attention in discrimination accuracy.

  6. PeptideNavigator: An interactive tool for exploring large and complex data sets generated during peptide-based drug design projects.

    Science.gov (United States)

    Diller, Kyle I; Bayden, Alexander S; Audie, Joseph; Diller, David J

    2018-01-01

    There is growing interest in peptide-based drug design and discovery. Due to their relatively large size, polymeric nature, and chemical complexity, the design of peptide-based drugs presents an interesting "big data" challenge. Here, we describe an interactive computational environment, PeptideNavigator, for naturally exploring the tremendous amount of information generated during a peptide drug design project. The purpose of PeptideNavigator is the presentation of large and complex experimental and computational data sets, particularly 3D data, so as to enable multidisciplinary scientists to make optimal decisions during a peptide drug discovery project. PeptideNavigator provides users with numerous viewing options, such as scatter plots, sequence views, and sequence frequency diagrams. These views allow for the collective visualization and exploration of many peptides and their properties, ultimately enabling the user to focus on a small number of peptides of interest. To drill down into the details of individual peptides, PeptideNavigator provides users with a Ramachandran plot viewer and a fully featured 3D visualization tool. Each view is linked, allowing the user to seamlessly navigate from collective views of large peptide data sets to the details of individual peptides with promising property profiles. Two case studies, based on MHC-1A activating peptides and MDM2 scaffold design, are presented to demonstrate the utility of PeptideNavigator in the context of disparate peptide-design projects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. A Full-size High Temperature Superconducting Coil Employed in a Wind Turbine Generator Set-up

    DEFF Research Database (Denmark)

    Song, Xiaowei (Andy); Mijatovic, Nenad; Kellers, Jürgen

    2016-01-01

    A full-size stationary experimental set-up, which is a pole pair segment of a 2 MW high temperature superconducting (HTS) wind turbine generator, has been built and tested under the HTS-GEN project in Denmark. The performance of the HTS coil is crucial to the set-up, and further to the development... The coil is tested in LN2 first, and then in the set-up so that the magnetic environment of a real generator is reflected. The experimental results are reported, followed by a finite element simulation and a discussion of the deviation of the results. The tested and estimated Ic in LN2 are 148 A and 143 A...

  8. RADIOMETRIC NORMALIZATION OF LARGE AIRBORNE IMAGE DATA SETS ACQUIRED BY DIFFERENT SENSOR TYPES

    Directory of Open Access Journals (Sweden)

    S. Gehrke

    2016-06-01

    Full Text Available Generating seamless mosaics of aerial images is a particularly challenging task when the mosaic comprises a large number of images, collected over longer periods of time and with different sensors under varying imaging conditions. Such large mosaics typically consist of very heterogeneous image data, both spatially (different terrain types and atmosphere) and temporally (unstable atmospheric properties and even changes in land coverage). We present a new radiometric normalization or, respectively, radiometric aerial triangulation approach that takes advantage of our knowledge about each sensor’s properties. The current implementation supports medium and large format airborne imaging sensors of the Leica Geosystems family, namely the ADS line-scanner as well as DMC and RCD frame sensors. A hierarchical modelling – with parameters for the overall mosaic, the sensor type, different flight sessions, strips and individual images – allows for adaptation to each sensor’s geometric and radiometric properties. Additional parameters at different hierarchy levels can compensate for radiometric differences of various origins, making up for shortcomings of the preceding radiometric sensor calibration as well as of the BRDF and atmospheric corrections. The final, relative normalization is based on radiometric tie points in overlapping images, absolute radiometric control points and image statistics. It is computed in a global least squares adjustment for the entire mosaic by altering each image’s histogram using a location-dependent mathematical model. This model involves contrast and brightness corrections at radiometric fix points, with bilinear interpolation for corrections in between. The distribution of the radiometric fixes is adaptive to each image and generally increases with image size, hence enabling optimal local adaptation even for very long image strips as typically captured by a line-scanner sensor. The normalization approach is implemented in
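    The global least squares adjustment described above fits correction parameters so that overlapping images agree at radiometric tie points. A heavily simplified single-image version, assuming a plain gain/offset model instead of the paper's location-dependent contrast/brightness model, reduces to an ordinary least squares line fit:

```python
def fit_gain_offset(ref_vals, img_vals):
    """Least-squares gain/offset (a, b) such that a*img + b best matches the
    reference intensities at shared radiometric tie points."""
    n = len(img_vals)
    mean_x = sum(img_vals) / n
    mean_y = sum(ref_vals) / n
    sxx = sum((x - mean_x) ** 2 for x in img_vals)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(img_vals, ref_vals))
    a = sxy / sxx                 # gain (contrast correction)
    b = mean_y - a * mean_x       # offset (brightness correction)
    return a, b

# Tie points: the second image is darker and offset relative to the reference.
reference = [10.0, 20.0, 30.0, 40.0]
image = [(v - 5.0) / 2.0 for v in reference]
a, b = fit_gain_offset(reference, image)   # recovers a = 2, b = 5
```

    The full adjustment would stack such observation equations for every tie point, control point, and hierarchy level into one global system and solve them jointly.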

  9. When bigger is not better: selection against large size, high condition and fast growth in juvenile lemon sharks.

    Science.gov (United States)

    Dibattista, J D; Feldheim, K A; Gruber, S H; Hendry, A P

    2007-01-01

    Selection acting on large marine vertebrates may be qualitatively different from that acting on terrestrial or freshwater organisms, but logistical constraints have thus far precluded selection estimates for the former. We overcame these constraints by exhaustively sampling and repeatedly recapturing individuals in six cohorts of juvenile lemon sharks (450 age-0 and 255 age-1 fish) at an enclosed nursery site (Bimini, Bahamas). Data on individual size, condition factor, growth rate and inter-annual survival were used to test the 'bigger is better', 'fatter is better' and 'faster is better' hypotheses of life-history theory. For age-0 sharks, selection on all measured traits was weak, and generally acted against large size and high condition. For age-1 sharks, selection was much stronger, and consistently acted against large size and fast growth. These results suggest that selective pressures at Bimini may be constraining the evolution of large size and fast growth, an observation that fits well with the observed small size and low growth rate of juveniles at this site. Our results support those of some other recent studies in suggesting that bigger/fatter/faster is not always better, and may often be worse.

  10. Resonant atom-field interaction in large-size coupled-cavity arrays

    International Nuclear Information System (INIS)

    Ciccarello, Francesco

    2011-01-01

    We consider an array of coupled cavities with staggered intercavity couplings, where each cavity mode interacts with an atom. In contrast to large-size arrays with uniform hopping rates, where the atomic dynamics is known to be frozen in the strong-hopping regime, we show that resonant atom-field dynamics with significant energy exchange can occur in the case of staggered hopping rates even in the thermodynamic limit. This effect arises from the joint emergence of an energy gap in the free photonic dispersion relation and a discrete frequency at the gap's center. The latter corresponds to a bound normal mode stemming solely from the finiteness of the array length. Depending on which cavity is excited, either the atomic dynamics is frozen or a Jaynes-Cummings-like energy exchange is triggered between the bound photonic mode and its atomic analog. As these phenomena are effective with any number of cavities, they should be observable experimentally even in small-size arrays.

  11. From nanoparticles to large aerosols: Ultrafast measurement methods for size and concentration

    International Nuclear Information System (INIS)

    Keck, Lothar; Spielvogel, Juergen; Grimm, Hans

    2009-01-01

    A major challenge in aerosol technology is the fast measurement of number size distributions with good accuracy and size resolution. The dedicated instruments are frequently based on particle charging and electric detection. Established fast systems, however, still feature a number of shortcomings. We have developed a new instrument that consists of a high flow Differential Mobility Analyser (high flow DMA) and a high sensitivity Faraday Cup Electrometer (FCE). The system enables variable flow rates of up to 150 lpm, and the scan time for a size distribution can be shortened considerably due to the short residence time of the particles in the DMA. Three different electrodes can be employed in order to cover a large size range. First test results demonstrate that the scan time can be reduced to less than 1 s for small particles, and that the results from the fast scans show no significant difference from the results of the established slow method. The fields of application for the new instrument comprise the precise monitoring of fast processes with nanoparticles, including monitoring of engine exhaust in automotive research.

  12. From nanoparticles to large aerosols: Ultrafast measurement methods for size and concentration

    Science.gov (United States)

    Keck, Lothar; Spielvogel, Jürgen; Grimm, Hans

    2009-05-01

    A major challenge in aerosol technology is the fast measurement of number size distributions with good accuracy and size resolution. The dedicated instruments are frequently based on particle charging and electric detection. Established fast systems, however, still feature a number of shortcomings. We have developed a new instrument that consists of a high flow Differential Mobility Analyser (high flow DMA) and a high sensitivity Faraday Cup Electrometer (FCE). The system enables variable flow rates of up to 150 lpm, and the scan time for a size distribution can be shortened considerably due to the short residence time of the particles in the DMA. Three different electrodes can be employed in order to cover a large size range. First test results demonstrate that the scan time can be reduced to less than 1 s for small particles, and that the results from the fast scans show no significant difference from the results of the established slow method. The fields of application for the new instrument comprise the precise monitoring of fast processes with nanoparticles, including monitoring of engine exhaust in automotive research.

  13. Polish Phoneme Statistics Obtained On Large Set Of Written Texts

    Directory of Open Access Journals (Sweden)

    Bartosz Ziółko

    2009-01-01

    Full Text Available The phonetic statistics were collected from several Polish corpora. The paper is a summary of the data, which are phoneme n-grams, and of some phenomena in the statistics. Triphone statistics involve context-dependent speech units, which play an important role in speech recognition systems and were never calculated for a large set of Polish written texts. The standard phonetic alphabet for Polish, SAMPA, and methods of providing phonetic transcriptions are described.
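    Triphone statistics of the kind described can be gathered with a simple sliding window over phoneme sequences; the toy corpus below uses SAMPA-like symbols and is purely illustrative, not drawn from the corpora used in the paper:

```python
from collections import Counter

def triphone_counts(phoneme_seqs, boundary='#'):
    """Count context-dependent triphones (left, centre, right) over phoneme
    sequences, padding each word with a boundary symbol."""
    counts = Counter()
    for seq in phoneme_seqs:
        padded = [boundary] + list(seq) + [boundary]
        for i in range(1, len(padded) - 1):
            counts[tuple(padded[i - 1:i + 2])] += 1
    return counts

# Toy SAMPA-style transcriptions of two Polish words (illustrative).
corpus = [['k', 'o', 't'], ['k', 'o', 's']]
stats = triphone_counts(corpus)   # e.g. ('#','k','o') occurs twice
```

    The same Counter generalizes to any n-gram order by widening the window, which is how phoneme bigram and trigram tables are typically built from transcribed text.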

  14. Querying Large Physics Data Sets Over an Information Grid

    CERN Document Server

    Baker, N; Kovács, Z; Le Goff, J M; McClatchey, R

    2001-01-01

    Optimising use of the Web (WWW) for LHC data analysis is a complex problem and illustrates the challenges arising from the integration of and computation across massive amounts of information distributed worldwide. Finding the right piece of information can, at times, be extremely time-consuming, if not impossible. So-called Grids have been proposed to facilitate LHC computing and many groups have embarked on studies of data replication, data migration and networking philosophies. Other aspects such as the role of 'middleware' for Grids are emerging as requiring research. This paper positions the need for appropriate middleware that enables users to resolve physics queries across massive data sets. It identifies the role of meta-data for query resolution and the importance of Information Grids for high-energy physics analysis rather than just Computational or Data Grids. This paper identifies software that is being implemented at CERN to enable the querying of very large collaborating HEP data-sets, initially...

  15. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget on a determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by relative NAA or by k0-based internal monostandard NAA (IM-NAA). The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  16. Portion size: a qualitative study of consumers' attitudes toward point-of-purchase interventions aimed at portion size.

    Science.gov (United States)

    Vermeer, Willemijn M; Steenhuis, Ingrid H M; Seidell, Jacob C

    2010-02-01

    This qualitative study assessed consumers' opinions of food portion sizes and their attitudes toward portion-size interventions located in various point-of-purchase settings targeting overweight and obese people. Eight semi-structured focus group discussions were conducted with 49 participants. Constructs from the diffusion of innovations theory were included in the interview guide. Each focus group was recorded and transcribed verbatim. Data were coded and analyzed with Atlas.ti 5.2 using the framework approach. Results showed that many participants thought that portion sizes of various products have increased during the past decades and are larger than acceptable. The majority also indicated that value for money is important when purchasing and that large portion sizes offer more value for money than small portion sizes. Furthermore, many experienced difficulties with self-regulating the consumption of large portion sizes. Among the portion-size interventions that were discussed, participants had most positive attitudes toward a larger availability of portion sizes and pricing strategies, followed by serving-size labeling. In general, reducing package serving sizes as an intervention strategy to control food intake met resistance. The study concludes that consumers consider interventions consisting of a larger variety of available portion sizes, pricing strategies and serving-size labeling as most acceptable to implement.

  17. Family size and effective population size in a hatchery stock of coho salmon (Oncorhynchus kisutch)

    Science.gov (United States)

    Simon, R.C.; McIntyre, J.D.; Hemmingsen, A.R.

    1986-01-01

    Means and variances of family size measured in five year-classes of wire-tagged coho salmon (Oncorhynchus kisutch) were linearly related. Effective population size was calculated using the estimated means and variances of family size in a 25-yr data set. Although the numbers of age-3 adults returning to the hatchery appeared to be large enough to avoid inbreeding problems (the 25-yr mean exceeded 4500), the numbers actually contributing to hatchery production may be too low. Several strategies are proposed to correct the perceived problem. Argument is given to support the contention that the problem of effective size is fairly general and is not confined to the present study population.
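    One standard way to turn family-size means and variances into an effective population size is the Crow-Denniston variance effective size; whether the authors used exactly this estimator is not stated in the abstract, so the sketch below is illustrative:

```python
def effective_size(n_parents, mean_k, var_k):
    """Variance effective population size from family-size statistics:
        Ne = (N*k_bar - 2) / (k_bar - 1 + var_k / k_bar)
    With Poisson family sizes (var = mean = 2), Ne is about N; larger
    variance in family size drives Ne well below the census count."""
    return (n_parents * mean_k - 2) / (mean_k - 1 + var_k / mean_k)

# 4500 returning adults, as in the abstract; variances are illustrative.
ne_poisson = effective_size(4500, 2.0, 2.0)    # ~N when variance equals mean
ne_skewed = effective_size(4500, 2.0, 20.0)    # far smaller when overdispersed
```

    This makes the abstract's point concrete: a census of 4500 can mask a much smaller effective size when a few families dominate production.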

  18. Timetable-based simulation method for choice set generation in large-scale public transport networks

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær; Anderson, Marie Karen; Nielsen, Otto Anker

    2016-01-01

    The composition and size of choice sets are key to the correct estimation of and prediction by route choice models. While the existing literature has paid a great deal of attention to the generation of path choice sets for private transport problems, the same does not apply to public transport problems. This study proposes a timetable-based simulation method for generating path choice sets in a multimodal public transport network. Moreover, this study illustrates the feasibility of its implementation by applying the method to reproduce 5131 real-life trips in the Greater Copenhagen Area and to assess the choice set quality in a complex multimodal transport network. Results illustrate the applicability of the algorithm and the relevance of the utility specification chosen for the reproduction of real-life path choices. Moreover, results show that the level of stochasticity used in choice set...

  19. Development of superconducting poloidal field coils for medium and large size tokamaks

    International Nuclear Information System (INIS)

    Dittrich, H.-G.; Forster, S.; Hofmann, A.

    1983-01-01

    Large, long-pulse tokamak fusion experiments require the use of superconducting poloidal field (PF) coils. In the past, not much attention has been paid to the development of such coils, and therefore a development programme has recently been initiated at KfK. In this report we start by summarizing the relevant PF coil parameters of some medium and large size tokamaks presently under construction or design. The most important areas of research and development work are deduced from these parameters. Design considerations and first experimental results concerning low-loss conductors, cooling concepts and structural components are given

  20. Optimization of the size and shape of the set-in nozzle for a PWR reactor pressure vessel

    Energy Technology Data Exchange (ETDEWEB)

    Murtaza, Usman Tariq, E-mail: maniiut@yahoo.com; Javed Hyder, M., E-mail: hyder@pieas.edu.pk

    2015-04-01

    Highlights: • The size and shape of the set-in nozzle of the RPV have been optimized. • The optimized nozzle ensures a mass reduction of around 198 kg per nozzle. • The mass of the RPV should be minimized for better fracture toughness. - Abstract: The objective of this research work is to optimize the size and shape of the set-in nozzle for a typical reactor pressure vessel (RPV) of a 300 MW pressurized water reactor. The analysis was performed by optimizing the four design variables which control the size and shape of the nozzle: the inner radius of the nozzle, the thickness of the nozzle, the taper angle at the nozzle-cylinder intersection, and the point where the taper of the nozzle starts. It is concluded that the optimum design of the nozzle is the one that minimizes the two conflicting state variables, i.e., the stress intensity (Tresca yield criterion) and the mass of the RPV.

  1. Market size and attendance in English Premier League football

    OpenAIRE

    Buraimo, B; Simmons, R

    2006-01-01

    This paper models the impacts of market size and team competition for fan base on matchday attendance in the English Premier League over the period 1997-2004 using a large panel data set. We construct a comprehensive set of control variables and use tobit estimation to overcome the problems caused by sell-out crowds. We also account for unobserved influences on attendance by means of random effects attached to home teams. Our treatment of market size, with its use of Geographical Information ...

  2. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses on avoiding redundancy for users working on the same task. While this improves the effectiveness of the user work process, the underlying query processing engine is typically considered a "black box" and left unchanged. Research in multiple query processing, on the other hand, ignores the application... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems.

  3. Highly crystallized nanometer-sized zeolite a with large Cs adsorption capability for the decontamination of water.

    Science.gov (United States)

    Torad, Nagy L; Naito, Masanobu; Tatami, Junichi; Endo, Akira; Leo, Sin-Yen; Ishihara, Shinsuke; Wu, Kevin C-W; Wakihara, Toru; Yamauchi, Yusuke

    2014-03-01

    Nanometer-sized zeolite A with a large cesium (Cs) uptake capability is prepared through a simple post-milling recrystallization method. This method is suitable for producing nanometer-sized zeolite in large scale, as additional organic compounds are not needed to control zeolite nucleation and crystal growth. Herein, we perform a quartz crystal microbalance (QCM) study to evaluate the uptake ability of Cs ions by zeolite, to the best of our knowledge, for the first time. In comparison to micrometer-sized zeolite A, nanometer-sized zeolite A can rapidly accommodate a larger amount of Cs ions into the zeolite crystal structure, owing to its high external surface area. Nanometer-sized zeolite is a promising candidate for the removal of radioactive Cs ions from polluted water. Our QCM study on Cs adsorption uptake behavior provides the information of adsorption kinetics (e.g., adsorption amounts and rates). This technique is applicable to other zeolites, which will be highly valuable for further consideration of radioactive Cs removal in the future. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Does source population size affect performance in new environments?

    Science.gov (United States)

    Yates, Matthew C; Fraser, Dylan J

    2014-01-01

    Small populations are predicted to perform poorly relative to large populations when experiencing environmental change. To explore this prediction in nature, data from reciprocal transplant, common garden, and translocation studies were compared meta-analytically. We contrasted changes in performance resulting from transplantation to new environments among individuals originating from different sized source populations from plants and salmonids. We then evaluated the effect of source population size on performance in natural common garden environments and the relationship between population size and habitat quality. In ‘home-away’ contrasts, large populations exhibited reduced performance in new environments. In common gardens, the effect of source population size on performance was inconsistent across life-history stages (LHS) and environments. When transplanted to the same set of new environments, small populations either performed equally well or better than large populations, depending on life stage. Conversely, large populations outperformed small populations within native environments, but only at later life stages. Population size was not associated with habitat quality. Several factors might explain the negative association between source population size and performance in new environments: (i) stronger local adaptation in large populations and antagonistic pleiotropy, (ii) the maintenance of genetic variation in small populations, and (iii) potential environmental differences between large and small populations. PMID:25469166

  5. Portfolio of automated trading systems: complexity and learning set size issues.

    Science.gov (United States)

    Raudys, Sarunas

    2013-03-01

    In this paper, we consider using profit/loss histories of multiple automated trading systems (ATSs) as N input variables in portfolio management. By means of multivariate statistical analysis and simulation studies, we analyze the influences of sample size (L) and input dimensionality on the accuracy of determining the portfolio weights. We find that degradation in portfolio performance due to inexact estimation of N means and N(N - 1)/2 correlations is proportional to N/L; however, estimation of N variances does not worsen the result. To reduce unhelpful sample size/dimensionality effects, we perform a clustering of N time series and split them into a small number of blocks. Each block is composed of mutually correlated ATSs. It generates an expert trading agent based on a nontrainable 1/N portfolio rule. To increase the diversity of the expert agents, we use training sets of different lengths for clustering. In the output of the portfolio management system, the regularized mean-variance framework-based fusion agent is developed in each walk-forward step of an out-of-sample portfolio validation experiment. Experiments with the real financial data (2003-2012) confirm the effectiveness of the suggested approach.
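
The N/L degradation argument can be made concrete by counting the parameters a mean-variance portfolio must estimate; the sketch below (numbers are illustrative, not from the paper) shows why clustering N systems into a few blocks governed by a nontrainable 1/N rule eases the estimation burden:

```python
# Parameter-counting illustration of the abstract's N/L argument:
# a mean-variance portfolio over N assets requires N estimated means
# and N(N-1)/2 estimated correlations from L observations. (Per the
# paper, estimation of the N variances does not worsen the result.)

def estimated_parameters(n_assets):
    """Number of means and pairwise correlations to estimate."""
    means = n_assets
    correlations = n_assets * (n_assets - 1) // 2
    return means, correlations

def params_per_observation(n_assets, sample_size):
    """Crude over-parameterization measure: parameters per sample."""
    means, corrs = estimated_parameters(n_assets)
    return (means + corrs) / sample_size

# 100 trading systems estimated from 250 daily observations:
print(estimated_parameters(100))         # (100, 4950)
print(params_per_observation(100, 250))  # 20.2 parameters per sample
# Clustering into 10 blocks, each run with a fixed 1/N rule, leaves
# only block-level parameters to estimate:
print(params_per_observation(10, 250))   # 0.22
```

The exact degradation constant is the subject of the paper; the counts alone already show why direct estimation at N = 100, L = 250 is hopeless while the blocked design is not.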

  6. Set size influences the relationship between ANS acuity and math performance: a result of different strategies?

    Science.gov (United States)

    Dietrich, Julia Felicitas; Nuerk, Hans-Christoph; Klein, Elise; Moeller, Korbinian; Huber, Stefan

    2017-08-29

    Previous research has proposed that the approximate number system (ANS) constitutes a building block for later mathematical abilities. Therefore, numerous studies investigated the relationship between ANS acuity and mathematical performance, but results are inconsistent. Properties of the experimental design have been discussed as a potential explanation of these inconsistencies. In the present study, we investigated the influence of set size and presentation duration on the association between non-symbolic magnitude comparison and math performance. Moreover, we focused on strategies reported as an explanation for these inconsistencies. In particular, we employed a non-symbolic magnitude comparison task and asked participants how they solved the task. We observed that set size was a significant moderator of the relationship between non-symbolic magnitude comparison and math performance, whereas presentation duration of the stimuli did not moderate this relationship. This supports the notion that specific design characteristics contribute to the inconsistent results. Moreover, participants reported different strategies including numerosity-based, visual, counting, calculation-based, and subitizing strategies. Frequencies of these strategies differed between different set sizes and presentation durations. However, we found no specific strategy, which alone predicted arithmetic performance, but when considering the frequency of all reported strategies, arithmetic performance could be predicted. Visual strategies made the largest contribution to this prediction. To conclude, the present findings suggest that different design characteristics contribute to the inconsistent findings regarding the relationship between non-symbolic magnitude comparison and mathematical performance by inducing different strategies and additional processes.

  7. Large-Scale Constraint-Based Pattern Mining

    Science.gov (United States)

    Zhu, Feida

    2009-01-01

    We studied the problem of constraint-based pattern mining for three different data formats, item-set, sequence and graph, and focused on mining patterns of large sizes. Colossal patterns in each data formats are studied to discover pruning properties that are useful for direct mining of these patterns. For item-set data, we observed robustness of…

  8. Study on external reactor vessel cooling capacity for advanced large size PWR

    International Nuclear Information System (INIS)

    Jin Di; Liu Xiaojing; Cheng Xu; Li Fei

    2014-01-01

    External reactor vessel cooling (ERVC) is widely adopted as part of in-vessel retention (IVR) in severe accident management strategies. In this paper, several flow parameters and boundary conditions, e.g., inlet and outlet area, water inlet temperature, heating power of the lower head, the annular gap size at the position of the lower head, and flooding water level, were considered in order to qualitatively study their effect on the natural circulation capacity of external reactor vessel cooling for an advanced large size PWR, using the RELAP5 code. The calculation results provide a basis for the structural design and for analyzing the subsequent transient response behavior of the system. (authors)

  9. Prey size and availability limits maximum size of rainbow trout in a large tailwater: insights from a drift-foraging bioenergetics model

    Science.gov (United States)

    Dodrill, Michael J.; Yackulic, Charles B.; Kennedy, Theodore A.; Haye, John W

    2016-01-01

    The cold and clear water conditions present below many large dams create ideal conditions for the development of economically important salmonid fisheries. Many of these tailwater fisheries have experienced declines in the abundance and condition of large trout species, yet the causes of these declines remain uncertain. Here, we develop, assess, and apply a drift-foraging bioenergetics model to identify the factors limiting rainbow trout (Oncorhynchus mykiss) growth in a large tailwater. We explored the relative importance of temperature, prey quantity, and prey size by constructing scenarios where these variables, both singly and in combination, were altered. Predicted growth matched empirical mass-at-age estimates, particularly for younger ages, demonstrating that the model accurately describes how current temperature and prey conditions interact to determine rainbow trout growth. Modeling scenarios that artificially inflated prey size and abundance demonstrate that rainbow trout growth is limited by the scarcity of large prey items and overall prey availability. For example, shifting 10% of the prey biomass to the 13 mm (large) length class, without increasing overall prey biomass, increased lifetime maximum mass of rainbow trout by 88%. Additionally, warmer temperatures resulted in lower predicted growth at current and lower levels of prey availability; however, growth was similar across all temperatures at higher levels of prey availability. Climate change will likely alter flow and temperature regimes in large rivers with corresponding changes to invertebrate prey resources used by fish. Broader application of drift-foraging bioenergetics models to build a mechanistic understanding of how changes to habitat conditions and prey resources affect growth of salmonids will benefit management of tailwater fisheries.

  10. Mechanical properties of duplex steel welded joints in large-size constructions

    OpenAIRE

    J. Nowacki

    2012-01-01

    Purpose: On the basis of the literature and our own experiments, an analysis of the mechanical properties, applications, and material and technological problems of ferritic-austenitic steel welding was carried out. The area of welding applications was shown, particularly the welding of large-size structures, on the basis of the example of FCAW welding of UNS S31803 duplex steel in the construction of chemical cargo ships. Design/methodology/approach: Welding tests were carried out for duplex...

  11. Mock-up test of remote controlled dismantling apparatus for large-sized vessels (contract research)

    Energy Technology Data Exchange (ETDEWEB)

    Myodo, Masato; Miyajima, Kazutoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Okane, Shogo [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment

    2001-03-01

    The remote dismantling apparatus, which is equipped with multiple units for washing, cutting, collection of cut pieces and so on, was constructed to dismantle the large-sized vessels in JAERI's Reprocessing Test Facility (JRTF). The apparatus has five-axis movement capability and is operated remotely. Mock-up tests were performed to evaluate the applicability of the apparatus to actual dismantling activities by using mock-ups of LV-3 and LV-5 in the facility. It was confirmed that each unit functioned satisfactorily under remote operation. Efficient procedures for dismantling the large-sized vessels were studied and various data were obtained in the mock-up tests. The apparatus was found to be applicable to actual dismantling activity in JRTF. (author)

  12. Mock-up test of remote controlled dismantling apparatus for large-sized vessels (contract research)

    International Nuclear Information System (INIS)

    Myodo, Masato; Miyajima, Kazutoshi; Okane, Shogo

    2001-03-01

    The remote dismantling apparatus, which is equipped with multiple units for washing, cutting, collection of cut pieces and so on, was constructed to dismantle the large-sized vessels in JAERI's Reprocessing Test Facility (JRTF). The apparatus has five-axis movement capability and is operated remotely. Mock-up tests were performed to evaluate the applicability of the apparatus to actual dismantling activities by using mock-ups of LV-3 and LV-5 in the facility. It was confirmed that each unit functioned satisfactorily under remote operation. Efficient procedures for dismantling the large-sized vessels were studied and various data were obtained in the mock-up tests. The apparatus was found to be applicable to actual dismantling activity in JRTF. (author)

  13. SAR matrices: automated extraction of information-rich SAR tables from large compound data sets.

    Science.gov (United States)

    Wassermann, Anne Mai; Haebel, Peter; Weskamp, Nils; Bajorath, Jürgen

    2012-07-23

    We introduce the SAR matrix data structure that is designed to elucidate SAR patterns produced by groups of structurally related active compounds, which are extracted from large data sets. SAR matrices are systematically generated and sorted on the basis of SAR information content. Matrix generation is computationally efficient and enables processing of large compound sets. The matrix format is reminiscent of SAR tables, and SAR patterns revealed by different categories of matrices are easily interpretable. The structural organization underlying matrix formation is more flexible than standard R-group decomposition schemes. Hence, the resulting matrices capture SAR information in a comprehensive manner.

  14. Impact basins on Ganymede and Callisto and implications for the large-projectile size distribution

    Science.gov (United States)

    Wagner, R.; Neukum, G.; Wolf, U.; Greeley, R.; Klemaszewski, J. E.

    2003-04-01

    It has been conjectured that the projectile family which impacted the Galilean satellites of Jupiter was depleted in large projectiles, a conclusion drawn from a 'dearth' of large craters (> 60 km) (e.g. [1]). Geologic mapping, aided by spatial filtering of new Galileo as well as older Voyager data, shows, however, that large projectiles have left an imprint of palimpsests and multi-ring structures on both Ganymede and Callisto (e.g. [2]). Most of these impact structures are heavily degraded and hence difficult to recognize. In this paper, we present (1) maps showing the outlines of these basins, and (2) updated crater size-frequency diagrams of the two satellites. The crater diameter was reconstructed from a palimpsest diameter using a formula derived by [3]. The calculation of the crater diameter Dc from the outer boundary Do of a multi-ring structure is much less constrained and on the order of Dc = k · Do, with k ≈ 0.25-0.3 [4]. Despite the uncertainties in locating the 'true' crater rims, the resulting shape of the distribution in the range from kilometer-sized craters to sizes of ≈ 500 km is lunar-like and strongly suggests a collisionally evolved projectile family, very likely of asteroidal origin. An alternative explanation for this shape could be that comets are collisionally evolved bodies in a similar way as asteroids, which as of yet is still uncertain and under discussion. Also, the crater size distributions on Ganymede and Callisto are shifted towards smaller crater sizes compared to the Moon, caused by the much lower impact velocity of impactors which were preferentially in planetocentric orbits [5]. References: [1] Strom et al., JGR 86, 8659-8674, 1981. [2] J. E. Klemaszewski et al., Ann. Geophys. 16, suppl. III, 1998. [3] Iaquinta-Ridolfi & Schenk, LPSC XXVI (abstr.), 651-652, 1995. [4] Schenk & Moore, LPSC XXX, abstr. No. 1786 [CD-ROM], 1999. [5] Horedt & Neukum, JGR 89, 10,405-10,410, 1984.
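
Only the multi-ring relation is quoted numerically in the abstract; a minimal sketch of that conversion follows (the palimpsest formula of [3] is omitted because the abstract does not give it):

```python
# Reconstructing a crater diameter from the outer boundary of a
# multi-ring structure, using the relation quoted in the abstract:
# Dc = k * Do with k roughly 0.25-0.3 (Schenk & Moore [4]).

def crater_diameter_from_rings(outer_diameter_km, k=0.275):
    """Estimate the 'true' crater diameter Dc from the outer ring
    boundary Do; k is only constrained to about 0.25-0.3, so the
    default midpoint here is an assumption."""
    return k * outer_diameter_km

def crater_diameter_bounds(outer_diameter_km, k_lo=0.25, k_hi=0.30):
    """Bracket Dc using the quoted range of k."""
    return (k_lo * outer_diameter_km, k_hi * outer_diameter_km)

# A 1000 km outer ring system maps to a crater of roughly 250-300 km:
print(crater_diameter_bounds(1000.0))
```

The wide bracket mirrors the abstract's caveat that locating the 'true' crater rim of a degraded basin is the dominant uncertainty.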

  15. Combining RP and SP data while accounting for large choice sets and travel mode

    DEFF Research Database (Denmark)

    Abildtrup, Jens; Olsen, Søren Bøye; Stenger, Anne

    2015-01-01

    set used for site selection modelling when the actual choice set considered is potentially large and unknown to the analyst. Easy access to forests also implies that around half of the visitors walk or bike to the forest. We apply an error-component mixed-logit model to simultaneously model the travel...

  16. "Size-Independent" Single-Electron Tunneling.

    Science.gov (United States)

    Zhao, Jianli; Sun, Shasha; Swartz, Logan; Riechers, Shawn; Hu, Peiguang; Chen, Shaowei; Zheng, Jie; Liu, Gang-Yu

    2015-12-17

    Incorporating single-electron tunneling (SET) of metallic nanoparticles (NPs) into modern electronic devices offers great promise to enable new properties; however, it is technically very challenging due to the necessity to integrate ultrasmall (<10 nm) particles into the devices. The nanosize requirements are intrinsic for NPs to exhibit quantum or SET behaviors, for example, 10 nm or smaller, at room temperature. This work represents the first observation of SET that defies the well-known size restriction. Using polycrystalline Au NPs synthesized via our newly developed solid-state glycine matrices method, a Coulomb Blockade was observed for particles as large as tens of nanometers, and the blockade voltage exhibited little dependence on the size of the NPs. These observations are counterintuitive at first glance. Further investigations reveal that each observed SET arises from the ultrasmall single crystalline grain(s) within the polycrystal NP, which is (are) sufficiently isolated from the nearest neighbor grains. This work demonstrates the concept and feasibility to overcome orthodox spatial confinement requirements to achieve quantum effects.

  17. Does Demand Fall When Customers Perceive That Prices Are Unfair? The Case of Premium Pricing for Large Sizes

    OpenAIRE

    Eric T. Anderson; Duncan I. Simester

    2008-01-01

    We analyze a large-scale field test conducted with a mail-order catalog firm to investigate how customers react to premium prices for larger sizes of women's apparel. We find that customers who demand large sizes react unfavorably to paying a higher price than customers for small sizes. Further investigation suggests that these consumers perceive that the price premium is unfair. Overall, premium pricing led to a 6% to 8% decrease in gross profits.

  18. Vertebral Adaptations to Large Body Size in Theropod Dinosaurs.

    Directory of Open Access Journals (Sweden)

    John P Wilson

    Full Text Available Rugose projections on the anterior and posterior aspects of vertebral neural spines appear throughout Amniota and result from the mineralization of the supraspinous and interspinous ligaments via metaplasia, the process of permanent tissue-type transformation. In mammals, this metaplasia is generally pathological or stress induced, but is a normal part of development in some clades of birds. Such structures, though phylogenetically sporadic, appear throughout the fossil record of non-avian theropod dinosaurs, yet their physiological and adaptive significance has remained unexamined. Here we show novel histologic and phylogenetic evidence that neural spine projections were a physiological response to biomechanical stress in large-bodied theropod species. Metaplastic projections also appear to vary between immature and mature individuals of the same species, with immature animals either lacking them or exhibiting smaller projections, supporting the hypothesis that these structures develop through ontogeny as a result of increasing bending stress subjected to the spinal column. Metaplastic mineralization of spinal ligaments would likely affect the flexibility of the spinal column, increasing passive support for body weight. A stiff spinal column would also provide biomechanical support for the primary hip flexors and, therefore, may have played a role in locomotor efficiency and mobility in large-bodied species. This new association of interspinal ligament metaplasia in Theropoda with large body size contributes additional insight to our understanding of the diverse biomechanical coping mechanisms developed throughout Dinosauria, and stresses the significance of phylogenetic methods when testing for biological trends, evolutionary or not.

  19. Vertebral Adaptations to Large Body Size in Theropod Dinosaurs.

    Science.gov (United States)

    Wilson, John P; Woodruff, D Cary; Gardner, Jacob D; Flora, Holley M; Horner, John R; Organ, Chris L

    2016-01-01

    Rugose projections on the anterior and posterior aspects of vertebral neural spines appear throughout Amniota and result from the mineralization of the supraspinous and interspinous ligaments via metaplasia, the process of permanent tissue-type transformation. In mammals, this metaplasia is generally pathological or stress induced, but is a normal part of development in some clades of birds. Such structures, though phylogenetically sporadic, appear throughout the fossil record of non-avian theropod dinosaurs, yet their physiological and adaptive significance has remained unexamined. Here we show novel histologic and phylogenetic evidence that neural spine projections were a physiological response to biomechanical stress in large-bodied theropod species. Metaplastic projections also appear to vary between immature and mature individuals of the same species, with immature animals either lacking them or exhibiting smaller projections, supporting the hypothesis that these structures develop through ontogeny as a result of increasing bending stress subjected to the spinal column. Metaplastic mineralization of spinal ligaments would likely affect the flexibility of the spinal column, increasing passive support for body weight. A stiff spinal column would also provide biomechanical support for the primary hip flexors and, therefore, may have played a role in locomotor efficiency and mobility in large-bodied species. This new association of interspinal ligament metaplasia in Theropoda with large body size contributes additional insight to our understanding of the diverse biomechanical coping mechanisms developed throughout Dinosauria, and stresses the significance of phylogenetic methods when testing for biological trends, evolutionary or not.

  20. Ultra-large size austenitic stainless steel forgings for fast breeder reactor 'Monju'

    International Nuclear Information System (INIS)

    Tsukada, Hisashi; Suzuki, Komei; Sato, Ikuo; Miura, Ritsu.

    1988-01-01

    The large SUS 304 austenitic stainless steel forgings for the reactor vessel of the prototype FBR 'Monju' (280 MWe output) were successfully manufactured. The reactor vessel contains the reactor core and sodium coolant at 530 deg C; its inside diameter is about 7 m and its height about 18 m. It is composed of 12 large forgings: very thick flanges and shells made by ring forging, and an end plate made by disk forging and hot forming using a special press machine. The manufacture of these large forgings utilized the results of basic tests on material properties in a high-temperature environment and on the effects of manufacturing factors on those properties, together with the development of manufacturing techniques for super-large forgings. The main problems were the manufacturing techniques for high-purity large ingots of the 250 t class, hot working techniques for stainless steel of fine grain size, forging techniques for super-large rings and disks, and high-precision machining techniques for large-diameter, thin-wall rings. The manufacture of these large stainless steel forgings is reported. (Kako, I.)

  1. The causal effect of board size in the performance of small and medium-sized firms

    DEFF Research Database (Denmark)

    Bennedsen, Morten; Kongsted, Hans Christian; Meisner Nielsen, Kasper

    2008-01-01

    Empirical studies of large publicly traded firms have shown a robust negative relationship between board size and firm performance. The evidence on small and medium-sized firms is less clear; we show that existing work has been incomplete in analyzing the causal relationship due to weak identification strategies. Using a rich data set of almost 7000 closely held corporations we provide a causal analysis of board size effects on firm performance: We use a novel instrument given by the number of children of the chief executive officer (CEO) of the firms. First, we find a strong positive correlation between family size and board size and show this correlation to be driven by firms where the CEO's relatives serve on the board. Second, we find empirical evidence of a small adverse board size effect driven by the minority of small and medium-sized firms that are characterized by having...

  2. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  3. Effect of pore size on performance of monolithic tube chromatography of large biomolecules.

    Science.gov (United States)

    Podgornik, Ales; Hamachi, Masataka; Isakari, Yu; Yoshimoto, Noriko; Yamamoto, Shuichi

    2017-11-01

    Effect of pore size on the performance of ion-exchange monolith tube chromatography of large biomolecules was investigated. Radial flow 1 mL polymer based monolith tubes of different pore sizes (1.5, 2, and 6 μm) were tested with model samples such as 20 mer poly T-DNA, basic proteins, and acidic proteins (molecular weight 14 000-670 000). Pressure drop, pH transient, the number of binding site, dynamic binding capacity, and peak width were examined. Pressure drop-flow rate curves and dynamic binding capacity values were well correlated with the nominal pore size. While duration of the pH transient curves depends on the pore size, it was found that pH duration normalized on estimated surface area was constant, indicating that the ligand density is the same. This was also confirmed by the constant number of binding site values being independent of pore size. The peak width values were similar to those for axial flow monolith chromatography. These results showed that it is easy to scale up axial flow monolith chromatography to radial flow monolith tube chromatography by choosing the right pore size in terms of the pressure drop and capacity. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
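
The reported correlation between pressure drop and nominal pore size is consistent with simple capillary scaling; the sketch below uses a Hagen-Poiseuille capillary-bundle assumption (not the paper's model, and with made-up operating values) under which pressure drop scales as 1/d^2 at fixed linear velocity:

```python
# Back-of-envelope sketch (capillary-bundle assumption, not taken
# from the paper): Hagen-Poiseuille gives dP = 32 * mu * L * v / d^2
# for laminar flow through a straight pore of diameter d, so at a
# fixed velocity the pressure drop scales with 1/d^2. Porosity and
# tortuosity of a real monolith are ignored here.

def pressure_drop_pa(pore_diameter_m, velocity_m_s, bed_length_m,
                     viscosity_pa_s=1.0e-3):
    """Hagen-Poiseuille pressure drop through an idealized pore
    (water-like viscosity by default)."""
    return (32.0 * viscosity_pa_s * bed_length_m * velocity_m_s
            / pore_diameter_m ** 2)

# Shrinking the pore size from 6 um to 1.5 um (the range studied)
# raises the predicted pressure drop 16-fold at the same velocity:
dp_6um = pressure_drop_pa(6e-6, 1e-3, 0.005)
dp_15um = pressure_drop_pa(1.5e-6, 1e-3, 0.005)
print(dp_15um / dp_6um)  # (6/1.5)^2 = 16, up to rounding
```

This quadratic trade-off between pore size and back-pressure is why matching the pore size to the pressure budget matters when scaling from axial to radial flow monoliths.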

  4. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size of single- and two-arm clinical trials in the general case of a primary endpoint with a distribution of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N* in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with Bernoulli- and Poisson-distributed responses, showing that the asymptotic approximations can be reasonable even for relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
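
    The square-root growth of the optimal trial size can be illustrated numerically. The sketch below is a toy stand-in, not the paper's utility function: n patients per arm are randomized in a two-arm Bernoulli trial, the apparently better arm is then given to the remaining N - 2n patients, and the n maximizing expected total successes is found by brute force (the true response rates 0.6 and 0.4 are assumptions for the illustration only).

```python
# Toy decision-theoretic sample size: choose per-arm n to maximize expected
# successes over a population of size N (not the paper's exact utility).
from math import comb

def pmf(n, p):
    """Binomial(n, p) probability mass function as a list indexed by k."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def p_pick_better(n, p1, p2):
    """P(arm 1 chosen) = P(X1 > X2) + 0.5 * P(X1 == X2), X_i ~ Bin(n, p_i)."""
    f1, f2 = pmf(n, p1), pmf(n, p2)
    suffix = [0.0] * (n + 2)            # suffix[k] = P(X1 > k)
    for k in range(n - 1, -1, -1):
        suffix[k] = suffix[k + 1] + f1[k + 1]
    win = sum(f2[k] * suffix[k] for k in range(n + 1))
    tie = sum(f1[k] * f2[k] for k in range(n + 1))
    return win + 0.5 * tie

def optimal_n(N, p1=0.6, p2=0.4):
    """Brute-force the per-arm trial size maximizing expected total successes."""
    best_n, best_u = 1, -1.0
    for n in range(1, N // 2):
        pc = p_pick_better(n, p1, p2)
        u = n * (p1 + p2) + (N - 2 * n) * (p1 * pc + p2 * (1 - pc))
        if u > best_u:
            best_n, best_u = n, u
    return best_n
```

    Running `optimal_n` for N = 100 and N = 800 gives per-arm sizes that grow roughly like the square root of N, mirroring the asymptotic result.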

  5. Are large farms more efficient? Tenure security, farm size and farm efficiency: evidence from northeast China

    Science.gov (United States)

    Zhou, Yuepeng; Ma, Xianlei; Shi, Xiaoping

    2017-04-01

    How to increase production efficiency, guarantee grain security, and increase farmers' income using the limited farmland is a great challenge that China is facing. Although theory predicts that secure property rights and moderate-scale management of farmland can increase land productivity, reduce farm-related costs, and raise farmers' income, empirical studies on the size and magnitude of these effects are scarce. A number of studies have examined the impacts of land tenure or farm size on productivity or efficiency, respectively. There are also a few studies linking farm size, land tenure and efficiency together. However, to the best of our knowledge, no studies consider tenure security and farm efficiency together across different farm scales in China, and few analyze the profit frontier. In this study, we focus on the impacts of land tenure security and farm size on farm profit efficiency, using farm-level data collected from 811 households in 23 villages in Liaoning in 2015. Seven farm-scale categories were identified to represent small farms, medium farms, moderate-scale farms, and large farms. Technical efficiency is analyzed with a stochastic frontier production function. Profit efficiency is regressed on a set of explanatory variables that includes farm size dummies, land tenure security indexes, and household characteristics. We found that: 1) The technical efficiency scores (average score = 0.998) indicate that production is already very close to the production frontier, so there is little room to improve production efficiency. However, there is larger scope to raise profit efficiency (average score = 0.768) by investing more in farm size expansion, seed, hired labor, pesticide, and irrigation. 2) Farms between 50-80 mu are most efficient from the viewpoint of profit efficiency. The so-called moderate-scale farms (100-150 mu) according to the governmental guideline show no

  6. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan; Alzahrani, Majed A.; Gao, Xin

    2014-01-01

    In this paper, we propose a novel image set representation and classification method based on maximizing the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from a different class and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class that provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
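
    The margin definition can be sketched directly. In this simplified sketch the set-to-set distance is taken as the minimum pairwise Euclidean distance between samples; the paper's method additionally models each set by its affine hull and learns representation parameters, which is omitted here.

```python
# Margin of an image set: distance to the nearest set of a different class
# minus distance to the nearest set of the same class (simplified sketch).
from math import dist

def set_distance(A, B):
    """Minimum pairwise Euclidean distance between two sets of feature points."""
    return min(dist(a, b) for a in A for b in B)

def margin(idx, sets, labels):
    """Positive margin = the set sits closer to its own class than to others."""
    same = min(set_distance(sets[idx], sets[j])
               for j in range(len(sets)) if j != idx and labels[j] == labels[idx])
    diff = min(set_distance(sets[idx], sets[j])
               for j in range(len(sets)) if labels[j] != labels[idx])
    return diff - same

def classify(test_set, sets, labels):
    """Assign the test set to the class that provides the largest margin."""
    best, best_m = None, float("-inf")
    for c in set(labels):
        same = min(set_distance(test_set, s) for s, l in zip(sets, labels) if l == c)
        diff = min(set_distance(test_set, s) for s, l in zip(sets, labels) if l != c)
        if diff - same > best_m:
            best, best_m = c, diff - same
    return best
```

    For two well-separated clusters of sets, a test set is assigned to the class whose sets it lies among, with a clearly positive margin.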

  7. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    In this paper, we propose a novel image set representation and classification method based on maximizing the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from a different class and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class that provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.

  8. Scrum of scrums solution for large size teams using scrum methodology

    OpenAIRE

    Qurashi, Saja Al; Qureshi, M. Rizwan Jameel

    2014-01-01

    Scrum is a structured framework to support complex product development. However, the Scrum methodology faces challenges in managing large teams. To address this challenge, in this paper we propose a solution called Scrum of Scrums. In Scrum of Scrums, we divide the Scrum team into teams of the right size, and then organize them hierarchically into a Scrum of Scrums. The main goals of the proposed solution are to optimize communication between teams in the Scrum of Scrums and to make the system work aft...

  9. Introduction to Large-sized Test Facility for validating Containment Integrity under Severe Accidents

    International Nuclear Information System (INIS)

    Na, Young Su; Hong, Seongwan; Hong, Seongho; Min, Beongtae

    2014-01-01

    An overall assessment of containment integrity can be conducted properly by examining the hydrogen behavior in the containment building. Under severe accidents, a significant amount of hydrogen gas can be generated by metal oxidation and corium-concrete interaction. Hydrogen behavior in the containment building strongly depends on complicated thermal-hydraulic conditions with mixed gases and steam. The performance of a PAR can be directly affected by the thermal-hydraulic conditions, steam content, gas mixture behavior and aerosol characteristics, as well as the operation of other engineered safety systems such as a spray. The models in computer codes for severe accident assessment can be validated against the experimental results from a large-sized test facility. The Korea Atomic Energy Research Institute (KAERI) is now preparing a large-sized test facility to examine in detail the safety issues related to hydrogen, including the performance of safety devices such as a PAR, in various severe accident situations. This paper introduces the KAERI test facility for validating containment integrity under severe accidents. To validate containment integrity, a large-sized test facility is necessary for simulating the complicated phenomena induced by the large amounts of steam and gases, especially hydrogen, released into the containment building under severe accidents. A pressure vessel 9.5 m in height and 3.4 m in diameter was designed for the KAERI test facility; the design was based on the THAI test facility, whose experimental safety and reliable measurement systems have been proven over a long period. Because the vessel is operated with steam and with iodine as a corrosive agent, it was made of stainless steel 316L for corrosion resistance over a long operating time, and it was installed at KAERI in March 2014. In the future, the control systems for temperature and pressure in the vessel will be constructed, and the measurement system

  10. Large Scale Behavior and Droplet Size Distributions in Crude Oil Jets and Plumes

    Science.gov (United States)

    Katz, Joseph; Murphy, David; Morra, David

    2013-11-01

    The 2010 Deepwater Horizon blowout introduced several million barrels of crude oil into the Gulf of Mexico. Injected initially as a turbulent jet containing crude oil and gas, the spill caused the formation of a subsurface plume stretching for tens of miles. The behavior of such buoyant multiphase plumes depends on several factors, such as the oil droplet and bubble size distributions, current speed, and ambient stratification. While large droplets quickly rise to the surface, fine ones together with entrained seawater form intrusion layers. Many elements of the physics of droplet formation by an immiscible turbulent jet and the resulting size distribution have not been elucidated, but are known to be significantly influenced by the addition of dispersants, which vary the Weber number by orders of magnitude. We present experimental high-speed visualizations of turbulent jets of sweet petroleum crude oil (MC 252) premixed with Corexit 9500A dispersant at various dispersant-to-oil ratios. Observations were conducted in a 0.9 m × 0.9 m × 2.5 m towing tank, where the large-scale behavior of the jet, both stationary and towed at various speeds to simulate cross-flow, has been recorded at high speed. Preliminary data on oil droplet size and spatial distributions were also measured using a videoscope and pulsed light sheet. Sponsored by Gulf of Mexico Research Initiative (GoMRI).
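
    The Weber number referred to above is We = ρu²d/σ; because dispersants sharply lower the oil-water interfacial tension σ, We rises by the same factor. The numbers below are illustrative assumptions, not values from the study:

```python
# Weber number for a droplet or jet: We = rho * u**2 * d / sigma.
def weber(rho, u, d, sigma):
    return rho * u**2 * d / sigma

rho = 1025.0    # seawater density, kg/m^3
u = 2.0         # jet exit velocity, m/s (assumed)
d = 0.01        # characteristic length scale, m (assumed)

we_plain = weber(rho, u, d, 0.020)    # oil-water tension ~20 mN/m (assumed)
we_disp  = weber(rho, u, d, 0.0002)   # dispersant lowers sigma ~100x (assumed)
```

    Cutting σ by two orders of magnitude raises We by the same two orders of magnitude, which is why dispersant-treated jets break into much finer droplets.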

  11. Large-sized and highly radioactive 3H and 109Cd Langmuir-Blodgett films

    International Nuclear Information System (INIS)

    Shibata, S.; Kawakami, H.; Kato, S.

    1994-02-01

    A device for the deposition of a radioactive Langmuir-Blodgett (LB) film was developed with the use of: (1) a modified horizontal lifting method, (2) an extremely shallow trough, and (3) a surface pressure-generating system without piston oil. It made a precious radioactive subphase solution repeatedly usable while keeping its radioactivity concentration as high as possible. Thin films of any large size can be prepared simply by changing the trough size. Two-monomolecular-layer Y-type films of cadmium [3H]icosanoate and 109Cd icosanoate were built up as 3H and 109Cd β-sources for electron spectroscopy, with intensities of 1.5 GBq (40 mCi) and 7.4 MBq (200 μCi), respectively, and a size of 65×200 mm². Excellent uniformity of the distribution of deposited radioactivity was confirmed by autoradiography and photometry. (author)

  12. Zebrafish Expression Ontology of Gene Sets (ZEOGS): A Tool to Analyze Enrichment of Zebrafish Anatomical Terms in Large Gene Sets

    Science.gov (United States)

    Marsico, Annalisa

    2013-01-01

    The zebrafish (Danio rerio) is an established model organism for developmental and biomedical research. It is frequently used for high-throughput functional genomics experiments, such as genome-wide gene expression measurements, to systematically analyze molecular mechanisms. However, the use of whole embryos or larvae in such experiments leads to a loss of the spatial information. To address this problem, we have developed a tool called Zebrafish Expression Ontology of Gene Sets (ZEOGS) to assess the enrichment of anatomical terms in large gene sets. ZEOGS uses gene expression pattern data from several sources: first, in situ hybridization experiments from the Zebrafish Model Organism Database (ZFIN); second, it uses the Zebrafish Anatomical Ontology, a controlled vocabulary that describes connected anatomical structures; and third, the available connections between expression patterns and anatomical terms contained in ZFIN. Upon input of a gene set, ZEOGS determines which anatomical structures are overrepresented in the input gene set. ZEOGS allows one for the first time to look at groups of genes and to describe them in terms of shared anatomical structures. To establish ZEOGS, we first tested it on random gene selections and on two public microarray datasets with known tissue-specific gene expression changes. These tests showed that ZEOGS could reliably identify the tissues affected, whereas few or no enriched terms were found in the random gene sets. Next we applied ZEOGS to microarray datasets of 24 and 72 h postfertilization zebrafish embryos treated with beclomethasone, a potent glucocorticoid. This analysis resulted in the identification of several anatomical terms related to glucocorticoid-responsive tissues, some of which were stage-specific. Our studies highlight the ability of ZEOGS to extract spatial information from datasets derived from whole embryos, indicating that ZEOGS could be a useful tool to automatically analyze gene
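
    Term-enrichment tools of this kind typically score overrepresentation with a hypergeometric (one-sided Fisher) test; the abstract does not state ZEOGS's exact statistic, so the following is a generic sketch: given M background genes, K of which carry an anatomical term, the p-value is the probability of seeing at least k annotated genes in an input set of size n.

```python
# Hypergeometric upper-tail p-value for term overrepresentation in a gene set.
from math import comb

def enrichment_p(M, K, n, k):
    """P(at least k of the n sampled genes carry the term), sampling without
    replacement from M background genes of which K carry the term."""
    denom = comb(M, n)
    return sum(comb(K, i) * comb(M - K, n - i)
               for i in range(k, min(K, n) + 1)) / denom
```

    For example, drawing all 5 annotated genes out of a background of 20 in a 5-gene input set gives p = 1/C(20, 5), a strong enrichment signal; k = 0 gives p = 1 by construction.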

  13. Zebrafish Expression Ontology of Gene Sets (ZEOGS): a tool to analyze enrichment of zebrafish anatomical terms in large gene sets.

    Science.gov (United States)

    Prykhozhij, Sergey V; Marsico, Annalisa; Meijsing, Sebastiaan H

    2013-09-01

    The zebrafish (Danio rerio) is an established model organism for developmental and biomedical research. It is frequently used for high-throughput functional genomics experiments, such as genome-wide gene expression measurements, to systematically analyze molecular mechanisms. However, the use of whole embryos or larvae in such experiments leads to a loss of the spatial information. To address this problem, we have developed a tool called Zebrafish Expression Ontology of Gene Sets (ZEOGS) to assess the enrichment of anatomical terms in large gene sets. ZEOGS uses gene expression pattern data from several sources: first, in situ hybridization experiments from the Zebrafish Model Organism Database (ZFIN); second, it uses the Zebrafish Anatomical Ontology, a controlled vocabulary that describes connected anatomical structures; and third, the available connections between expression patterns and anatomical terms contained in ZFIN. Upon input of a gene set, ZEOGS determines which anatomical structures are overrepresented in the input gene set. ZEOGS allows one for the first time to look at groups of genes and to describe them in terms of shared anatomical structures. To establish ZEOGS, we first tested it on random gene selections and on two public microarray datasets with known tissue-specific gene expression changes. These tests showed that ZEOGS could reliably identify the tissues affected, whereas few or no enriched terms were found in the random gene sets. Next we applied ZEOGS to microarray datasets of 24 and 72 h postfertilization zebrafish embryos treated with beclomethasone, a potent glucocorticoid. This analysis resulted in the identification of several anatomical terms related to glucocorticoid-responsive tissues, some of which were stage-specific.
Our studies highlight the ability of ZEOGS to extract spatial information from datasets derived from whole embryos, indicating that ZEOGS could be a useful tool to automatically analyze gene expression

  14. Q0000-398 is a high-redshift quasar with a large angular size

    International Nuclear Information System (INIS)

    Gearhart, M.R.; Pacht, E.

    1977-01-01

    A study is described, using the three-element interferometer at the National Radio Astronomy Observatory, West Virginia, to investigate whether any quasars exist that might be radio sources. It was found that Q0000-398 appeared to be a quasar of high redshift and large angular size. The interferometer was operated with a 300-1200-1500 m baseline configuration at 2695 MHz. The radio map for Q0000-398 is shown, and has two weak components separated by 134 ± 40 arcsec. If these components are associated with the optical object, this quasar has the largest known angular size for its redshift value. The results reported for Q0000-398 and other quasars of considerable angular extent demonstrate the importance of considering radio selection effects in the angular diameter-redshift relationship; since radio selection effects are removed when quasars are selected optically, more extensive mapping programs should be undertaken, looking particularly for large-scale structure around optically selected high-z quasars. (U.K.)

  15. Body size distribution of the dinosaurs.

    Directory of Open Access Journals (Sweden)

    Eoin J O'Gorman

    Full Text Available The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern-day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern-day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups, and it supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size.

  16. Body size distribution of the dinosaurs.

    Science.gov (United States)

    O'Gorman, Eoin J; Hone, David W E

    2012-01-01

    The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern-day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern-day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups, and it supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size.

  17. Body Size Distribution of the Dinosaurs

    Science.gov (United States)

    O’Gorman, Eoin J.; Hone, David W. E.

    2012-01-01

    The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern-day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern-day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups, and it supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size. PMID:23284818

  18. Small, medium, large or supersize? The development and evaluation of interventions targeted at portion size

    Science.gov (United States)

    Vermeer, W M; Steenhuis, I H M; Poelman, M P

    2014-01-01

    In the past decades, portion sizes of high-caloric foods and drinks have increased and can be considered an important environmental obesogenic factor. This paper describes a research project in which the feasibility and effectiveness of environmental interventions targeted at portion size was evaluated. The studies that we conducted revealed that portion size labeling, offering a larger variety of portion sizes, and proportional pricing (that is, a comparable price per unit regardless of the size) were considered feasible to implement according to both consumers and point-of-purchase representatives. Studies into the effectiveness of these interventions demonstrated that the impact of portion size labeling on the (intended) consumption of soft drinks was, at most, modest. Furthermore, the introduction of smaller portion sizes of hot meals in worksite cafeterias in addition to the existing size stimulated a moderate number of consumers to replace their large meals by a small meal. Elaborating on these findings, we advocate further research into communication and marketing strategies related to portion size interventions; the development of environmental portion size interventions as well as educational interventions that improve people's ability to deal with a 'super-sized' environment; the implementation of regulation with respect to portion size labeling, and the use of nudges to stimulate consumers to select healthier portion sizes. PMID:25033959

  19. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. in the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5% and the gamma-ray self-attenuation can easily be estimated. A 1 kg trueness control material was made from left-overs of materials used in laboratory intercomparisons. A design study for a large-sample pool-side facility handling plate-type volumes had to be stopped because of a reduction in the human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world's first laboratory intercomparison utilizing large samples. (author)
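
    For a slab-shaped sample, the average gamma-ray self-attenuation mentioned above is commonly estimated as f = (1 - e^(-μt))/(μt), with μ the linear attenuation coefficient and t the thickness. The formula choice is an assumption for illustration, since the abstract does not give one:

```python
# Average gamma-ray self-attenuation factor for a slab sample.
from math import exp

def self_attenuation(mu, t):
    """f = (1 - exp(-mu*t)) / (mu*t), with mu in 1/cm and t in cm.
    f -> 1 as mu*t -> 0, i.e. attenuation becomes negligible."""
    x = mu * t
    if x < 1e-9:            # avoid 0/0; the limit is exactly 1
        return 1.0
    return (1.0 - exp(-x)) / x
```

    For a 1 cm slab with μ = 0.1 cm⁻¹ (a plausible order of magnitude for light matrices at medium gamma energies) the factor is about 0.95, i.e. a ~5% correction.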

  20. Generalization of some hidden subgroup algorithms for input sets of arbitrary size

    Science.gov (United States)

    Poslu, Damla; Say, A. C. Cem

    2006-05-01

    We consider the problem of generalizing some quantum algorithms so that they will work on input domains whose cardinalities are not necessarily powers of two. When analyzing the algorithms we assume that generating superpositions of arbitrary subsets of basis states whose cardinalities are not necessarily powers of two perfectly is possible. We have taken Ballhysa's model as a template and have extended it to Chi, Kim and Lee's generalizations of the Deutsch-Jozsa algorithm and to Simon's algorithm. With perfectly equal superpositions of input sets of arbitrary size, Chi, Kim and Lee's generalized Deutsch-Jozsa algorithms, both for evenly-distributed and evenly-balanced functions, worked with one-sided error property. For Simon's algorithm the success probability of the generalized algorithm is the same as that of the original for input sets of arbitrary cardinalities with equiprobable superpositions, since the property that the measured strings are all those which have dot product zero with the string we search, for the case where the function is 2-to-1, is not lost.

  1. Optimal integrated sizing and planning of hubs with midsize/large CHP units considering reliability of supply

    International Nuclear Information System (INIS)

    Moradi, Saeed; Ghaffarpour, Reza; Ranjbar, Ali Mohammad; Mozaffari, Babak

    2017-01-01

    Highlights: • A new hub planning formulation is proposed to exploit the assets of midsize/large CHPs. • Linearization approaches are proposed for the two-variable nonlinear CHP fuel function. • Efficient operation of the addressed CHPs and hub devices at contingencies is considered. • Reliability-embedded integrated planning and sizing is formulated as one single MILP. • Noticeable results for costs and reliability-embedded planning due to mid/large CHPs. - Abstract: The use of multi-carrier energy systems and the energy hub concept has recently become a widespread trend worldwide. However, most related research specializes in CHP systems with constant electricity/heat ratios and linear operating characteristics. In this paper, integrated energy hub planning and sizing is developed for energy systems with mid-scale and large-scale CHP units, by taking their wide operating range into consideration. The proposed formulation aims to make the best use of the beneficial degrees of freedom associated with these units to decrease total costs and increase reliability. High-accuracy piecewise linearization techniques with approximation errors of about 1% are introduced for the nonlinear two-dimensional CHP input-output function, making it possible to integrate the CHP sizing successfully. Efficient operation of the CHP and the hub at contingencies is captured via a new formulation, developed to be incorporated into the planning and sizing problem. Optimal operation, planning, sizing and contingency operation of hub components are integrated and formulated as a single comprehensive MILP problem. Results on a case study with midsize CHPs reveal a 33% reduction in total costs, and it is demonstrated that the proposed formulation eliminates the need for additional components/capacities to increase the reliability of supply.
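
    The idea behind piecewise linearization can be shown in one dimension (the paper's CHP fuel function is two-variable and embedded in a MILP; this sketch only illustrates the approximation-error side, with f(x) = x² as a hypothetical stand-in for the fuel curve):

```python
# Approximate a nonlinear curve by linear interpolation between breakpoints,
# as done when embedding a nonlinear function in a MILP, and measure the error.
def piecewise_linear(breaks, values, x):
    """Evaluate the piecewise-linear interpolant through (breaks, values)."""
    for (x0, y0), (x1, y1) in zip(zip(breaks, values),
                                  zip(breaks[1:], values[1:])):
        if x0 <= x <= x1:
            w = (x - x0) / (x1 - x0)
            return (1 - w) * y0 + w * y1
    raise ValueError("x outside breakpoint range")

f = lambda x: x * x                      # stand-in for the CHP fuel curve
breaks = [i / 10 for i in range(11)]     # 10 equal segments on [0, 1]
values = [f(b) for b in breaks]
max_rel_err = max(abs(piecewise_linear(breaks, values, x / 1000) - f(x / 1000))
                  for x in range(1, 1000)) / f(1.0)
```

    With 10 segments the worst-case relative error here is h²/4 = 0.25% (h the segment width, |f''| = 2); doubling the breakpoints quarters it, which is how approximation errors on the order of 1% are reached for real fuel curves.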

  2. Study on characteristics of response to nodal vibration in a main hull of a large-size ferry boat; Ogata feri no shusentai yodo oto tokusei ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Takimoto, T; Yamamoto, A; Kasuda, T; Yanagi, K [Mitsubishi Heavy Industries, Ltd., Tokyo (Japan)

    1996-04-10

    Demands for vibration and noise reduction in large-size ferry boats have become more stringent in recent years. At the same time, the vibration exciting forces of main engines and propellers are increasing with higher speeds and horsepower. A large-size ferry boat uses an intermediate-speed diesel engine, which has a high vibration exciting frequency. Therefore, the characteristics of the response to nodal vibration in the main hull induced by the primary internal moment of the main engine were examined for a large-size ferry boat equipped with an intermediate-speed main engine. Detailed vibration calculations, vibration experiments on an actual ship, and measurement results were used for the discussion. The natural frequency of two-node vertical vibration of the main hull was expressed by an estimation equation, following Todd's equation, in which the whole ship is idealized as a beam of uniform cross section and the effect of the rigidity of long structures can be evaluated. Its parameters were derived by the least squares method from the natural frequencies measured on large-size ferry boats A through E. The results may be summarized as follows: the estimation equation has an error of about 5% for the natural frequency of vertical nodal vibration of the main hull, and an error of about 30% for the vertical acceleration at the stern end. 2 refs., 11 figs., 1 tab.

  3. A procedure to detect flaws inside large size marble blocks by ultrasound

    OpenAIRE

    Bramanti, Mauro; Bozzi, Edoardo

    1999-01-01

    In the stone and marble industry there is considerable interest in the possibility of using ultrasound diagnostic techniques for the non-destructive testing of large-size blocks in order to detect internal flaws such as faults, cracks and fissures. In this paper some preliminary measurements are reported in order to acquire basic knowledge of the fundamental properties of ultrasound, such as propagation velocity and attenuation, in the media considered here. We then outline a particular diagnostic pr...

  4. An efficient quantum scheme for Private Set Intersection

    Science.gov (United States)

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-01

    Private Set Intersection allows a client to privately compute the set intersection with the collaboration of the server, one of the most fundamental problems in privacy-preserving multiparty computation. In this paper, we first present a cheat-sensitive quantum scheme for Private Set Intersection. Compared with classical schemes, our scheme has lower communication complexity, which is independent of the size of the server's set. It is therefore very suitable for big data services in the Cloud and for large-scale client-server networks.
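
    For contrast with the quantum scheme, a classical baseline whose communication does scale with the server's set size is the Diffie-Hellman-style PSI, sketched below with toy, insecure parameters (a real deployment would use a large prime-order group and random keys):

```python
# Toy Diffie-Hellman-style PSI: both parties blind hashed elements with secret
# exponents; since (x^a)^b == (x^b)^a mod P, doubly-blinded values match
# exactly on the intersection. Parameters are far too small for real security.
import hashlib

P = 2**127 - 1          # toy prime modulus (Mersenne prime M127)

def to_group(item):
    """Hash an item and map it into the cyclic group generated by 5 mod P."""
    h = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
    return pow(5, h % (P - 1), P)

def psi(client_items, server_items, a=123457, b=987631):
    """Return the intersection; a is the client's key, b the server's key."""
    client_once = {x: pow(to_group(x), a, P) for x in client_items}   # client
    client_twice = {x: pow(v, b, P) for x, v in client_once.items()}  # server
    server_once = [pow(to_group(y), b, P) for y in server_items]      # server
    server_twice = {pow(v, a, P) for v in server_once}                # client
    return {x for x, v in client_twice.items() if v in server_twice}
```

    The client learns only which of its own items the server also holds; neither side sees the other's raw set, only blinded group elements.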

  5. Experimental investigation on the influence of instrument settings on pixel size and nonlinearity in SEM image formation

    DEFF Research Database (Denmark)

    Carli, Lorenzo; Genta, Gianfranco; Cantatore, Angela

    2010-01-01

    The work deals with an experimental investigation on the influence of three Scanning Electron Microscope (SEM) instrument settings, accelerating voltage, spot size and magnification, on the image formation process. Pixel size and nonlinearity were chosen as output parameters related to image...... quality and resolution. A silicon grating calibrated artifact was employed to investigate qualitatively and quantitatively, through a designed experiment approach, the parameters relevance. SEM magnification was found to account by far for the largest contribution on both parameters under consideration...

  6. Large and small sets with respect to homomorphisms and products of groups

    Directory of Open Access Journals (Sweden)

    Riccardo Gusso

    2002-10-01

    Full Text Available We study the behaviour of large, small and medium subsets with respect to homomorphisms and products of groups. Then we introduce the definition of a P-small set in abelian groups and we investigate the relations between this kind of smallness and the previous one, giving some examples that distinguish them.

  7. Teaching Children to Organise and Represent Large Data Sets in a Histogram

    Science.gov (United States)

    Nisbet, Steven; Putt, Ian

    2004-01-01

    Although some bright students in primary school are able to organise numerical data into classes, most attend to the characteristics of individuals rather than the group, and "see the trees rather than the forest". How can teachers in upper primary and early high school teach students to organise large sets of data with widely varying…

  8. Comparison of silicon strip tracker module size using large sensors from 6 inch wafers

    CERN Multimedia

    Honma, Alan

    1999-01-01

    Two large silicon strip sensors made from 6 inch wafers are placed next to each other to simulate the size of a CMS outer silicon tracker module. On the left is a prototype two-sensor CMS inner endcap silicon tracker module made from 4 inch wafers.

  9. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
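    A stripped-down version of the extrapolation idea: if an estimator converges as ψ(T) ≈ ψ∞ + a/T, a linear fit in 1/T yields ψ∞ as the intercept. This one-variable sketch is only schematic; the paper's procedure handles finite time and finite population size jointly, and the numbers below are synthetic.

    ```python
    # Hedged sketch of finite-time extrapolation: assume a large-deviation
    # estimator behaves as psi(T) ~ psi_inf + a / T.  Fitting psi against
    # u = 1/T by ordinary least squares gives psi_inf as the intercept at
    # u -> 0 (i.e. T -> infinity).  The data below are synthetic.

    def extrapolate_infinite_time(times, estimates):
        us = [1.0 / t for t in times]
        n = len(us)
        mu = sum(us) / n
        mp = sum(estimates) / n
        suu = sum((u - mu) ** 2 for u in us)
        sup = sum((u - mu) * (p - mp) for u, p in zip(us, estimates))
        a = sup / suu            # finite-time correction amplitude
        psi_inf = mp - a * mu    # intercept at 1/T -> 0
        return psi_inf, a

    times = [10.0, 20.0, 40.0, 80.0]
    estimates = [1.7 + 0.4 / t for t in times]   # psi_inf = 1.7, a = 0.4

    psi_inf, a = extrapolate_infinite_time(times, estimates)
    ```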

  11. Detection of tiny amounts of fissile materials in large-sized containers with radioactive waste

    Science.gov (United States)

    Batyaev, V. F.; Skliarov, S. V.

    2018-01-01

    The paper is devoted to non-destructive control of tiny amounts of fissile materials in large-sized containers filled with radioactive waste (RAW). The aim of this work is to model an active neutron interrogation facility for detection of fissile materials inside NZK type containers with RAW and determine the minimal detectable mass of U-235 as a function of various parameters: matrix type, nonuniformity of container filling, neutron generator parameters (flux, pulse frequency, pulse duration), measurement time. As a result the dependence of minimal detectable mass on the location of fissile materials inside the container is shown. Nonuniformity of the thermal neutron flux inside a container is the main reason for the space-heterogeneity of minimal detectable mass inside a large-sized container. Our experiments with tiny amounts of uranium-235 (<1 g) confirm the detection of fissile materials in NZK containers by using the active neutron interrogation technique.
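    The paper models the facility in detail; as a first-order stand-in, a minimal detectable mass can be estimated from Currie's detection limit, L_D ≈ 2.71 + 4.65·√B counts, divided by the net counts registered per gram of U-235. The background rate and calibration constant below are invented numbers, not values from the paper.

    ```python
    import math

    # Hedged illustration (not the paper's model): minimal detectable mass
    # from Currie's detection limit, L_D ~ 2.71 + 4.65 * sqrt(B) counts,
    # where B is the expected background counts over the measurement time.
    # The background rate and counts-per-gram calibration are invented.

    def minimal_detectable_mass(background_cps, counts_per_s_per_gram, time_s):
        B = background_cps * time_s                    # background counts
        L_D = 2.71 + 4.65 * math.sqrt(B)               # Currie detection limit
        return L_D / (counts_per_s_per_gram * time_s)  # grams

    # Longer counting times push the detectable mass down, roughly as 1/sqrt(t).
    m_600 = minimal_detectable_mass(5.0, 50.0, 600.0)  # 10 min measurement
    m_60 = minimal_detectable_mass(5.0, 50.0, 60.0)    # 1 min measurement
    ```

    The spatial heterogeneity discussed in the abstract enters through the calibration constant, which in practice depends on where in the container the fissile material sits.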

  12. Detection of tiny amounts of fissile materials in large-sized containers with radioactive waste

    Directory of Open Access Journals (Sweden)

    Batyaev V.F.

    2018-01-01

    Full Text Available The paper is devoted to non-destructive control of tiny amounts of fissile materials in large-sized containers filled with radioactive waste (RAW). The aim of this work is to model an active neutron interrogation facility for detection of fissile materials inside NZK type containers with RAW and determine the minimal detectable mass of U-235 as a function of various parameters: matrix type, nonuniformity of container filling, neutron generator parameters (flux, pulse frequency, pulse duration), measurement time. As a result the dependence of minimal detectable mass on the location of fissile materials inside the container is shown. Nonuniformity of the thermal neutron flux inside a container is the main reason for the space-heterogeneity of minimal detectable mass inside a large-sized container. Our experiments with tiny amounts of uranium-235 (<1 g) confirm the detection of fissile materials in NZK containers by using the active neutron interrogation technique.

  13. A three-step algorithm for CANDECOMP/PARAFAC analysis of large data sets with multicollinearity

    NARCIS (Netherlands)

    Kiers, H.A.L.

    1998-01-01

    Fitting the CANDECOMP/PARAFAC model by the standard alternating least squares algorithm often requires very many iterations. One case in point is that of analysing data with mild to severe multicollinearity. If, in addition, the size of the data is large, the computation of one CANDECOMP/PARAFAC

  14. Parallel analysis tools and new visualization techniques for ultra-large climate data set

    Energy Technology Data Exchange (ETDEWEB)

    Middleton, Don [National Center for Atmospheric Research, Boulder, CO (United States); Haley, Mary [National Center for Atmospheric Research, Boulder, CO (United States)

    2014-12-10

    ParVis was a project funded under LAB 10-05: “Earth System Modeling: Advanced Scientific Visualization of Ultra-Large Climate Data Sets”. Argonne was the lead lab with partners at PNNL, SNL, NCAR and UC-Davis. This report covers progress from January 1st, 2013 through Dec 1st, 2014. Two previous reports covered the period from Summer, 2010, through September 2011 and October 2011 through December 2012, respectively. While the project was originally planned to end on April 30, 2013, personnel and priority changes allowed many of the institutions to continue work through FY14 using existing funds. A primary focus of ParVis was introducing parallelism to climate model analysis to greatly reduce the time-to-visualization for ultra-large climate data sets. Work in the first two years was conducted on two tracks with different time horizons: one track to provide immediate help to climate scientists already struggling to apply their analysis to existing large data sets and another focused on building a new data-parallel library and tool for climate analysis and visualization that will give the field a platform for performing analysis and visualization on ultra-large datasets for the foreseeable future. In the final 2 years of the project, we focused mostly on the new data-parallel library and associated tools for climate analysis and visualization.

  15. Using Content-Specific Lyrics to Familiar Tunes in a Large Lecture Setting

    Science.gov (United States)

    McLachlin, Derek T.

    2009-01-01

    Music can be used in lectures to increase student engagement and help students retain information. In this paper, I describe my use of biochemistry-related lyrics written to the tune of the theme to the television show, The Flintstones, in a large class setting (400-800 students). To determine student perceptions, the class was surveyed several…

  16. Large data sets in finance and marketing: introduction by the special issue editor

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1998-01-01

    On December 18 and 19 of 1997, a small conference on the "Statistical Analysis of Large Data Sets in Business Economics" was organized by the Rotterdam Institute for Business Economic Studies. Eleven presentations were delivered in plenary sessions, which were attended by about 90

  17. An automated system for the preparation of Large Size Dried (LSD) Spikes

    International Nuclear Information System (INIS)

    Verbruggen, A.; Bauwens, J.; Jakobsson, U.; Eykens, R.; Wellum, R.; Aregbe, Y.; Van De Steene, N.

    2008-01-01

    Large size dried (LSD) spikes have been produced to fulfill the existing requirement for reliable and traceable isotopic reference materials for nuclear safeguards. A system to produce certified nuclear isotopic reference material as a U/Pu mixture in the form of large size dried spikes, comparable to those produced using traditional methods has been installed in collaboration with Nucomat, a company with a recognized reputation in design and development of integrated automated systems. The major components of the system are a robot, two balances, a dispenser and a drying unit fitted into a glove box. The robot is software driven and designed to control all movements inside the glove-box, to identify unambiguously the penicillin vials with a bar-code reader, to dispense the LSD batch solution into the vials and to weigh the amount dispensed. The system functionality has been evaluated and the performance validated by comparing the results from a series of samples dispensed and weighed by the automated system with the results by manual substitution weighing. After applying the proper correction factors to the data from the automated system balance no significant difference was observed between the two. However, an additional component of uncertainty of 3×10^-4 is introduced in the uncertainty budget for the certified weights provided by the automatic system. (authors)

  18. An automated system for the preparation of Large Size Dried (LSD) Spikes

    Energy Technology Data Exchange (ETDEWEB)

    Verbruggen, A.; Bauwens, J.; Jakobsson, U.; Eykens, R.; Wellum, R.; Aregbe, Y. [European Commission - Joint Research Centre, Institute for Reference Materials and Measurements (IRMM), Retieseweg 211, B2440 Geel (Belgium); Van De Steene, N. [Nucomat, Mercatorstraat 206, B9100 Sint Niklaas (Belgium)

    2008-07-01

    Large size dried (LSD) spikes have been produced to fulfill the existing requirement for reliable and traceable isotopic reference materials for nuclear safeguards. A system to produce certified nuclear isotopic reference material as a U/Pu mixture in the form of large size dried spikes, comparable to those produced using traditional methods has been installed in collaboration with Nucomat, a company with a recognized reputation in design and development of integrated automated systems. The major components of the system are a robot, two balances, a dispenser and a drying unit fitted into a glove box. The robot is software driven and designed to control all movements inside the glove-box, to identify unambiguously the penicillin vials with a bar-code reader, to dispense the LSD batch solution into the vials and to weigh the amount dispensed. The system functionality has been evaluated and the performance validated by comparing the results from a series of samples dispensed and weighed by the automated system with the results by manual substitution weighing. After applying the proper correction factors to the data from the automated system balance no significant difference was observed between the two. However, an additional component of uncertainty of 3×10^-4 is introduced in the uncertainty budget for the certified weights provided by the automatic system. (authors)

  19. Salt-assisted direct exfoliation of graphite into high-quality, large-size, few-layer graphene sheets.

    Science.gov (United States)

    Niu, Liyong; Li, Mingjian; Tao, Xiaoming; Xie, Zhuang; Zhou, Xuechang; Raju, Arun P A; Young, Robert J; Zheng, Zijian

    2013-08-21

    We report a facile and low-cost method to directly exfoliate graphite powders into large-size, high-quality, and solution-dispersible few-layer graphene sheets. In this method, aqueous mixtures of graphite and inorganic salts such as NaCl and CuCl2 are stirred, and subsequently dried by evaporation. Finally, the mixture powders are dispersed into an orthogonal organic solvent solution of the salt by low-power and short-time ultrasonication, which exfoliates graphite into few-layer graphene sheets. We find that the as-made graphene sheets contain little oxygen, and 86% of them are 1-5 layers with lateral sizes as large as 210 μm². Importantly, the as-made graphene can be readily dispersed into aqueous solution in the presence of surfactant and thus is compatible with various solution-processing techniques towards graphene-based thin film devices.

  20. mmpdb: An Open-Source Matched Molecular Pair Platform for Large Multiproperty Data Sets.

    Science.gov (United States)

    Dalke, Andrew; Hert, Jérôme; Kramer, Christian

    2018-05-29

    Matched molecular pair analysis (MMPA) enables the automated and systematic compilation of medicinal chemistry rules from compound/property data sets. Here we present mmpdb, an open-source matched molecular pair (MMP) platform to create, compile, store, retrieve, and use MMP rules. mmpdb is suitable for the large data sets typically found in pharmaceutical and agrochemical companies and provides new algorithms for fragment canonicalization and stereochemistry handling. The platform is written in Python and based on the RDKit toolkit. It is freely available from https://github.com/rdkit/mmpdb .
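    The matched-pair idea can be illustrated without any cheminformatics toolkit: treat each compound as a (scaffold, R-group) pair with a measured property, match compounds that share a scaffold, and average the property change per R-group transformation. mmpdb itself fragments SMILES with RDKit and does far more (canonicalization, stereochemistry, storage); everything below is a toy stand-in with invented records.

    ```python
    from collections import defaultdict
    from itertools import combinations

    # Toy illustration of matched molecular pair analysis (MMPA): compounds
    # are modeled as (scaffold, r_group, property) records.  Real MMPA, as in
    # mmpdb, works on fragmented SMILES; these records are invented.
    compounds = [
        ("benzene-", "H", 2.0),
        ("benzene-", "F", 2.5),
        ("ethyl-",   "H", 1.0),
        ("ethyl-",   "F", 1.6),
    ]

    def compile_rules(records):
        """Collect the mean property delta for every R-group transformation."""
        by_scaffold = defaultdict(list)
        for scaffold, r, prop in records:
            by_scaffold[scaffold].append((r, prop))
        deltas = defaultdict(list)
        for members in by_scaffold.values():
            # Every pair sharing a scaffold is a "matched pair".
            for (r1, p1), (r2, p2) in combinations(members, 2):
                deltas[(r1, r2)].append(p2 - p1)
        # Averaging the deltas per transformation yields the "rule".
        return {t: sum(ds) / len(ds) for t, ds in deltas.items()}

    rules = compile_rules(compounds)
    # The toy H -> F transformation raises the property by 0.55 on average.
    ```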

  1. Comparative analysis of non-destructive methods to control fissile materials in large-size containers

    Directory of Open Access Journals (Sweden)

    Batyaev V.F.

    2017-01-01

    Full Text Available The analysis of various non-destructive methods to control fissile materials (FM) in large-size containers filled with radioactive waste (RAW) has been carried out. The difficulty of applying passive gamma-neutron monitoring FM in large containers filled with concreted RAW is shown. Selection of an active non-destructive assay technique depends on the container contents; and in case of a concrete or iron matrix with very low activity and low activity RAW the neutron radiation method appears to be more preferable as compared with the photonuclear one.

  2. Optimum sample length for estimating anchovy size distribution and the proportion of juveniles per fishing set for the Peruvian purse-seine fleet

    Directory of Open Access Journals (Sweden)

    Rocío Joo

    2017-04-01

    Full Text Available The length distribution of catches represents a fundamental source of information for estimating growth and spatio-temporal dynamics of cohorts. The length distribution of the catch is estimated from samples of caught individuals. This work studies the optimum number of individuals to sample at each fishing set in order to obtain a representative estimate of the length distribution and of the proportion of juveniles in the fishing set. For that matter, we use anchovy (Engraulis ringens) length data from different fishing sets recorded by at-sea observers from the On-board Observers Program of the Peruvian Marine Research Institute. Finally, we propose an optimum sample size for obtaining robust size and juvenile estimations. Though this work is applied to the anchovy fishery, the procedure can be applied to any fishery, either for on-board or inland biometric measurements.
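    The paper derives its optimum from real length data; as a rough back-of-envelope, a plain binomial approximation already links sample size to the precision of the juvenile proportion: the standard error of a proportion p over n fish is √(p(1−p)/n), so a target standard error requires n ≥ p(1−p)/se². The proportion and target below are hypothetical, not the study's values.

    ```python
    import math

    # Hedged back-of-envelope (not the paper's data-driven procedure): for a
    # binomial proportion p, the standard error is sqrt(p * (1 - p) / n), so
    # hitting a target standard error `se` needs n >= p * (1 - p) / se**2.

    def sample_size_for_proportion(p, target_se):
        return math.ceil(p * (1.0 - p) / target_se ** 2)

    # Hypothetical: ~25% juveniles, 4-percentage-point standard error target.
    n = sample_size_for_proportion(0.25, 0.04)   # 118 fish per fishing set
    ```

    Halving the target standard error roughly quadruples the required sample, which is why an explicit optimum matters for on-board sampling effort.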

  3. Sizing the star cluster population of the Large Magellanic Cloud

    Science.gov (United States)

    Piatti, Andrés E.

    2018-04-01

    The star clusters that populate the Large Magellanic Cloud (LMC) out to large deprojected distances have recently been gathered in a comprehensive compilation. In order to improve our knowledge of the LMC cluster formation and dissolution histories, we closely revisited such a compilation of objects and found that only ˜35 per cent of the previously known catalogued clusters have been included. The remaining entries are likely related to stellar overdensities of the LMC composite star field, because there is a remarkable enhancement of objects with assigned ages older than log(t yr-1) ˜ 9.4, which contrasts with the existence of the LMC cluster age gap; the assumption of a cluster formation rate similar to that of the LMC star field does not help to reconcile such a large number of clusters either; and nearly 50 per cent of them come from cluster search procedures known to produce more than 90 per cent of false detections. The lack of further analyses to confirm the identified overdensities as genuine star clusters also undermines those results. We conclude that the actual size of the LMC main body cluster population is close to that previously known.

  4. Growth of large-size-two-dimensional crystalline pentacene grains for high performance organic thin film transistors

    Directory of Open Access Journals (Sweden)

    Chuan Du

    2012-06-01

    Full Text Available A new approach is presented for the growth of pentacene crystalline thin films with large grain size. Modification of the dielectric surface using a monolayer of a small molecule results in the formation of pentacene thin films with well ordered, large crystalline domain structures. This suggests that pentacene molecules may have a significantly larger diffusion constant on the modified surface. An average hole mobility of about 1.52 cm2/Vs for pentacene based organic thin film transistors (OTFTs) is achieved with good reproducibility.

  5. Does company size matter? Validation of an integrative model of safety behavior across small and large construction companies.

    Science.gov (United States)

    Guo, Brian H W; Yiu, Tak Wing; González, Vicente A

    2018-02-01

    Previous safety climate studies primarily focused on either large construction companies or the construction industry as a whole, while little is known about whether company size has significant effects on workers' understanding of safety climate measures and relationships between safety climate factors and safety behavior. Thus, this study aims to: (a) test the measurement equivalence (ME) of a safety climate measure across workers from small and large companies; (b) investigate if company size alters the causal structure of the integrative model developed by Guo, Yiu, and González (2016). Data were collected from 253 construction workers in New Zealand using a safety climate measure. This study used multi-group confirmatory factor analyses (MCFA) to test the measurement equivalence of the safety climate measure and structure invariance of the integrative model. Results indicate that workers from small and large companies understood the safety climate measure in a similar manner. In addition, it was suggested that company size does not change the causal structure and mediational processes of the integrative model. Both measurement equivalence of the safety climate measure and structural invariance of the integrative model were supported by this study. Practical applications: Findings of this study provided strong support for a meaningful use of the safety climate measure across construction companies in different sizes. Safety behavior promotion strategies designed based on the integrative model may be well suited for both large and small companies. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.

  6. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic.

    Science.gov (United States)

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.

  7. Characterization of Large Structural Genetic Mosaicism in Human Autosomes

    Science.gov (United States)

    Machiela, Mitchell J.; Zhou, Weiyin; Sampson, Joshua N.; Dean, Michael C.; Jacobs, Kevin B.; Black, Amanda; Brinton, Louise A.; Chang, I-Shou; Chen, Chu; Chen, Constance; Chen, Kexin; Cook, Linda S.; Crous Bou, Marta; De Vivo, Immaculata; Doherty, Jennifer; Friedenreich, Christine M.; Gaudet, Mia M.; Haiman, Christopher A.; Hankinson, Susan E.; Hartge, Patricia; Henderson, Brian E.; Hong, Yun-Chul; Hosgood, H. Dean; Hsiung, Chao A.; Hu, Wei; Hunter, David J.; Jessop, Lea; Kim, Hee Nam; Kim, Yeul Hong; Kim, Young Tae; Klein, Robert; Kraft, Peter; Lan, Qing; Lin, Dongxin; Liu, Jianjun; Le Marchand, Loic; Liang, Xiaolin; Lissowska, Jolanta; Lu, Lingeng; Magliocco, Anthony M.; Matsuo, Keitaro; Olson, Sara H.; Orlow, Irene; Park, Jae Yong; Pooler, Loreall; Prescott, Jennifer; Rastogi, Radhai; Risch, Harvey A.; Schumacher, Fredrick; Seow, Adeline; Setiawan, Veronica Wendy; Shen, Hongbing; Sheng, Xin; Shin, Min-Ho; Shu, Xiao-Ou; VanDen Berg, David; Wang, Jiu-Cun; Wentzensen, Nicolas; Wong, Maria Pik; Wu, Chen; Wu, Tangchun; Wu, Yi-Long; Xia, Lucy; Yang, Hannah P.; Yang, Pan-Chyr; Zheng, Wei; Zhou, Baosen; Abnet, Christian C.; Albanes, Demetrius; Aldrich, Melinda C.; Amos, Christopher; Amundadottir, Laufey T.; Berndt, Sonja I.; Blot, William J.; Bock, Cathryn H.; Bracci, Paige M.; Burdett, Laurie; Buring, Julie E.; Butler, Mary A.; Carreón, Tania; Chatterjee, Nilanjan; Chung, Charles C.; Cook, Michael B.; Cullen, Michael; Davis, Faith G.; Ding, Ti; Duell, Eric J.; Epstein, Caroline G.; Fan, Jin-Hu; Figueroa, Jonine D.; Fraumeni, Joseph F.; Freedman, Neal D.; Fuchs, Charles S.; Gao, Yu-Tang; Gapstur, Susan M.; Patiño-Garcia, Ana; Garcia-Closas, Montserrat; Gaziano, J. 
Michael; Giles, Graham G.; Gillanders, Elizabeth M.; Giovannucci, Edward L.; Goldin, Lynn; Goldstein, Alisa M.; Greene, Mark H.; Hallmans, Goran; Harris, Curtis C.; Henriksson, Roger; Holly, Elizabeth A.; Hoover, Robert N.; Hu, Nan; Hutchinson, Amy; Jenab, Mazda; Johansen, Christoffer; Khaw, Kay-Tee; Koh, Woon-Puay; Kolonel, Laurence N.; Kooperberg, Charles; Krogh, Vittorio; Kurtz, Robert C.; LaCroix, Andrea; Landgren, Annelie; Landi, Maria Teresa; Li, Donghui; Liao, Linda M.; Malats, Nuria; McGlynn, Katherine A.; McNeill, Lorna H.; McWilliams, Robert R.; Melin, Beatrice S.; Mirabello, Lisa; Peplonska, Beata; Peters, Ulrike; Petersen, Gloria M.; Prokunina-Olsson, Ludmila; Purdue, Mark; Qiao, You-Lin; Rabe, Kari G.; Rajaraman, Preetha; Real, Francisco X.; Riboli, Elio; Rodríguez-Santiago, Benjamín; Rothman, Nathaniel; Ruder, Avima M.; Savage, Sharon A.; Schwartz, Ann G.; Schwartz, Kendra L.; Sesso, Howard D.; Severi, Gianluca; Silverman, Debra T.; Spitz, Margaret R.; Stevens, Victoria L.; Stolzenberg-Solomon, Rachael; Stram, Daniel; Tang, Ze-Zhong; Taylor, Philip R.; Teras, Lauren R.; Tobias, Geoffrey S.; Viswanathan, Kala; Wacholder, Sholom; Wang, Zhaoming; Weinstein, Stephanie J.; Wheeler, William; White, Emily; Wiencke, John K.; Wolpin, Brian M.; Wu, Xifeng; Wunder, Jay S.; Yu, Kai; Zanetti, Krista A.; Zeleniuch-Jacquotte, Anne; Ziegler, Regina G.; de Andrade, Mariza; Barnes, Kathleen C.; Beaty, Terri H.; Bierut, Laura J.; Desch, Karl C.; Doheny, Kimberly F.; Feenstra, Bjarke; Ginsburg, David; Heit, John A.; Kang, Jae H.; Laurie, Cecilia A.; Li, Jun Z.; Lowe, William L.; Marazita, Mary L.; Melbye, Mads; Mirel, Daniel B.; Murray, Jeffrey C.; Nelson, Sarah C.; Pasquale, Louis R.; Rice, Kenneth; Wiggs, Janey L.; Wise, Anastasia; Tucker, Margaret; Pérez-Jurado, Luis A.; Laurie, Cathy C.; Caporaso, Neil E.; Yeager, Meredith; Chanock, Stephen J.

    2015-01-01

    Analyses of genome-wide association study (GWAS) data have revealed that detectable genetic mosaicism involving large (>2 Mb) structural autosomal alterations occurs in a fraction of individuals. We present results for a set of 24,849 genotyped individuals (total GWAS set II [TGSII]) in whom 341 large autosomal abnormalities were observed in 168 (0.68%) individuals. Merging data from the new TGSII set with data from two prior reports (the Gene-Environment Association Studies and the total GWAS set I) generated a large dataset of 127,179 individuals; we then conducted a meta-analysis to investigate the patterns of detectable autosomal mosaicism (n = 1,315 events in 925 [0.73%] individuals). Restricting to events >2 Mb in size, we observed an increase in event frequency as event size decreased. The combined results underscore that the rate of detectable mosaicism increases with age (p value = 5.5 × 10−31) and is higher in men (p value = 0.002) but lower in participants of African ancestry (p value = 0.003). In a subset of 47 individuals from whom serial samples were collected up to 6 years apart, complex changes were noted over time and showed an overall increase in the proportion of mosaic cells as age increased. Our large combined sample allowed for a unique ability to characterize detectable genetic mosaicism involving large structural events and strengthens the emerging evidence of non-random erosion of the genome in the aging population. PMID:25748358

  8. The higher infinite large cardinals in set theory from their beginnings

    CERN Document Server

    Kanamori, Akihiro

    2003-01-01

    The theory of large cardinals is currently a broad mainstream of modern set theory, the main area of investigation for the analysis of the relative consistency of mathematical propositions and possible new axioms for mathematics. The first of a projected multi-volume series, this book provides a comprehensive account of the theory of large cardinals from its beginnings and some of the direct outgrowths leading to the frontiers of contemporary research. A "genetic" approach is taken, presenting the subject in the context of its historical development. With hindsight the consequential avenues are pursued and the most elegant or accessible expositions given. With open questions and speculations provided throughout the reader should not only come to appreciate the scope and coherence of the overall enterprise but also become prepared to pursue research in several specific areas by studying the relevant sections.

  9. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    Full Text Available In this paper, we consider a size-dependent renewal risk model with a stopping time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method substantially relies on a martingale for the structure of our models.

  10. Comparative analysis of non-destructive methods to control fissile materials in large-size containers

    Science.gov (United States)

    Batyaev, V. F.; Sklyarov, S. V.

    2017-09-01

    The analysis of various non-destructive methods to control fissile materials (FM) in large-size containers filled with radioactive waste (RAW) has been carried out. The difficulty of applying passive gamma-neutron monitoring FM in large containers filled with concreted RAW is shown. Selection of an active non-destructive assay technique depends on the container contents; and in case of a concrete or iron matrix with very low activity and low activity RAW the neutron radiation method appears to be more preferable as compared with the photonuclear one. Note to the reader: the pdf file has been changed on September 22, 2017.

  11. Flexible Multi-Bit Feedback Design for HARQ Operation of Large-Size Data Packets in 5G

    DEFF Research Database (Denmark)

    Khosravirad, Saeed; Mudolo, Luke; Pedersen, Klaus I.

    2017-01-01

    A reliable feedback channel is vital to report decoding acknowledgments in retransmission mechanisms such as the hybrid automatic repeat request (HARQ). While the feedback bits are known to be costly for the wireless link, a feedback message more informative than the conventional single-bit feedback can increase resource utilization efficiency. Considering the practical limitations for increasing feedback message size, this paper proposes a framework for the design of flexible-content multi-bit feedback. The proposed design is capable of efficiently indicating the faulty segments of a failed large-size data packet, thanks to which the transmitter node can reduce the retransmission size to only include the initially failed segments of the packet. We study the effect of feedback size on retransmission efficiency through extensive link-level simulations over realistic channel models. Numerical…
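    The segment-selective retransmission idea can be sketched simply: the receiver returns one bit per segment (a bitmap instead of a single ACK/NACK), and the transmitter resends only the flagged segments. The segment count and per-segment CRC outcomes below are invented for illustration; the paper's actual feedback format is more flexible than a fixed bitmap.

    ```python
    # Hedged sketch of multi-bit HARQ feedback: one bit per segment of a
    # large packet, so the transmitter can resend only the failed segments
    # instead of the whole packet.  Segment names and CRC results are made up.

    def make_feedback(crc_ok):
        """Bitmap feedback: True = segment decoded, False = retransmit."""
        return list(crc_ok)

    def build_retransmission(packet_segments, feedback):
        """Select only the initially failed segments for retransmission."""
        return [seg for seg, ok in zip(packet_segments, feedback) if not ok]

    segments = [f"seg{i}" for i in range(8)]
    crc_ok = [True, True, False, True, False, True, True, True]  # two failures

    feedback = make_feedback(crc_ok)
    resend = build_retransmission(segments, feedback)
    # Only seg2 and seg4 go into the retransmission, 2/8 of the packet.
    ```

    The trade-off the paper studies is exactly this: the bitmap costs more feedback bits than a single ACK/NACK, but the saved retransmission resources can outweigh that cost for large packets.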

  12. Eco-friendly preparation of large-sized graphene via short-circuit discharge of lithium primary battery.

    Science.gov (United States)

    Kang, Shaohong; Yu, Tao; Liu, Tingting; Guan, Shiyou

    2018-02-15

    We propose, for the first time, a large-sized graphene preparation method based on short-circuit discharge of a lithium-graphite primary battery. LiCx is obtained through lithium ion intercalation into the graphite cathode in the above primary battery. Graphene was acquired by chemical reaction between LiCx and stripper agents, with dispersion under sonication conditions. The obtained graphene is characterized by Raman spectroscopy, X-ray diffraction (XRD), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), atomic force microscopy (AFM) and scanning electron microscopy (SEM). The results indicate that the as-prepared graphene has a large size and few defects, and it is monolayer or less than three layers. The quality of the graphene is significantly improved compared to that from reported electrochemical methods. The yield of graphene can reach 8.76% when the ratio of H2O to NMP is 3:7. This method provides a potential solution for the recycling of waste lithium ion batteries. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Optimized Basis Sets for the Environment in the Domain-Specific Basis Set Approach of the Incremental Scheme.

    Science.gov (United States)

    Anacker, Tony; Hill, J Grant; Friedrich, Joachim

    2016-04-21

    Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme, with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis sets and of those optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis sets and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.

  14. Modeling large data sets in marketing

    NARCIS (Netherlands)

    Balasubramanian, S; Gupta, S; Kamakura, W; Wedel, M

    In the last two decades, marketing databases have grown significantly in terms of size and richness of available information. The analysis of these databases raises several information-related and statistical issues. We aim at providing an overview of a selection of issues related to the analysis of

  15. Hierarchical sets: analyzing pangenome structure through scalable set visualizations

    Science.gov (United States)

    2017-01-01

    Abstract Motivation: The increase in available microbial genome sequences has resulted in an increase in the size of the pangenomes being analyzed. Current pangenome visualizations are not intended for the pangenome sizes possible today, and new approaches are necessary in order to convert the increase in available information into an increase in knowledge. As the pangenome data structure is essentially a collection of sets, we explore the potential of scalable set visualization as a tool for pangenome analysis. Results: We present a new hierarchical clustering algorithm based on set arithmetic that optimizes the intersection sizes along the branches. The intersection and union sizes along the hierarchy are visualized using a composite dendrogram and icicle plot, which, in a pangenome context, shows the evolution of pangenome and core size along the evolutionary hierarchy. Outlying elements, i.e. elements whose presence patterns do not correspond with the hierarchy, can be visualized using hierarchical edge bundles. When applied to pangenome data this plot shows putative horizontal gene transfers between the genomes and can highlight relationships between genomes that are not represented by the hierarchy. We illustrate the utility of hierarchical sets by applying it to a pangenome based on 113 Escherichia and Shigella genomes and find that it provides a powerful addition to pangenome analysis. Availability and Implementation: The described clustering algorithm and visualizations are implemented in the hierarchicalSets R package available from CRAN (https://cran.r-project.org/web/packages/hierarchicalSets). Contact: thomasp85@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28130242
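
    The intersection-optimizing clustering can be illustrated with a greedy sketch. This is a simplification for illustration: the hierarchicalSets package is implemented in R and its exact objective differs.

```python
def hierarchical_sets(genomes):
    """Greedy sketch of intersection-maximizing clustering: repeatedly merge
    the two clusters whose gene-set intersection (their shared "core") is
    largest, recording the merges as a nested-tuple dendrogram.

    genomes: dict mapping genome name -> set of gene families.
    Returns (dendrogram, pangenome), the pangenome being the full union.
    """
    clusters = {name: (name, set(genes)) for name, genes in genomes.items()}
    while len(clusters) > 1:
        names = list(clusters)
        # pick the pair with the largest intersection
        best = max(
            ((a, b) for i, a in enumerate(names) for b in names[i + 1:]),
            key=lambda p: len(clusters[p[0]][1] & clusters[p[1]][1]),
        )
        a, b = best
        tree_a, set_a = clusters.pop(a)
        tree_b, set_b = clusters.pop(b)
        clusters[a + "+" + b] = ((tree_a, tree_b), set_a | set_b)
    (tree, pangenome), = clusters.values()
    return tree, pangenome

tree, pan = hierarchical_sets({
    "E1": {"g1", "g2", "g3"},
    "E2": {"g1", "g2", "g4"},
    "S1": {"g1", "g5"},
})
print(tree)      # E1 and E2 merge first: they share two gene families
print(len(pan))  # 5, the pangenome size at the root
```

    In the real tool, the intersection sizes along the branches give the core-genome curve and the union sizes give the pangenome curve.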

  16. Squamate hatchling size and the evolutionary causes of negative offspring size allometry.

    Science.gov (United States)

    Meiri, S; Feldman, A; Kratochvíl, L

    2015-02-01

    Although fecundity selection is ubiquitous, in an overwhelming majority of animal lineages small species produce smaller numbers of offspring per clutch. In this context, egg, hatchling and neonate sizes are absolutely larger, but smaller relative to adult body size, in larger species. The evolutionary causes of this widespread phenomenon are not fully explored. Negative offspring size allometry can result from processes limiting maximal egg/offspring size, forcing larger species to produce relatively smaller offspring ('upper limit'), or from a limit on minimal egg/offspring size, forcing smaller species to produce relatively larger offspring ('lower limit'). Several reptile lineages have invariant clutch sizes, where females always lay either one or two eggs per clutch. These lineages offer an interesting perspective on the general evolutionary forces driving negative offspring size allometry, because an important selective factor, fecundity selection in a single clutch, is eliminated here. Under the upper-limit hypotheses, large offspring should be selected against in lineages with invariant clutch sizes as well, and these lineages should therefore exhibit the same, or shallower, offspring size allometry as lineages with variable clutch size. On the other hand, the lower-limit hypotheses would allow lineages with invariant clutch sizes to have steeper offspring size allometries. Using an extensive data set on the hatchling and female sizes of > 1800 species of squamates, we document that negative offspring size allometry is widespread in lizards and snakes with variable clutch sizes and that some lineages with invariant clutch sizes have unusually steep offspring size allometries. These findings suggest that negative offspring size allometry is driven by a constraint on minimal offspring size, which scales with a negative allometry. © 2014 European Society For Evolutionary Biology.
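
    "Negative allometry" here is a statement about a log-log regression slope: log(hatchling size) = a + b·log(adult size) with b < 1. A minimal sketch on synthetic data (all values below are invented for illustration, not the study's measurements):

```python
import numpy as np

# Simulate hatchling snout-vent length scaling with adult SVL with an
# exponent below 1 (here 0.7), the signature of negative offspring size
# allometry, plus multiplicative measurement noise.
rng = np.random.default_rng(0)
adult_svl = rng.uniform(40, 400, size=200)                    # mm, hypothetical
hatchling_svl = 3.0 * adult_svl ** 0.7 * rng.lognormal(0, 0.05, 200)

# Ordinary least squares on the log-log scale recovers the exponent b.
b, a = np.polyfit(np.log(adult_svl), np.log(hatchling_svl), 1)
print(round(b, 2))  # close to the simulated 0.7, i.e. b < 1
```

    A slope of exactly 1 would mean hatchlings are a constant fraction of adult size; b < 1 means larger species produce relatively smaller offspring.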

  17. High voltage distribution scheme for large size GEM detector

    International Nuclear Information System (INIS)

    Saini, J.; Kumar, A.; Dubey, A.K.; Negi, V.S.; Chattopadhyay, S.

    2016-01-01

    Gas Electron Multiplier (GEM) detectors will be used for muon tracking in the Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR) at Darmstadt, Germany. The sizes of the detector modules in the muon chambers are of the order of 1 metre x 0.5 metre. For the construction of these chambers, three GEM foils are used per chamber, each made from 50 μm thin kapton foil clad on both sides. Each GEM foil has millions of holes in it. In such large-scale manufacturing of the foils, even after stringent quality controls, some of the holes may still have defects, or defects might develop over time under operating conditions. These defects may result in a short circuit of the entire GEM foil, and a short in even a single hole makes the entire foil unusable. To reduce such occurrences, high voltage (HV) segmentation within the foils has been introduced. These segments are powered either by an individual HV supply per segment or through an active HV distribution that manages the large number of segments across the foil. Individual supplies, apart from being costly, are highly complex to implement. Additionally, CBM will have a high intensity of particles bombarding the detector, causing the current in the resistive chain feeding the GEM detector to change with the intensity. This leads to voltage fluctuations across the foil, resulting in gain variation with particle intensity. Hence, a low-cost active HV distribution is designed to take care of the above issues.
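
    The gain-variation problem that motivates the active distribution can be seen with a one-line voltage-sag estimate. The component values below are hypothetical, not CBM's:

```python
def foil_voltage(v_supply, chain_resistance_mohm, detector_current_ua):
    """Voltage actually seen by a GEM electrode fed through a resistive chain.

    The particle-induced detector current flows through the series resistance
    of the chain and drops extra voltage, so the electrode voltage (and hence
    the gas gain, which depends exponentially on it) sags at high intensity.
    """
    ohms = chain_resistance_mohm * 1e6
    amps = detector_current_ua * 1e-6
    return v_supply - ohms * amps

quiet = foil_voltage(400.0, 10.0, 0.1)   # low beam intensity: 0.1 uA drawn
busy = foil_voltage(400.0, 10.0, 2.0)    # high beam intensity: 2 uA drawn
print(round(quiet - busy, 1))  # 19.0 V of sag between the two conditions
```

    An active HV distribution regulates the segment voltage instead of letting it float on the passive divider, removing this intensity dependence.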

  18. The reference frame for encoding and retention of motion depends on stimulus set size.

    Science.gov (United States)

    Huynh, Duong; Tripathy, Srimant P; Bedell, Harold E; Öğmen, Haluk

    2017-04-01

    The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information.
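
    The vector-decomposition technique can be sketched as follows: during smooth pursuit, retinal (retinotopic) motion is world (spatiotopic) motion minus the eye velocity, and a reported direction can be expressed as a least-squares mix of the two predicted directions. All numbers below are invented for illustration, not the study's data:

```python
import numpy as np

world_v = np.array([1.0, 0.0])   # deg/s, rightward motion in the world
eye_v = np.array([0.0, 1.0])     # deg/s, upward pursuit
retinal_v = world_v - eye_v      # the motion projected onto the retina

def unit(v):
    return v / np.linalg.norm(v)

# Hypothetical mean reported direction, lying between the two predictions:
reported = unit(np.array([1.0, -0.4]))

# Solve reported ~ w_s * spatiotopic_dir + w_r * retinotopic_dir
A = np.column_stack([unit(world_v), unit(retinal_v)])
w_s, w_r = np.linalg.lstsq(A, reported, rcond=None)[0]
print(w_s > w_r)  # here the spatiotopic component dominates
```

    Comparing w_s and w_r across set sizes is the logic behind the paper's claim that the spatiotopic frame dominates encoding for small and medium set sizes.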

  19. Breeding and Genetics Symposium: really big data: processing and analysis of very large data sets.

    Science.gov (United States)

    Cole, J B; Newman, S; Foertter, F; Aguilar, I; Coffey, M

    2012-03-01

    Modern animal breeding data sets are large and getting larger, due in part to recent availability of high-density SNP arrays and cheap sequencing technology. High-performance computing methods for efficient data warehousing and analysis are under development. Financial and security considerations are important when using shared clusters. Sound software engineering practices are needed, and it is better to use existing solutions when possible. Storage requirements for genotypes are modest, although full-sequence data will require greater storage capacity. Storage requirements for intermediate and results files for genetic evaluations are much greater, particularly when multiple runs must be stored for research and validation studies. The greatest gains in accuracy from genomic selection have been realized for traits of low heritability, and there is increasing interest in new health and management traits. The collection of sufficient phenotypes to produce accurate evaluations may take many years, and high-reliability proofs for older bulls are needed to estimate marker effects. Data mining algorithms applied to large data sets may help identify unexpected relationships in the data, and improved visualization tools will provide insights. Genomic selection using large data requires a lot of computing power, particularly when large fractions of the population are genotyped. Theoretical improvements have made possible the inversion of large numerator relationship matrices, permitted the solving of large systems of equations, and produced fast algorithms for variance component estimation. Recent work shows that single-step approaches combining BLUP with a genomic relationship (G) matrix have similar computational requirements to traditional BLUP, and the limiting factor is the construction and inversion of G for many genotypes. A naïve algorithm for creating G for 14,000 individuals required almost 24 h to run, but custom libraries and parallel computing reduced that to
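
    Construction of G dominates the single-step cost. A common construction is VanRaden's first method, G = ZZ′ / (2 Σⱼ pⱼ(1−pⱼ)) with Z the column-centered genotype matrix; the abstract does not say which variant the 14,000-individual benchmark used, so the sketch below is illustrative:

```python
import numpy as np

def genomic_relationship(M):
    """Genomic relationship matrix G from a 0/1/2 genotype matrix
    (individuals x markers), following VanRaden's first method:
    G = ZZ' / (2 * sum_j p_j * (1 - p_j)), with Z = M - 2p.
    """
    p = M.mean(axis=0) / 2.0        # observed allele frequency per marker
    Z = M - 2.0 * p                 # centre each marker column
    denom = 2.0 * np.sum(p * (1.0 - p))
    return Z @ Z.T / denom          # the O(n^2 * m) step that dominates

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(14, 500)).astype(float)  # 14 toy individuals
G = genomic_relationship(M)
print(G.shape)               # (14, 14)
print(np.allclose(G, G.T))   # True: G is symmetric by construction
```

    The matrix product Z @ Z.T is exactly the part that custom BLAS libraries and parallel computing accelerate for tens of thousands of genotyped animals.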

  20. Multiscale virtual particle based elastic network model (MVP-ENM) for normal mode analysis of large-sized biomolecules.

    Science.gov (United States)

    Xia, Kelin

    2017-12-20

    In this paper, a multiscale virtual particle based elastic network model (MVP-ENM) is proposed for the normal mode analysis of large-sized biomolecules. The multiscale virtual particle (MVP) model is proposed for the discretization of biomolecular density data. With this model, large-sized biomolecular structures can be coarse-grained into virtual particles such that a balance between model accuracy and computational cost can be achieved. An elastic network is constructed by assuming "connections" between virtual particles. The connection is described by a special harmonic potential function, which considers the influence from both the mass distributions and distance relations of the virtual particles. Two independent models, i.e., the multiscale virtual particle based Gaussian network model (MVP-GNM) and the multiscale virtual particle based anisotropic network model (MVP-ANM), are proposed. It has been found that in the Debye-Waller factor (B-factor) prediction, the results from our MVP-GNM with a high resolution are as good as the ones from GNM. Even with low resolutions, our MVP-GNM can still capture the global behavior of the B-factor very well with mismatches predominantly from the regions with large B-factor values. Further, it has been demonstrated that the low-frequency eigenmodes from our MVP-ANM are highly consistent with the ones from ANM even with very low resolutions and a coarse grid. Finally, the great advantage of MVP-ANM model for large-sized biomolecules has been demonstrated by using two poliovirus virus structures. The paper ends with a conclusion.

  1. A simple, compact, and rigid piezoelectric step motor with large step size

    Science.gov (United States)

    Wang, Qi; Lu, Qingyou

    2009-08-01

    We present a novel piezoelectric stepper motor featuring high compactness, rigidity, simplicity, and operability in any direction. Although tested at room temperature, it is believed to work at low temperatures, owing to its loose operating conditions and large step size. The motor is implemented with a piezoelectric scanner tube that is axially cut into almost two halves and clamps a hollow shaft inside at both ends via the spring parts of the shaft. Two driving voltages that deform the two halves of the piezotube one at a time in one direction, then let them recover simultaneously, will move the shaft in the opposite direction, and vice versa.

  2. Contribution of large-sized primary sensory neuronal sensitization to mechanical allodynia by upregulation of hyperpolarization-activated cyclic nucleotide gated channels via cyclooxygenase 1 cascade.

    Science.gov (United States)

    Sun, Wei; Yang, Fei; Wang, Yan; Fu, Han; Yang, Yan; Li, Chun-Li; Wang, Xiao-Liang; Lin, Qing; Chen, Jun

    2017-02-01

    Under physiological conditions, small- and medium-sized dorsal root ganglion (DRG) neurons are believed to mediate nociceptive behavioral responses to painful stimuli. However, it has recently been found that a number of large-sized neurons are also involved in nociceptive transmission under neuropathic conditions. Nonetheless, the mechanisms by which large-sized DRG neurons mediate nociception are poorly understood. In the present study, the role of large-sized neurons in bee venom (BV)-induced mechanical allodynia and the underlying mechanisms were investigated. Behaviorally, it was found that mechanical allodynia was still evoked by BV injection in rats in which the transient receptor potential vanilloid 1-positive DRG neurons had been chemically deleted. Electrophysiologically, in vitro patch clamp recordings of large-sized neurons showed hyperexcitability in these neurons. Interestingly, the firing pattern of these neurons changed from phasic to tonic under the BV-inflamed state. It has been suggested that hyperpolarization-activated cyclic nucleotide-gated channels (HCNs) expressed in large-sized DRG neurons contribute importantly to repetitive firing. We therefore examined the roles of HCNs in BV-induced mechanical allodynia. Consistent with the overexpression of HCN1/2 detected by immunofluorescence, the HCN-mediated hyperpolarization-activated cation current (Ih) was significantly increased in the BV-treated samples. Pharmacological experiments demonstrated that the hyperexcitability and upregulation of Ih in large-sized neurons were mediated by the cyclooxygenase-1 (COX-1)-prostaglandin E2 pathway. This is evident from the fact that a COX-1 inhibitor significantly attenuated the BV-induced mechanical allodynia. These results suggest that BV can excite large-sized DRG neurons at least in part by increasing Ih through activation of COX-1. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Measuring fire size in tunnels

    International Nuclear Information System (INIS)

    Guo, Xiaoping; Zhang, Qihui

    2013-01-01

    A new measure of fire size Q′ has been introduced for longitudinally ventilated tunnels as the ratio of flame height to the height of the tunnel. The analysis in this article has shown that Q′ controls both the critical velocity and the maximum ceiling temperature in the tunnel. Before the fire flame reaches the tunnel ceiling (Q′ < 1.0), the critical velocity increases with fire size; once the flame has hit the ceiling (Q′ > 1.0), the Froude number Fr approaches a constant value. This is also a well-known phenomenon in large tunnel fires. Tunnel ceiling temperature shows the opposite trend: before the fire flame reaches the ceiling, it increases very slowly with the fire size, but once the flame has hit the ceiling of the tunnel, temperature rises rapidly with Q′. The good agreement between the current prediction and three different sets of experimental data has demonstrated that the theory correctly models the relation among the heat release rate of the fire, the ventilation flow and the height of the tunnel. From a design point of view, the theoretical maximum of the critical velocity for a given tunnel can help to prevent an oversized ventilation system. -- Highlights: • Fire sizing is an important safety measure in tunnel design. • The new measure of fire size is a function of the HRR of the fire, tunnel height and ventilation. • The measure can distinguish large and small fires. • The characteristics of the different fire regimes are consistent with observations in real fires.
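
    A minimal numeric illustration of the dimensionless measure (the regime labels and example heights are mine, not the paper's):

```python
def q_prime(flame_height_m, tunnel_height_m):
    """Dimensionless fire size: ratio of flame height to tunnel height."""
    return flame_height_m / tunnel_height_m

def regime(qp):
    # Q' < 1: flame below the ceiling; ceiling temperature rises slowly.
    # Q' > 1: flame impinges on the ceiling; temperature rises rapidly
    #         and the critical-velocity Froude number approaches a constant.
    return "free-burning" if qp < 1.0 else "ceiling-impinging"

print(regime(q_prime(3.0, 5.5)))   # a small fire in a 5.5 m tall tunnel
print(regime(q_prime(8.0, 5.5)))   # a large fire: the flame hits the ceiling
```

    The same heat release rate can therefore be a "large" fire in a low tunnel and a "small" one in a tall tunnel, which is the point of normalizing by tunnel height.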

  4. CLUSTER DYNAMICS LARGELY SHAPES PROTOPLANETARY DISK SIZES

    Energy Technology Data Exchange (ETDEWEB)

    Vincke, Kirsten; Pfalzner, Susanne, E-mail: kvincke@mpifr-bonn.mpg.de [Max Planck Institute for Radio Astronomy, Auf dem Hügel 69, D-53121 Bonn (Germany)

    2016-09-01

    To what degree the cluster environment influences the sizes of protoplanetary disks surrounding young stars is still an open question. This is particularly true for the short-lived clusters typical of the solar neighborhood, in which the stellar density and therefore the influence of the cluster environment change considerably over the first 10 Myr. In previous studies, the effect of the gas on the cluster dynamics has often been neglected; this is remedied here. Using the code NBody6++, we study the stellar dynamics in different developmental phases—embedded, expulsion, and expansion—including the gas, and quantify the effect of fly-bys on disk size. We concentrate on massive clusters (M_cl = 10³–6 × 10⁴ M_⊙), which are representative of clusters like the Orion Nebula Cluster (ONC) or NGC 6611. We find that not only the stellar density but also the duration of the embedded phase matters. The densest clusters react fastest to the gas expulsion and drop quickly in density; here, 98% of relevant encounters happen before gas expulsion. By contrast, disks in sparser clusters are initially less affected, but because these clusters expand more slowly, 13% of disks are truncated after gas expulsion. For ONC-like clusters, we find that disks larger than 500 au are usually affected by the environment, which corresponds to the observation that 200 au-sized disks are common. For NGC 6611-like clusters, disk sizes are cut down on average to roughly 100 au. A testable hypothesis would be that the disks in the center of NGC 6611 should be on average ≈20 au and therefore considerably smaller than those in the ONC.

  5. Strength and fatigue testing of large size wind turbines rotors. Vol. II: Full size natural vibration and static strength test, a reference case

    Energy Technology Data Exchange (ETDEWEB)

    Arias, F.; Soria, E.

    1996-12-01

    This report describes the methods and procedures selected to define a strength test for large-size wind turbines, and in particular their application to a 500 kW blade, together with the results obtained in the test carried out in July 1995 at Asinel's test plant (Madrid). Henceforth, this project is designated in abbreviated form by the acronym SFAT. (Author)

  6. Strength and fatigue testing of large size wind turbines rotors. Volume II. Full size natural vibration and static strength test, a reference case

    International Nuclear Information System (INIS)

    Arias, F.; Soria, E.

    1996-01-01

    This report describes the methods and procedures selected to define a strength test for large-size wind turbines, and in particular their application to a 500 kW blade, together with the results obtained in the test carried out in July 1995 at the Asinel test plant (Madrid). Henceforth, this project is designated in abbreviated form by the acronym SFAT. (Author)

  7. Job Stress in the United Kingdom: Are Small and Medium-Sized Enterprises and Large Enterprises Different?

    Science.gov (United States)

    Lai, Yanqing; Saridakis, George; Blackburn, Robert

    2015-08-01

    This paper examines the relationships between firm size and employees' experience of work stress. We used a matched employer-employee dataset (Workplace Employment Relations Survey 2011) that comprises 7182 employees from 1210 private organizations in the United Kingdom. Initially, we find that employees in small and medium-sized enterprises experience a lower level of overall job stress than those in large enterprises, although the effect disappears when we control for individual and organizational characteristics in the model. We also find that quantitative work overload, job insecurity, poor promotion opportunities, good work relationships and poor communication are strongly associated with job stress in small and medium-sized enterprises, whereas qualitative work overload, poor job autonomy and employee engagement are more strongly related to job stress in larger enterprises. Hence, our estimates show that the association and magnitude of the estimated effects differ significantly by enterprise size. Copyright © 2013 John Wiley & Sons, Ltd.

  8. Reduced-portion entrées in a worksite and restaurant setting: impact on food consumption and waste.

    Science.gov (United States)

    Berkowitz, Sarah; Marquart, Len; Mykerezi, Elton; Degeneffe, Dennis; Reicks, Marla

    2016-11-01

    Large portion sizes in restaurants have been identified as a public health risk. The purpose of the present study was to determine whether customers in two different food-service operator segments (a non-commercial worksite cafeteria and a commercial upscale restaurant) would select reduced-portion menu items, and the impact of selecting reduced-portion menu items on energy and nutrient intakes and plate waste. Consumption and plate waste data were collected for 5 weeks before and 7 weeks after introduction of five reduced-size entrées in a worksite lunch cafeteria, and for 3 weeks before and 4 weeks after introduction of five reduced-size dinner entrées in a restaurant setting. Full-size entrées were available throughout the entire study periods. Setting: a worksite cafeteria and a commercial upscale restaurant in a large US Midwestern metropolitan area. Subjects: adult worksite employees and restaurant patrons. Reduced-size entrées accounted for 5·3-12·8 % and 18·8-31·3 % of total entrées selected in the worksite and restaurant settings, respectively. Food waste, energy intake and intakes of total fat, saturated fat, cholesterol, Na, fibre, Ca, K and Fe were significantly lower when both full- and reduced-size entrées were served, in both the worksite setting and the restaurant setting, compared with when only full-size entrées were served. A relatively small proportion of reduced-size entrées were selected, but this still resulted in reductions in overall energy and nutrient intakes. These outcomes could serve as the foundation for future studies to determine strategies to enhance acceptance of reduced-portion menu items in restaurant settings.

  9. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary steps of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error, or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists, and is typically set to >80%. This is necessary since even the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult to conduct and will result in a waste of time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts involved in estimating the sample size.
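
    For the two-group comparison of means described above, the standard formula is n = 2(z_{1-α/2} + z_{1-β})² σ² / Δ² per group. A sketch with the conventional α = 0.05 (two-sided) and 80% power:

```python
from math import ceil

def sample_size_per_group(sigma, delta, z_alpha=1.96, z_beta=0.8416):
    """n per group for detecting a difference in means of size delta,
    with common standard deviation sigma:

        n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2

    Defaults encode alpha = 0.05 two-sided (z = 1.96) and 80% power
    (z = 0.8416). Round up, since subjects come in whole numbers.
    """
    n = 2.0 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return ceil(n)

# e.g. detect a 5-unit difference between groups when the SD is 10:
print(sample_size_per_group(sigma=10, delta=5))  # 63 per group
```

    The formula makes the trade-offs in the abstract explicit: halving the detectable difference Δ quadruples the required n, while demanding more power raises z_beta and hence n.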

  10. Towards Optimal Buffer Size in Wi-Fi Networks

    KAUST Repository

    Showail, Ahmad J.

    2016-01-19

    Buffer sizing is an important network configuration parameter that impacts the quality of data traffic. Falling memory cost and the fallacy that ‘more is better’ lead to over-provisioning network devices with large buffers. Over-buffering, or the so-called ‘bufferbloat’ phenomenon, creates excessive end-to-end delay in today’s networks. On the other hand, under-buffering results in frequent packet loss and subsequent under-utilization of network resources. The buffer sizing problem has been studied extensively for wired networks. However, there is little work addressing the unique challenges of the wireless environment. In this dissertation, we discuss buffer sizing challenges in wireless networks, classify the state-of-the-art solutions, and propose two novel buffer sizing schemes. The first scheme targets buffer sizing in wireless multi-hop networks, where the radio spectral resource is shared among a set of contending nodes; hence, it sizes the buffer collectively and distributes it over the set of interfering devices. The second buffer sizing scheme is designed to cope with recent Wi-Fi enhancements. It adapts the buffer size based on measured link characteristics and network load, and it enforces limits on the buffer size to maximize frame aggregation benefits. Both mechanisms are evaluated using simulation as well as testbed implementation over half-duplex and full-duplex wireless networks. Experimental evaluation shows that our proposal reduces latency by an order of magnitude.
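
    The first scheme's collective-buffer idea can be caricatured as splitting one buffer budget across the interfering nodes. The proportional policy below is a hypothetical sketch, much simpler than the dissertation's actual mechanism:

```python
def distribute_buffer(total_buffer_pkts, loads):
    """Split one collective buffer budget over a set of interfering nodes,
    proportionally to each node's offered load (hypothetical policy).

    Every node keeps at least one packet of buffer; because that minimum is
    applied per node, the shares can slightly exceed the nominal budget.
    """
    total_load = sum(loads)
    return [max(1, round(total_buffer_pkts * l / total_load)) for l in loads]

# A 100-packet collective buffer over a 4-hop chain with unequal load:
print(distribute_buffer(100, [40, 30, 20, 10]))  # [40, 30, 20, 10]
```

    The point of sizing collectively is that nodes sharing one radio channel cannot all drain their queues at once, so a per-node rule-of-thumb buffer over-provisions the chain as a whole.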

  11. Ulysses: accurate detection of low-frequency structural variations in large insert-size sequencing libraries.

    Science.gov (United States)

    Gillet-Markowska, Alexandre; Richard, Hugues; Fischer, Gilles; Lafontaine, Ingrid

    2015-03-15

    The detection of structural variations (SVs) in short-range Paired-End (PE) libraries remains challenging because SV breakpoints can involve large dispersed repeated sequences, or carry inherent complexity, hardly resolvable with classical PE sequencing data. In contrast, large insert-size sequencing libraries (Mate-Pair libraries) provide higher physical coverage of the genome and give access to repeat-containing regions. They can thus theoretically overcome previous limitations, and they are becoming routinely accessible. Nevertheless, broad insert-size distributions and high rates of chimeric sequences are usually associated with this type of library, which makes the accurate annotation of SVs challenging. Here, we present Ulysses, a tool that achieves drastically higher detection accuracy than existing tools, both on simulated and real mate-pair sequencing datasets from the 1000 Human Genome project. Ulysses achieves high specificity over the complete spectrum of variants by assessing, in a principled manner, the statistical significance of each possible variant (duplications, deletions, translocations, insertions and inversions) against an explicit model for the generation of experimental noise. This statistical model proves particularly useful for the detection of low-frequency variants. SV detection performed on a large-insert Mate-Pair library from a breast cancer sample revealed a high level of somatic duplications in the tumor and, to a lesser extent, in the blood sample as well. Altogether, these results show that Ulysses is a valuable tool for the characterization of somatic mosaicism in human tissues and in cancer genomes. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
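
    Ulysses' central statistical idea, scoring each candidate variant against an explicit noise model, can be sketched as a tail test: how likely would chimeric pairs alone produce the observed number of supporting discordant mate-pairs? The Poisson model and all numbers below are illustrative assumptions, not Ulysses' actual statistic:

```python
from math import exp

def poisson_sf(k, lam):
    """P[X >= k] for X ~ Poisson(lam), accumulated from the pmf."""
    pmf, cdf = exp(-lam), 0.0
    for i in range(k):
        cdf += pmf
        pmf *= lam / (i + 1)
    return 1.0 - cdf

# Hypothetical locus: 9 discordant mate-pairs support a putative deletion,
# while the library's chimera rate predicts only 1.5 artefactual pairs there.
p = poisson_sf(9, 1.5)
print(p < 1e-4)  # True: far more support than experimental noise explains
```

    Testing every candidate against the noise expectation, rather than using a fixed read-count cutoff, is what lets this style of caller keep specificity while reaching low-frequency variants.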

  12. An effective filter for IBD detection in large data sets.

    KAUST Repository

    Huang, Lin; Bercovici, Sivan; Rodriguez, Jesse M; Batzoglou, Serafim

    2014-03-25

    Identity by descent (IBD) inference is the task of computationally detecting genomic segments that are shared between individuals by means of common familial descent. Accurate IBD detection plays an important role in various genomic studies, ranging from mapping disease genes to exploring ancient population histories. The majority of recent work in the field has focused on improving the accuracy of inference, targeting shorter genomic segments that originate from a more ancient common ancestor. The accuracy of these methods, however, is achieved at the expense of high computational cost, resulting in a prohibitively long running time when applied to large cohorts. To enable the study of large cohorts, we introduce SpeeDB, a method that facilitates fast IBD detection in large unphased genotype data sets. Given a target individual and a database of individuals that potentially share IBD segments with the target, SpeeDB applies an efficient opposite-homozygous filter, which excludes chromosomal segments from the database that are highly unlikely to be IBD with the corresponding segments from the target individual. The remaining segments can then be evaluated by any IBD detection method of choice. When examining simulated individuals sharing 4 cM IBD regions, SpeeDB filtered out 99.5% of genomic regions from consideration while retaining 99% of the true IBD segments. Applying the SpeeDB filter prior to detecting IBD in simulated fourth cousins resulted in an overall running time that was 10,000x faster than inferring IBD without the filter and retained 99% of the true IBD segments in the output.
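
    The opposite-homozygous filter has a very direct sketch: a biallelic site where one individual is homozygous for one allele (genotype 0) and the other is homozygous for the alternate allele (genotype 2) is incompatible with sharing that segment IBD, so any window containing such a site can be discarded before running an expensive IBD caller. The windowing and encoding below are simplified assumptions, not SpeeDB's exact implementation:

```python
def opposite_homozygous_filter(target, candidate, window=50):
    """Keep only windows of `window` consecutive biallelic sites that are
    free of opposite homozygotes between target and candidate.

    Genotypes are 0/1/2 minor-allele counts; |g1 - g2| == 2 marks an
    opposite-homozygous site, which rules out IBD across that window.
    Returns the surviving (start, end) windows for downstream IBD calling.
    """
    keep = []
    for start in range(0, len(target) - window + 1, window):
        seg = range(start, start + window)
        if all(abs(target[i] - candidate[i]) != 2 for i in seg):
            keep.append((start, start + window))
    return keep

t = [1] * 100
c = [1] * 100
t[25] = 0
c[25] = 2   # one opposite-homozygous site in the first window
print(opposite_homozygous_filter(t, c))  # [(50, 100)]: first window excluded
```

    Because the check is a cheap scan, it can prune the vast majority of the database (99.5% of regions in the abstract's simulation) while passing nearly all true IBD segments through to the full detector.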

  13. An effective filter for IBD detection in large data sets.

    KAUST Repository

    Huang, Lin; Bercovici, Sivan; Rodriguez, Jesse M; Batzoglou, Serafim

    2014-01-01

    Identity by descent (IBD) inference is the task of computationally detecting genomic segments that are shared between individuals by means of common familial descent. Accurate IBD detection plays an important role in various genomic studies, ranging from mapping disease genes to exploring ancient population histories. The majority of recent work in the field has focused on improving the accuracy of inference, targeting shorter genomic segments that originate from a more ancient common ancestor. The accuracy of these methods, however, is achieved at the expense of high computational cost, resulting in a prohibitively long running time when applied to large cohorts. To enable the study of large cohorts, we introduce SpeeDB, a method that facilitates fast IBD detection in large unphased genotype data sets. Given a target individual and a database of individuals that potentially share IBD segments with the target, SpeeDB applies an efficient opposite-homozygous filter, which excludes chromosomal segments from the database that are highly unlikely to be IBD with the corresponding segments from the target individual. The remaining segments can then be evaluated by any IBD detection method of choice. When examining simulated individuals sharing 4 cM IBD regions, SpeeDB filtered out 99.5% of genomic regions from consideration while retaining 99% of the true IBD segments. Applying the SpeeDB filter prior to detecting IBD in simulated fourth cousins resulted in an overall running time that was 10,000x faster than inferring IBD without the filter and retained 99% of the true IBD segments in the output.

  14. Effects of hippocampal lesions on the monkey's ability to learn large sets of object-place associations.

    Science.gov (United States)

    Belcher, Annabelle M; Harrington, Rebecca A; Malkova, Ludise; Mishkin, Mortimer

    2006-01-01

    Earlier studies found that recognition memory for object-place associations was impaired in patients with relatively selective hippocampal damage (Vargha-Khadem et al., Science 1997; 277:376-380), but was unaffected after selective hippocampal lesions in monkeys (Malkova and Mishkin, J Neurosci 2003; 23:1956-1965). A potentially important methodological difference between the two studies is that the patients were required to remember a set of 20 object-place associations for several minutes, whereas the monkeys had to remember only two such associations at a time, and only for a few seconds. To approximate more closely the task given to the patients, we trained monkeys on several successive sets of 10 object-place pairs each, with each set requiring learning across days. Despite the increased associative memory demands, monkeys given hippocampal lesions were unimpaired relative to their unoperated controls, suggesting that differences other than set size and memory duration underlie the different outcomes in the human and animal studies. (c) 2005 Wiley-Liss, Inc.

  15. Mining Hierarchies and Similarity Clusters from Value Set Repositories.

    Science.gov (United States)

    Peterson, Kevin J; Jiang, Guoqian; Brue, Scott M; Shen, Feichen; Liu, Hongfang

    2017-01-01

A value set is a collection of permissible values used to describe a specific conceptual domain for a given purpose. By helping to establish a shared semantic understanding across use cases, these artifacts are important enablers of interoperability and data standardization. As the size of repositories cataloging these value sets expands, knowledge management challenges become more pronounced. Specifically, discovering value sets applicable to a given use case may be challenging in a large repository. In this study, we describe methods to extract implicit relationships between value sets, and utilize these relationships to overlay organizational structure onto value set repositories. We successfully extract two different structurings, hierarchy and clustering, and show how tooling can leverage these structures to enable more effective value set discovery.
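One way to extract the kind of implicit relationships the abstract describes (this specific method is an assumption for illustration, not necessarily the authors' approach) is to treat each value set as a set of codes: a strict subset relation induces a hierarchy edge, and Jaccard similarity supports clustering. The value set names and codes below are invented:

```python
def jaccard(a, b):
    """Jaccard similarity between two value sets (sets of codes)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def implied_parents(value_sets):
    """Infer subsumption edges: X is a parent of Y when Y is a strict subset of X."""
    edges = []
    for child, cset in value_sets.items():
        for parent, pset in value_sets.items():
            if child != parent and set(cset) < set(pset):
                edges.append((child, parent))
    return edges

# Hypothetical value sets keyed by name, each a set of drug codes.
vsets = {
    "beta-blockers":     {"C07AB02", "C07AB03", "C07AB07"},
    "cardioselective":   {"C07AB02", "C07AB03"},
    "antihypertensives": {"C07AB02", "C07AB03", "C07AB07", "C09AA01"},
}
print(implied_parents(vsets))
print(round(jaccard(vsets["beta-blockers"], vsets["antihypertensives"]), 2))  # 0.75
```

Hierarchy edges come from exact containment; a similarity threshold on the Jaccard scores would then group near-duplicate value sets into clusters for discovery.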

  16. The influence of negative training set size on machine learning-based virtual screening.

    Science.gov (United States)

    Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J

    2014-01-01

The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of applying machine learning methods was examined for sets containing a fixed number of positive examples and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluation parameters of ML methods in simulated virtual screening experiments. In the majority of cases, substantial increases in precision and MCC were observed, in conjunction with some decrease in hit recall. The analysis of the dynamics of those variations allowed us to recommend an optimal composition of the training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, IBk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with the SMO or Random Forest algorithms. The Naïve Bayes models were largely insensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it may significantly influence the performance of a particular classifier. What is more, the optimization of the negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.
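The evaluation metrics the study tracks (precision, hit recall, MCC) are all computed from confusion-matrix counts. A minimal sketch (the counts in the example are invented for illustration and are not from the study):

```python
import math

def precision(tp, fp):
    """Fraction of predicted actives that are true actives."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of true actives that were retrieved (hit recall)."""
    return tp / (tp + fn)

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Invented counts for one screening run: 100 actives, 900 decoys.
tp, fn, fp, tn = 80, 20, 30, 870
print(round(precision(tp, fp), 3), round(recall(tp, fn), 3), round(mcc(tp, tn, fp, fn), 3))
```

Note that recall depends only on tp and fn, so enlarging the negative training set affects it only indirectly, through the learned decision boundary, while precision and MCC respond directly to the changed fp/tn balance.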

  17. Development and introduction of stamping technique for large-size laterals of NPP pipelines

    International Nuclear Information System (INIS)

    Romashko, N.I.; Moshnin, E.N.; Timokhin, V.S.; Bryukhanov, Yu.V.; Lebedev, V.A.

    1984-01-01

The results of the development and introduction of a stamping technique for large-size laterals of NPP high-pressure pipelines are presented, together with the main experimental data characterizing the technological capabilities of the process. The technological process and the stamp design allow laterals to be produced from ovalized bars in a single heating of the bar and a single stroke of the press crosshead. Introduction of the new technology reduced the labour input of lateral production and increased the reliability and serviceability of the pipelines, yielding a considerable benefit

  18. Large Pelagic Logbook Set Survey (Vessels)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains catch and effort for fishing trips that are taken by vessels with a Federal permit issued for the swordfish and sharks under the Highly...

  19. A comparison of accuracy validation methods for genomic and pedigree-based predictions of swine litter size traits using Large White and simulated data.

    Science.gov (United States)

    Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T

    2018-02-01

The objective of this study was to compare and determine the optimal validation method when comparing accuracy from single-step GBLUP (ssGBLUP) to traditional pedigree-based BLUP. Field data included six litter size traits. Simulated data included ten replicates designed to mimic the field data in order to determine the method that was closest to the true accuracy. Data were split into training and validation sets. The methods used were as follows: (i) theoretical accuracy derived from the prediction error variance (PEV) of the direct inverse (iLHS), (ii) approximated accuracies from the accf90(GS) program in the BLUPF90 family of programs (Approx), (iii) correlation between predictions and the single-step GEBVs from the full data set (GEBV_full), (iv) correlation between predictions and the corrected phenotypes of females from the full data set (Y_c), (v) correlation from method (iv) divided by the square root of the heritability (Y_ch) and (vi) correlation between sire predictions and the average of their daughters' corrected phenotypes (Y_cs). Accuracies from iLHS increased from 0.27 to 0.37 (37%) in the Large White. Approximation accuracies were very consistent and close in absolute value (0.41 to 0.43). Both iLHS and Approx were much less variable than the corrected phenotype methods (ranging from 0.04 to 0.27). On average, simulated data showed an increase in accuracy from 0.34 to 0.44 (29%) using ssGBLUP. Both iLHS and Y_ch approximated the increase well, 0.30 to 0.46 and 0.36 to 0.45, respectively. GEBV_full performed poorly in both data sets and is not recommended. Results suggest that for within-breed selection, theoretical accuracy using PEV was consistent and accurate. When direct inversion is infeasible to get the PEV, correlating predictions to the corrected phenotypes divided by the square root of heritability is adequate given a large enough validation data set. © 2017 Blackwell Verlag GmbH.
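Method (v) above, correlating predictions with corrected phenotypes and dividing by the square root of the heritability, can be sketched in a few lines (the function names and toy values are illustrative, not from the study):

```python
import math

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def validation_accuracy(ebv, y_corrected, h2):
    """Method (v): corr(prediction, corrected phenotype) / sqrt(heritability).

    Dividing by sqrt(h2) rescales the prediction-phenotype correlation
    into an estimate of the prediction-breeding-value correlation.
    """
    return pearson(ebv, y_corrected) / math.sqrt(h2)

# Toy values only; litter-size heritabilities are low (h2 around 0.1).
ebv = [0.42, -0.13, 0.30, -0.55, 0.21, 0.05]
yc = [1.10, -0.20, 0.64, -0.90, 0.12, 0.33]
print(round(validation_accuracy(ebv, yc, h2=0.10), 3))
```

With only six records the estimate is extremely noisy and can even exceed 1, which is exactly why the abstract stresses that this method is adequate only "given a large enough validation data set".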

  20. Determining the Variability of Lesion Size Measurements from CT Patient Data Sets Acquired under “No Change” Conditions

    Directory of Open Access Journals (Sweden)

    Michael F. McNitt-Gray

    2015-02-01

PURPOSE: To determine the variability of lesion size measurements in computed tomography data sets of patients imaged under a “no change” (“coffee break”) condition and to determine the impact of two reading paradigms on measurement variability. METHOD AND MATERIALS: Using data sets from 32 non-small cell lung cancer patients scanned twice within 15 minutes (“no change”), measurements were performed by five radiologists in two phases: (1) an independent reading of each computed tomography data set (timepoint); (2) a locked, sequential reading of data sets. Readers performed measurements using several sizing methods, including one-dimensional (1D) longest in-slice dimension and 3D semi-automated segmented volume. Change in size was estimated by comparing measurements performed on both timepoints for the same lesion, for each reader and each measurement method. For each reading paradigm, results were pooled across lesions, across readers, and across both readers and lesions, for each measurement method. RESULTS: The mean percent difference (±SD), when pooled across both readers and lesions, for 1D and 3D measurements extracted from contours was 2.8 ± 22.2% and 23.4 ± 105.0%, respectively, for the independent reads. For the locked, sequential reads, the mean percent differences (±SD) were reduced to 2.52 ± 14.2% and 7.4 ± 44.2% for the 1D and 3D measurements, respectively. CONCLUSION: Even under a “no change” condition between scans, there is variation in lesion size measurements due to repeat scans and variations in reader, lesion, and measurement method. This variation is reduced when using a locked, sequential reading paradigm compared to an independent reading paradigm.
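The per-lesion change estimate underlying these results is a percent difference between the two timepoints; since the true change is zero by design, the pooled mean and SD quantify pure measurement variability. A sketch (assuming the difference is taken relative to the first scan, though the study may normalize by the mean of the two; the measurement values are invented):

```python
import statistics

def percent_difference(m1, m2):
    """Percent change of a repeat measurement relative to the first scan."""
    return 100.0 * (m2 - m1) / m1

# Invented 1D longest-diameter measurements (mm) for the same lesions
# on two same-day scans; the true change is zero by construction.
scan1 = [12.0, 25.5, 8.2, 31.0]
scan2 = [12.6, 24.9, 8.9, 30.2]
diffs = [percent_difference(a, b) for a, b in zip(scan1, scan2)]

# Pooling across lesions gives the mean ± SD reported in the abstract.
print(round(statistics.mean(diffs), 2), round(statistics.stdev(diffs), 2))
```

In the study the same pooling is additionally done across readers, and separately per sizing method, which is how the 1D and 3D columns of results arise.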

  1. Effects of display set size and its variability on the event-related potentials during a visual search task

    OpenAIRE

    Miyatani, Makoto; Sakata, Sumiko

    1999-01-01

    This study investigated the effects of display set size and its variability on the event-related potentials (ERPs) during a visual search task. In Experiment 1, subjects were required to respond if a visual display, which consisted of two, four, or six alphabets, contained one of two members of memory set. In Experiment 2, subjects detected the change of the shape of a fixation stimulus, which was surrounded by the same alphabets as in Experiment 1. In the search task (Experiment 1), the incr...

  2. An Examination of Teachers' Perceptions and Practice when Teaching Large and Reduced-Size Classes: Do Teachers Really Teach Them in the Same Way?

    Science.gov (United States)

    Harfitt, Gary James

    2012-01-01

    Class size research suggests that teachers do not vary their teaching strategies when moving from large to smaller classes. This study draws on interviews and classroom observations of three experienced English language teachers working with large and reduced-size classes in Hong Kong secondary schools. Findings from the study point to subtle…

  3. Analyzing Damping Vibration Methods of Large-Size Space Vehicles in the Earth's Magnetic Field

    Directory of Open Access Journals (Sweden)

    G. A. Shcheglov

    2016-01-01

It is known that most of today's space vehicles comprise large antennas that are bracket-attached to the vehicle body. The dimensions of reflector antennas may reach 30-50 m, and the weight of such structures can be approximately 200 kg. Since the antenna dimensions are significantly larger than the size of the vehicle body and the points attaching the brackets to the space vehicle have low stiffness, conventional dampers may be inefficient. The paper proposes to consider damping the antenna through its interaction with the Earth's magnetic field. A simple dynamic model of a space vehicle equipped with a large-size structure is built: the space vehicle is a parallelepiped to which the antenna is attached through a beam. To solve the model problems, a simplified model of the Earth's magnetic field was used: uniform, with field lines parallel to each other and perpendicular to the plane of the antenna. The paper considers two layouts of coils with respect to the antenna, namely a vertical one, in which the axis of the magnetic dipole is perpendicular to the antenna plane, and a horizontal one, in which the axis of the magnetic dipole lies in the antenna plane. It also explores two ways of magnetically damping the oscillations: through a controlled current supplied from the power supply system of the space vehicle, and by the self-induction current in the coil. Thus, four tasks were formulated. For each task an oscillation equation was formulated, and the ratio of oscillation amplitudes and their decay time were estimated. It was found that each task requires certain parameters either of the antenna itself (its dimensions and moment of inertia) or of the coil and, respectively, of the current supplied from the space vehicle. For these parameters, ranges were found in each task that allow efficient damping of the vibrations. The analysis of the tasks supports the conclusion that a specialized control system

  4. A summarization approach for Affymetrix GeneChip data using a reference training set from a large, biologically diverse database

    Directory of Open Access Journals (Sweden)

    Tripputi Mark

    2006-10-01

Background: Many of the most popular pre-processing methods for Affymetrix expression arrays, such as RMA, gcRMA, and PLIER, simultaneously analyze data across a set of predetermined arrays to improve the precision of the final measures of expression. One problem associated with these algorithms is that expression measurements for a particular sample are highly dependent on the set of samples used for normalization, and results obtained by normalization with a different set may not be comparable. A related problem is that an organization producing and/or storing large amounts of data in a sequential fashion will need to either re-run the pre-processing algorithm every time an array is added or store the arrays in batches that are pre-processed together. Furthermore, pre-processing of large numbers of arrays requires loading all the feature-level data into memory, which is a difficult task even with modern computers. We utilize a scheme that produces all the information necessary for pre-processing from a very large training set, which can then be used for summarization of samples outside of the training set; all subsequent pre-processing tasks can be done on an individual-array basis. We demonstrate the utility of this approach by defining a new version of the Robust Multi-chip Averaging (RMA) algorithm, which we refer to as refRMA. Results: We assess performance based on multiple sets of samples processed over HG U133A Affymetrix GeneChip® arrays. We show that the refRMA workflow, when used in conjunction with a large, biologically diverse training set, results in the same general characteristics as that of RMA in its classic form when comparing overall data structure, sample-to-sample correlation, and variation. Further, we demonstrate that the refRMA workflow and reference set can be robustly applied to naïve organ types and to benchmark data, where its performance indicates respectable results.
Conclusion: Our results indicate that a biologically diverse
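The core idea of summarizing one new array at a time against a frozen reference can be illustrated with a simplified reference-quantile step: per-rank reference values are learned once from the training set, and each new array is mapped onto them by rank, with no other arrays in memory. This is a simplification of the refRMA scheme (which also freezes probe-effect estimates), and the function name is mine, not from the paper:

```python
def reference_quantile_normalize(new_array, reference_quantiles):
    """Map a new array onto frozen reference quantiles by rank.

    reference_quantiles: sorted per-rank values learned once from the
    training set. Each probe intensity in the new array is replaced by
    the reference value at its rank, so no other array is needed and
    results do not change when more arrays arrive later.
    """
    order = sorted(range(len(new_array)), key=new_array.__getitem__)
    out = [0.0] * len(new_array)
    for rank, idx in enumerate(order):
        out[idx] = reference_quantiles[rank]
    return out

ref = [1.0, 2.0, 5.0, 9.0]  # frozen quantiles from a (tiny) training set
print(reference_quantile_normalize([0.3, 7.1, 0.9, 4.2], ref))
```

Because the reference is fixed, an organization can process arrays sequentially without ever re-running the pre-processing over the full archive, which is the problem the Background paragraph describes.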

  5. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae).

    Science.gov (United States)

    Krahulcová, Anna; Trávnícek, Pavel; Krahulec, František; Rejmánek, Marcel

    2017-04-01

Aesculus L. (horse chestnut, buckeye) is a genus of 12-19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and analyse the relationship between genome size and seed mass in this genus. Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to just one reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 0·955 pg 2C^-1 in A. parviflora to 1·275 pg 2C^-1 in A. glabra var. glabra. The chromosome number of 2n = 40 seems to be conclusively the universal number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also within woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between the 2C values and seed masses in examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is a consistent positive association between larger genome size and larger seed mass within individual lineages. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved.

  6. Preparation and validation of a large size dried spike: Batch SAL-9924

    International Nuclear Information System (INIS)

    Bagliano, G.; Cappis, J.; Doubek, N.; Jammet, G.; Raab, W.; Zoigner, A.

    1989-12-01

To determine uranium and plutonium concentration using isotope dilution mass spectrometry, weighed aliquands of a synthetic mixture containing 2 to 4 mg of Pu (with a 239Pu abundance of about 97%) and 40 to 200 mg of U (with a 235U enrichment of about 18%) can advantageously be used to spike a concentrated spent fuel solution with a high burn-up and a low 235U enrichment. This simplifies the conditioning of the sample by (1) reducing the preparation time (from more than one day with the conventional technique to 2-3 hours) and (2) reducing the burden on the operator while making it easy for the inspector to witness the entire procedure (accurate dilution of the spent fuel sample before spiking is no longer necessary). Furthermore, this type of spike could be used as a common spike by the operator and the inspector. The source materials are available in sufficient quantity and are sufficiently cheaper than the commonly used 233U and 242Pu or 244Pu tracers that the costs of the overall operator-inspector procedures will be reduced. Certified Reference Materials Pu-NBL-126, natural U-NBS-960 and 93% enriched U-NBL-116 were used to prepare a stock solution containing 1.7 mg/ml of Pu and 68 mg/ml of 17.5% enriched U. Before shipment to the reprocessing plant, aliquands of the stock solution must be dried to give Large Size Dried Spikes that resist the shocks encountered during transportation, so that they can readily be recovered quantitatively at the plant. This paper describes the preparation and validation of the Large Size Dried Spike. Proof of usefulness in the field will be established at a later date in parallel with analysis by the conventional technique. Refs and tabs

  7. Set-aside key to creating industry

    International Nuclear Information System (INIS)

    Anon.

    1998-01-01

The Canadian Wind Energy Association (CanWEA) submitted a plan of wind energy industry development to the Regie de l'energie of Quebec at its May wind set-aside hearings. According to the plan, CanWEA wants to see Hydro-Quebec buy 10 MW of wind power a year starting in 2001, progressively increasing to 50 MW per year in four years. According to CanWEA, the size of the set-aside must be large enough to foster competition in the turbine manufacturing sector, as well as put downward pressure on wind energy prices. In 1996, Hydro-Quebec pledged to buy 10 MW of wind power per year over 10 years. It raised the target to 20 MW per year over 10 years in 1997, which was raised to 30 MW per year by a parliamentary commission in the spring of 1998. Hydro-Quebec contends that it cannot buy more than 30 MW, given that some of the utility's large industrial customers won't want to pay the increased cost of wind energy, and before buying any wind power there will have to be a decision about who will pay the additional cost. The Regie de l'energie has the power to decide whether Hydro-Quebec should spread the cost over the entire customer base, or just charge those customers willing to pay. According to the Regie, the wind set-aside is not simply an energy supply question, it is also a matter of developing a wind turbine manufacturing industry. It is this larger objective and other key issues, such as the economic spin-offs from developing wind energy, its role in regional development and its environmental impacts, that will determine the size of the set-aside and the price to be paid for wind power by Hydro-Quebec

  8. Oxygen no longer plays a major role in Body Size Evolution

    Science.gov (United States)

    Datta, H.; Sachson, W.; Heim, N. A.; Payne, J.

    2015-12-01

When observing the long-term relationship between atmospheric oxygen and the maximum size in organisms across the Geozoic (~3.8 Ga - present), it appears that as oxygen increases, organism size grows. However, during the Phanerozoic (541 Ma - Present) oxygen levels varied, so we set out to test the hypothesis that oxygen levels drive patterns of marine animal body size evolution. Expected decreases in maximum size due to a lack of oxygen do not occur, and instead, body size continues to increase regardless. In the oxygen data, a relatively low atmospheric oxygen percentage can support increasing body size, so our research tries to determine whether lifestyle affects body size in marine organisms. The genera in the data set were organized based on their tiering, motility, and feeding, such as a pelagic, fully-motile, predator. When organisms fill a certain ecological niche to take advantage of resources, they will have certain life modes, rather than randomly selected traits. For example, even in terrestrial environments, large animals have to constantly feed themselves to support their expensive terrestrial lifestyle which involves fairly consistent movement, and the structural support necessary for that movement. Only organisms with access to high energy food sources or large amounts of food can support themselves, and that is before they expend energy elsewhere. Organisms that expend energy frugally when active or have slower metabolisms in comparison to body size have a more efficient lifestyle and are generally able to grow larger, while those who have higher energy demands like predators are limited to comparatively smaller sizes. Therefore, in respect to the fossil record and modern measurements of animals, the metabolism and lifestyle of an organism dictate its body size in general. With this further clarification on the patterns of evolution, it will be easier to observe and understand the reasons for the ecological traits of organisms today.

  9. Galaxy Evolution Insights from Spectral Modeling of Large Data Sets from the Sloan Digital Sky Survey

    Energy Technology Data Exchange (ETDEWEB)

    Hoversten, Erik A. [Johns Hopkins Univ., Baltimore, MD (United States)

    2007-10-01

This thesis centers on the use of spectral modeling techniques on data from the Sloan Digital Sky Survey (SDSS) to gain new insights into current questions in galaxy evolution. The SDSS provides a large, uniform, high quality data set which can be exploited in a number of ways. One avenue pursued here is to use the large sample size to measure precisely the mean properties of galaxies of increasingly narrow parameter ranges. The other route taken is to look for rare objects which open up for exploration new areas in galaxy parameter space. The crux of this thesis is revisiting the classical Kennicutt method for inferring the stellar initial mass function (IMF) from the integrated light properties of galaxies. A large data set (~10^5 galaxies) from the SDSS DR4 is combined with more in-depth modeling and quantitative statistical analysis to search for systematic IMF variations as a function of galaxy luminosity. Galaxy Hα equivalent widths are compared to a broadband color index to constrain the IMF. It is found that for the sample as a whole the best fitting IMF power law slope above 0.5 M☉ is Γ = 1.5 ± 0.1 with the error dominated by systematics. Galaxies brighter than around M_r,0.1 = -20 (including galaxies like the Milky Way which has M_r,0.1 ~ -21) are well fit by a universal Γ ~ 1.4 IMF, similar to the classical Salpeter slope, and smooth, exponential star formation histories (SFH). Fainter galaxies prefer steeper IMFs and the quality of the fits reveal that for these galaxies a universal IMF with smooth SFHs is actually a poor assumption. Related projects are also pursued. A targeted photometric search is conducted for strongly lensed Lyman break galaxies (LBG) similar to MS1512-cB58. The evolution of the photometric selection technique is described as are the results of spectroscopic follow-up of the best targets. The serendipitous discovery of two interesting blue compact dwarf galaxies is reported. These

  10. Simulation of reflecting surface deviations of centimeter-band parabolic space radiotelescope (SRT) with the large-size mirror

    Science.gov (United States)

    Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.

    2017-11-01

The realization of astrophysical research requires the development of highly sensitive centimeter-band parabolic space radiotelescopes (SRT) with large-size mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure, since mesh structures of such size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope with a 10 m diameter mirror is currently being developed in Russia within the "SPECTR-R" program. The external dimensions of the telescope exceed the size of existing thermo-vacuum chambers used to verify the SRT reflecting-surface accuracy under the action of space environment factors. Numerical simulation is therefore the basis for accepting the chosen designs, and such modeling should rest on experimental characterization of the basic structural materials and elements of the future reflector. The article considers computational modeling of the reflecting-surface deviations of a large-size deployable centimeter-band space reflector during its orbital operation. The factors that determine the deviations are analyzed, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation faults; deformations caused by the behavior of composite materials in space). A finite-element model and a set of methods are developed that allow computational modeling of the reflecting-surface deviations caused by all of these factors and account for deviation correction by the spacecraft orientation system. Modeling results for two modes of SRT operation (orientation toward the Sun) are presented.

  11. Analysis of Large Seeds from Three Different Medicago truncatula Ecotypes Reveals a Potential Role of Hormonal Balance in Final Size Determination of Legume Grains

    Directory of Open Access Journals (Sweden)

    Kaustav Bandyopadhyay

    2016-09-01

Legume seeds are important as a protein and oil source for the human diet, and understanding how their final size is determined is crucial to improving crop yield. In this study, we analyzed seed development in three accessions of the model legume Medicago truncatula displaying contrasting seed sizes. By comparing two large-seed accessions to the reference accession A17, we described mechanisms associated with large seed size determination and potential factors modulating final seed size. We observed that early events during embryogenesis had a major impact on final seed size and that delayed heart-stage embryo development resulted in large seeds. We also observed that the difference in seed growth rate was mainly due to a difference in embryo cell number, implicating a role for the rate of cell division. Large-seed accessions could be explained by an extended period of cell division due to a longer embryogenesis phase. Based on our observations and recent reports, the auxin (IAA) to abscisic acid (ABA) ratio could be a key determinant of cell division regulation at the end of embryogenesis. Overall, our study highlights that the timing of events occurring during early seed development plays a decisive role in final seed size determination.

  12. AN INVESTIGATION INTO MEDIUM-SIZED MULTINATIONAL ENTERPRISES

    Directory of Open Access Journals (Sweden)

    Daniele Schilirò

    2013-09-01

The paper provides an investigation of medium-sized Italian industrial enterprises that have become multinational companies. It concentrates on the set of medium and medium-large enterprises that seem to grow more in foreign markets, either through exports or through foreign direct investment. The work also offers a descriptive empirical picture of the performance of medium-sized Italian multinationals, compared with that of large corporations. From this analysis, based on several data sources, it is possible to outline a profile of the medium-sized Italian multinational enterprise; the aim is to understand the complex internationalization strategy of these companies, in which the dimension of production is important and innovation therefore plays a key role. The commercial dimension is also crucial, because it leads these firms to supervise foreign markets directly and to look very carefully at customers, offering them a wide range of services. Finally, the paper highlights some critical issues that medium-sized multinational enterprises must face in order to compete: stagnant productivity, high taxation, insufficient institutional support for internationalization, bureaucracy and its high costs, and the lack of skilled human capital in the labor market due to inadequate training policy.

  13. Application of electron beam welding to large size pressure vessels made of thick low alloy steel

    International Nuclear Information System (INIS)

    Kuri, S.; Yamamoto, M.; Aoki, S.; Kimura, M.; Nayama, M.; Takano, G.

    1993-01-01

    The authors describe the results of studies on the application of electron beam welding (EBW) to large size pressure vessels made of thick low alloy steel (ASME A533 Gr.B cl.2 and A533 Gr.A cl.1). Two major problems in applying EBW are focused on: the poor toughness of the weld metal and the equipment needed to weld huge pressure vessels. For the first problem, the effects of the Ni content of the weld metal, welding conditions and post weld heat treatment are investigated. For the second problem, the applicability of local vacuum EBW to a large size pressure vessel made of thick plate is qualified by the construction of a 120 mm thick, 2350 mm outside diameter cylindrical model. The model was electron beam welded using a local vacuum chamber and the performance of the weld joint is investigated. Based on these results, electron beam welding has been applied to the production of a steam generator for a PWR. (author). 3 refs., 10 figs., 4 tabs

  14. Analyzing large data sets from XGC1 magnetic fusion simulations using apache spark

    Energy Technology Data Exchange (ETDEWEB)

    Churchill, R. Michael [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)

    2016-11-21

    Apache Spark is explored as a tool for analyzing large data sets from the magnetic fusion simulation code XGC1. Implementation details of Apache Spark on the NERSC Edison supercomputer are discussed, including binary file reading and parameter setup. Here, an unsupervised machine learning algorithm, k-means clustering, is applied to XGC1 particle distribution function data, showing that highly turbulent spatial regions do not have common coherent structures, but rather broad, ring-like structures in velocity space.
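    The clustering step can be illustrated outside Spark. The sketch below is a plain-NumPy version of Lloyd's k-means, the same iteration that Spark MLlib's KMeans distributes over an RDD; the two synthetic 2-D "blobs" are invented stand-ins for velocity-space features, not actual XGC1 data.

    ```python
    import numpy as np

    def kmeans(points, k, iters=50, seed=0):
        """Lloyd's k-means: alternate nearest-center assignment and center update."""
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(iters):
            # assign every point to its nearest center (squared Euclidean distance)
            labels = np.argmin(((points[:, None, :] - centers) ** 2).sum(-1), axis=1)
            # move each center to the mean of its assigned points (keep it if empty)
            centers = np.array([points[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        return centers, labels

    # two well-separated synthetic blobs standing in for distribution-function data
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(0.0, 0.5, (100, 2)), rng.normal(5.0, 0.5, (100, 2))])
    centers, labels = kmeans(data, k=2)
    ```

    The distributed version differs only in where the assignment and averaging run; the algorithm is unchanged.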

  15. A homeostatic clock sets daughter centriole size in flies

    Science.gov (United States)

    Aydogan, Mustafa G.; Steinacker, Thomas L.; Novak, Zsofia A.; Baumbach, Janina; Muschalik, Nadine

    2018-01-01

    Centrioles are highly structured organelles whose size is remarkably consistent within any given cell type. New centrioles are born when Polo-like kinase 4 (Plk4) recruits Ana2/STIL and Sas-6 to the side of an existing “mother” centriole. These two proteins then assemble into a cartwheel, which grows outwards to form the structural core of a new daughter. Here, we show that in early Drosophila melanogaster embryos, daughter centrioles grow at a linear rate during early S-phase and abruptly stop growing when they reach their correct size in mid- to late S-phase. Unexpectedly, the cartwheel grows from its proximal end, and Plk4 determines both the rate and period of centriole growth: the more active the centriolar Plk4, the faster centrioles grow, but the faster centriolar Plk4 is inactivated and growth ceases. Thus, Plk4 functions as a homeostatic clock, establishing an inverse relationship between growth rate and period to ensure that daughter centrioles grow to the correct size. PMID:29500190

  16. CUDA based Level Set Method for 3D Reconstruction of Fishes from Large Acoustic Data

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Anton, François

    2009-01-01

    Acoustic images present views of underwater dynamics, even in high depths. With multi-beam echo sounders (SONARs), it is possible to capture series of 2D high resolution acoustic images. 3D reconstruction of the water column and subsequent estimation of fish abundance and fish species identificat...... of suppressing threshold and show its convergence as the evolution proceeds. We also present a GPU based streaming computation of the method using NVIDIA's CUDA framework to handle large volume data-sets. Our implementation is optimised for memory usage to handle large volumes....

  17. Buffer Sizing in 802.11 Wireless Mesh Networks

    KAUST Repository

    Jamshaid, Kamran; Shihada, Basem; Xia, Li; Levis, Philip

    2011-01-01

    We analyze the problem of buffer sizing for TCP flows in 802.11-based Wireless Mesh Networks. Our objective is to maintain high network utilization while providing low queueing delays. The problem is complicated by the time-varying capacity of the wireless channel as well as the random access mechanism of 802.11 MAC protocol. While arbitrarily large buffers can maintain high network utilization, this results in large queueing delays. Such delays may affect TCP stability characteristics, and also increase queueing delays for other flows (including real-time flows) sharing the buffer. In this paper we propose sizing link buffers collectively for a set of nodes within mutual interference range called the 'collision domain'. We aim to provide a buffer just large enough to saturate the available capacity of the bottleneck collision domain that limits the carrying capacity of the network. This neighborhood buffer is distributed over multiple nodes that constitute the network bottleneck; a transmission by any of these nodes fully utilizes the available spectral resource for the duration of the transmission. We show that sizing routing buffers collectively for this bottleneck allows us to have small buffers (as low as 2 - 3 packets) at individual nodes without any significant loss in network utilization. We propose heuristics to determine these buffer sizes in WMNs. Our results show that we can reduce the end-to-end delays by 6× to 10× at the cost of losing roughly 5% of the network capacity achievable with large buffers.
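    The collective-buffer idea can be sketched numerically. All parameters below (packet size, channel rate, buffer counts) are invented for illustration and are not taken from the paper; the point is only that queueing delay scales linearly with buffer depth at a fixed drain rate, so splitting one small neighborhood buffer across the bottleneck nodes cuts per-node buffering and delay.

    ```python
    # Illustrative only: all numbers below are assumptions, not the paper's values.
    def per_node_buffer(neighborhood_buffer_pkts, n_nodes):
        """Split a collective 'collision domain' buffer across its member nodes,
        keeping at least one packet of buffering per node."""
        return max(1, round(neighborhood_buffer_pkts / n_nodes))

    def queueing_delay_ms(buffer_pkts, pkt_bits=12000, link_mbps=6.0):
        """Worst-case drain time of a full buffer at the shared channel rate."""
        return buffer_pkts * pkt_bits / (link_mbps * 1e6) * 1e3

    # e.g. a 10-packet neighborhood buffer shared by 4 bottleneck nodes
    b = per_node_buffer(10, 4)      # a few packets per node, in line with the paper's 2-3
    small = queueing_delay_ms(b)
    large = queueing_delay_ms(50)   # a conventional 50-packet per-device buffer
    ```

    With these assumed numbers the 50-packet buffer drains in 100 ms versus 4 ms for the 2-packet buffer, which is the flavor of delay reduction the paper reports.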

  18. Buffer Sizing in 802.11 Wireless Mesh Networks

    KAUST Repository

    Jamshaid, Kamran

    2011-10-01

    We analyze the problem of buffer sizing for TCP flows in 802.11-based Wireless Mesh Networks. Our objective is to maintain high network utilization while providing low queueing delays. The problem is complicated by the time-varying capacity of the wireless channel as well as the random access mechanism of 802.11 MAC protocol. While arbitrarily large buffers can maintain high network utilization, this results in large queueing delays. Such delays may affect TCP stability characteristics, and also increase queueing delays for other flows (including real-time flows) sharing the buffer. In this paper we propose sizing link buffers collectively for a set of nodes within mutual interference range called the 'collision domain'. We aim to provide a buffer just large enough to saturate the available capacity of the bottleneck collision domain that limits the carrying capacity of the network. This neighborhood buffer is distributed over multiple nodes that constitute the network bottleneck; a transmission by any of these nodes fully utilizes the available spectral resource for the duration of the transmission. We show that sizing routing buffers collectively for this bottleneck allows us to have small buffers (as low as 2 - 3 packets) at individual nodes without any significant loss in network utilization. We propose heuristics to determine these buffer sizes in WMNs. Our results show that we can reduce the end-to-end delays by 6× to 10× at the cost of losing roughly 5% of the network capacity achievable with large buffers.

  19. When David beats Goliath: the advantage of large size in interspecific aggressive contests declines over evolutionary time.

    Directory of Open Access Journals (Sweden)

    Paul R Martin

    Full Text Available Body size has long been recognized to play a key role in shaping species interactions. For example, while small species thrive in a diversity of environments, they typically lose aggressive contests for resources with larger species. However, numerous examples exist of smaller species dominating larger species during aggressive interactions, suggesting that the evolution of traits can allow species to overcome the competitive disadvantage of small size. If these traits accumulate as lineages diverge, then the advantage of large size in interspecific aggressive interactions should decline with increased evolutionary distance. We tested this hypothesis using data on the outcomes of 23,362 aggressive interactions among 246 bird species pairs involving vultures at carcasses, hummingbirds at nectar sources, and antbirds and woodcreepers at army ant swarms. We found the advantage of large size declined as species became more evolutionarily divergent, and smaller species were more likely to dominate aggressive contests when interacting with more distantly-related species. These results appear to be caused by both the evolution of traits in smaller species that enhanced their abilities in aggressive contests, and the evolution of traits in larger species that were adaptive for other functions, but compromised their abilities to compete aggressively. Specific traits that may provide advantages to small species in aggressive interactions included well-developed leg musculature and talons, enhanced flight acceleration and maneuverability, novel fighting behaviors, and traits associated with aggression, such as testosterone and muscle development. Traits that may have hindered larger species in aggressive interactions included the evolution of morphologies for tree trunk foraging that compromised performance in aggressive contests away from trunks, and the evolution of migration. Overall, our results suggest that fundamental trade-offs, such as those

  20. Hypopigmentation Induced by Frequent Low-Fluence, Large-Spot-Size QS Nd:YAG Laser Treatments.

    Science.gov (United States)

    Wong, Yisheng; Lee, Siong See Joyce; Goh, Chee Leok

    2015-12-01

    The Q-switched 1064-nm neodymium-doped yttrium aluminum garnet (QS 1064-nm Nd:YAG) laser is increasingly used for nonablative skin rejuvenation or "laser toning" for melasma. Multiple and frequent low-fluence, large-spot-size treatments are used to achieve laser toning, and these treatments are associated with the development of macular hypopigmentation as a complication. We present a case series of three patients who developed guttate hypomelanotic macules on the face after receiving laser toning treatment with the QS 1064-nm Nd:YAG laser.

  1. Investigation of Low-Cost Surface Processing Techniques for Large-Size Multicrystalline Silicon Solar Cells

    OpenAIRE

    Cheng, Yuang-Tung; Ho, Jyh-Jier; Lee, William J.; Tsai, Song-Yeu; Lu, Yung-An; Liou, Jia-Jhe; Chang, Shun-Hsyung; Wang, Kang L.

    2010-01-01

    The subject of the present work is to develop a simple and effective method of enhancing conversion efficiency in large-size solar cells using multicrystalline silicon (mc-Si) wafer. In this work, industrial-type mc-Si solar cells with area of 125×125 mm2 were acid etched to produce simultaneously POCl3 emitters and silicon nitride deposition by plasma-enhanced chemical vapor deposited (PECVD). The study of surface morphology and reflectivity of different mc-Si etched surfaces has also been d...

  2. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    Science.gov (United States)

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, even trivially small effects could be detected as significant. This implies that researchers who cannot obtain large effect sizes tend to use larger samples to obtain significant results.
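    The trade-off the authors describe can be made concrete with a power calculation. The sketch below uses a generic two-sided two-sample z-approximation (not the paper's procedure) via the standard-library NormalDist; the effect sizes and sample sizes are invented examples.

    ```python
    from statistics import NormalDist

    def power_two_sample(d, n_per_group, alpha=0.05):
        """Approximate power of a two-sided two-sample z-test for Cohen's d
        with n_per_group observations in each of two equal groups."""
        z = NormalDist()
        z_crit = z.inv_cdf(1 - alpha / 2)
        ncp = d * (n_per_group / 2) ** 0.5   # noncentrality for equal groups
        return 1 - z.cdf(z_crit - ncp) + z.cdf(-z_crit - ncp)

    # large effect with a small sample vs. small effect with a large sample
    p1 = power_two_sample(d=0.8, n_per_group=20)   # ≈ 0.72
    p2 = power_two_sample(d=0.2, n_per_group=400)  # ≈ 0.81
    ```

    Both designs end up with comparable power: a large sample can render a small effect "significant", which is exactly the pattern the study reports.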

  3. Quark bag coupling to finite size pions

    International Nuclear Information System (INIS)

    De Kam, J.; Pirner, H.J.

    1982-01-01

    A standard approximation in theories of quark bags coupled to a pion field is to treat the pion as an elementary field, ignoring its substructure and finite size. A difficulty associated with these treatments is the lack of stability of the quark bag, due to the rapid increase of the pion pressure on the bag as the bag size diminishes. We investigate the effects of the finite size of the q anti-q pion on the pion-quark bag coupling by means of a simple nonlocal pion-quark interaction. With this amendment the pion pressure on the bag vanishes as the bag size goes to zero. No stability problems are encountered in this description. Furthermore, for extended pions, a maximum is no longer set on the bag parameter B. Therefore 'little bag' solutions may be found provided that B is large enough. We also discuss the possibility of a second minimum in the bag energy function. (orig.)

  4. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Full Text Available Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  5. Container size influences snack food intake independently of portion size.

    Science.gov (United States)

    Marchiori, David; Corneille, Olivier; Klein, Olivier

    2012-06-01

    While larger containers have been found to increase food intake, it is unclear whether this effect is driven by container size, portion size, or their combination, as these variables are usually confounded. The study was advertised as examining the effects of snack food consumption on information processing and participants were served M&M's for free consumption in individual cubicles while watching a TV show. Participants were served (1) a medium portion of M&M's in a small (n=30) or (2) in a large container (n=29), or (3) a large portion in a large container (n=29). The larger container increased intake by 129% (199 kcal) despite holding portion size constant, while controlling for different confounding variables. This research suggests that larger containers stimulate food intake over and above their impact on portion size. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Closure and ratio correlation analysis of lunar chemical and grain size data

    Science.gov (United States)

    Butler, J. C.

    1976-01-01

    Major element and major element plus trace element analyses were selected from the lunar data base for Apollo 11, 12 and 15 basalt and regolith samples. Summary statistics for each of the six data sets were compiled, and the effects of closure on the Pearson product moment correlation coefficient were investigated using the Chayes and Kruskal approximation procedure. In general, there are two types of closure effects evident in these data sets: negative correlations of intermediate size which are solely the result of closure, and correlations of small absolute value which depart significantly from their expected closure correlations which are of intermediate size. It is shown that a positive closure correlation will arise only when the product of the coefficients of variation is very small (less than 0.01 for most data sets) and, in general, trace elements in the lunar data sets exhibit relatively large coefficients of variation.
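    The closure effect itself is easy to reproduce numerically: components that are statistically independent in raw form acquire spurious negative correlations once each sample is recalculated to a constant sum. The NumPy demonstration below uses synthetic lognormal data, not the lunar data sets.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # three independent, positive "raw" components (no true correlation)
    raw = rng.lognormal(mean=0.0, sigma=0.3, size=(5000, 3))
    # closure: force each row (sample) to sum to 100%
    closed = 100 * raw / raw.sum(axis=1, keepdims=True)

    r_raw = np.corrcoef(raw, rowvar=False)
    r_closed = np.corrcoef(closed, rowvar=False)
    # raw off-diagonal correlations hover near 0; closed ones are pushed negative
    ```

    With three parts of comparable variance the closed correlations come out near -0.5, which is the kind of "negative correlation solely the result of closure" the abstract describes.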

  7. Separation of large DNA molecules by applying pulsed electric field to size exclusion chromatography-based microchip

    Science.gov (United States)

    Azuma, Naoki; Itoh, Shintaro; Fukuzawa, Kenji; Zhang, Hedong

    2018-02-01

    Through electrophoresis driven by a pulsed electric field, we succeeded in separating large DNA molecules with an electrophoretic microchip based on size exclusion chromatography (SEC), which was proposed in our previous study. The conditions of the pulsed electric field required to achieve the separation were determined by numerical analyses using our originally proposed separation model. Guided by the numerical results, we succeeded in separating large DNA molecules (λ DNA and T4 DNA) within 1600 s, approximately half the time achieved under a direct electric field in our previous study. Our SEC-based electrophoresis microchip should be an effective tool to meet the growing demand for faster and more convenient separation of large DNA molecules, especially in the field of epidemiological research on infectious diseases.

  8. EFFECTIVE SUMMARY FOR MASSIVE DATA SET

    Directory of Open Access Journals (Sweden)

    A. Radhika

    2015-07-01

    Full Text Available The growing size of data sets has increased interest in designing effective algorithms for space and time reduction. Applying high-dimensional techniques to large data sets is difficult, so randomized techniques are used for analyzing data sets whose contents, stored across parts of a network, need to be collected and analyzed continuously. Previously, a collaborative filtering approach was used for finding similar patterns based on user rankings, but its outcomes have not been validated. Linear approaches require high running time and more space. To overcome this, sketching is used to represent massive data sets. Sketching allows short fingerprints of users' item sets, which permit approximately computing the similarity between the sets of different users. The idea of sketching is to generate a minimal subset of records that represents all the original records. Sketching employs two techniques: dimensionality reduction, which reduces rows or columns, and data reduction. It is shown that sketching can be performed using Principal Component Analysis for finding the index value.
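    The fingerprint idea is commonly realized with MinHash, where the fraction of matching signature slots estimates the Jaccard similarity of two item sets. The sketch below is a generic MinHash illustration under that assumption, not this paper's PCA-based procedure; the two "user" item sets are invented.

    ```python
    import random

    def minhash_signature(items, num_hashes=256, seed=0):
        """One MinHash fingerprint: for each of num_hashes salted hash functions,
        keep only the minimum hash value over the set's items."""
        rng = random.Random(seed)
        salts = [rng.getrandbits(32) for _ in range(num_hashes)]
        return [min(hash((salt, it)) for it in items) for salt in salts]

    def estimate_jaccard(sig_a, sig_b):
        """Fraction of matching slots estimates |A ∩ B| / |A ∪ B|."""
        return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

    a = set(range(0, 80))    # two users' item sets sharing 60 of 100 items
    b = set(range(20, 100))
    est = estimate_jaccard(minhash_signature(a), minhash_signature(b))
    true = len(a & b) / len(a | b)   # exactly 0.6
    ```

    The signatures are a few hundred integers regardless of how large the underlying sets are, which is what makes the fingerprints "short" enough for massive data.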

  9. Body size mediated coexistence of consumers competing for resources in space

    Science.gov (United States)

    Basset, A.; Angelis, D.L.

    2007-01-01

    Body size is a major phenotypic trait of individuals that commonly differentiates co-occurring species. We analyzed inter-specific competitive interactions between a large consumer and smaller competitors, whose energetics, selection and giving-up behaviour on identical resource patches scaled with individual body size. The aim was to investigate whether pure metabolic constraints on patch behaviour of vagile species can determine coexistence conditions consistent with existing theoretical and experimental evidence. We used an individual-based spatially explicit simulation model at a spatial scale defined by the home range of the large consumer, which was assumed to be parthenogenic and semelparous. Under exploitative conditions, competitive coexistence occurred in a range of body size ratios between 2 and 10. Asymmetrical competition and the mechanism underlying asymmetry, determined by the scaling of energetics and patch behaviour with consumer body size, were the proximate determinant of inter-specific coexistence. The small consumer exploited patches more efficiently, but searched for profitable patches less effectively than the larger competitor. Therefore, body-size related constraints induced niche partitioning, allowing competitive coexistence within a set of conditions where the large consumer maintained control over the small consumer and resource dynamics. The model summarises and extends the existing evidence of species coexistence on a limiting resource, and provides a mechanistic explanation for decoding the size-abundance distribution patterns commonly observed at guild and community levels. © Oikos.

  10. The role of consumer satisfaction, consideration set size, variety seeking and convenience orientation in explaining seafood consumption in Vietnam

    OpenAIRE

    Ninh, Thi Kim Anh

    2010-01-01

    The study examines the relationship between convenience food and seafood consumption in Vietnam through a replication and extension of the studies of Rortveit and Olsen (2007; 2009). The main purpose of this study is to provide an understanding of the role of consumers' satisfaction, consideration set size, variety seeking, and convenience orientation in explaining seafood consumption behavior in Vietnam.

  11. Autologous chondrocyte implantation: Is it likely to become a saviour of large-sized and full-thickness cartilage defect in young adult knee?

    Science.gov (United States)

    Zhang, Chi; Cai, You-Zhi; Lin, Xiang-Jin

    2016-05-01

    A literature review of the first-, second- and third-generation autologous chondrocyte implantation (ACI) techniques for the treatment of large-sized (>4 cm²) and full-thickness knee cartilage defects in young adults was conducted, examining the current literature on features, clinical scores, complications, magnetic resonance imaging (MRI) and histological outcomes, rehabilitation and cost-effectiveness. The review was carried out in the main medical databases to evaluate the several studies concerning ACI treatment of large-sized and full-thickness knee cartilage defects in young adults. The ACI technique has been shown to relieve symptoms and improve functional assessment in large-sized (>4 cm²) and full-thickness knee articular cartilage defects of young adults at short- and medium-term follow-up. In addition, low-level evidence demonstrated its efficiency and durability at long-term follow-up after implantation. Furthermore, MRI and histological evaluations provided evidence that the graft can return to nearly normal cartilage via ACI techniques. Clinical outcomes tend to be similar among the different ACI techniques, but the third-generation ACI technique offers a simplified procedure, a low complication rate and better graft quality. ACI, based on the experience of cell-based therapy and with high potential to regenerate hyaline-like tissue, represents a clinical advance in the treatment of large-sized and full-thickness knee cartilage defects. Level of evidence: IV.

  12. Impact of problem-based learning in a large classroom setting: student perception and problem-solving skills.

    Science.gov (United States)

    Klegeris, Andis; Hurren, Heather

    2011-12-01

    Problem-based learning (PBL) can be described as a learning environment where the problem drives the learning. This technique usually involves learning in small groups, which are supervised by tutors. It is becoming evident that PBL in a small-group setting has a robust positive effect on student learning and skills, including better problem-solving skills and an increase in overall motivation. However, very little research has been done on the educational benefits of PBL in a large classroom setting. Here, we describe a PBL approach (using tutorless groups) that was introduced as a supplement to standard didactic lectures in University of British Columbia Okanagan undergraduate biochemistry classes consisting of 45-85 students. PBL was chosen as an effective method to assist students in learning biochemical and physiological processes. By monitoring student attendance and using informal and formal surveys, we demonstrated that PBL has a significant positive impact on student motivation to attend and participate in the course work. Student responses indicated that PBL is superior to traditional lecture format with regard to the understanding of course content and retention of information. We also demonstrated that student problem-solving skills are significantly improved, but additional controlled studies are needed to determine how much PBL exercises contribute to this improvement. These preliminary data indicated several positive outcomes of using PBL in a large classroom setting, although further studies aimed at assessing student learning are needed to further justify implementation of this technique in courses delivered to large undergraduate classes.

  13. MUSI: an integrated system for identifying multiple specificity from very large peptide or nucleic acid data sets.

    Science.gov (United States)

    Kim, Taehyung; Tyndel, Marc S; Huang, Haiming; Sidhu, Sachdev S; Bader, Gary D; Gfeller, David; Kim, Philip M

    2012-03-01

    Peptide recognition domains and transcription factors play crucial roles in cellular signaling. They bind linear stretches of amino acids or nucleotides, respectively, with high specificity. Experimental techniques that assess the binding specificity of these domains, such as microarrays or phage display, can retrieve thousands of distinct ligands, providing detailed insight into binding specificity. In particular, the advent of next-generation sequencing has recently increased the throughput of such methods by several orders of magnitude. These advances have helped reveal the presence of distinct binding specificity classes that co-exist within a set of ligands interacting with the same target. Here, we introduce a software system called MUSI that can rapidly analyze very large data sets of binding sequences to determine the relevant binding specificity patterns. Our pipeline provides two major advances. First, it can detect previously unrecognized multiple specificity patterns in any data set. Second, it offers integrated processing of very large data sets from next-generation sequencing machines. The results are visualized as multiple sequence logos describing the different binding preferences of the protein under investigation. We demonstrate the performance of MUSI by analyzing recent phage display data for human SH3 domains as well as microarray data for mouse transcription factors.

  14. Large-size space debris flyby in low earth orbits

    Science.gov (United States)

    Baranov, A. A.; Grishko, D. A.; Razoumny, Y. N.

    2017-09-01

    The analysis of the NORAD catalogue of space objects, carried out with respect to the overall sizes of upper stages and last stages of carrier rockets, allows the classification of 5 groups of large-size space debris (LSSD). These groups are defined according to the proximity of the orbital inclinations of the involved objects. The orbits within a group have various values of deviations in the Right Ascension of the Ascending Node (RAAN). It is proposed to use the RAAN deviations' evolution portrait to clarify the relative spatial distribution of the orbital planes in a group, so that the RAAN deviations are calculated with respect to the concrete precessing orbital plane of a concrete object. In the case of the first three groups (inclinations i = 71°, i = 74°, i = 81°), the straight lines of the relative RAAN deviations almost do not intersect each other, so the simple, successive flyby of a group's elements is effective, but a significant total ΔV is required to form drift orbits. In the case of the fifth group (Sun-synchronous orbits), these straight lines intersect each other chaotically many times due to the noticeable differences in the values of the semi-major axes and orbital inclinations. The existence of intersections makes it possible to create a flyby sequence for an LSSD group in which the orbit of one LSSD object simultaneously serves as the drift orbit to attain another LSSD object. This flyby scheme, requiring less ΔV, was called "diagonal." The RAAN deviations' evolution portrait built for the fourth group (to be studied in the paper) contains both types of lines, so a simultaneous combination of diagonal and successive flyby schemes is possible. The total ΔV and temporal costs were calculated to cover all the elements of the 4th group. The article is also enriched by the results obtained for the flyby problem solution in the case of all five mentioned LSSD groups. General recommendations are given concerning the required reserve of total

  15. Minimizing size of decision trees for multi-label decision tables

    KAUST Repository

    Azad, Mohammad

    2014-09-29

    We used decision trees as a model to discover knowledge from multi-label decision tables, where each row has a set of decisions attached to it and our goal is to find one arbitrary decision from that set. The size of the decision tree can be small as well as very large. We study different greedy as well as dynamic programming algorithms to minimize the size of the decision trees. Comparing against the optimal result from the dynamic programming algorithm, we found that some greedy algorithms produce results close to optimal for the minimization of the number of nodes (at most 18.92% difference), the number of nonterminal nodes (at most 20.76% difference), and the number of terminal nodes (at most 18.71% difference).

  16. Minimizing size of decision trees for multi-label decision tables

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

    We used decision trees as a model to discover knowledge from multi-label decision tables, where each row has a set of decisions attached to it and our goal is to find one arbitrary decision from that set. The size of the decision tree can be small as well as very large. We study different greedy as well as dynamic programming algorithms to minimize the size of the decision trees. Comparing against the optimal result from the dynamic programming algorithm, we found that some greedy algorithms produce results close to optimal for the minimization of the number of nodes (at most 18.92% difference), the number of nonterminal nodes (at most 20.76% difference), and the number of terminal nodes (at most 18.71% difference).

  17. Self-heating by large insect larvae?

    Science.gov (United States)

    Cooley, Nikita L; Emlen, Douglas J; Woods, H Arthur

    2016-12-01

    Do insect larvae ever self-heat significantly from their own metabolic activity and, if so, under what sets of environmental temperatures and across what ranges of body size? We examine these questions using larvae of the Japanese rhinoceros beetle (Trypoxylus dichotomus), chosen for their large size (>20g), simple body plan, and underground lifestyle. Using CO2 respirometry, we measured larval metabolic rates then converted measured rates of gas exchange into rates of heat production and developed a mathematical model to predict how much steady state body temperatures of underground insects would increase above ambient depending on body size. Collectively, our results suggest that large, extant larvae (20-30g body mass) can self-heat by at most 2°C, and under many common conditions (shallow depths, moister soils) would self-heat by less than 1°C. By extending the model to even larger (hypothetical) body sizes, we show that underground insects with masses >1kg could heat, in warm, dry soils, by 1.5-6°C or more. Additional experiments showed that larval critical thermal maxima (CTmax) were in excess of 43.5°C and that larvae could behaviorally thermoregulate on a thermal gradient bar. Together, these results suggest that large larvae living underground likely regulate their temperatures primarily using behavior; self-heating by metabolism likely contributes little to their heat budgets, at least in most common soil conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
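    The order of magnitude of the effect can be checked with a textbook conduction formula, ΔT = Q/(4πkR), for a spherical heat source buried in an infinite medium. This is a generic sketch, not the authors' model: the soil conductivity, tissue density, and constant mass-specific metabolic rate below are rough assumptions chosen only for illustration.

    ```python
    import math

    def steady_state_rise(mass_g, k_soil=0.8, w_per_g=0.003, density_g_cm3=1.0):
        """ΔT (°C) above ambient for a spherical heat source of radius R buried
        in an infinite medium of thermal conductivity k: ΔT = Q / (4 π k R).
        Assumes a constant mass-specific heat production w_per_g (W/g)."""
        volume_cm3 = mass_g / density_g_cm3
        radius_m = (3 * volume_cm3 / (4 * math.pi)) ** (1 / 3) / 100
        q_watts = mass_g * w_per_g
        return q_watts / (4 * math.pi * k_soil * radius_m)

    small = steady_state_rise(25)     # a ~25 g larva: well under 1 °C
    big = steady_state_rise(1000)     # a hypothetical 1 kg insect: several °C
    ```

    Because Q grows linearly with mass here while R grows only as mass^(1/3), ΔT scales as mass^(2/3), which is why only very large bodies would heat appreciably, matching the abstract's contrast between extant larvae and hypothetical 1 kg insects.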

  18. SIBSHIP SIZE AND YOUNG WOMEN'S TRANSITIONS TO ADULTHOOD IN INDIA.

    Science.gov (United States)

    Santhya, K G; Zavier, A J Francis

    2017-11-01

    In India, a substantial proportion of young people are growing up in smaller families with fewer siblings than earlier generations of young people. Studies exploring the associations between declines in sibship size and young people's life experiences are limited. Drawing on data from a sub-nationally representative study conducted in 2006-08 of over 50,000 youths in India, this paper examines the associations between surviving sibship size and young women's (age 20-24) transitions to adulthood. Young women who reported no or a single surviving sibling were categorized as those with a small surviving sibship size, and those who reported two or more surviving siblings as those with a large surviving sibship size. Bivariate and multivariate regression analyses were conducted to ascertain the relationship between sibship size and outcome indicators. Analysis was also done separately for low- and high-fertility settings. Small sibship size tended to have a positive influence in many ways on young women's chances of making successful transitions to adulthood. Young women with fewer siblings were more likely than others to report secondary school completion, participation in vocational skills training programmes, experience of gender egalitarian socialization practices, adherence to gender egalitarian norms, exercise of pre-marital agency and small family size preferences. These associations were more apparent in low- than high-fertility settings.

  19. Study of large size fiber reinforced cement containers for solid wastes from dismantling

    International Nuclear Information System (INIS)

    Jaouen, C.

    1990-01-01

    The production of large-sized metallic waste by dismantling operations, and the evolution of the specifications of the waste to be stored in the different European countries will create a need for large standard containers for the transport and final disposal of the corresponding waste. The research conducted during the 1984-1988 programme, supported by the Commission of European Communities, and based on a comparative study of high-grade concrete materials, reinforced with organic or metallic fibres, led to the development of a high performance container meeting international transport recommendations as well as French requirements for shallow-ground disposal. The material selected, consisting of high-performance mortar with metal fibre reinforcement, was the subject of an intensive programme of characterization tests conducted in close cooperation with LAFARGE Company, demonstrating the achievement of mechanical and physical properties comfortably above the regulatory requirements. The construction of an industrial prototype and the subsequent economic analysis served to guarantee the industrial feasibility and cost of this system, in which attempts were made to optimize the finished package product, including its closure system.

  20. Flow induced vibration of the large-sized sodium valve for MONJU

    Energy Technology Data Exchange (ETDEWEB)

    Sato, K [Sodium Engineering Division, O-arai Engineering Centre, Power Reactor and Nuclear Fuel Development Corporation, Nariata-cho, O-arai Machi, Ibaraki-ken (Japan)

    1977-12-01

    Measurements have been made on the hydraulic characteristics of the large-sized sodium valves in the hydraulic simulation test loop with water as fluid. The following three prototype sodium valves were tested; (1) 22-inch wedge gate type isolation valve, (2) 22-inch butterfly type isolation valve, and (3) 16-inch butterfly type control valve. In the test, accelerations of flow induced vibrations were measured as a function of flow velocity and disk position. The excitation mechanism of the vibrations is not fully interpreted in these tests due to the complexity of the phenomena, but the experimental results suggest that it closely depends on random pressure fluctuations near the valve disk and flow separation at the contracted cross section between the valve seat and the disk. The intensity of flow induced vibrations suddenly increases at a certain critical condition, which depends on the type of valve and is proportional to fluid velocity. (author)

  1. Flow induced vibration of the large-sized sodium valve for MONJU

    International Nuclear Information System (INIS)

    Sato, K.

    1977-01-01

    Measurements have been made on the hydraulic characteristics of the large-sized sodium valves in the hydraulic simulation test loop with water as fluid. The following three prototype sodium valves were tested; (1) 22-inch wedge gate type isolation valve, (2) 22-inch butterfly type isolation valve, and (3) 16-inch butterfly type control valve. In the test, accelerations of flow induced vibrations were measured as a function of flow velocity and disk position. The excitation mechanism of the vibrations is not fully interpreted in these tests due to the complexity of the phenomena, but the experimental results suggest that it closely depends on random pressure fluctuations near the valve disk and flow separation at the contracted cross section between the valve seat and the disk. The intensity of flow induced vibrations suddenly increases at a certain critical condition, which depends on the type of valve and is proportional to fluid velocity. (author)

  2. A hybrid adaptive large neighborhood search heuristic for lot-sizing with setup times

    DEFF Research Database (Denmark)

    Muller, Laurent Flindt; Spoorendonk, Simon; Pisinger, David

    2012-01-01

    This paper presents a hybrid of a general heuristic framework and a general purpose mixed-integer programming (MIP) solver. The framework is based on local search and an adaptive procedure which chooses between a set of large neighborhoods to be searched. A mixed integer programming solver and its......, and the upper bounds found by the commercial MIP solver ILOG CPLEX using state-of-the-art MIP formulations. Furthermore, we improve the best known solutions on 60 out of 100 and improve the lower bound on all 100 instances from the literature...
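
    The adaptive choice between large neighborhoods in frameworks of this kind is typically driven by roulette-wheel selection with exponentially smoothed weights. A minimal sketch under that assumption; the neighborhood names, decay factor and scores below are placeholders, not the paper's settings.

```python
import random

def select_neighborhood(weights, rng):
    """Roulette-wheel selection: probability proportional to weight."""
    total = sum(weights.values())
    pick = rng.uniform(0.0, total)
    acc = 0.0
    for name, w in weights.items():
        acc += w
        if pick <= acc:
            return name
    return name  # guard against floating-point rounding at the top end

def update_weight(weights, name, score, decay=0.8):
    """Blend the old weight with the score the neighborhood earned in
    the last segment (higher score -> chosen more often later)."""
    weights[name] = decay * weights[name] + (1.0 - decay) * score
```

    In a full ALNS loop, scores would reward neighborhoods whose searched solutions improve the incumbent, so the selection adapts to the instance.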

  3. Technology for Obtaining Large Size Complex Oxide Crystals for Experiments on Muon-Electron Conversion Registration in High Energy Physics

    Directory of Open Access Journals (Sweden)

    Gerasymov, Ya.

    2014-11-01

    Full Text Available Technological approaches are proposed for growing high-quality, large-size scintillation crystals based on rare-earth silicates. A method of charging iridium crucibles using a eutectic phase instead of an oxyorthosilicate was developed.

  4. Growing vertical ZnO nanorod arrays within graphite: efficient isolation of large size and high quality single-layer graphene.

    Science.gov (United States)

    Ding, Ling; E, Yifeng; Fan, Louzhen; Yang, Shihe

    2013-07-18

    We report a unique strategy for efficiently exfoliating large size and high quality single-layer graphene directly from graphite into DMF dispersions by growing ZnO nanorod arrays between the graphene layers in graphite.

  5. The Viking viewer for connectomics: scalable multi-user annotation and summarization of large volume data sets.

    Science.gov (United States)

    Anderson, J R; Mohammed, S; Grimm, B; Jones, B W; Koshevoy, P; Tasdizen, T; Whitaker, R; Marc, R E

    2011-01-01

    Modern microscope automation permits the collection of vast amounts of continuous anatomical imagery in both two and three dimensions. These large data sets present significant challenges for data storage, access, viewing, annotation and analysis. The cost and overhead of collecting and storing the data can be extremely high. Large data sets quickly exceed an individual's capability for timely analysis and present challenges in efficiently applying transforms, if needed. Finally annotated anatomical data sets can represent a significant investment of resources and should be easily accessible to the scientific community. The Viking application was our solution created to view and annotate a 16.5 TB ultrastructural retinal connectome volume and we demonstrate its utility in reconstructing neural networks for a distinctive retinal amacrine cell class. Viking has several key features. (1) It works over the internet using HTTP and supports many concurrent users limited only by hardware. (2) It supports a multi-user, collaborative annotation strategy. (3) It cleanly demarcates viewing and analysis from data collection and hosting. (4) It is capable of applying transformations in real-time. (5) It has an easily extensible user interface, allowing addition of specialized modules without rewriting the viewer. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.

  6. Simplified DFT methods for consistent structures and energies of large systems

    Science.gov (United States)

    Caldeweyher, Eike; Gerit Brandenburg, Jan

    2018-05-01

    Kohn–Sham density functional theory (DFT) is routinely used for the fast electronic structure computation of large systems and will most likely continue to be the method of choice for the generation of reliable geometries in the foreseeable future. Here, we present a hierarchy of simplified DFT methods designed for consistent structures and non-covalent interactions of large systems with particular focus on molecular crystals. The covered methods are a minimal basis set Hartree–Fock (HF-3c), a small basis set screened exchange hybrid functional (HSE-3c), and a generalized gradient approximated functional evaluated in a medium-sized basis set (B97-3c), all augmented with semi-classical correction potentials. We give an overview on the methods design, a comprehensive evaluation on established benchmark sets for geometries and lattice energies of molecular crystals, and highlight some realistic applications on large organic crystals with several hundreds of atoms in the primitive unit cell.

  7. Development of high sensitivity and high speed large size blank inspection system LBIS

    Science.gov (United States)

    Ohara, Shinobu; Yoshida, Akinori; Hirai, Mitsuo; Kato, Takenori; Moriizumi, Koichi; Kusunose, Haruhiko

    2017-07-01

    The production of high-resolution flat panel displays (FPDs) for mobile phones today requires the use of high-quality large-size photomasks (LSPMs). Organic light emitting diode (OLED) displays use several transistors on each pixel for precise current control and, as such, the mask patterns for OLED displays are denser and finer than the patterns for the previous generation displays throughout the entire mask surface. It is therefore strongly demanded that mask patterns be produced with high fidelity and free of defect. To enable the production of a high quality LSPM in a short lead time, the manufacturers need a high-sensitivity high-speed mask blank inspection system that meets the requirement of advanced LSPMs. Lasertec has developed a large-size blank inspection system called LBIS, which achieves high sensitivity based on a laser-scattering technique. LBIS employs a high power laser as its inspection light source. LBIS's delivery optics, including a scanner and F-Theta scan lens, focus the light from the source linearly on the surface of the blank. Its specially-designed optics collect the light scattered by particles and defects generated during the manufacturing process, such as scratches, on the surface and guide it to photo multiplier tubes (PMTs) with high efficiency. Multiple PMTs are used on LBIS for the stable detection of scattered light, which may be distributed at various angles due to irregular shapes of defects. LBIS captures 0.3 μm PSL at a detection rate of over 99.5% with uniform sensitivity. Its inspection time is 20 minutes for a G8 blank and 35 minutes for G10. The differential interference contrast (DIC) microscope on the inspection head of LBIS captures high-contrast review images after inspection. The images are classified automatically.

  8. Usability-driven pruning of large ontologies: the case of SNOMED CT.

    Science.gov (United States)

    López-García, Pablo; Boeker, Martin; Illarramendi, Arantza; Schulz, Stefan

    2012-06-01

    To study ontology modularization techniques when applied to SNOMED CT in a scenario in which no previous corpus of information exists and to examine if frequency-based filtering using MEDLINE can reduce subset size without discarding relevant concepts. Subsets were first extracted using four graph-traversal heuristics and one logic-based technique, and were subsequently filtered with frequency information from MEDLINE. Twenty manually coded discharge summaries from cardiology patients were used as signatures and test sets. The coverage, size, and precision of extracted subsets were measured. Graph-traversal heuristics provided high coverage (71-96% of terms in the test sets of discharge summaries) at the expense of subset size (17-51% of the size of SNOMED CT). Pre-computed subsets and logic-based techniques extracted small subsets (1%), but coverage was limited (24-55%). Filtering reduced the size of large subsets to 10% while still providing 80% coverage. Extracting subsets to annotate discharge summaries is challenging when no previous corpus exists. Ontology modularization provides valuable techniques, but the resulting modules grow as signatures spread across subhierarchies, yielding a very low precision. Graph-traversal strategies and frequency data from an authoritative source can prune large biomedical ontologies and produce useful subsets that still exhibit acceptable coverage. However, a clinical corpus closer to the specific use case is preferred when available.
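
    A graph-traversal extraction of the kind evaluated here can be sketched as an upward closure over is-a edges from the signature concepts, followed by frequency-based filtering. The toy hierarchy and corpus counts below are illustrative assumptions, not SNOMED CT content.

```python
from collections import deque

def extract_module(parents, signature):
    """Upward-closure extraction: follow is-a edges from each signature
    concept toward the root, collecting every concept on the way."""
    module, queue = set(signature), deque(signature)
    while queue:
        c = queue.popleft()
        for p in parents.get(c, ()):
            if p not in module:
                module.add(p)
                queue.append(p)
    return module

def frequency_filter(module, counts, signature, min_count=1):
    """Prune concepts that fall below a corpus-frequency threshold
    (e.g. MEDLINE occurrence counts), always keeping the signature."""
    return {c for c in module
            if counts.get(c, 0) >= min_count or c in signature}
```

    The closure step explains why graph-traversal subsets grow as signatures spread across subhierarchies; the filter step is what trims them back while preserving coverage of the signature itself.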

  9. A new type of intelligent wireless sensing network for health monitoring of large-size structures

    Science.gov (United States)

    Lei, Ying; Liu, Ch.; Wu, D. T.; Tang, Y. L.; Wang, J. X.; Wu, L. J.; Jiang, X. D.

    2009-07-01

    In recent years, some innovative wireless sensing systems have been proposed. However, more exploration and research on wireless sensing systems are required before wireless systems can substitute for the traditional wire-based systems. In this paper, a new type of intelligent wireless sensing network is proposed for the health monitoring of large-size structures. Hardware design of the new wireless sensing units is first studied. The wireless sensing unit mainly consists of functional modules of: sensing interface, signal conditioning, signal digitization, computational core, wireless communication and battery management. Then, software architecture of the unit is introduced. The sensing network has a two-level cluster-tree architecture with Zigbee communication protocol. Important issues such as power saving and fault tolerance are considered in the designs of the new wireless sensing units and sensing network. Each cluster head in the network is characterized by its computational capabilities that can be used to implement the computational methodologies of structural health monitoring; making the wireless sensing units and sensing network have "intelligent" characteristics. Primary tests on the measurement data collected by the wireless system are performed. The distributed computational capacity of the intelligent sensing network is also demonstrated. It is shown that the new type of intelligent wireless sensing network provides an efficient tool for structural health monitoring of large-size structures.

  10. Family size, the physical environment, and socioeconomic effects across the stature distribution.

    Science.gov (United States)

    Carson, Scott Alan

    2012-04-01

    A neglected area in historical stature studies is the relationship between stature and family size. Using robust statistics and a large 19th century data set, this study documents a positive relationship between stature and family size across the stature distribution. The relationship between material inequality and health is the subject of considerable debate, and there was a positive relationship between stature and wealth and an inverse relationship between stature and material inequality. After controlling for family size and wealth variables, the paper reports a positive relationship between the physical environment and stature. Copyright © 2012 Elsevier GmbH. All rights reserved.

  11. A large set of potential past, present and future hydro-meteorological time series for the UK

    Science.gov (United States)

    Guillod, Benoit P.; Jones, Richard G.; Dadson, Simon J.; Coxon, Gemma; Bussi, Gianbattista; Freer, James; Kay, Alison L.; Massey, Neil R.; Sparrow, Sarah N.; Wallom, David C. H.; Allen, Myles R.; Hall, Jim W.

    2018-01-01

    Hydro-meteorological extremes such as drought and heavy precipitation can have large impacts on society and the economy. With potentially increasing risks associated with such events due to climate change, properly assessing the associated impacts and uncertainties is critical for adequate adaptation. However, the application of risk-based approaches often requires large sets of extreme events, which are not commonly available. Here, we present such a large set of hydro-meteorological time series for recent past and future conditions for the United Kingdom based on weather@home 2, a modelling framework consisting of a global climate model (GCM) driven by observed or projected sea surface temperature (SST) and sea ice which is downscaled to 25 km over the European domain by a regional climate model (RCM). Sets of 100 time series are generated for each of (i) a historical baseline (1900-2006), (ii) five near-future scenarios (2020-2049) and (iii) five far-future scenarios (2070-2099). The five scenarios in each future time slice all follow the Representative Concentration Pathway 8.5 (RCP8.5) and sample the range of sea surface temperature and sea ice changes from CMIP5 (Coupled Model Intercomparison Project Phase 5) models. Validation of the historical baseline highlights good performance for temperature and potential evaporation, but substantial seasonal biases in mean precipitation, which are corrected using a linear approach. For extremes in low precipitation over a long accumulation period (> 3 months) and shorter-duration high precipitation (1-30 days), the time series generally represents past statistics well. Future projections show small precipitation increases in winter but large decreases in summer on average, leading to an overall drying, consistent with the most recent UK Climate Projections (UKCP09) but larger in magnitude than the latter. Both drought and high-precipitation events are projected to increase in frequency and intensity in most regions.
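
    A multiplicative scaling is one common reading of a "linear" precipitation bias correction. A minimal sketch, assuming the correction is a simple ratio of climatological means applied per calendar month or region; the paper's exact formulation may differ.

```python
def linear_bias_correction(series, sim_clim_mean, obs_clim_mean):
    """Scale each simulated value by the ratio of observed to simulated
    climatological means, so the corrected series reproduces the
    observed mean while keeping the simulated variability pattern."""
    factor = obs_clim_mean / sim_clim_mean
    return [v * factor for v in series]
```

    Such a correction fixes the mean bias but deliberately leaves higher-order statistics (and hence the extremes the dataset targets) to the model.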

  12. Designing Websites for Displaying Large Data Sets and Images on Multiple Platforms

    Science.gov (United States)

    Anderson, A.; Wolf, V. G.; Garron, J.; Kirschner, M.

    2012-12-01

    The desire to build websites to analyze and display ever increasing amounts of scientific data and images pushes for web site designs which utilize large displays, and to use the display area as efficiently as possible. Yet, scientists and users of their data are increasingly wishing to access these websites in the field and on mobile devices. This results in the need to develop websites that can support a wide range of devices and screen sizes, and to optimally use whatever display area is available. Historically, designers have addressed this issue by building two websites; one for mobile devices, and one for desktop environments, resulting in increased cost, duplicity of work, and longer development times. Recent advancements in web design technology and techniques have evolved which allow for the development of a single website that dynamically adjusts to the type of device being used to browse the website (smartphone, tablet, desktop). In addition they provide the opportunity to truly optimize whatever display area is available. HTML5 and CSS3 give web designers media query statements which allow design style sheets to be aware of the size of the display being used, and to format web content differently based upon the queried response. Web elements can be rendered in a different size, position, or even removed from the display entirely, based upon the size of the display area. Using HTML5/CSS3 media queries in this manner is referred to as "Responsive Web Design" (RWD). RWD in combination with technologies such as LESS and Twitter Bootstrap allow the web designer to build web sites which not only dynamically respond to the browser display size being used, but to do so in very controlled and intelligent ways, ensuring that good layout and graphic design principles are followed while doing so. At the University of Alaska Fairbanks, the Alaska Satellite Facility SAR Data Center (ASF) recently redesigned their popular Vertex application and converted it from a

  13. Matching cue size and task properties in exogenous attention.

    Science.gov (United States)

    Burnett, Katherine E; d'Avossa, Giovanni; Sapir, Ayelet

    2013-01-01

    Exogenous attention is an involuntary, reflexive orienting response that results in enhanced processing at the attended location. The standard view is that this enhancement generalizes across visual properties of a stimulus. We test whether the size of an exogenous cue sets the attentional field and whether this leads to different effects on stimuli with different visual properties. In a dual task with a random-dot kinematogram (RDK) in each quadrant of the screen, participants discriminated the direction of moving dots in one RDK and localized one red dot. Precues were uninformative and consisted of either a large or a small luminance-change frame. The motion discrimination task showed attentional effects following both large and small exogenous cues. The red dot probe localization task showed attentional effects following a small cue, but not a large cue. Two additional experiments showed that the different effects on localization were not due to reduced spatial uncertainty or suppression of RDK dots in the surround. These results indicate that the effects of exogenous attention depend on the size of the cue and the properties of the task, suggesting the involvement of receptive fields with different sizes in different tasks. These attentional effects are likely to be driven by bottom-up mechanisms in early visual areas.

  14. Resolution 8.069/12: Approval of the regulations for the installation of large-sized structures intended for wind power generation

    International Nuclear Information System (INIS)

    2012-01-01

    This resolution approves the regulations for the installation of large-sized structures intended for wind power generation. The objective of this rule is to regulate the urban conditions of the facilities and to guarantee environmental protection as well as the safety and health of the inhabitants.

  15. Ecosystem size structure response to 21st century climate projection: large fish abundance decreases in the central North Pacific and increases in the California Current.

    Science.gov (United States)

    Woodworth-Jefcoats, Phoebe A; Polovina, Jeffrey J; Dunne, John P; Blanchard, Julia L

    2013-03-01

    Output from an earth system model is paired with a size-based food web model to investigate the effects of climate change on the abundance of large fish over the 21st century. The earth system model, forced by the Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES) A2 scenario, combines a coupled climate model with a biogeochemical model including major nutrients, three phytoplankton functional groups, and zooplankton grazing. The size-based food web model includes linkages between two size-structured pelagic communities: primary producers and consumers. Our investigation focuses on seven sites in the North Pacific, each highlighting a specific aspect of projected climate change, and includes top-down ecosystem depletion through fishing. We project declines in large fish abundance ranging from 0 to 75.8% in the central North Pacific and increases of up to 43.0% in the California Current (CC) region over the 21st century in response to change in phytoplankton size structure and direct physiological effects. We find that fish abundance is especially sensitive to projected changes in large phytoplankton density and our model projects changes in the abundance of large fish being of the same order of magnitude as changes in the abundance of large phytoplankton. Thus, studies that address only climate-induced impacts to primary production without including changes to phytoplankton size structure may not adequately project ecosystem responses. © 2012 Blackwell Publishing Ltd.

  16. An expert system based software sizing tool, phase 2

    Science.gov (United States)

    Friedlander, David

    1990-01-01

    A software tool was developed for predicting the size of a future computer program at an early stage in its development. The system is intended to enable a user who is not expert in Software Engineering to estimate software size in lines of source code with an accuracy similar to that of an expert, based on the program's functional specifications. The project was planned as a knowledge based system with a field prototype as the goal of Phase 2 and a commercial system planned for Phase 3. The researchers used techniques from Artificial Intelligence and knowledge from human experts and existing software from NASA's COSMIC database. They devised a classification scheme for the software specifications, and a small set of generic software components that represent complexity and apply to large classes of programs. The specifications are converted to generic components by a set of rules and the generic components are input to a nonlinear sizing function which makes the final prediction. The system developed for this project predicted code sizes from the database with a bias factor of 1.06 and a fluctuation factor of 1.77, an accuracy similar to that of human experts but without their significant optimistic bias.
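
    Bias and fluctuation factors of this kind are commonly defined as geometric means of prediction/actual ratios; a sketch under that assumption (the report's exact definitions are not given here). A bias factor of 1.06 then means predictions run 6% high on average, while a fluctuation factor of 1.77 means a typical prediction is off by a multiplicative factor of about 1.77 in either direction.

```python
import math

def bias_factor(pred, actual):
    """Geometric mean of predicted/actual; > 1 indicates systematic
    over-prediction, < 1 under-prediction."""
    ratios = [p / a for p, a in zip(pred, actual)]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

def fluctuation_factor(pred, actual):
    """Geometric mean of max(p/a, a/p): the typical multiplicative
    error magnitude, ignoring its direction."""
    ratios = [max(p / a, a / p) for p, a in zip(pred, actual)]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))
```

    Note that equal over- and under-predictions cancel in the bias factor but not in the fluctuation factor, which is why both are reported.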

  17. Regional climate model sensitivity to domain size

    Science.gov (United States)

    Leduc, Martin; Laprise, René

    2009-05-01

    Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the “perfect model” approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent “spatial spin-up” corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere.
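
    Degrading the big-brother fields with a low-pass filter can be sketched as a sharp spectral cutoff on a doubly periodic 2-D field. This is a minimal illustration, assuming a simple wavenumber mask rather than the study's actual filter, and it requires NumPy.

```python
import numpy as np

def low_pass(field, keep):
    """Emulate coarse-resolution LBC by zeroing all Fourier modes of a
    2-D periodic field above wavenumber `keep` along either axis."""
    f = np.fft.fft2(field)
    ny, nx = field.shape
    # wrapped wavenumber magnitude along each axis (0, 1, ..., N/2, ..., 1)
    ky = np.minimum(np.arange(ny), ny - np.arange(ny))
    kx = np.minimum(np.arange(nx), nx - np.arange(nx))
    mask = (ky[:, None] <= keep) & (kx[None, :] <= keep)
    return np.real(np.fft.ifft2(f * mask))
```

    Driving a nested model with the filtered field and checking how far inside the domain the removed small scales regenerate is exactly the "spatial spin-up" distance discussed above.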

  18. Regional climate model sensitivity to domain size

    Energy Technology Data Exchange (ETDEWEB)

    Leduc, Martin [Universite du Quebec a Montreal, Canadian Regional Climate Modelling and Diagnostics (CRCMD) Network, ESCER Centre, Montreal (Canada); UQAM/Ouranos, Montreal, QC (Canada); Laprise, Rene [Universite du Quebec a Montreal, Canadian Regional Climate Modelling and Diagnostics (CRCMD) Network, ESCER Centre, Montreal (Canada)

    2009-05-15

    Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the "perfect model" approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent "spatial spin-up" corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere. (orig.)

  19. THE PHYSICS OF PROTOPLANETESIMAL DUST AGGLOMERATES. VI. EROSION OF LARGE AGGREGATES AS A SOURCE OF MICROMETER-SIZED PARTICLES

    International Nuclear Information System (INIS)

    Schraepler, Rainer; Blum, Juergen

    2011-01-01

    Observed protoplanetary disks consist of a large amount of micrometer-sized particles. Dullemond and Dominik pointed out for the first time the difficulty in explaining the strong mid-infrared excess of classical T Tauri stars without any dust-retention mechanisms. Because high relative velocities in between micrometer-sized and macroscopic particles exist in protoplanetary disks, we present experimental results on the erosion of macroscopic agglomerates consisting of micrometer-sized spherical particles via the impact of micrometer-sized particles. We find that after an initial phase, in which an impacting particle erodes up to 10 particles of an agglomerate, the impacting particles compress the agglomerate's surface, which partly passivates the agglomerates against erosion. Due to this effect, the erosion halts for impact velocities up to ∼30 m s⁻¹ within our error bars. For higher velocities, the erosion is reduced by an order of magnitude. This outcome is explained and confirmed by a numerical model. In a next step, we build an analytical disk model and implement the experimentally found erosive effect. The model shows that erosion is a strong source of micrometer-sized particles in a protoplanetary disk. Finally, we use the stationary solution of this model to explain the amount of micrometer-sized particles in the observational infrared data of Furlan et al.

  20. Temperature Uniformity of Wafer on a Large-Sized Susceptor for a Nitride Vertical MOCVD Reactor

    International Nuclear Information System (INIS)

    Li Zhi-Ming; Jiang Hai-Ying; Han Yan-Bin; Li Jin-Ping; Yin Jian-Qin; Zhang Jin-Cheng

    2012-01-01

    The effect of coil location on wafer temperature is analyzed in a vertical MOCVD reactor with induction heating. It is observed that the temperature distribution in the wafer is more uniform with the coils under the graphite susceptor than with the coils around the outside wall of the reactor. For the case of coils under the susceptor, we find that the thickness of the susceptor, the distance from the coils to the susceptor bottom and the number of coil turns significantly affect the temperature uniformity of the wafer. An optimization process is executed for a 3-inch susceptor with this kind of structure, resulting in a large improvement in temperature uniformity. A further optimization demonstrates that the new susceptor structure is also suitable for multiple wafers and for large-sized wafers approaching 6 and 8 inches.

  1. Utility of large spot binocular indirect laser delivery for peripheral photocoagulation therapy in children.

    Science.gov (United States)

    Balasubramaniam, Saranya C; Mohney, Brian G; Bang, Genie M; Link, Thomas P; Pulido, Jose S

    2012-09-01

    The purpose of this article is to demonstrate the utility of the large spot size (LSS) setting of a binocular laser indirect delivery system for peripheral ablation in children. One patient with bilateral retinopathy of prematurity received photocoagulation with standard spot size burns placed adjacent to LSS burns. Using the pixel analysis program ImageJ on the RetCam picture, the area of each retinal spot was determined in pixels, giving a standard spot range of 805 to 1294 pixels and an LSS range of 1699 to 2311 pixels. Additionally, fluence was calculated using the theoretical retinal areas produced by each spot size: 462 mJ/mm² for the standard spot setting and 104 mJ/mm² for the LSS setting. For eyes with retinopathy of prematurity, our study shows that LSS laser indirect delivery halves the number of spots required for treatment and reduces fluence to less than one-quarter of the standard setting, producing more uniform spots.
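
    The fluence comparison follows directly from the pixel-area measurements: fluence is pulse energy divided by burn area. A minimal sketch; the image scale and pulse energy below are assumed for illustration, not the article's actual calibration values:

    ```python
    def fluence_mj_per_mm2(pulse_energy_mj, spot_area_px, mm_per_px):
        """Fluence = energy delivered / retinal area of the burn.
        Spot areas in pixels follow the ImageJ-style measurement;
        `mm_per_px` is the calibrated image scale (hypothetical here)."""
        area_mm2 = spot_area_px * mm_per_px ** 2
        return pulse_energy_mj / area_mm2

    scale = 0.01    # mm per pixel (assumed)
    energy = 10.0   # mJ per burn (assumed)
    standard = fluence_mj_per_mm2(energy, 1000, scale)  # ~1000-px standard spot
    large = fluence_mj_per_mm2(energy, 2000, scale)     # ~2000-px large spot
    # Doubling the spot area halves the fluence for the same pulse energy.
    ```

    The roughly four-fold fluence reduction reported in the article reflects the actual energies and spot areas used clinically, not just the area ratio.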

  2. Evaluation of digital soil mapping approaches with large sets of environmental covariates

    Science.gov (United States)

    Nussbaum, Madlene; Spiess, Kay; Baltensweiler, Andri; Grob, Urs; Keller, Armin; Greiner, Lucie; Schaepman, Michael E.; Papritz, Andreas

    2018-01-01

    The spatial assessment of soil functions requires maps of basic soil properties. Unfortunately, these are either missing for many regions or are not available at the desired spatial resolution or down to the required soil depth. The field-based generation of large soil datasets and conventional soil maps remains costly. Meanwhile, legacy soil data and comprehensive sets of spatial environmental data are available for many regions. Digital soil mapping (DSM) approaches relating soil data (responses) to environmental data (covariates) face the challenge of building statistical models from large sets of covariates originating, for example, from airborne imaging spectroscopy or multi-scale terrain analysis. We evaluated six approaches for DSM in three study regions in Switzerland (Berne, Greifensee, ZH forest) by mapping the effective soil depth available to plants (SD), pH, soil organic matter (SOM), effective cation exchange capacity (ECEC), clay, silt and gravel content, and fine-fraction bulk density for four soil depths (totalling 48 responses). Models were built from 300-500 environmental covariates by selecting linear models through (1) grouped lasso and (2) an ad hoc stepwise procedure for robust external-drift kriging (georob). For (3) geoadditive models we selected penalized smoothing spline terms by component-wise gradient boosting (geoGAM). We further used two tree-based methods: (4) boosted regression trees (BRT) and (5) random forest (RF). Lastly, we computed (6) weighted model averages (MAs) from the predictions obtained from methods 1-5. Lasso, georob and geoGAM successfully selected strongly reduced sets of covariates (subsets of 3-6% of all covariates). Differences in predictive performance, tested on independent validation data, were mostly small and did not reveal a single best method for the 48 responses. Nevertheless, RF was often the best among methods 1-5 (28 of 48 responses), but was outcompeted by MA for 14 of these 28 responses. RF tended to over
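
    The model-averaging step (method 6) can be sketched as a weighted combination of per-model predictions. Weighting by inverse validation error, as below, is one simple choice and not necessarily the exact scheme used in the study; the predictions and RMSE values are hypothetical:

    ```python
    import numpy as np

    def model_average(preds, val_errors):
        """Weighted average of per-model predictions.
        Weights are inverse validation errors, renormalised to sum to 1."""
        w = 1.0 / np.asarray(val_errors, dtype=float)
        w /= w.sum()
        return np.asarray(preds).T @ w  # preds has shape (n_models, n_sites)

    # Three hypothetical models predicting soil pH at four sites:
    preds = np.array([[6.1, 5.9, 7.0, 6.5],
                      [6.3, 6.0, 7.2, 6.4],
                      [6.0, 5.8, 6.9, 6.6]])
    rmse = [0.30, 0.45, 0.35]  # validation RMSE of each model (hypothetical)
    avg = model_average(preds, rmse)
    # Each averaged value lies between the per-model extremes, and the
    # best-validated model gets the largest weight.
    ```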

  3. The limits of weak selection and large population size in evolutionary game theory.

    Science.gov (United States)

    Sample, Christine; Allen, Benjamin

    2017-11-01

    Evolutionary game theory is a mathematical approach to studying how social behaviors evolve. In many recent works, evolutionary competition between strategies is modeled as a stochastic process in a finite population. In this context, two limits are both mathematically convenient and biologically relevant: weak selection and large population size. These limits can be combined in different ways, leading to potentially different results. We consider two orderings: one in which weak selection is applied before the large population limit, and one in which the order is reversed. Formal mathematical definitions of both limits are provided. Applying these definitions to the Moran process of evolutionary game theory, we obtain asymptotic expressions for fixation probability and conditions for success in these limits. We find that the asymptotic expressions for fixation probability, and the conditions for a strategy to be favored over a neutral mutation, differ between the two orderings. However, the ordering of limits does not affect the conditions for one strategy to be favored over another.
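
    For intuition on how the two limits enter, the classical Moran fixation probability under constant selection is a useful reference point. This is the textbook simplification, not the paper's full game-theoretic payoff structure:

    ```python
    def moran_fixation_probability(r, N):
        """Fixation probability of a single mutant with constant relative
        fitness r in a Moran process of population size N (textbook case,
        not the game-dependent payoffs treated in the paper)."""
        if r == 1.0:
            return 1.0 / N  # neutral mutant
        return (1.0 - 1.0 / r) / (1.0 - 1.0 / r ** N)

    # Weak selection first, at fixed N: r -> 1 recovers the neutral 1/N.
    rho_weak = moran_fixation_probability(1.001, 100)   # close to 1/100
    # Large population first, at fixed r > 1: approaches 1 - 1/r.
    rho_large = moran_fixation_probability(1.1, 1000)   # close to 1 - 1/1.1
    ```

    The paper's point is that when both limits are taken, their order can change such asymptotic expressions, which this constant-selection formula already hints at.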

  4. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Science.gov (United States)

    2010-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... WATER REGULATIONS Control of Lead and Copper § 141.81 Applicability of corrosion control treatment steps...). (ii) A report explaining the test methods used by the water system to evaluate the corrosion control...

  5. Decomposing wage distributions on a large data set - a quantile regression analysis of the gender wage gap

    DEFF Research Database (Denmark)

    Albæk, Karsten; Brink Thomsen, Lars

    This paper presents and implements a procedure that makes it possible to decompose wage distributions on large data sets. We replace bootstrap sampling in the standard Machado-Mata procedure with 'non-replacement subsampling', which is more suitable for the linked employer-employee data applied in this paper. Decompositions show that most of the glass ceiling is related to segregation, in the form of either composition effects or different returns to males and females. A counterfactual wage distribution without differences in the constant terms (or 'discrimination') implies substantial changes in gender wage differences in the lower part of the wage distribution.
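
    The subsampling idea can be illustrated on synthetic wages: each Machado-Mata-style draw uses a subsample without replacement instead of a bootstrap sample with replacement. Everything below is a schematic stand-in; the actual procedure also fits quantile regressions on each draw, which is omitted here:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def subsample_quantiles(wages, q_grid, m, n_draws):
        """Record wage quantiles over repeated subsamples of m *distinct*
        observations (non-replacement subsampling, rather than bootstrap
        resampling with replacement), then aggregate over draws."""
        out = np.empty((n_draws, len(q_grid)))
        for i in range(n_draws):
            sub = rng.choice(wages, size=m, replace=False)
            out[i] = np.quantile(sub, q_grid)
        return out.mean(axis=0)

    wages = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)  # synthetic wages
    q = [0.1, 0.5, 0.9]
    est = subsample_quantiles(wages, q, m=500, n_draws=200)
    # est approximates np.quantile(wages, q) without ever repeating an
    # observation within a single draw.
    ```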

  6. Complexity analysis on public transport networks of 97 large- and medium-sized cities in China

    Science.gov (United States)

    Tian, Zhanwei; Zhang, Zhuo; Wang, Hongfei; Ma, Li

    2018-04-01

    The traffic situation in Chinese urban areas is continuing to deteriorate. To support better planning and design of public transport systems, profound research on the structure of urban public transport networks (PTNs) is necessary. We investigate the PTNs of 97 large- and medium-sized cities in China, construct three types of network models (bus stop network, bus transit network and bus line network), and analyze their structural characteristics. It is revealed that the bus stop network is small-world and scale-free, while the bus transit network and the bus line network are both small-world. The betweenness centrality of each city's PTN shows a similar distribution pattern, although the networks vary in size. Classifying cities by the characteristics of their PTNs or by economic development level yields similar results, indicating that the development of a city's economy and of its transport network are strongly correlated and that PTNs expand in a consistent pattern as the economy develops.
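
    The two small-world diagnostics, a short average path length together with appreciable clustering, can be computed with plain BFS. The six-stop graph below is a hypothetical toy network, not data from the study:

    ```python
    from collections import deque

    def avg_path_length(adj):
        """Mean shortest-path length over all ordered node pairs (BFS)."""
        n = len(adj)
        total = count = 0
        for s in range(n):
            dist = {s: 0}
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            total += sum(d for node, d in dist.items() if node != s)
            count += len(dist) - 1
        return total / count

    def clustering(adj):
        """Mean local clustering coefficient."""
        coeffs = []
        for u, nbrs in enumerate(adj):
            k = len(nbrs)
            if k < 2:
                coeffs.append(0.0)
                continue
            links = sum(1 for i in nbrs for j in nbrs if i < j and j in adj[i])
            coeffs.append(2 * links / (k * (k - 1)))
        return sum(coeffs) / len(coeffs)

    # Toy 6-stop network: a ring 0-1-2-3-4-5 with shortcuts 0-2 and 0-3.
    adj = [{1, 2, 3, 5}, {0, 2}, {0, 1, 3}, {0, 2, 4}, {3, 5}, {0, 4}]
    L, C = avg_path_length(adj), clustering(adj)
    # Short mean path with nonzero clustering: the small-world signature.
    ```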

  7. Caught you: threats to confidentiality due to the public release of large-scale genetic data sets.

    Science.gov (United States)

    Wjst, Matthias

    2010-12-29

    Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with fewer than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged with the primary, unidentified set, it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case is used as a real-life example to illustrate that approach. Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public.
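
    The "Netflix"-type linkage can be sketched in a few lines: a small identified genotype panel matched exactly against an "anonymous" genotype matrix singles out one sample. The genotypes below are synthetic, purely for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # "Anonymous" study data: 500 samples x 80 SNPs, genotypes coded 0/1/2.
    study = rng.integers(0, 3, size=(500, 80))

    # A second, identified data set covering the same 80 SNPs for one person
    # (e.g. from a clinical or forensic test); here simply a copy of row 123.
    probe = study[123].copy()

    # Linkage: an exact genotype match across a few dozen SNPs is essentially
    # unique, so the "anonymous" sample is re-identified.
    matches = np.flatnonzero((study == probe).all(axis=1))
    ```

    With 80 independent SNPs, the chance that an unrelated sample matches by coincidence is astronomically small, which is why even a sub-100-SNP identified panel suffices.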

  8. Caught you: threats to confidentiality due to the public release of large-scale genetic data sets

    Directory of Open Access Journals (Sweden)

    Wjst Matthias

    2010-12-01

    Background: Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. Discussion: The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with fewer than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged with the primary, unidentified set, it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case is used as a real-life example to illustrate that approach. Summary: Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public.

  9. Particle size distributions of lead measured in battery manufacturing and secondary smelter facilities and implications in setting workplace lead exposure limits.

    Science.gov (United States)

    Petito Boyce, Catherine; Sax, Sonja N; Cohen, Joel M

    2017-08-01

    Inhalation plays an important role in exposures to lead in airborne particulate matter in occupational settings, and particle size determines where and how much of airborne lead is deposited in the respiratory tract and how much is subsequently absorbed into the body. Although some occupational airborne lead particle size data have been published, limited information is available reflecting current workplace conditions in the U.S. To address this data gap, the Battery Council International (BCI) conducted workplace monitoring studies at nine lead acid battery manufacturing facilities (BMFs) and five secondary smelter facilities (SSFs) across the U.S. This article presents the results of the BCI studies, focusing on the particle size distributions calculated from Personal Marple Impactor sampling data and on particle deposition estimates for each of the three major respiratory tract regions derived using the Multiple-Path Particle Dosimetry model. The BCI data showed the presence of predominantly larger-sized particles in the work environments evaluated, with average mass median aerodynamic diameters (MMADs) ranging from 21-32 µm for the three BMF job categories and from 15-25 µm for the five SSF job categories tested. The BCI data also indicated that the percentage of lead mass measured at the sampled facilities in the submicron range (i.e., particles smaller than 1 µm) was generally small: the estimated average percentages of lead mass in the submicron range for the tested job categories ranged from 0.8-3.3% at the BMFs and from 0.44-6.1% at the SSFs. Variability was observed in the particle size distributions across job categories and facilities, and sensitivity analyses were conducted to explore this variability. The BCI results were compared with results reported in the scientific literature. Screening-level analyses were also conducted to explore the overall degree of lead absorption potentially associated with the observed particle size distributions and to identify key issues.
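
    An MMAD can be estimated from cascade-impactor stage data by log-linear interpolation of the cumulative mass distribution, a common approximation for this kind of sampling. The cut diameters and stage mass fractions below are hypothetical, chosen only so the result falls in the reported 15-32 µm range:

    ```python
    import math

    def mmad(cut_diameters_um, mass_fractions):
        """Mass median aerodynamic diameter from impactor stage data:
        accumulate mass from the smallest stage up, then interpolate
        log-linearly where the cumulative fraction crosses 0.5."""
        pairs = sorted(zip(cut_diameters_um, mass_fractions))
        cum, cums = 0.0, []
        for _, frac in pairs:
            cum += frac
            cums.append(cum)
        for i in range(1, len(pairs)):
            if cums[i] >= 0.5:
                d0, d1 = pairs[i - 1][0], pairs[i][0]
                c0, c1 = cums[i - 1], cums[i]
                t = (0.5 - c0) / (c1 - c0)
                return math.exp(math.log(d0) + t * (math.log(d1) - math.log(d0)))
        raise ValueError("median not bracketed")

    # Hypothetical stage data skewed toward large particles:
    d = [0.5, 1.0, 3.5, 10.0, 21.0, 35.0]   # stage cut diameters, µm
    f = [0.01, 0.03, 0.06, 0.15, 0.35, 0.40]  # mass fraction per stage
    estimate = mmad(d, f)  # ≈ 17 µm, a large MMAD as in the BCI data
    ```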

  10. Estimation of the sizes of hot nuclear systems from particle-particle large angle kinematical correlations

    International Nuclear Information System (INIS)

    La Ville, J.L.; Bizard, G.; Durand, D.; Jin, G.M.; Rosato, E.

    1990-06-01

    Light-fragment emission, when triggered by large transverse momentum protons, shows specific kinematical correlations due to recoil effects of the excited emitting source. Such effects have been observed in the azimuthal angular distributions of He particles produced in collisions induced by 94 MeV/u ¹⁶O ions on Al, Ni and Au targets. A model calculation assuming a two-stage mechanism (formation and sequential decay of a hot source) gives a good description of the whole data set. From this successful comparison, it is possible to estimate the size of the emitting system.

  11. Verification measurements of the IRMM-1027 and the IAEA large-sized dried (LSD) spikes

    International Nuclear Information System (INIS)

    Jakopic, R.; Aregbe, Y.; Richter, S.

    2017-01-01

    In the framework of accountancy measurements of fissile materials, reliable determinations of the plutonium and uranium content in spent nuclear fuel are required to comply with international safeguards agreements. Large-sized dried (LSD) spikes of enriched ²³⁵U and ²³⁹Pu for isotope dilution mass spectrometry (IDMS) analysis are routinely applied in reprocessing plants for this purpose. A correct characterisation of these elements is a prerequisite for achieving high accuracy in IDMS analyses. This paper will present the results of external verification measurements of such LSD spikes performed by the European Commission and the International Atomic Energy Agency. (author)

  12. What big size you have! Using effect sizes to determine the impact of public health nursing interventions.

    Science.gov (United States)

    Johnson, K E; McMorris, B J; Raynor, L A; Monsen, K A

    2013-01-01

    The Omaha System is a standardized interface terminology that is used extensively by public health nurses in community settings to document interventions and client outcomes. Researchers using Omaha System data to analyze the effectiveness of interventions have typically calculated p-values to determine whether significant client changes occurred between admission and discharge. However, p-values are highly dependent on sample size, making it difficult to distinguish statistically significant changes from clinically meaningful changes. Effect sizes can help identify practical differences but have not yet been applied to Omaha System data. We compared p-values and effect sizes (Cohen's d) for mean differences between admission and discharge for 13 client problems documented in the electronic health records of 1,016 young low-income parents. Client problems were documented anywhere from 6 (Health Care Supervision) to 906 (Caretaking/parenting) times. On a scale from 1 to 5, the mean change needed to yield a large effect size (Cohen's d ≥ 0.80) was approximately 0.60 (range = 0.50 - 1.03) regardless of p-value or sample size (i.e., the number of times a client problem was documented in the electronic health record). Researchers using the Omaha System should report effect sizes to help readers determine which differences are practical and meaningful. Such disclosures will allow for increased recognition of effective interventions.
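
    Cohen's d for an admission-to-discharge change is the mean difference divided by the pooled standard deviation, which is what makes it insensitive to sample size. A minimal sketch with hypothetical 1-5 Omaha System ratings (not data from the study):

    ```python
    import math

    def cohens_d(pre, post):
        """Cohen's d: mean difference divided by the pooled standard
        deviation. d >= 0.80 is conventionally a 'large' effect,
        regardless of how many records were documented."""
        n1, n2 = len(pre), len(post)
        m1, m2 = sum(pre) / n1, sum(post) / n2
        v1 = sum((x - m1) ** 2 for x in pre) / (n1 - 1)
        v2 = sum((x - m2) ** 2 for x in post) / (n2 - 1)
        pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
        return (m2 - m1) / pooled

    # Hypothetical ratings (1-5 scale) for one client problem:
    admission = [2, 3, 3, 2, 3, 2, 3, 3]
    discharge = [3, 4, 3, 3, 4, 3, 4, 3]
    effect = cohens_d(admission, discharge)
    # A mean change of 0.75 on this low-variance scale yields a large d,
    # consistent with the ~0.60 threshold reported in the abstract.
    ```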

  13. Evaluation research of small and medium-sized enterprise informatization on big data

    Science.gov (United States)

    Yang, Na

    2017-09-01

    Under the background of big data, improving the informatization level of small and medium-sized enterprises is a key task: the cost of information construction is large, but investment in informatization can bring benefits to these enterprises. This paper establishes an evaluation system for small and medium-sized enterprise informatization covering hardware and software security, information organization, information technology application and profit, and information capability. Rough set theory is used to reduce the indexes, and evaluation is then carried out with a support vector machine (SVM) model. Finally, examples are used to verify the theory and prove the effectiveness of the method.

  14. A polymer, random walk model for the size-distribution of large DNA fragments after high linear energy transfer radiation

    Science.gov (United States)

    Ponomarev, A. L.; Brenner, D.; Hlatky, L. R.; Sachs, R. K.

    2000-01-01

    DNA double-strand breaks (DSBs) produced by densely ionizing radiation are not located randomly in the genome: recent data indicate DSB clustering along chromosomes. Stochastic DSB clustering at large scales (from more than 100 Mbp downward) is modeled using computer simulations and analytic equations. A random-walk, coarse-grained polymer model for chromatin is combined with a simple track-structure model in Monte Carlo software called DNAbreak and is applied to data on alpha-particle irradiation of V-79 cells. The chromatin model neglects molecular details but systematically incorporates an increase in average spatial separation between two DNA loci as the number of base pairs between the loci increases. Fragment-size distributions obtained using DNAbreak match data on large fragments about as well as distributions previously obtained with a less mechanistic approach. Dose-response relations, linear at small doses of high linear energy transfer (LET) radiation, are obtained. They are found to be non-linear when the dose becomes so large that there is a significant probability of overlapping or close juxtaposition, along one chromosome, of different DSB clusters from different tracks. The non-linearity is more evident for large fragments than for small ones. The DNAbreak results furnish an example of the RLC (randomly located clusters) analytic formalism, which generalizes the broken-stick fragment-size distribution of the random-breakage model that is often applied to low-LET data.
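
    The random-breakage (broken-stick) null model that the RLC formalism generalizes is easy to simulate: breaks land uniformly along the chromosome, and fragment sizes are the gaps between them. The genome size, break number and cell count below are toy values, not those of the study:

    ```python
    import random

    random.seed(42)

    def random_breakage(genome_mbp, n_breaks, n_cells):
        """Fragment sizes under the random-breakage model: DSBs placed
        uniformly along the chromosome, the null hypothesis against which
        clustered (high-LET) breakage is compared."""
        fragments = []
        for _ in range(n_cells):
            cuts = sorted(random.uniform(0, genome_mbp) for _ in range(n_breaks))
            edges = [0.0] + cuts + [genome_mbp]
            fragments.extend(b - a for a, b in zip(edges, edges[1:]))
        return fragments

    frags = random_breakage(genome_mbp=100.0, n_breaks=20, n_cells=2000)
    mean_size = sum(frags) / len(frags)
    # 20 breaks in 100 Mbp give 21 fragments averaging 100/21 Mbp per cell;
    # clustered high-LET breakage would instead show an excess of small
    # fragments relative to this distribution.
    ```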

  15. Effect of high-pressure homogenization preparation on mean globule size and large-diameter tail of oil-in-water injectable emulsions.

    Science.gov (United States)

    Peng, Jie; Dong, Wu-Jun; Li, Ling; Xu, Jia-Ming; Jin, Du-Jia; Xia, Xue-Jun; Liu, Yu-Ling

    2015-12-01

    The effect of different high-pressure homogenization energy input parameters on the mean droplet diameter size (MDS) and on droplets larger than 5 μm in lipid injectable emulsions was evaluated. All emulsions were prepared at different water bath temperatures or at different rotation speeds and rotor-stator processing times, using different homogenization pressures and numbers of high-pressure recirculations. The MDS and polydispersity index (PI) of the emulsions were determined using the dynamic light scattering (DLS) method, and large-diameter tail assessments were performed using the light-obscuration/single-particle optical sensing (LO/SPOS) method. At 1000 bar homogenization pressure and seven recirculations, the energy input parameters of the rotor-stator system had no effect on the final particle size results. When the rotor-stator energy input parameters were fixed, homogenization pressure and recirculation affected the mean particle size and the large-diameter droplets. Particle size decreased with increasing homogenization pressure from 400 bar to 1300 bar at a fixed number of recirculations. At a fixed pressure of 1000 bar, both the MDS and the percentage of fat droplets exceeding 5 μm (PFAT5) decreased with increasing recirculations: the MDS dropped to 173 nm after five cycles and remained at that level, while the volume-weighted PFAT5 dropped to 0.038% after three cycles, so the MDS "plateau" appeared later than that of PFAT5, and the optimal particle size is obtained when both remain at their plateaus. Excess recirculation, such as nine cycles at 1000 bar, may cause PFAT5 to increase to 0.060% rather than decrease; the high-pressure homogenization procedure is therefore the key factor affecting the particle size distribution of emulsions. Varying storage conditions (4-25°C) also influenced particle size, especially PFAT5.

  16. Evidence for density-dependent changes in growth, downstream movement, and size of Chinook salmon subyearlings in a large-river landscape

    Science.gov (United States)

    Connor, William P.; Tiffan, Kenneth F.; Plumb, John M.; Moffit, Christine M.

    2013-01-01

    We studied the growth rate, downstream movement, and size of naturally produced fall Chinook Salmon Oncorhynchus tshawytscha subyearlings (age 0) for 20 years in an 8th-order river landscape with regulated riverine upstream rearing areas and an impounded downstream migration corridor. The population transitioned from low to high abundance in association with U.S. Endangered Species Act and other federally mandated recovery efforts. The mean growth rate of parr in the river did not decline with increasing abundance, but during the period of higher abundance the timing of dispersal from riverine habitat into the reservoir averaged 17 d earlier and the average size at the time of downstream dispersal was smaller by 10 mm and 1.8 g. Changes in apparent abundance, measured by catch per unit effort, largely explained the time of dispersal, measured by median day of capture, in riverine habitat. The growth rate of smolts in the reservoir declined from an average of 0.6 to 0.2 g/d between the abundance periods because the reduction in size at reservoir entry was accompanied by a tendency to migrate rather than linger and by increasing concentrations of smolts in the reservoir. The median date of passage through the reservoir was 14 d earlier on average, and average smolt size was smaller by 38 mm and 22.0 g, in accordance with density-dependent behavioral changes reflected by decreased smolt growth. Unexpectedly, smolts during the high-abundance period had begun to reexpress the migration timing and size phenotypes observed before the river was impounded, when abundance was relatively high. Our findings provide evidence for density-dependent phenotypic change in a large river that was influenced by the expansion of a recovery program. Thus, this study shows that efforts to recover native fishes can have detectable effects in large-river landscapes. The outcome of such phenotypic change, which will be an important area of future research, can only be fully judged by

  17. Development of a composite large-size SiPM (assembled matrix) based modular detector cluster for MAGIC

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, A., E-mail: ahahn@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); Mazin, D., E-mail: mazin@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); Institute for Cosmic Ray Research, The University of Tokyo, 5-1-5 Kashiwa-no-Ha, Kashiwa City, Chiba 277–8582 (Japan); Bangale, P., E-mail: priya@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); Dettlaff, A., E-mail: todettl@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); Fink, D., E-mail: fink@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); Grundner, F., E-mail: grundner@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); Haberer, W., E-mail: haberer@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); Maier, R., E-mail: rma@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); and others

    2017-02-11

    The MAGIC collaboration operates two 17 m diameter Imaging Atmospheric Cherenkov Telescopes (IACTs) on the Canary Island of La Palma. Each of the two telescopes is currently equipped with a photomultiplier tube (PMT) based imaging camera. Due to advances in the development of Silicon Photomultipliers (SiPMs), they are becoming a widely used alternative to PMTs in many research fields, including gamma-ray astronomy. Within the Otto-Hahn group at the Max Planck Institute for Physics, Munich, we are developing a SiPM-based detector module for a possible upgrade of the MAGIC cameras and also for future experiments such as the Large-Sized Telescopes (LSTs) of the Cherenkov Telescope Array (CTA). Because of the small size of individual SiPM sensors (6 mm × 6 mm) with respect to the 1-inch diameter PMTs currently used in MAGIC, we use a custom-made matrix of SiPMs to cover the same detection area. We developed an electronic circuit to actively sum up and amplify the SiPM signals. Existing non-imaging hexagonal light concentrators (Winston cones) used in MAGIC have been modified for the angular acceptance of the SiPMs by using C++ based ray-tracing simulations. The first prototype detector module includes seven channels and was installed in the MAGIC camera in May 2015. We present the results of the first prototype and its performance as well as the status of the project, and discuss its challenges. - Highlights: • The design of the first SiPM large-size IACT pixel is described. • The simulation of the light concentrators is presented. • The temperature stability of the detector module is demonstrated. • The calibration procedure of the SiPM device in the field is described.

  18. A large set of potential past, present and future hydro-meteorological time series for the UK

    Directory of Open Access Journals (Sweden)

    B. P. Guillod

    2018-01-01

    Hydro-meteorological extremes such as drought and heavy precipitation can have large impacts on society and the economy. With potentially increasing risks associated with such events due to climate change, properly assessing the associated impacts and uncertainties is critical for adequate adaptation. However, the application of risk-based approaches often requires large sets of extreme events, which are not commonly available. Here, we present such a large set of hydro-meteorological time series for recent past and future conditions for the United Kingdom based on weather@home 2, a modelling framework consisting of a global climate model (GCM) driven by observed or projected sea surface temperature (SST) and sea ice, which is downscaled to 25 km over the European domain by a regional climate model (RCM). Sets of 100 time series are generated for each of (i) a historical baseline (1900–2006), (ii) five near-future scenarios (2020–2049) and (iii) five far-future scenarios (2070–2099). The five scenarios in each future time slice all follow the Representative Concentration Pathway 8.5 (RCP8.5) and sample the range of sea surface temperature and sea ice changes from CMIP5 (Coupled Model Intercomparison Project Phase 5) models. Validation of the historical baseline highlights good performance for temperature and potential evaporation, but substantial seasonal biases in mean precipitation, which are corrected using a linear approach. For extremes in low precipitation over a long accumulation period (> 3 months) and shorter-duration high precipitation (1–30 days), the time series generally represent past statistics well. Future projections show small precipitation increases in winter but large decreases in summer on average, leading to an overall drying, consistent with the most recent UK Climate Projections (UKCP09) but larger in magnitude. Both drought and high-precipitation events are projected to increase in frequency and
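
    The linear bias-correction step can be sketched as a simple mean-scaling of modelled precipitation toward the observed climatology. The values below are hypothetical; the study's actual correction coefficients are not reproduced here:

    ```python
    def linear_bias_correction(modelled, observed_mean):
        """Linear (scaling) bias correction: rescale modelled precipitation
        so that its mean matches the observed climatology, preserving the
        relative variability of the series."""
        model_mean = sum(modelled) / len(modelled)
        factor = observed_mean / model_mean
        return [p * factor for p in modelled]

    # Hypothetical monthly precipitation (mm) with a wet bias:
    model = [120.0, 95.0, 80.0, 105.0]          # modelled mean = 100 mm
    corrected = linear_bias_correction(model, observed_mean=80.0)
    # The corrected series keeps the month-to-month pattern but now has a
    # mean of 80 mm.
    ```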

  19. Setting up fuel supply strategies for large-scale bio-energy projects using agricultural and forest residues. A methodology for developing countries

    International Nuclear Information System (INIS)

    Junginger, M.

    2000-08-01

    The objective of this paper is to develop a coherent methodology for setting up fuel supply strategies for large-scale biomass-conversion units. This method explicitly takes into account risks and uncertainties regarding availability and costs in relation to time. The paper aims at providing general guidelines that are not country-specific. These guidelines cannot provide 'perfect fit' solutions, but aim to give general help in overcoming barriers and setting up supply strategies. It mainly focuses on residues from the agricultural and forestry sectors. This study focuses on electricity production or combined heat and power (CHP) production with plant scales between 10 and 40 MWe. This range is chosen due to economies of scale: in large-scale plants the benefits of increased efficiency outweigh increased transportation costs, allowing a lower price per kWh, which in turn may allow higher biomass costs. However, fuel-supply risks tend to get higher with increasing plant size, which makes it more important to assess them for larger conversion plants. Although the methodology does not focus on a specific conversion technology, it should be stressed that the technology must be able to handle a wide variety of biomass fuels with different characteristics, because many biomass residues are not available year-round and various fuels are needed for a constant supply. The methodology allows for comparing different technologies (with known investment and operation and maintenance costs from the literature) and evaluating different fuel supply scenarios. In order to demonstrate the methodology, a case study was carried out for the north-eastern part of Thailand (Isaan), an agricultural region. The research was conducted in collaboration with the Regional Wood Energy Development Programme in Asia (RWEDP), a project of the UN Food and Agriculture Organization (FAO) in Bangkok, Thailand. Section 2 of this paper presents the methodology. In Section 3 the economic

  20. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been developed recently in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimates of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
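The rejection-sampling flavour of approximate Bayesian computation that underlies approaches like PopSizeABC can be sketched in a few lines. The toy model below (a one-parameter Gaussian, with the sample mean standing in for the AFS/LD summaries, and made-up names throughout) illustrates the inference scheme only; it is not the authors' implementation:

```python
import random
import statistics

def summarize(sample):
    # One summary statistic (the sample mean) stands in for the AFS/LD summaries.
    return statistics.fmean(sample)

def rejection_abc(observed, n_draws, eps, rng):
    """Rejection ABC: keep prior draws whose simulated summary lies within eps of the observed one."""
    s_obs = summarize(observed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 10.0)                           # draw a candidate from a flat prior
        simulated = [rng.gauss(theta, 1.0) for _ in range(200)]  # simulate data under theta
        if abs(summarize(simulated) - s_obs) < eps:
            accepted.append(theta)                               # keep: simulation matches observation
    return accepted

rng = random.Random(0)
observed = [rng.gauss(5.0, 1.0) for _ in range(200)]  # data generated with "true" parameter 5.0
posterior = rejection_abc(observed, n_draws=5000, eps=0.1, rng=rng)
print(len(posterior), round(statistics.fmean(posterior), 1))
```

The accepted draws approximate the posterior; their mean should recover the generating parameter. PopSizeABC applies the same idea with coalescent simulations and multi-epoch population-size parameters in place of this toy model.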

  1. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  2. Algorithms for detecting and analysing autocatalytic sets.

    Science.gov (United States)

    Hordijk, Wim; Smith, Joshua I; Steel, Mike

    2015-01-01

Autocatalytic sets are considered to be fundamental to the origin of life. Prior theoretical and computational work on the existence and properties of these sets has relied on a fast algorithm for detecting self-sustaining autocatalytic sets in chemical reaction systems. Here, we introduce and apply a modified version and several extensions of the basic algorithm: (i) a modification aimed at reducing the number of calls to the computationally most expensive part of the algorithm, (ii) the application of a previously introduced extension of the basic algorithm to sample the smallest possible autocatalytic sets within a reaction network, together with a statistical test which provides a probable lower bound on the number of such smallest sets, (iii) the introduction and application of another extension of the basic algorithm to detect autocatalytic sets in a reaction system where molecules can also inhibit (as well as catalyse) reactions, and (iv) a further, more abstract, extension of the theory behind searching for autocatalytic sets. We find that: (i) the modified algorithm outperforms the original one in the number of calls to the computationally most expensive procedure, which, in some cases, also leads to a significant improvement in overall running time; (ii) our statistical test provides strong support for the existence of very large numbers (even millions) of minimal autocatalytic sets in a well-studied polymer model, where these minimal sets share about half of their reactions on average; and (iii) "uninhibited" autocatalytic sets can be found in reaction systems that allow inhibition, but their number and sizes depend on the level of inhibition relative to the level of catalysis. In short, improvements in overall running time when searching for autocatalytic sets can potentially be obtained with the modified algorithm, and the existence of large numbers of minimal autocatalytic sets can have important consequences for the possible evolvability of
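The basic detection algorithm referred to above is, in essence, a fixed-point reduction: repeatedly discard reactions that are not catalyzed by any molecule reachable from the food set, or whose reactants are not reachable; whatever survives is the maximal autocatalytic (RAF) subset. A minimal sketch on a toy network (the data layout and the example reactions are illustrative, not the authors' code):

```python
def closure(food, reactions):
    """Molecules reachable from the food set using the given reactions."""
    mols = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _catalysts in reactions:
            if set(reactants) <= mols and not set(products) <= mols:
                mols |= set(products)
                changed = True
    return mols

def max_raf(food, reactions):
    """Iteratively discard uncatalyzed or unsupported reactions; the fixed point is the maximal RAF set."""
    current = list(reactions)
    while True:
        mols = closure(food, current)
        kept = [r for r in current
                if set(r[0]) <= mols and any(c in mols for c in r[2])]
        if len(kept) == len(current):
            return kept
        current = kept

# Toy polymer-style network: reaction = (reactants, products, catalysts)
food = {"a", "b"}
reactions = [
    (["a", "b"], ["ab"], ["ab"]),    # self-catalyzed from food: part of the RAF
    (["ab", "a"], ["aba"], ["ab"]),  # supported and catalyzed: part of the RAF
    (["c"], ["cc"], ["ab"]),         # reactant 'c' unreachable: discarded
]
print(len(max_raf(food, reactions)))  # 2 reactions survive
```

The expensive step in practice is the repeated closure computation, which is exactly the part the paper's modification tries to call less often.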

  3. Efficient One-click Browsing of Large Trajectory Sets

    DEFF Research Database (Denmark)

    Krogh, Benjamin Bjerre; Andersen, Ove; Lewis-Kelham, Edwin

    2014-01-01

This paper presents a novel query type called sheaf, where users can browse trajectory data sets using a single mouse click. Sheaves are very versatile and can be used for location-based advertising, travel-time analysis, intersection analysis, and reachability analysis (isochrones). A novel in-memory trajectory index compresses the data by a factor of 12.4 and enables execution of sheaf queries in 40 ms. This is up to 2 orders of magnitude faster than existing work. We demonstrate the simplicity, versatility, and efficiency of sheaf queries using a real-world trajectory set consisting of 2.7 million…

  4. Experimental river delta size set by multiple floods and backwater hydrodynamics.

    Science.gov (United States)

    Ganti, Vamsi; Chadwick, Austin J; Hassenruck-Gudipati, Hima J; Fuller, Brian M; Lamb, Michael P

    2016-05-01

    River deltas worldwide are currently under threat of drowning and destruction by sea-level rise, subsidence, and oceanic storms, highlighting the need to quantify their growth processes. Deltas are built through construction of sediment lobes, and emerging theories suggest that the size of delta lobes scales with backwater hydrodynamics, but these ideas are difficult to test on natural deltas that evolve slowly. We show results of the first laboratory delta built through successive deposition of lobes that maintain a constant size. We show that the characteristic size of delta lobes emerges because of a preferential avulsion node-the location where the river course periodically and abruptly shifts-that remains fixed spatially relative to the prograding shoreline. The preferential avulsion node in our experiments is a consequence of multiple river floods and Froude-subcritical flows that produce persistent nonuniform flows and a peak in net channel deposition within the backwater zone of the coastal river. In contrast, experimental deltas without multiple floods produce flows with uniform velocities and delta lobes that lack a characteristic size. Results have broad applications to sustainable management of deltas and for decoding their stratigraphic record on Earth and Mars.
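The backwater zone invoked above has a widely used characteristic length scale, L ≈ h/S (bankfull flow depth divided by channel-bed slope), for Froude-subcritical coastal rivers. A quick calculation with illustrative lowland-river numbers (roughly Mississippi-like; not values from this experiment):

```python
def backwater_length_m(flow_depth_m, bed_slope):
    """Characteristic backwater length L ~ h / S for a Froude-subcritical river."""
    return flow_depth_m / bed_slope

# Illustrative numbers: h ~ 30 m, S ~ 6e-5 gives a backwater zone of order 500 km.
print(backwater_length_m(30.0, 6e-5) / 1000.0)  # ~500 km
```

This scale is what ties the avulsion node, and hence the lobe size, to the hydrodynamics rather than to sediment supply alone.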

  5. Fabrication of large size alginate beads for three-dimensional cell-cluster culture

    Science.gov (United States)

    Zhang, Zhengtao; Ruan, Meilin; Liu, Hongni; Cao, Yiping; He, Rongxiang

    2017-08-01

We fabricated large size alginate beads using a simple microfluidic device under a co-axial injection regime. This device was made by PDMS casting with a mold formed by small-diameter metal and polytetrafluoroethylene tubes. Droplets of 2% sodium alginate were generated in soybean oil through the device and then cross-linked in a 2% CaCl2 solution, which was mixed with Tween 80 at a concentration of 0.4 to 40% (w/v). Our results showed that the morphology of the produced alginate beads strongly depends on the Tween 80 concentration. With increasing Tween 80 concentration, the shape of the alginate beads varied from semi-spherical to tailed-spherical, due to the decrease of interface tension between the oil and the cross-linking solution. To assess the biocompatibility of the approach, MCF-7 cells were cultured with the alginate beads, showing the formation of cancer cell clusters which might be useful for future studies.

  6. Larval assemblages of large and medium-sized pelagic species in the Straits of Florida

    Science.gov (United States)

    Richardson, David E.; Llopiz, Joel K.; Guigand, Cedric M.; Cowen, Robert K.

    2010-07-01

Critical gaps in our understanding of the distributions, interactions, life histories and preferred habitats of large and medium-sized pelagic fishes severely constrain the implementation of ecosystem-based, spatially structured fisheries management approaches. In particular, spawning distributions and the environmental characteristics associated with the early life stages are poorly documented. In this study, we consider the diversity, assemblages, and associated habitat of the larvae of large and medium-sized pelagic species collected during 2 years of monthly surveys across the Straits of Florida. In total, 36 taxa and 14,295 individuals were collected, with the highest diversity occurring during the summer and in the western, frontal region of the Florida Current. Only a few species (e.g. Thunnus obesus, T. alalunga, Tetrapturus pfluegeri) considered for this study were absent. Small scombrids (e.g. T. atlanticus, Katsuwonus pelamis, Auxis spp.) and gempylids dominated the catch and were orders of magnitude more abundant than many of the rare species (e.g. Thunnus thynnus, Kajikia albida). Both constrained (CCA) and unconstrained (NMDS) multivariate analyses revealed a number of species groupings, including: (1) a summer Florida edge assemblage (e.g. Auxis spp., Euthynnus alletteratus, Istiophorus platypterus); (2) a summer offshore assemblage (e.g. Makaira nigricans, T. atlanticus, Ruvettus pretiosus, Lampris guttatus); (3) a ubiquitous assemblage (e.g. K. pelamis, Coryphaena hippurus, Xiphias gladius); and (4) a spring/winter assemblage that was widely dispersed in space (e.g. trachipterids). The primary environmental factors associated with these assemblages were sea-surface temperature (highest in summer-early fall), day length (highest in early summer), thermocline depth (shallowest on the Florida side) and fluorescence (highest on the Florida side). Overall, the results of this study provide insights into how a remarkable diversity of pelagic species

  7. Numerical modeling of deformation and vibrations in the construction of large-size fiberglass cooling tower fan

    Directory of Open Access Journals (Sweden)

    Fanisovich Shmakov Arthur

    2016-01-01

Full Text Available This paper presents the results of numerical modeling of deformation processes and the analysis of the fundamental frequencies of the structure of a large-size fiberglass cooling tower fan. The components of the stress-strain state of the structure were obtained on the basis of imported gas-dynamic and thermal loads, together with the fundamental vibration modes. Based on the analysis of the fundamental frequencies, design solutions were proposed to reduce the probability of failure under the action of aeroelastic forces.

  8. Localization Algorithm Based on a Spring Model (LASM for Large Scale Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shuai Li

    2008-03-01

Full Text Available A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and the time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected to neighbor nodes by virtual springs. The virtual springs force the particles to move from randomly set initial positions toward the true node positions. Therefore, a blind node position can be determined with the LASM algorithm by calculating the forces exerted by the neighbor nodes. The computational and communication complexity is O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optimization, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase of the network scale. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network scale.
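The spring-relaxation idea can be sketched compactly: treat each measured distance to an anchor neighbor as the rest length of a virtual Hooke spring and iteratively move the blind node along the net spring force until it settles. This is a rough illustration of the principle, not the paper's exact update rule or its three patches:

```python
import math

def spring_localize(anchors, distances, start, k=0.1, steps=2000):
    """Relax a blind node toward a position consistent with measured neighbor distances."""
    x, y = start
    for _ in range(steps):
        fx = fy = 0.0
        for (ax, ay), d in zip(anchors, distances):
            dx, dy = x - ax, y - ay
            r = math.hypot(dx, dy) or 1e-9
            # Hooke force along the spring, pulling/pushing toward rest length d.
            f = -k * (r - d)
            fx += f * dx / r
            fy += f * dy / r
        x += fx
        y += fy
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = (4.0, 3.0)
distances = [math.hypot(truth[0] - a[0], truth[1] - a[1]) for a in anchors]
est = spring_localize(anchors, distances, start=(8.0, 8.0))
print(round(est[0], 2), round(est[1], 2))  # converges near (4.0, 3.0)
```

Each update uses only the node's own neighbors, which is why the per-node cost stays O(1) as the network grows.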

  9. γ-Fe{sub 2}O{sub 3} by sol–gel with large nanoparticles size for magnetic hyperthermia application

    Energy Technology Data Exchange (ETDEWEB)

    Lemine, O.M., E-mail: leminej@yahoo.com [Physics Department, College of Sciences, Al Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh (Saudi Arabia); Omri, K. [Laboratory of Physics of Materials and Nanomaterials Applied at Environment (LaPhyMNE), Faculty of Sciences in Gabes, Gabes (Tunisia); Iglesias, M.; Velasco, V. [Instituto de Magnetismo Aplicado, UCM-ADIF-CSIC (Spain); Crespo, P.; Presa, P. de la [Instituto de Magnetismo Aplicado, UCM-ADIF-CSIC (Spain); Dpto. Física de Materiales, Universidad Complutense de Madrid (Spain); El Mir, L. [Physics Department, College of Sciences, Al Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh (Saudi Arabia); Laboratory of Physics of Materials and Nanomaterials Applied at Environment (LaPhyMNE), Faculty of Sciences in Gabes, Gabes (Tunisia); Bouzid, Houcine [Promising Centre for Sensors and Electronic Devices (PCSED), Najran University, P.O. Box 1988, Najran 11001 (Saudi Arabia); Laboratoire des Matériaux Ferroélectriques, Faculté des Sciences de Sfax, Route Soukra Km 3 5, B.P. 802, F-3018 Sfax (Tunisia); Yousif, A. [Department of Physics, College of Science, Sultan Qaboos University, P.O. Box 36, Code 123, Al Khoud (Oman); Al-Hajry, Ali [Promising Centre for Sensors and Electronic Devices (PCSED), Najran University, P.O. Box 1988, Najran 11001 (Saudi Arabia)

    2014-09-01

Highlights: • Iron oxide nanoparticles with different sizes are successfully synthesized using the sol–gel method. • The obtained nanoparticles are mainly composed of the maghemite phase (γ-Fe{sub 2}O{sub 3}). • A non-negligible coercive field suggests that the particles are ferromagnetic. • A mean heating efficiency of 30 W/g is obtained for the smallest particles at 110 kHz and 190 Oe. - Abstract: Iron oxide nanoparticles with different sizes are successfully synthesized using the sol–gel method. X-ray diffraction (XRD) and Mössbauer spectroscopy show that the obtained nanoparticles are mainly composed of the maghemite phase (γ-Fe{sub 2}O{sub 3}). XRD and transmission electron microscopy (TEM) results suggest that the nanoparticles have sizes ranging from 14 to 30 nm, which is indeed confirmed by large magnetic saturation and high blocking temperature. At room temperature, the observation of a non-negligible coercive field suggests that the particles are ferro/ferrimagnetic. The specific absorption rate (SAR) under an alternating magnetic field is investigated as a function of size, frequency and amplitude of the applied magnetic field. A mean heating efficiency of 30 W/g is obtained for the smallest particles at 110 kHz and 190 Oe, whereas a further increase of particle size does not significantly improve the heating efficiency.

  10. A Novel Read Scheme for Large Size One-Resistor Resistive Random Access Memory Array.

    Science.gov (United States)

    Zackriya, Mohammed; Kittur, Harish M; Chin, Albert

    2017-02-10

The major issue of RRAM is the uneven sneak path that limits the array size. For the first time, a record-large one-resistor (1R) RRAM array of 128x128 is realized, and even the worst-case array cells still show a good low-/high-resistive-state (LRS/HRS) current difference of 378 nA/16 nA, without using a selector device. The array has an extremely low read current of 9.7 μA owing to both the low-current RRAM device and circuit interaction, in which a novel and simple scheme, using a half-selected cell as a reference point together with a differential amplifier (DA), was implemented in the circuit design.

  11. The Phoenix series large scale LNG pool fire experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the potential safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) was conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame-height-to-fire-diameter ratios as a function of nondimensional heat release rate, for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.
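Flame-height-to-diameter ratios of the kind extrapolated here are commonly expressed as a power law in the nondimensional heat release rate Q*. The sketch below uses the textbook Heskestad-style coefficients (L/D = 3.7·Q*^(2/5) − 1.02), which are standard values and not necessarily those fitted from the Sandia data:

```python
def flame_height_over_diameter(q_star):
    """Heskestad-style correlation: L/D = 3.7 * Q*^(2/5) - 1.02."""
    return 3.7 * q_star ** 0.4 - 1.02

# The flame grows from under 2 to nearly 4 diameters tall as Q* goes from 0.5 to 2.
for q_star in (0.5, 1.0, 2.0):
    print(q_star, round(flame_height_over_diameter(q_star), 2))
```

The point of the reduced-scale burner tests is precisely to check whether such a correlation, fitted at small Q*, still holds at the very low nondimensional heat release rates typical of huge LNG pool fires.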

  12. Structure and properties of large-sized forged disk of alloy type KhN73MBTYu-VD(EhI 698-VD)

    International Nuclear Information System (INIS)

    Sudakov, V.S.

    1994-01-01

Investigation results are presented for the structure and mechanical properties of a serial large-sized forged disk 1100 mm in diameter produced of alloy type EhI 698-VD and tested after standard heat treatment and isothermal ageing at operating temperature. Chemical composition studies revealed no macroheterogeneity. In a central cross-section the macrostructure is free of pores, inclusions, delamination and variation in grain size. The metal of the disk possesses high values of long-term rupture strength and creep resistance at 650-700 deg C

  13. A Core Set Based Large Vector-Angular Region and Margin Approach for Novelty Detection

    Directory of Open Access Journals (Sweden)

    Jiusheng Chen

    2016-01-01

Full Text Available A large vector-angular region and margin (LARM) approach is presented for novelty detection based on imbalanced data. The key idea is to construct the largest vector-angular region in the feature space to separate normal training patterns and, meanwhile, to maximize the vector-angular margin between the surface of this optimal vector-angular region and the abnormal training patterns. In order to improve the generalization performance of LARM, the vector-angular distribution is optimized by maximizing the vector-angular mean and minimizing the vector-angular variance, which separates the normal and abnormal examples well. However, the inherent computation of the quadratic programming (QP) solver takes O(n³) training time and at least O(n²) space, which might be computationally prohibitive for large scale problems. Using a (1+ε)- and (1−ε)-approximation algorithm, the core set based LARM algorithm is proposed for fast training of the LARM problem. Experimental results based on imbalanced datasets have validated the favorable efficiency of the proposed approach in novelty detection.

  14. Early outcome in renal transplantation from large donors to small and size-matched recipients - a porcine experimental model

    DEFF Research Database (Denmark)

    Ravlo, Kristian; Chhoden, Tashi; Søndergaard, Peter

    2012-01-01

Kidney transplantation from a large donor to a small recipient, as in pediatric transplantation, is associated with an increased risk of thrombosis and DGF. We established a porcine model for renal transplantation from an adult donor to a small or size-matched recipient with a high risk of DGF and studied GFR, RPP using MRI, and markers of kidney injury within 10 h after transplantation. After induction of BD, kidneys were removed from ∼63-kg donors and kept in cold storage for ∼22 h until transplanted into small (∼15 kg, n = 8) or size-matched (n = 8) recipients. A reduction in GFR was observed in small recipients within 60 min after reperfusion. Interestingly, this was associated with a significant reduction in medullary RPP, while there was no significant change in the size-matched recipients. No difference was observed in urinary NGAL excretion between the groups. A significantly higher level…

  15. Vacuum system for applying reflective coatings on large-size optical components using the method of magnetron sputtering

    Science.gov (United States)

    Azerbaev, Alexander A.; Abdulkadyrov, Magomed A.; Belousov, Sergey P.; Ignatov, Aleksandr N.; Mukhammedzyanov, Timur R.

    2016-10-01

A vacuum system for the deposition of reflective coatings on large-size optical components up to 4.0 m in diameter, using the method of magnetron sputtering, was built at JSC LZOS. The technological process for deposition of a reflective Al coating with a protective SiO2 layer was designed and approved. After climatic tests, the lifetime of such a coating was estimated at 30 years. Coating-thickness uniformity of ±5% was achieved over the maximum diameter of 4.0 m.

  16. Song repertoire size correlates with measures of body size in Eurasian blackbirds

    DEFF Research Database (Denmark)

    Hesler, Nana; Mundry, Roger; Sacher, Thomas

    2012-01-01

In most oscine bird species males possess a repertoire of different song patterns. The size of these repertoires is assumed to serve as an honest signal of male quality. The Eurasian blackbird's (Turdus merula) song contains a large repertoire of different element types with a flexible song organisation. Here we investigated whether repertoire size in Eurasian blackbirds correlates with measures of body size, namely length of wing, 8th primary, beak and tarsus. So far, very few studies have investigated species with large repertoires and a flexible song organisation in this context. We found positive correlations, meaning that larger males had larger repertoires. Larger males may have better fighting abilities and, thus, advantages in territorial defence. Larger structural body size may also reflect better conditions during early development. Therefore, under the assumption that body size…

  17. A Characterization of Hypergraphs with Large Domination Number

    Directory of Open Access Journals (Sweden)

    Henning Michael A.

    2016-05-01

Full Text Available Let H = (V, E) be a hypergraph with vertex set V and edge set E. A dominating set in H is a subset of vertices D ⊆ V such that for every vertex v ∈ V \ D there exists an edge e ∈ E for which v ∈ e and e ∩ D ≠ ∅. The domination number γ(H) is the minimum cardinality of a dominating set in H. It is known [Cs. Bujtás, M.A. Henning and Zs. Tuza, Transversals and domination in uniform hypergraphs, European J. Combin. 33 (2012) 62-71] that for k ≥ 5, if H is a hypergraph of order n and size m with all edges of size at least k and with no isolated vertex, then γ(H) ≤ (n + ⌊(k − 3)/2⌋m)/⌊3(k − 1)/2⌋. In this paper, we apply a recent result of the authors on hypergraphs with large transversal number [M.A. Henning and C. Löwenstein, A characterization of hypergraphs that achieve equality in the Chvátal-McDiarmid Theorem, Discrete Math. 323 (2014) 69-75] to characterize the hypergraphs achieving equality in this bound.
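The bound can be checked directly on small instances by brute force. The sketch below (an illustration, not the paper's method) computes γ(H) for a 5-uniform hypergraph consisting of two disjoint edges of size 5 and compares it with the bound (n + ⌊(k − 3)/2⌋m)/⌊3(k − 1)/2⌋:

```python
from itertools import combinations

def is_dominating(D, vertices, edges):
    """D dominates H if every vertex outside D lies in an edge that meets D."""
    Ds = set(D)
    return all(v in Ds or any(v in e and Ds & set(e) for e in edges)
               for v in vertices)

def domination_number(vertices, edges):
    """Smallest dominating set size, by exhaustive search over subsets."""
    for size in range(len(vertices) + 1):
        for D in combinations(vertices, size):
            if is_dominating(D, vertices, edges):
                return size
    return len(vertices)

# 5-uniform hypergraph: two disjoint edges of size 5, so n = 10, m = 2, k = 5.
vertices = list(range(10))
edges = [tuple(range(5)), tuple(range(5, 10))]
gamma = domination_number(vertices, edges)
k, n, m = 5, len(vertices), len(edges)
bound = (n + ((k - 3) // 2) * m) / (3 * (k - 1) // 2)
print(gamma, bound)  # gamma = 2 meets the bound (10 + 2)/6 = 2.0 with equality
```

One vertex per edge is needed and suffices here, so γ(H) = 2, attaining the bound with equality; disjoint unions of single edges are exactly the kind of extremal example the characterization must capture.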

  18. Microstructural Control via Copious Nucleation Manipulated by In Situ Formed Nucleants: Large-Sized and Ductile Metallic Glass Composites.

    Science.gov (United States)

    Song, Wenli; Wu, Yuan; Wang, Hui; Liu, Xiongjun; Chen, Houwen; Guo, Zhenxi; Lu, Zhaoping

    2016-10-01

    A novel strategy to control the precipitation behavior of the austenitic phase, and to obtain large-sized, transformation-induced, plasticity-reinforced bulk metallic glass matrix composites, with good tensile properties, is proposed. By inducing heterogeneous nucleation of the transformable reinforcement via potent nucleants formed in situ, the characteristics of the austenitic phase are well manipulated. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. BACHSCORE. A tool for evaluating efficiently and reliably the quality of large sets of protein structures

    Science.gov (United States)

    Sarti, E.; Zamuner, S.; Cossio, P.; Laio, A.; Seno, F.; Trovato, A.

    2013-12-01

    In protein structure prediction it is of crucial importance, especially at the refinement stage, to score efficiently large sets of models by selecting the ones that are closest to the native state. We here present a new computational tool, BACHSCORE, that allows its users to rank different structural models of the same protein according to their quality, evaluated by using the BACH++ (Bayesian Analysis Conformation Hunt) scoring function. The original BACH statistical potential was already shown to discriminate with very good reliability the protein native state in large sets of misfolded models of the same protein. BACH++ features a novel upgrade in the solvation potential of the scoring function, now computed by adapting the LCPO (Linear Combination of Pairwise Orbitals) algorithm. This change further enhances the already good performance of the scoring function. BACHSCORE can be accessed directly through the web server: bachserver.pd.infn.it. Catalogue identifier: AEQD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEQD_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 130159 No. of bytes in distributed program, including test data, etc.: 24 687 455 Distribution format: tar.gz Programming language: C++. Computer: Any computer capable of running an executable produced by a g++ compiler (4.6.3 version). Operating system: Linux, Unix OS-es. RAM: 1 073 741 824 bytes Classification: 3. Nature of problem: Evaluate the quality of a protein structural model, taking into account the possible “a priori” knowledge of a reference primary sequence that may be different from the amino-acid sequence of the model; the native protein structure should be recognized as the best model. Solution method: The contact potential scores the occurrence of any given type of residue pair in 5 possible
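The knowledge-based idea behind a contact potential, scoring each residue-pair contact by a precomputed statistical preference and ranking models by the total, can be sketched as follows. The two-letter alphabet and the pair scores below are made up for illustration; BACH++'s actual residue classes, solvation term, and parameters differ:

```python
# Hypothetical pair potential (negative = favorable contact); not BACH++'s real parameters.
PAIR_POTENTIAL = {("H", "H"): -1.0, ("H", "P"): 0.2, ("P", "P"): 0.1}

def score_model(contacts):
    """Total knowledge-based score of a model; lower (more negative) ranks better."""
    return sum(PAIR_POTENTIAL[tuple(sorted(pair))] for pair in contacts)

# Contact lists extracted from two structural models of the same protein (toy data).
native_contacts = [("H", "H"), ("H", "H"), ("P", "P")]
decoy_contacts = [("P", "H"), ("H", "P"), ("P", "P")]

models = {"native": native_contacts, "decoy": decoy_contacts}
best = min(models, key=lambda name: score_model(models[name]))
print(best)  # the native contact set ranks best
```

A scoring function is useful for refinement exactly when, as here, the native contact pattern reliably outscores the decoys.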

  20. Sneaker Males Affect Fighter Male Body Size and Sexual Size Dimorphism in Salmon.

    Science.gov (United States)

    Weir, Laura K; Kindsvater, Holly K; Young, Kyle A; Reynolds, John D

    2016-08-01

    Large male body size is typically favored by directional sexual selection through competition for mates. However, alternative male life-history phenotypes, such as "sneakers," should decrease the strength of sexual selection acting on body size of large "fighter" males. We tested this prediction with salmon species; in southern populations, where sneakers are common, fighter males should be smaller than in northern populations, where sneakers are rare, leading to geographical clines in sexual size dimorphism (SSD). Consistent with our prediction, fighter male body size and SSD (fighter male∶female size) increase with latitude in species with sneaker males (Atlantic salmon Salmo salar and masu salmon Oncorhynchus masou) but not in species without sneakers (chum salmon Oncorhynchus keta and pink salmon Oncorhynchus gorbuscha). This is the first evidence that sneaker males affect SSD across populations and species, and it suggests that alternative male mating strategies may shape the evolution of body size.

  1. Value for money or making the healthy choice: the impact of proportional pricing on consumers' portion size choices.

    Science.gov (United States)

    Vermeer, Willemijn M; Alting, Esther; Steenhuis, Ingrid H M; Seidell, Jacob C

    2010-02-01

Large food portion sizes are determinants of a high caloric intake, especially if they have been made attractive through value size pricing (i.e. lower unit prices for large than for small portion sizes). The purpose of the two questionnaire studies reported in this article was to assess the impact of proportional pricing (i.e. removing beneficial prices for large sizes) on people's portion size choices of high caloric food and drink items. Both studies employed an experimental design with a proportional pricing condition and a value size pricing condition. Study 1 was conducted in a fast food restaurant (N = 150) and study 2 in a worksite cafeteria (N = 141). Three different food products (i.e. soft drink and chicken nuggets in study 1 and a hot meal in study 2) with corresponding prices were displayed in pictures in the questionnaire. Outcome measures were consumers' intended portion size choices. No main effects of pricing were found. However, when confronted with proportional pricing, overweight fast food restaurant visitors tended to be more likely to choose small portion sizes of chicken nuggets (OR = 4.31, P = 0.07) and were less likely to choose large soft drink sizes (OR = 0.07, P = 0.04). Among the general public, proportional pricing did not reduce consumers' size choices. However, pricing strategies can help overweight and obese consumers select appropriate portion sizes of soft drinks and high caloric snacks. More research in realistic settings, with actual behaviour as the outcome measure, is required.
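The difference between value size pricing and proportional pricing is simple arithmetic: proportional pricing holds the unit price constant across sizes, removing the discount for larger portions. With illustrative numbers (not the study's actual prices):

```python
def unit_price(price_eur, size_ml):
    """Price per millilitre."""
    return price_eur / size_ml

# Hypothetical soft-drink menu: size in ml, price in euros.
small_size, small_price = 300, 1.50
large_size, large_price = 500, 2.00   # value size pricing: cheaper per unit

# Proportional pricing removes the large-size discount: charge pro rata of the small size.
proportional_large_price = small_price / small_size * large_size

print(unit_price(small_price, small_size))  # 0.005 eur/ml for the small either way
print(unit_price(large_price, large_size))  # 0.004 eur/ml under value size pricing
print(proportional_large_price)             # 2.5 eur for the large under proportional pricing
```

Under value size pricing the large size costs 20% less per millilitre; proportional pricing raises the large-size price until that gap disappears, which is the manipulation tested in both studies.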

  2. Prospects for the domestic production of large-sized cast blades and vanes for industrial gas turbines

    Science.gov (United States)

    Kazanskiy, D. A.; Grin, E. A.; Klimov, A. N.; Berestevich, A. I.

    2017-10-01

    Russian experience in the production of large-sized cast blades and vanes for industrial gas turbines is analyzed for the past decades. It is noted that the production of small- and medium-sized blades and vanes made of Russian alloys using technologies for aviation, marine, and gas-pumping turbines cannot be scaled for industrial gas turbines. It is shown that, in order to provide manufacturability under large-scale casting from domestic nickel alloys, it is necessary to solve complex problems in changing their chemical composition, to develop new casting technologies and to optimize the heat treatment modes. An experience of PAO NPO Saturn in manufacturing the blades and vanes made of ChS88U-VI and IN738-LC foundry nickel alloys for the turbines of the GTE-110 gas turbine unit is considered in detail. Potentialities for achieving adopted target parameters for the mechanical properties of working blades cast from ChS88UM-VI modified alloy are established. For the blades made of IN738-LC alloy manufactured using the existing foundry technology, a complete compliance with the requirements of normative and technical documentation has been established. Currently, in Russia, the basis of the fleet of gas turbine plants is composed by foreign turbines, and, for the implementation of the import substitution program, one can use the positive experience of PAO NPO Saturn in casting blades from IN738-LC alloy based on a reverse engineering technique. A preliminary complex of studies of the original manufacturer's blades should be carried out, involving, first of all, the determination of geometric size using modern measurement methods as well as the studies on the chemical compositions of the used materials (base metal and protective coatings). 
Further, by verifying the constructed calculation models against the obtained data, one could choose available domestic materials that meet the operating conditions of the blades in terms of heat resistance and corrosion resistance.

  3. The evolution of body size in extant groups of North American freshwater fishes: speciation, size distributions, and Cope's rule.

    Science.gov (United States)

    Knouft, Jason H; Page, Lawrence M

    2003-03-01

    Change in body size within an evolutionary lineage over time has been under investigation since the synthesis of Cope's rule, which suggested that there is a tendency for mammals to evolve larger body size. Data from the fossil record have subsequently been examined for several other taxonomic groups to determine whether they also displayed an evolutionary increase in body size. However, we are not aware of any species-level study that has investigated the evolution of body size within an extant continental group. Data acquired from the fossil record and data derived from the evolutionary relationships of extant species are not similar, with each set exhibiting both strengths and weaknesses related to inferring evolutionary patterns. Consequently, expectation that general trends exhibited in the fossil record will correspond to patterns in extant groups is not necessarily warranted. Using phylogenetic relationships of extant species, we show that five of nine families of North American freshwater fishes exhibit an evolutionary trend of decreasing body size. These trends result from the basal position of large species and the more derived position of small species within families. Such trends may be caused by the invasion of small streams and subsequent isolation and speciation. This pattern, potentially influenced by size-biased dispersal rates and the high percentage of small streams in North America, suggests a scenario that could result in the generation of the size-frequency distribution of North American freshwater fishes.

  4. Large-sized SmBCO single crystals with Tc over 93 K grown in atmospheric ambient by crystal pulling

    CERN Document Server

    Yao Xin; Shiohara, Y

    2003-01-01

    Sm1+xBa2-xCu3Oz (SmBCO) single crystals were grown under atmospheric ambient by the top-seeded solution growth method. Inductively coupled plasma results indicate that there is negligible Sm substitution for Ba sites in the grown SmBCO crystals, although they crystallized from different Ba-Cu-O solvents with a wide composition range (Ba/Cu ratio of 0.5-0.6). As a result, these crystals show high superconducting critical transition temperatures (Tc) of over 93 K with a sharp transition width after oxygenation. A large-sized crystal with an a-b plane of 23 × 22 mm² and a c-axis length of 19 mm was obtained at a high growth rate of nearly 0.13 mm h⁻¹. In short, with more controllable thermodynamic parameters, SmBCO single crystals can readily achieve both large size and high superconducting properties. (rapid communication)

  5. New set-up for high-quality soft-X-ray absorption spectroscopy of large organic molecules in the gas phase

    Energy Technology Data Exchange (ETDEWEB)

    Holch, Florian; Huebner, Dominique [Universitaet Wuerzburg, Experimentelle Physik VII and Roentgen Research Center for Complex Materials (RCCM), Am Hubland, 97074 Wuerzburg (Germany)]; Fink, Rainer [Universitaet Erlangen-Nuernberg, ICMM and CENEM, Egerlandstrasse 3, 91058 Erlangen (Germany)]; Schoell, Achim, E-mail: achim.schoell@physik.uni-wuerzburg.de [Universitaet Wuerzburg, Experimentelle Physik VII and Roentgen Research Center for Complex Materials (RCCM), Am Hubland, 97074 Wuerzburg (Germany)]; Umbach, Eberhard [Karlsruhe Institute of Technology, 76021 Karlsruhe (Germany)]

    2011-11-15

    Highlights: • We present a new set-up for X-ray absorption (NEXAFS) spectroscopy of large molecules in the gas phase. • The cell has a confined volume and can be heated. • The spectra can be acquired quickly and are of very high quality with respect to signal-to-noise ratio and energy resolution. • This allows the analysis of fine spectroscopic details (e.g. solid-state effects, by comparing gas- and condensed-phase data). - Abstract: We present a new experimental set-up for the investigation of large (>128 amu) organic molecules in the gas phase by means of near-edge X-ray absorption fine structure spectroscopy in the soft X-ray range. Our approach uses a gas cell, which is sealed off against the surrounding vacuum and which can be heated above the sublimation temperature of the respective molecular compound. Using a confined volume rather than a molecular beam yields short acquisition times and intense signals due to the high molecular density, which can be tuned by the container temperature. In turn, the resulting spectra are of very high quality with respect to signal-to-noise ratio and energy resolution, which are the essential aspects for the analysis of fine spectroscopic details. Using the examples of ANQ, NTCDA, and PTCDA, specific challenges of gas-phase measurements on large organic molecules with high sublimation temperatures are addressed in detail with respect to the presented set-up, and possible ways to tackle them are outlined.

  6. Neonatal L-glutamine modulates anxiety-like behavior, cortical spreading depression, and microglial immunoreactivity: analysis in developing rats suckled on normal size- and large size litters.

    Science.gov (United States)

    de Lima, Denise Sandrelly Cavalcanti; Francisco, Elian da Silva; Lima, Cássia Borges; Guedes, Rubem Carlos Araújo

    2017-02-01

    In mammals, L-glutamine (Gln) can alter the glutamate-Gln cycle and consequently brain excitability. Here, we investigated in developing rats the effect of treatment with different doses of Gln on anxiety-like behavior, cortical spreading depression (CSD), and microglial activation expressed as Iba1 immunoreactivity. Wistar rats were suckled in litters of 9 and 15 pups (groups L9 and L15; respectively, normal-size and large-size litters). From postnatal days (P) 7-27, the animals received Gln per gavage (250, 500 or 750 mg/kg/day), vehicle (water), or no treatment (naive). At P28 and P30, we tested the animals, respectively, in the elevated plus maze and open field. At P30-35, we measured CSD parameters (velocity of propagation, amplitude, and duration). Fixative-perfused brains were processed for microglial immunolabeling with anti-Iba1 antibodies to analyze cortical microglia. Rats treated with Gln presented anxiolytic behavior and accelerated CSD propagation compared with the water and naive control groups. Furthermore, CSD velocity was higher (p litter sizes, and for microglial activation in the L15 groups. Besides confirming previous electrophysiological findings (CSD acceleration after Gln), our data demonstrate for the first time behavioral changes and microglial activation associated with early Gln treatment in developing animals, possibly operated via changes in brain excitability.

  7. UpSet: Visualization of Intersecting Sets

    Science.gov (United States)

    Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter

    2016-01-01

    Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
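The aggregation at the heart of UpSet can be illustrated with a minimal sketch: group the elements of several sets by their exclusive membership pattern and report the size of each non-empty intersection. The set names and contents below are made up; UpSet itself is a web-based visualization, not this code.

```python
from itertools import chain

sets = {
    "A": {1, 2, 3, 4},
    "B": {3, 4, 5},
    "C": {4, 5, 6},
}

universe = set(chain.from_iterable(sets.values()))

# Membership pattern -> elements having exactly that pattern
patterns = {}
for element in universe:
    pattern = frozenset(name for name, s in sets.items() if element in s)
    patterns.setdefault(pattern, set()).add(element)

# Sorting by intersection size mirrors UpSet's sortable matrix rows
for pattern, members in sorted(patterns.items(), key=lambda kv: -len(kv[1])):
    print(sorted(pattern), "->", len(members), "element(s)")
```

Because elements are grouped by their *exclusive* pattern, the counts partition the universe, which is what lets UpSet's matrix layout scale where Venn-style diagrams cannot.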

  8. Large-size, high-uniformity, random silver nanowire networks as transparent electrodes for crystalline silicon wafer solar cells.

    Science.gov (United States)

    Xie, Shouyi; Ouyang, Zi; Jia, Baohua; Gu, Min

    2013-05-06

    Metal nanowire networks are emerging as next-generation transparent electrodes for photovoltaic devices. We demonstrate the application of random silver nanowire networks as the top electrode on crystalline silicon wafer solar cells. The dependence of transmittance and sheet resistance on the surface coverage is measured. Superior optical and electrical properties are observed due to the large-size, highly uniform nature of these networks. When applying the nanowire networks to the solar cells with an optimized two-step annealing process, we achieved an enhancement of up to 19% in the energy conversion efficiency. Detailed analysis reveals that the enhancement is mainly caused by the improved electrical properties of the solar cells due to the silver nanowire networks. Our results indicate that this is a promising alternative transparent electrode technology for crystalline silicon wafer solar cells.

  9. Smaller sized reactors can be economically attractive

    International Nuclear Information System (INIS)

    Carelli, M.D.; Petrovic, B.; Mycoff, C.W.; Trucco, P.; Ricotti, M.E.; Locatelli, G.

    2007-01-01

    Smaller size reactors are going to be an important component of the worldwide nuclear renaissance. However, a misguided interpretation of the economy of scale would label these reactors as not economically competitive with larger plants because of their allegedly higher capital cost (dollar/kWe). The economy of scale applies only if the considered designs are similar, which is not the case here. This paper identifies and briefly discusses the various factors which, besides size (power produced), contribute to determining the capital cost of smaller reactors, and provides a preliminary evaluation for a few of these factors. When they are accounted for, in a set of realistic and comparable configurations, the final capital costs of small and large plants are practically equivalent. The IRIS reactor is used as the example of smaller reactors, but the analysis and conclusions are applicable to the whole spectrum of small nuclear plants. (authors)
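The economy-of-scale argument the paper pushes back on is the classical power-law scaling rule. A back-of-the-envelope sketch, with an assumed exponent and hypothetical costs, shows why naive scaling makes a small plant look expensive per kWe before the offsetting factors discussed above are counted.

```python
def scaled_capital_cost(cost_ref, power_ref, power, n=0.6):
    """Power-law scaling rule: C = C_ref * (P / P_ref) ** n (n assumed)."""
    return cost_ref * (power / power_ref) ** n

# Hypothetical large plant and a small plant naively scaled from it
large_mwe, large_cost = 1200.0, 4.8e9   # MWe, $
small_mwe = 300.0

small_cost = scaled_capital_cost(large_cost, large_mwe, small_mwe)
specific_large = large_cost / (large_mwe * 1000)   # $/kWe
specific_small = small_cost / (small_mwe * 1000)   # $/kWe

print(f"large: {specific_large:.0f} $/kWe; small under naive scaling: {specific_small:.0f} $/kWe")
```

With n < 1 the total cost falls more slowly than the power, so the small plant's specific cost comes out higher; the paper's point is that modularity, shorter construction, and series effects break the "similar designs" assumption behind this rule.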

  10. Envision: An interactive system for the management and visualization of large geophysical data sets

    Science.gov (United States)

    Searight, K. R.; Wojtowicz, D. P.; Walsh, J. E.; Pathi, S.; Bowman, K. P.; Wilhelmson, R. B.

    1995-01-01

    Envision is a software project at the University of Illinois and Texas A&M, funded by NASA's Applied Information Systems Research Project. It provides researchers in the geophysical sciences convenient ways to manage, browse, and visualize large observed or model data sets. Envision integrates data management, analysis, and visualization of geophysical data in an interactive environment. It employs commonly used standards in data formats, operating systems, networking, and graphics. It also attempts, wherever possible, to integrate with existing scientific visualization and analysis software. Envision has an easy-to-use graphical interface, distributed process components, and an extensible design. It is a public domain package, freely available to the scientific community.

  11. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  12. A fast BDD algorithm for large coherent fault trees analysis

    International Nuclear Information System (INIS)

    Jung, Woo Sik; Han, Sang Hoon; Ha, Jaejoo

    2004-01-01

    Although binary decision diagram (BDD) algorithms have been applied to large fault trees until quite recently, such trees cannot be solved efficiently in a short time, since the size of a BDD structure grows exponentially with the number of variables. Furthermore, the truncation of If-Then-Else (ITE) connectives by a probability or size limit, and the subsuming needed to delete subsets, could not be applied directly to the intermediate BDD structure under construction. This is the motivation for this work. This paper presents an efficient BDD algorithm for large coherent systems (the coherent BDD algorithm), in which truncation and subsuming are performed during the construction of the BDD structure. A set of new formulae developed in this study for the AND and OR operations between two ITE connectives of a coherent system makes it possible to delete subsets and truncate ITE connectives with a probability or size limit in the intermediate BDD structure under construction. By means of truncation and subsuming at every step of the calculation, large fault trees for coherent systems (coherent fault trees) are solved efficiently in a short time using less memory. Furthermore, with respect to the size of the BDD structure, the coherent BDD algorithm is much less sensitive to variable ordering than the conventional BDD algorithm
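The paper's formulae operate on ITE connectives; purely as an illustration of the same two ideas, truncation by a probability cutoff and subsuming applied during every gate operation, here is a sketch in the simpler cut-set representation. The tree, probabilities, and cutoff are made up.

```python
from itertools import product

# Basic-event probabilities and the cutoff are invented for illustration
P = {"a": 0.01, "b": 0.02, "c": 0.05, "d": 0.0001}
PROB_CUTOFF = 1e-5

def prob(cut):
    p = 1.0
    for event in cut:
        p *= P[event]
    return p

def minimize(cuts):
    """Subsuming: a cut set that contains another cut set is redundant."""
    return {c for c in cuts if not any(other < c for other in cuts)}

def truncate(cuts):
    """Truncation: drop cut sets whose probability is below the cutoff."""
    return {c for c in cuts if prob(c) >= PROB_CUTOFF}

def or_gate(*inputs):
    return truncate(minimize(set().union(*inputs)))

def and_gate(*inputs):
    combos = {frozenset().union(*combo) for combo in product(*inputs)}
    return truncate(minimize(combos))

leaf = lambda e: {frozenset([e])}

# TOP = a OR ((a OR b) AND (c OR d)):
# {a} subsumes {a, c} and {a, d}; {b, d} falls below the cutoff.
top = or_gate(leaf("a"),
              and_gate(or_gate(leaf("a"), leaf("b")),
                       or_gate(leaf("c"), leaf("d"))))
print(sorted(sorted(c) for c in top))  # [['a'], ['b', 'c']]
```

Applying `minimize` and `truncate` inside every gate, rather than once at the end, is the structural point the abstract makes: the intermediate representation never grows past what the limits allow.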

  13. SIproc: an open-source biomedical data processing platform for large hyperspectral images.

    Science.gov (United States)

    Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David

    2017-04-10

    There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
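The out-of-core idea can be sketched without a GPU: stream the file through fixed-size chunks so memory use stays bounded regardless of image size. The file layout (float32, band-interleaved pixels) and the sizes here are assumptions for illustration; SIproc additionally streams chunks through GPU kernels.

```python
import os
import struct
import tempfile

BANDS = 4
CHUNK_PIXELS = 1024  # pixels held in memory at once

def mean_spectrum(path, bands=BANDS, chunk_pixels=CHUNK_PIXELS):
    """Accumulate a per-band mean over the whole file, one chunk at a time."""
    sums = [0.0] * bands
    count = 0
    record = struct.Struct("<" + "f" * bands)
    with open(path, "rb") as fh:
        while True:
            buf = fh.read(record.size * chunk_pixels)
            if not buf:
                break
            for values in record.iter_unpack(buf):
                for i, v in enumerate(values):
                    sums[i] += v
                count += 1
    return [s / count for s in sums]

# Tiny demo file: 3 pixels x 4 bands of float32
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as fh:
    for px in range(3):
        fh.write(struct.pack("<4f", px, px + 1, px + 2, px + 3))
result = mean_spectrum(fh.name)
os.remove(fh.name)
print(result)
```

Only one chunk is resident at a time, so the same loop works whether the file is kilobytes or the hundreds of gigabytes the abstract describes.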

  14. Study on growth techniques and macro defects of large-size Nd:YAG laser crystal

    Science.gov (United States)

    Quan, Jiliang; Yang, Xin; Yang, Mingming; Ma, Decai; Huang, Jinqiang; Zhu, Yunzhong; Wang, Biao

    2018-02-01

    Large-size neodymium-doped yttrium aluminum garnet (Nd:YAG) single crystals were grown by the Czochralski method. The extinction ratio and wavefront distortion of the crystal were tested to determine the optical homogeneity. Moreover, under different growth conditions, the macro defects of inclusion, striations, and cracking in the as-grown Nd:YAG crystals were analyzed. Specifically, the inclusion defects were characterized using scanning electron microscopy and energy dispersive spectroscopy. The stresses of growth striations and cracking were studied via a parallel plane polariscope. These results demonstrate that improper growth parameters and temperature fields can enhance defects significantly. Thus, by adjusting the growth parameters and optimizing the thermal environment, high-optical-quality Nd:YAG crystals with a diameter of 80 mm and a total length of 400 mm have been obtained successfully.

  15. Treatment of severe pulmonary hypertension in the setting of the large patent ductus arteriosus.

    Science.gov (United States)

    Niu, Mary C; Mallory, George B; Justino, Henri; Ruiz, Fadel E; Petit, Christopher J

    2013-05-01

    Treatment of the large patent ductus arteriosus (PDA) in the setting of pulmonary hypertension (PH) is challenging. Left patent, the large PDA can result in irreversible pulmonary vascular disease. Occlusion, however, may lead to right ventricular failure for certain patients with severe PH. Our center has adopted a staged management strategy using medical management, noninvasive imaging, and invasive cardiac catheterization to treat PH in the presence of a large PDA. This approach determines the safety of ductal closure but also leverages medical therapy to create an opportunity for safe PDA occlusion. We reviewed our experience with this approach. Patients with both severe PH and PDAs were studied. PH treatment history and hemodynamic data obtained during catheterizations were reviewed. Repeat catheterizations, echocardiograms, and clinical status at latest follow-up were also reviewed. Seven patients had both PH and large, unrestrictive PDAs. At baseline, all patients had near-systemic right ventricular pressures. Nine catheterizations were performed. Two patients underwent 2 catheterizations each due to poor initial response to balloon test occlusion. Six of 7 patients exhibited subsystemic pulmonary pressures during test occlusion and underwent successful PDA occlusion. One patient did not undergo PDA occlusion. In follow-up, 2 additional catheterizations were performed after successful PDA occlusion for subsequent hemodynamic assessment. At the latest follow-up, the 6 patients who underwent PDA occlusion are well, with continued improvement in PH. Five patients remain on PH treatment. A staged approach to PDA closure for patients with severe PH is an effective treatment paradigm. Aggressive treatment of PH creates a window of opportunity for PDA occlusion, echocardiography assists in identifying the timing for closure, and balloon test occlusion during cardiac catheterization is critical in determining safety of closure. By safely eliminating the large PDA

  16. An efficient and novel computation method for simulating diffraction patterns from large-scale coded apertures on large-scale focal plane arrays

    Science.gov (United States)

    Shrekenhamer, Abraham; Gottesman, Stephen R.

    2012-10-01

    A novel and memory-efficient method for computing diffraction patterns produced on large-scale focal planes by large-scale coded apertures, at wavelengths where diffraction effects are significant, has been developed and tested. The scheme, readily implementable on portable computers, overcomes the memory limitations of present state-of-the-art simulation codes such as Zemax. The method consists of first calculating a set of reference complex field (amplitude and phase) patterns on the focal plane produced by a single (reference) central hole, extending to twice the focal plane array size, with one such pattern for each line-of-sight (LOS) direction and wavelength in the scene, and with the pattern amplitude corresponding to the square root of the spectral irradiance from each such LOS direction in the scene at selected wavelengths. Next, the set of reference patterns is transformed to generate pattern sets for the other holes. The transformation consists of a translational pattern shift corresponding to each hole's position offset, and an electrical phase shift corresponding to each hole's position offset and the incoming radiation's direction and wavelength. The set of complex patterns for each direction and wavelength is then summed coherently and squared for each detector to yield a set of power patterns unique to each direction and wavelength. Finally, the sets of power patterns are summed to produce the full-waveband diffraction pattern from the scene. With this tool researchers can now efficiently simulate diffraction patterns produced from scenes by large-scale coded apertures onto large-scale focal plane arrays, supporting the development and optimization of coded aperture masks and image reconstruction algorithms.
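A toy 1-D version may clarify the transformation steps: shift the reference pattern by each hole's offset, apply a per-hole phase factor, sum the fields coherently, and square for the detector power. The reference pattern, hole offsets, and phases below are invented, not taken from the method's actual geometry.

```python
import cmath

N = 8
# Complex reference pattern for a single central hole (made-up Gaussian)
reference = [cmath.exp(-((i - N // 2) ** 2) / 4.0) for i in range(N)]

def hole_pattern(offset, phase):
    """Translate the reference by `offset` samples and apply a phase factor."""
    shifted = [0j] * N
    for i in range(N):
        j = i + offset
        if 0 <= j < N:
            shifted[j] = reference[i] * cmath.exp(1j * phase)
    return shifted

holes = [(0, 0.0), (2, 0.7), (-1, -0.3)]  # (offset, phase) per hole, invented
field = [sum(hole_pattern(o, p)[i] for o, p in holes) for i in range(N)]
power = [abs(v) ** 2 for v in field]      # coherent sum, then square
print([round(x, 3) for x in power])
```

The memory saving comes from the fact that only the reference pattern is stored; every other hole's contribution is derived from it by a shift and a phase, never materialized all at once.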

  17. The research on electronic commerce security payment system based on set protocol

    Science.gov (United States)

    Guo, Hongliang

    2012-04-01

    With the rapid development of network technology, online transactions have become more and more common. In this paper, we first introduce the principles and the technical foundations of SET, and then analyze the process of designing a system based on the SET electronic-business procedure. On this basis, we design a payment system for electronic business. Such a system is not only of practical significance for large, medium-sized, and small enterprises, but also provides guidance for programmers and designers implementing Electronic Commerce (EC).

  18. WebViz:A Web-based Collaborative Interactive Visualization System for large-Scale Data Sets

    Science.gov (United States)

    Yuen, D. A.; McArthur, E.; Weiss, R. M.; Zhou, J.; Yao, B.

    2010-12-01

    WebViz is a web-based application designed to conduct collaborative, interactive visualizations of large data sets for multiple users, allowing researchers situated all over the world to utilize the visualization services offered by the University of Minnesota's Laboratory for Computational Sciences and Engineering (LCSE). This ongoing project has been built upon over the last 3 1/2 years. The motivation behind WebViz lies primarily in the need to parse through an increasing amount of data produced by the scientific community as a result of larger and faster multicore and massively parallel computers coming to the market, including the use of general purpose GPU computing. WebViz allows these large data sets to be visualized online by anyone with an account. The application allows users to save time and resources by visualizing data 'on the fly', wherever they may be located. By leveraging AJAX via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide users with a remote web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota. LCSE's custom hierarchical volume rendering software provides high-resolution visualizations on the order of 15 million pixels and has been employed for visualizing data primarily from simulations ranging from astrophysics to geophysical fluid dynamics. In the current version of WebViz, we have implemented a highly extensible back-end framework built around HTTP "server push" technology. The web application is accessible via a variety of devices including netbooks, iPhones, and other web- and javascript-enabled cell phones. Features in the current version include the ability for users to (1) securely log in, (2) launch multiple visualizations, (3) conduct collaborative visualization sessions, (4) delegate control aspects of a visualization to others, and (5) engage in collaborative chats with other users within the user interface.

  19. APPHi: Automated Photometry Pipeline for High Cadence Large Volume Data

    Science.gov (United States)

    Sánchez, E.; Castro, J.; Silva, J.; Hernández, J.; Reyes, M.; Hernández, B.; Alvarez, F.; García T.

    2018-04-01

    APPHi (Automated Photometry Pipeline) carries out aperture and differential photometry of TAOS-II project data. It is computationally efficient and can also be used with other astronomical wide-field image data. APPHi works with large volumes of data and handles both FITS and HDF5 formats. Due to the large number of stars that the software has to handle in an enormous number of frames, it is optimized to automatically find the best parameter values for carrying out the photometry, such as the mask size for the aperture, the size of the window for extracting a single star, and the count threshold for detecting a faint star. Although intended to work with TAOS-II data, APPHi can analyze any set of astronomical images and is a robust and versatile tool for performing stellar aperture and differential photometry.
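The core measurement named above, aperture photometry, can be sketched minimally; the frame, star position, sky level, and aperture radius are made up, and APPHi's automatic parameter search is not shown.

```python
def aperture_flux(frame, x0, y0, radius, sky=0.0):
    """Sum background-subtracted counts inside a circular aperture."""
    total, npix = 0.0, 0
    for y, row in enumerate(frame):
        for x, counts in enumerate(row):
            if (x - x0) ** 2 + (y - y0) ** 2 <= radius ** 2:
                total += counts - sky
                npix += 1
    return total, npix

# 5x5 frame: flat sky of 10 counts with one bright "star" at (2, 2)
frame = [[10] * 5 for _ in range(5)]
frame[2][2] = 110

flux, npix = aperture_flux(frame, 2, 2, 1.5, sky=10.0)
# Differential photometry would then divide `flux` by a reference star's flux
print(flux, npix)
```

The mask radius is exactly the kind of parameter APPHi tunes automatically: too small and star flux is clipped, too large and sky noise dominates.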

  20. Large capacity water and air bath calorimeters

    International Nuclear Information System (INIS)

    James, S.J.; Kasperski, P.W.; Renz, D.P.; Wetzel, J.R.

    1993-01-01

    EG&G Mound Applied Technologies has developed an 11 in. x 17 in. sample size water bath calorimeter and an 11 in. x 17 in. sample size air bath calorimeter, both of which operate in a servo-controlled mode. The water bath calorimeter has four air bath preconditioners to increase sample throughput, and the air bath calorimeter has two. The large capacity calorimeters and preconditioners were of a design unique to Mound, which brought about unique design challenges. Both large capacity systems calculate the optimum set temperature for each preconditioner, which is available to the operator. Each system is controlled by a personal computer under DOS, which allows the operator to download data to commercial software packages when the calorimeter is idle. Qualification testing yielded a one-standard-deviation precision of 0.6% for the 0.2 W to 3.0 W Pu-238 heat standard range in the water bath calorimeter and 0.3% for the 6.0 W to 20.0 W Pu-238 heat standard range in the air bath calorimeter

  1. Large size self-assembled quantum rings: quantum size effect and modulation on the surface diffusion.

    Science.gov (United States)

    Tong, Cunzhu; Yoon, Soon Fatt; Wang, Lijun

    2012-09-24

    We demonstrate experimentally submicron-sized self-assembled (SA) GaAs quantum rings (QRs) obtained via the quantum size effect (QSE). An ultrathin In0.1Ga0.9As layer of varying thickness is deposited on the GaAs to modulate the surface nucleus diffusion barrier, and then the SA QRs are grown. It is found that the density of QRs is affected significantly by the thickness of the inserted In0.1Ga0.9As, and that the diffusion barrier modulation acts mainly within the first five monolayers. The physical mechanism behind this is discussed. Further analysis shows that a decrease of about 160 meV in the diffusion barrier can be achieved, which allows SA QRs with a density as low as one QR per 6 μm². Finally, QRs with inner diameters of 438 nm and outer diameters of 736 nm were fabricated using the QSE.

  2. Phylogenetic relationships of hexaploid large-sized barbs (genus Labeobarbus, Cyprinidae) based on mtDNA data.

    Science.gov (United States)

    Tsigenopoulos, Costas S; Kasapidis, Panagiotis; Berrebi, Patrick

    2010-08-01

    The phylogenetic relationships among species of the Labeobarbus genus (Teleostei, Cyprinidae) which comprises large body-sized hexaploid taxa were inferred using complete cytochrome b mitochondrial gene sequences. Molecular data suggest two main evolutionary groups which roughly correspond to a Northern (Middle East and Northwest Africa) and a sub-Saharan lineage. The splitting of the African hexaploids from their Asian ancestors and their subsequent diversification on the African continent occurred in the Late Miocene, a period in which other cyprinins also invaded Africa and radiated in the Mediterranean region. Finally, systematic implications of these results to the taxonomic validity of genera or subgenera such as Varicorhinus, Kosswigobarbus, Carasobarbus and Capoeta are further discussed. Copyright 2010 Elsevier Inc. All rights reserved.

  3. Size-dependent reactivity of magnetite nanoparticles: a field-laboratory comparison

    Science.gov (United States)

    Swindle, Andrew L.; Elwood Madden, Andrew S.; Cozzarelli, Isabelle M.; Benamara, Mourad

    2014-01-01

    Logistic challenges make direct comparisons between laboratory- and field-based investigations into the size-dependent reactivity of nanomaterials difficult. This investigation sought to compare the size-dependent reactivity of nanoparticles in a field setting to a laboratory analog using the specific example of magnetite dissolution. Synthetic magnetite nanoparticles of three size intervals, ∼6 nm, ∼44 nm, and ∼90 nm were emplaced in the subsurface of the USGS research site at the Norman Landfill for up to 30 days using custom-made subsurface nanoparticle holders. Laboratory analog dissolution experiments were conducted using synthetic groundwater. Reaction products were analyzed via TEM and SEM and compared to initial particle characterizations. Field results indicated that an organic coating developed on the particle surfaces largely inhibiting reactivity. Limited dissolution occurred, with the amount of dissolution decreasing as particle size decreased. Conversely, the laboratory analogs without organics revealed greater dissolution of the smaller particles. These results showed that the presence of dissolved organics led to a nearly complete reversal in the size-dependent reactivity trends displayed between the field and laboratory experiments indicating that size-dependent trends observed in laboratory investigations may not be relevant in organic-rich natural systems.

  4. Optimal Siting and Sizing of Energy Storage System for Power Systems with Large-scale Wind Power Integration

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Huang, Shaojun

    2015-01-01

    This paper proposes algorithms for optimal siting and sizing of an Energy Storage System (ESS) for the operation planning of power systems with large-scale wind power integration. The ESS in this study aims to mitigate the wind power fluctuations during the interval between two rolling Economic Dispatches (EDs) in order to maintain generation-load balance. The charging and discharging of the ESS is optimized considering the operation cost of conventional generators, the capital cost of the ESS and transmission losses. The statistics from simulated system operations are then coupled to the planning process to determine the...

  5. Large-size deployable construction heated by solar irradiation in free space

    Science.gov (United States)

    Pestrenina, Irena; Kondyurin, Alexey; Pestrenin, Valery; Kashin, Nickolay; Naymushin, Alexey

    Large-size deployable construction in free space with subsequent direct curing was invented more than fifteen years ago (Briskman et al., 1997 and Kondyurin, 1998). It raised many scientific problems, one of which is the possibility of using solar energy to initiate the curing reaction. This paper investigates the curing process under solar irradiation during a space flight in Earth orbits. A rotation of the construction is considered; this motion can provide the optimal temperature distribution in the construction that is required for the polymerization reaction. The cylindrical construction of 80 m length with two hemispherical ends of 10 m radius is considered. The construction wall, a 10 mm carbon-fiber/epoxy-matrix composite, is irradiated by heat flux from the sun and radiates heat from the external surface according to the Stefan-Boltzmann law. The stage of the polymerization reaction is calculated as a function of temperature and time, based on laboratory experiments with certified composite materials for space exploitation. The curing kinetics of the composite is calculated for different-inclination Low Earth Orbits (300 km altitude) and Geostationary Earth Orbit (40000 km altitude). The results show that • the curing process depends strongly on the Earth orbit and the rotation of the construction; • an optimal flight orbit and rotation can be found to provide a thermal regime sufficient for the complete curing of the considered construction. The study is supported by RFBR grant No. 12-08-00970-a. 1. Briskman V., A. Kondyurin, K. Kostarev, V. Leontyev, M. Levkovich, A. Mashinsky, G. Nechitailo, T. Yudina, Polymerization in microgravity as a new process in space technology, Paper No IAA-97-IAA.12.1.07, 48th International Astronautical Congress, October 6-10, 1997, Turin, Italy. 2. Kondyurin A.V., Building the shells of large space stations by the polymerisation of epoxy composites in open space, Int. Polymer Sci. and Technol., v.25, N4
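    The radiative balance described above, solar heating against Stefan-Boltzmann emission, can be sketched numerically. The following is a hedged illustration, not the paper's thermal model: it computes the steady-state temperature of a sunlit surface, and the absorptivity and emissivity values (0.9 and 0.85) are illustrative assumptions, not numbers from the abstract.

```python
# Hedged sketch: radiative equilibrium temperature of a sunlit surface,
# balancing absorbed solar flux against Stefan-Boltzmann emission.
# Material properties below are illustrative assumptions, not from the paper.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_FLUX = 1361.0     # solar constant near Earth, W m^-2

def equilibrium_temperature(absorptivity: float, emissivity: float,
                            solar_flux: float = SOLAR_FLUX) -> float:
    """Steady-state T (K) where absorbed flux equals emitted flux:
    a*S = e*sigma*T^4  =>  T = (a*S / (e*sigma))**0.25
    """
    return (absorptivity * solar_flux / (emissivity * SIGMA)) ** 0.25

# Example: a dark composite surface (assumed a=0.9, e=0.85)
T = equilibrium_temperature(0.9, 0.85)
```

    In practice the temperature also depends on view factors, rotation, and internal conduction, which is why the paper resolves the orbit and spin explicitly.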

  6. Development of the quality control system of the readout electronics for the large size telescope of the Cherenkov Telescope Array observatory

    Science.gov (United States)

    Konno, Y.; Kubo, H.; Masuda, S.; Paoletti, R.; Poulios, S.; Rugliancich, A.; Saito, T.

    2016-07-01

    The Cherenkov Telescope Array (CTA) is the next-generation VHE γ-ray observatory, which will improve the currently available sensitivity by a factor of 10 in the range 100 GeV to 10 TeV. The array consists of different types of telescopes, called the large size telescope (LST), medium size telescope (MST) and small size telescope (SST). An LST prototype is currently being built and will be installed at the Observatorio Roque de los Muchachos on the island of La Palma, Canary Islands, Spain. The readout system for the LST prototype has been designed, and around 300 readout boards will be produced in the coming months. In this note we describe an automated quality control system able to measure basic performance parameters and quickly identify faulty boards.

  7. Considerations for Observational Research Using Large Data Sets in Radiation Oncology

    Energy Technology Data Exchange (ETDEWEB)

    Jagsi, Reshma, E-mail: rjagsi@med.umich.edu [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Bekelman, Justin E. [Departments of Radiation Oncology and Medical Ethics and Health Policy, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania (United States); Chen, Aileen [Department of Radiation Oncology, Harvard Medical School, Boston, Massachusetts (United States); Chen, Ronald C. [Department of Radiation Oncology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina (United States); Hoffman, Karen [Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Tina Shih, Ya-Chen [Department of Medicine, Section of Hospital Medicine, The University of Chicago, Chicago, Illinois (United States); Smith, Benjamin D. [Department of Radiation Oncology, Division of Radiation Oncology, and Department of Health Services Research, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Yu, James B. [Yale School of Medicine, New Haven, Connecticut (United States)

    2014-09-01

    The radiation oncology community has witnessed growing interest in observational research conducted using large-scale data sources such as registries and claims-based data sets. With the growing emphasis on observational analyses in health care, the radiation oncology community must possess a sophisticated understanding of the methodological considerations of such studies in order to evaluate evidence appropriately to guide practice and policy. Because observational research has unique features that distinguish it from clinical trials and other forms of traditional radiation oncology research, the International Journal of Radiation Oncology, Biology, Physics assembled a panel of experts in health services research to provide a concise and well-referenced review, intended to be informative for the lay reader, as well as for scholars who wish to embark on such research without prior experience. This review begins by discussing the types of research questions relevant to radiation oncology that large-scale databases may help illuminate. It then describes major potential data sources for such endeavors, including information regarding access and insights regarding the strengths and limitations of each. Finally, it provides guidance regarding the analytical challenges that observational studies must confront, along with discussion of the techniques that have been developed to help minimize the impact of certain common analytical issues in observational analysis. Features characterizing a well-designed observational study include clearly defined research questions, careful selection of an appropriate data source, consultation with investigators with relevant methodological expertise, inclusion of sensitivity analyses, caution not to overinterpret small but significant differences, and recognition of limitations when trying to evaluate causality. 
This review concludes that carefully designed and executed studies using observational data that possess these qualities hold

  8. Considerations for Observational Research Using Large Data Sets in Radiation Oncology

    International Nuclear Information System (INIS)

    Jagsi, Reshma; Bekelman, Justin E.; Chen, Aileen; Chen, Ronald C.; Hoffman, Karen; Tina Shih, Ya-Chen; Smith, Benjamin D.; Yu, James B.

    2014-01-01

    The radiation oncology community has witnessed growing interest in observational research conducted using large-scale data sources such as registries and claims-based data sets. With the growing emphasis on observational analyses in health care, the radiation oncology community must possess a sophisticated understanding of the methodological considerations of such studies in order to evaluate evidence appropriately to guide practice and policy. Because observational research has unique features that distinguish it from clinical trials and other forms of traditional radiation oncology research, the International Journal of Radiation Oncology, Biology, Physics assembled a panel of experts in health services research to provide a concise and well-referenced review, intended to be informative for the lay reader, as well as for scholars who wish to embark on such research without prior experience. This review begins by discussing the types of research questions relevant to radiation oncology that large-scale databases may help illuminate. It then describes major potential data sources for such endeavors, including information regarding access and insights regarding the strengths and limitations of each. Finally, it provides guidance regarding the analytical challenges that observational studies must confront, along with discussion of the techniques that have been developed to help minimize the impact of certain common analytical issues in observational analysis. Features characterizing a well-designed observational study include clearly defined research questions, careful selection of an appropriate data source, consultation with investigators with relevant methodological expertise, inclusion of sensitivity analyses, caution not to overinterpret small but significant differences, and recognition of limitations when trying to evaluate causality. 
This review concludes that carefully designed and executed studies using observational data that possess these qualities hold

  9. Embrittlement and decrease of apparent strength in large-sized ...

    Indian Academy of Sciences (India)

    In fact, the dimensional disparity between tensile stress σ ([F][L]⁻²) and ... they work only in a limited range. This is the case of the ... ACI 1992 American Concrete Institute: Building Code 318R-89 (Detroit: ACI Press). Bažant Z P 1984 Size ...

  10. The impact of image-size manipulation and sugar content on children's cereal consumption.

    Science.gov (United States)

    Neyens, E; Aerts, G; Smits, T

    2015-12-01

    Previous studies have demonstrated that portion sizes and food energy density influence children's eating behavior. However, the potential effects of the size of front-of-pack images of serving suggestions, and of sugar content, have not been tested. Using a mixed experimental design among young children, this study examines the effects of image-size manipulation and sugar content on cereal and milk consumption. Children poured and consumed significantly more cereal, and drank significantly more milk, when exposed to a larger image of the serving suggestion than when exposed to a smaller one. Sugar content showed no main effects. Nevertheless, cereal consumption differed significantly between small and large image sizes only when sugar content was low. An advantage of this study was the mundane setting in which the data were collected: a school's dining room instead of an artificial lab. Future studies should include a control condition with children eating by themselves, to reflect an even more natural context. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Size matters: the ethical, legal, and social issues surrounding large-scale genetic biobank initiatives

    Directory of Open Access Journals (Sweden)

    Klaus Lindgaard Hoeyer

    2012-04-01

    During the past ten years the complex ethical, legal and social issues (ELSI) typically surrounding large-scale genetic biobank research initiatives have been intensely debated in academic circles. In many ways genetic epidemiology has undergone a set of changes resembling what in physics has been called a transition into Big Science. This article outlines the consequences of this transition and suggests that the change in scale implies challenges to the roles of scientists and public alike. An overview of key issues is presented, and it is argued that biobanks represent not just scientific endeavors with purely epistemic objectives, but also political projects with social implications. As such, they demand clever maneuvering among social interests to succeed.

  12. Very Large-Scale Neighborhoods with Performance Guarantees for Minimizing Makespan on Parallel Machines

    NARCIS (Netherlands)

    Brueggemann, T.; Hurink, Johann L.; Vredeveld, T.; Woeginger, Gerhard

    2006-01-01

    We study the problem of minimizing the makespan on m parallel machines. We introduce a very large-scale neighborhood of exponential size (in the number of machines) that is based on a matching in a complete graph. The idea is to partition the jobs assigned to the same machine into two sets. This
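    The abstract describes neighborhood moves that re-partition the jobs of machines. As a rough illustration of the kind of move such a neighborhood is built from (a brute-force sketch under simplifying assumptions, not the authors' exponential-size, matching-based neighborhood): pool the jobs of two machines and re-split them to reduce the larger load.

```python
# Illustrative sketch, not the paper's algorithm: a two-machine "split" move
# that pools the jobs of two machines and tries every 2-way partition of the
# pool, keeping the one that minimizes the larger machine load.
from itertools import combinations

def makespan(assignment):
    """assignment: list of lists of job processing times, one list per machine."""
    return max(sum(jobs) for jobs in assignment)

def best_two_machine_split(jobs_a, jobs_b):
    """Brute-force re-partition of the pooled jobs of two machines;
    only sensible for small job counts."""
    pool = jobs_a + jobs_b
    best = (jobs_a, jobs_b)
    best_val = max(sum(jobs_a), sum(jobs_b))
    n = len(pool)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            s = set(subset)
            left = [pool[i] for i in range(n) if i in s]
            right = [pool[i] for i in range(n) if i not in s]
            val = max(sum(left), sum(right))
            if val < best_val:
                best, best_val = (left, right), val
    return best

# Example: two unbalanced machines; the best split lowers the pair's makespan
a, b = best_two_machine_split([8, 7, 6], [1, 1])
```

    The paper's contribution is searching an exponentially large set of such re-partitions efficiently via a matching, rather than by enumeration as here.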

  13. Computed Tomographic Window Setting for Bronchial Measurement to Guide Double-Lumen Tube Size.

    Science.gov (United States)

    Seo, Jeong-Hwa; Bae, Jinyoung; Paik, Hyesun; Koo, Chang-Hoon; Bahk, Jae-Hyon

    2018-04-01

    The bronchial diameter measured on computed tomography (CT) can be used to guide double-lumen tube (DLT) sizes objectively. The bronchus is known to be measured most accurately in the so-called bronchial CT window. The authors investigated whether using the bronchial window results in the selection of more appropriately sized DLTs than using the other windows. CT image analysis and prospective randomized study. Tertiary hospital. Adults receiving left-sided DLTs. The authors simulated the selection of DLT sizes based on the left bronchial diameters measured in the lung (width 1,500 Hounsfield units [HU] and level -700 HU), bronchial (width 1,000 HU and level -450 HU), and mediastinal (width 400 HU and level 25 HU) CT windows. Furthermore, patients were randomly assigned to undergo imaging with either the bronchial or mediastinal window to guide DLT sizes. Using the underwater seal technique, the authors assessed whether the DLT was appropriately sized, undersized, or oversized for the patient. On 130 CT images, the bronchial diameter (9.9 ± 1.2 mm v 10.5 ± 1.3 mm v 11.7 ± 1.3 mm) and the selected DLT size differed across the lung, bronchial, and mediastinal windows, respectively. In the randomized study, oversized tubes were chosen less frequently in the bronchial window than in the mediastinal window (6/110 v 23/111; risk ratio 0.38; 95% CI 0.19-0.79; p = 0.003). No tubes were undersized after measurements in these two windows. Bronchial measurement in the bronchial window guided more appropriately sized DLTs than the lung or mediastinal windows. Copyright © 2017 Elsevier Inc. All rights reserved.
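    The window settings quoted above are (width, level) pairs in Hounsfield units. A small sketch of the standard conversion from a display window to the HU interval it maps onto the grayscale range (the helper function is hypothetical, not from the study):

```python
# Sketch: a CT display window of given width centered at a given level shows
# HU values in [level - width/2, level + width/2]; values outside clip to
# black or white.
def window_to_hu_range(width: float, level: float):
    half = width / 2.0
    return (level - half, level + half)

# The three windows compared in the study above:
windows = {
    "lung":        window_to_hu_range(1500, -700),
    "bronchial":   window_to_hu_range(1000, -450),
    "mediastinal": window_to_hu_range(400, 25),
}
```

    The narrower, lower-centered bronchial window concentrates contrast around the air-wall interface, which is why the bronchial lumen is measured most accurately there.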

  14. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering

    KAUST Repository

    Hadwiger, Markus; Al-Awami, Ali K.; Beyer, Johanna; Agus, Marco; Pfister, Hanspeter

    2017-01-01

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.

  15. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering

    KAUST Repository

    Hadwiger, Markus

    2017-08-28

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
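    The core idea of leaping over empty space using a precomputed per-pixel ray segment list can be sketched as follows. This is a minimal, hedged illustration of the concept only, not the SparseLeap implementation (no GPU rasterization, hierarchy, or compositing):

```python
# Conceptual sketch: given a per-pixel list of non-empty ray segments (the
# output of SparseLeap's rasterization stage), the ray caster samples only
# inside those segments and leaps over everything else without hierarchy
# traversal.
def cast_ray(segments, sample, step=1.0):
    """segments: sorted, disjoint (t_start, t_end) non-empty intervals along
    the ray. sample(t) returns a scalar; accumulation stands in for the real
    compositing done during ray-casting."""
    acc = 0.0
    for t0, t1 in segments:
        t = t0
        while t < t1:          # sample only inside non-empty segments
            acc += sample(t) * step
            t += step
    return acc

# Example: a ray that is empty except for two short segments
total = cast_ray([(2.0, 4.0), (10.0, 11.0)], sample=lambda t: 1.0)
```

    Only three samples are taken here; the long empty stretch between t = 4 and t = 10 costs nothing, which is the point of moving empty-space detection out of the ray-casting stage.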

  16. Some problems raised by the operation of large nuclear turbo-generator sets. Automatic control system for steam turbo-generator units

    International Nuclear Information System (INIS)

    Cecconi, F.

    1976-01-01

    The design of an appropriate automatic system proved useful for improving the control of large turbo-generator units, providing easy and efficient control and monitoring. The manufacturer's experience with these turbo-generator units allowed a system well suited to this function to be designed.

  17. The effect of size and competition on tree growth rate in old-growth coniferous forests

    Science.gov (United States)

    Das, Adrian

    2012-01-01

    Tree growth and competition play central roles in forest dynamics. Yet models of competition often neglect important variation in species-specific responses. Furthermore, functions used to model changes in growth rate with size do not always allow for potential complexity. Using a large data set from old-growth forests in California, models were parameterized relating growth rate to tree size and competition for four common species. Several functions relating growth rate to size were tested. Competition models included parameters for tree size, competitor size, and competitor distance. Competitive strength was allowed to vary by species. The best ranked models (using Akaike’s information criterion) explained between 18% and 40% of the variance in growth rate, with each species showing a strong response to competition. Models indicated that relationships between competition and growth varied substantially among species. The results also suggested that the relationship between growth rate and tree size can be complex and that how we model it can affect not only our ability to detect that complexity but also whether we obtain misleading results. In this case, for three of four species, the best model captured an apparent and unexpected decline in potential growth rate for the smallest trees in the data set.
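    The model ranking above uses Akaike's information criterion. A minimal sketch of how AIC trades goodness of fit against parameter count for least-squares models (the numbers below are invented purely for illustration, not from the study):

```python
# Sketch: AIC for a Gaussian least-squares fit, up to an additive constant:
# AIC = n * ln(RSS / n) + 2k, where RSS is the residual sum of squares,
# n the number of observations and k the number of fitted parameters.
import math

def aic_least_squares(rss: float, n: int, k: int) -> float:
    return n * math.log(rss / n) + 2 * k

# Example: a richer growth model halves the RSS at the cost of two extra
# parameters, and is still preferred (lower AIC).
simple = aic_least_squares(rss=80.0, n=50, k=3)
complex_ = aic_least_squares(rss=40.0, n=50, k=5)
```

    The penalty term 2k is what keeps a flexible size-growth function from being favored merely for having more parameters.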

  18. Size-resolved particle emission factors for individual ships

    Science.gov (United States)

    Jonsson, Åsa M.; Westerlund, Jonathan; Hallquist, Mattias

    2011-07-01

    In these experiments, size-resolved emission factors for particle number (EFPN) and mass (EFPM) were determined for 734 individual ship passages under real-world dilution. The method is extractive sampling of the passing ship plumes, in which particle number/mass and CO2 were measured with high time resolution (1 Hz). The measurements were conducted on a small island located at the entrance to the port of Gothenburg (N57.6849, E11.838), the largest harbor in Scandinavia. This is an emission control area (ECA) and in close vicinity to populated areas. The average EFPN and EFPM were 2.55 ± 0.11 × 10¹⁶ (kg fuel)⁻¹ and 2050 ± 110 mg (kg fuel)⁻¹, respectively. The EFs determined for ships with multiple passages showed good reproducibility. Size-resolved EFPN peaked at small particle sizes, ~35 nm. A gas-turbine-equipped ship produced smaller particle sizes, and hence less mass, than diesel-engine-equipped ships. On average, 36 to 46% of the emitted particles by number were non-volatile, and 24% by mass (EFPN 1.16 ± 0.19 × 10¹⁶ [kg fuel]⁻¹ and EFPM 488 ± 73 mg [kg fuel]⁻¹, respectively). This study shows great potential for building large data sets of ship-emission parameters that can improve current dispersion modeling for health assessments on local and regional scales. Using this extensive data set from an ECA, the global contributions of total and non-volatile particle mass from shipping were estimated to be at least 0.80 Tg y⁻¹ and 0.19 Tg y⁻¹.
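    A hedged sketch of the plume-ratio idea implied by measuring particle number together with CO2: the background-subtracted particle count is ratioed to the background-subtracted CO2 mass in the same plume, then scaled by the CO2 emitted per kilogram of fuel. The carbon-content figure below is a typical assumption for marine fuel, not a value reported in the study.

```python
# Hedged sketch of a plume-ratio emission factor calculation (illustrative,
# not the paper's exact data processing).
CO2_PER_KG_FUEL = 3.2  # kg CO2 per kg fuel, assuming ~87% carbon content

def ef_particle_number(excess_particles: float, excess_co2_kg: float) -> float:
    """Emission factor in particles per kg fuel, from plume excesses
    (background-subtracted and integrated over the plume passage)."""
    return excess_particles / excess_co2_kg * CO2_PER_KG_FUEL

# Illustrative plume: 1e13 excess particles per 1.25 g of excess CO2
ef = ef_particle_number(1.0e13, 1.25e-3)
```

    With these invented plume numbers the result lands near the study's average EFPN, which is only meant to show the order of magnitude the method produces.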

  19. Size scaling of static friction.

    Science.gov (United States)

    Braun, O M; Manini, Nicola; Tosatti, Erio

    2013-02-22

    Sliding friction across a thin soft lubricant film typically occurs by stick slip, the lubricant fully solidifying at stick, yielding and flowing at slip. The static friction force per unit area preceding slip is known from molecular dynamics (MD) simulations to decrease with increasing contact area. That makes the large-size fate of stick slip unclear and unknown; its possible vanishing is important as it would herald smooth sliding with a dramatic drop of kinetic friction at large size. Here we formulate a scaling law of the static friction force, which for a soft lubricant is predicted to decrease as f_m + Δf/A^γ for increasing contact area A, with γ > 0. Our main finding is that the value of f_m, controlling the survival of stick slip at large size, can be evaluated by simulations of comparably small size. MD simulations of soft lubricant sliding are presented, which verify this theory.
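    The stated scaling law f(A) = f_m + Δf/A^γ can be illustrated numerically: the sketch below generates synthetic data with known parameters and recovers f_m by least squares. All parameter values are arbitrary illustrations, not numbers from the paper.

```python
# Sketch: for each candidate exponent γ, the law f = f_m + Δf * A**(-γ) is
# linear in (f_m, Δf), so it can be solved by ordinary least squares; the γ
# with the smallest residual is kept.
import numpy as np

def fit_scaling(A, f, gammas):
    best = None
    for g in gammas:
        X = np.column_stack([np.ones_like(A), A ** (-g)])
        coef, res, *_ = np.linalg.lstsq(X, f, rcond=None)
        r = float(((X @ coef - f) ** 2).sum())
        if best is None or r < best[0]:
            best = (r, g, coef[0], coef[1])
    return best[1], best[2], best[3]  # gamma, f_m, delta_f

# Synthetic data with known parameters: f_m = 0.5, Δf = 2.0, γ = 0.75
A = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
f = 0.5 + 2.0 * A ** (-0.75)
gamma, f_m, df = fit_scaling(A, f, gammas=[0.25, 0.5, 0.75, 1.0])
```

    The fitted f_m is the quantity the authors argue controls whether stick slip survives at large contact areas.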

  20. Judging a book by its cover: the unconscious influence of pupil size on consumer choice.

    Science.gov (United States)

    Wiseman, Richard; Watt, Caroline

    2010-01-01

    Past research suggests that men perceive women with large pupils as especially attractive. We employed an innovative methodology to examine whether this effect influences consumer decision-making. A popular psychology book was published with two slightly different front covers. Both covers contained the same photograph of a woman; however, the woman's pupils on one cover were digitally enlarged. Readers indicated whether they were male or female, and whether they possessed the cover with small or large pupils. A significantly greater percentage of men than women had chosen the cover with the large pupils. None of the participants who attempted to guess the nature of the experiment was correct, suggesting that the influence exerted by pupil size was unconscious. These findings provide further support for the notion that people's judgments are unconsciously swayed by pupil size, and demonstrate that this effect operates in a real world setting.

  1. Investigating the Variability in Cumulus Cloud Number as a Function of Subdomain Size and Organization using large-domain LES

    Science.gov (United States)

    Neggers, R.

    2017-12-01

    Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to be able to conceptually and practically deal with this situation. A potential way forward are schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include i) the introduction of scale-awareness at the foundation of the scheme, and ii) the possibility of size-filtering of parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape and its range, but also the variability in cloud number that can appear due to i) subsampling effects and ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a certain size is found to decrease with the subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes. This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, which favor or inhibit their formation (as illustrated by the attached figure of the cloud mask).
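    The inverse-linear dependence of relative cloud-number fluctuations on subdomain size can be reproduced in an idealized setting: for clouds placed completely at random (Poisson) in a 2D domain, std(N)/mean(N) over square subdomains of side L scales as 1/L. This sketch assumes pure random placement, i.e. no organization, which is exactly the baseline on top of which organization adds variability.

```python
# Sketch under an idealized assumption: randomly placed "clouds" in a unit
# domain; relative count fluctuation over an n x n grid of subdomains should
# roughly halve when the subdomain side doubles (inverse-linear dependence).
import random

random.seed(0)
N_CLOUDS = 40000
xs = [random.random() for _ in range(N_CLOUDS)]
ys = [random.random() for _ in range(N_CLOUDS)]

def relative_std_of_count(n_bins: int) -> float:
    """std(count)/mean(count) over an n_bins x n_bins grid of subdomains."""
    counts = [[0] * n_bins for _ in range(n_bins)]
    for x, y in zip(xs, ys):
        i = min(int(x * n_bins), n_bins - 1)
        j = min(int(y * n_bins), n_bins - 1)
        counts[i][j] += 1
    flat = [c for row in counts for c in row]
    mean = sum(flat) / len(flat)
    var = sum((c - mean) ** 2 for c in flat) / len(flat)
    return var ** 0.5 / mean

r1 = relative_std_of_count(20)   # subdomain side 1/20
r2 = relative_std_of_count(10)   # subdomain side 1/10, twice as large
```

    The ratio r1/r2 comes out near 2 for this unorganized field; in the LES analysis above, organization raises the small-size variability above this baseline.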

  2. Meet the parents? Family size and the geographic proximity between adult children and older mothers in Sweden.

    Science.gov (United States)

    Holmlund, Helena; Rainer, Helmut; Siedler, Thomas

    2013-06-01

    The aim of this study is to estimate the causal effect of family size on the proximity between older mothers and adult children by using a large administrative data set from Sweden. Our main results show that adult children in Sweden are not constrained by sibship size in choosing where to live: for families with more than one child, sibship size does not affect child-mother proximity. For aging parents, however, having fewer children reduces the probability of having at least one child living nearby, which is likely to have consequences for the intensity of intergenerational contact and eldercare.

  3. Common Noctule Bats Are Sexually Dimorphic in Migratory Behaviour and Body Size but Not Wing Shape.

    Directory of Open Access Journals (Sweden)

    M Teague O'Mara

    Within the large order of bats, sexual size dimorphism measured by forearm length and body mass is often female-biased. Several studies have explained this through the effects on load carrying during pregnancy, intrasexual competition, as well as the fecundity and thermoregulation advantages of increased female body size. We hypothesized that wing shape should differ along with size and be under variable selection pressure in a species where there are large differences in flight behaviour. We tested whether load carrying, sex-differential migration, or reproductive advantages of large females affect size and wing shape dimorphism in the common noctule (Nyctalus noctula), in which females are typically larger than males and only females migrate long distances each year. We tested for univariate and multivariate size and shape dimorphism using data sets derived from wing photos and biometric data collected during pre-migratory spring captures in Switzerland. Females had forearms that are on average 1% longer than males and are 1% heavier than males after emerging from hibernation, but we found no sex differences in other size, shape, or other functional characters in any wing parameters during this pre-migratory period. Female-biased size dimorphism without wing shape differences indicates that reproductive advantages of big mothers are most likely responsible for sexual dimorphism in this species, not load compensation or shape differences favouring aerodynamic efficiency during pregnancy or migration. Despite large behavioural and ecological sex differences, morphology associated with a specialized feeding niche may limit potential dimorphism in narrow-winged bats such as common noctules, and the dramatic differences in migratory behaviour may then be accomplished through plasticity in wing kinematics.

  4. Comparison of Property-Oriented Basis Sets for the Computation of Electronic and Nuclear Relaxation Hyperpolarizabilities.

    Science.gov (United States)

    Zaleśny, Robert; Baranowska-Łączkowska, Angelika; Medveď, Miroslav; Luis, Josep M

    2015-09-08

    -induced coordinates. On the other hand, the aug-cc-pVDZ and ORP basis sets overestimate the property in question only by roughly 30%. In this study, we also propose a low-cost composite treatment of anharmonicity that relies on the combination of two basis sets, i.e., a large-sized basis set is employed to determine lowest-order derivatives with respect to the field-induced coordinates, and a medium-sized basis set is used to compute the higher-order derivatives. The results of calculations performed at the MP2 level of theory demonstrate that this approximate scheme is very successful at predicting nuclear relaxation hyperpolarizabilities.

  5. Helium ion distributions in a 4 kJ plasma focus device by 1 mm-thick large-size polycarbonate detectors

    Science.gov (United States)

    Sohrabi, M.; Habibi, M.; Ramezani, V.

    2014-11-01

    Helium ion beam profile, angular and iso-ion beam distributions in the 4 kJ Amirkabir plasma focus (APF) device were effectively observed with the unaided eye and studied in single 1 mm-thick, large-diameter (20 cm) polycarbonate track detectors (PCTDs). The PCTDs were processed by 50 Hz-HV electrochemical etching using a large-size ECE chamber. The results show that helium ions produced in the APF device have a ring-shaped angular distribution peaked at an angle of ~±60° with respect to the top of the anode. Some information on helium ion energy and distributions is also provided. The method is highly effective for ion beam studies.

  6. Cybele: a large size ion source of module construction for Tore-Supra injector

    International Nuclear Information System (INIS)

    Simonin, A.; Garibaldi, P.

    2005-01-01

    A 70 keV, 40 A hydrogen beam injector has been developed at Cadarache for plasma diagnostic purposes (MSE diagnostic and charge exchange) on the Tore-Supra Tokamak. This injector operates daily with a large-size ion source (called Pagoda) which does not completely fulfill all the requirements of the present experiment. As a consequence, the development of a new ion source (called Cybele) has been underway, whose objective is to meet a high proton rate (>80%) and a current density of 160 mA/cm² within 5% uniformity over the whole extraction surface for long-shot operation (from 1 to 100 s). Moreover, the main particularity of Cybele is its modular construction: it is composed of five source modules vertically juxtaposed, with a special orientation which fits the curved extraction surface of the injector; this curvature ensures a geometrical focalization of the neutral beam 7 m downstream in the Tore-Supra chamber. Cybele will be tested first in positive ion production for the Tore-Supra injector, and afterward in negative ion production mode; its modular concept could be advantageous for ensuring plasma uniformity over the large extraction surface (about 1 m²) of the ITER neutral beam injector. A module prototype (called the Drift Source) has already been developed and optimized in the laboratory both for positive and negative ion production, where it has met the ITER ion source requirements in terms of D⁻ current density (200 A/m²), source pressure (0.3 Pa), uniformity and arc efficiency (0.015 A D⁻/kW). (authors)

  7. PACOM: A Versatile Tool for Integrating, Filtering, Visualizing, and Comparing Multiple Large Mass Spectrometry Proteomics Data Sets.

    Science.gov (United States)

    Martínez-Bartolomé, Salvador; Medina-Aunon, J Alberto; López-García, Miguel Ángel; González-Tejedo, Carmen; Prieto, Gorka; Navajas, Rosana; Salazar-Donate, Emilio; Fernández-Costa, Carolina; Yates, John R; Albar, Juan Pablo

    2018-04-06

    Mass-spectrometry-based proteomics has evolved into a high-throughput technology in which numerous large-scale data sets are generated from diverse analytical platforms. Furthermore, several scientific journals and funding agencies have emphasized the storage of proteomics data in public repositories to facilitate its evaluation, inspection, and reanalysis. (1) As a consequence, public proteomics data repositories are growing rapidly. However, tools are needed to integrate multiple proteomics data sets to compare different experimental features or to perform quality control analysis. Here, we present a new Java stand-alone tool, Proteomics Assay COMparator (PACOM), that is able to import, combine, and simultaneously compare numerous proteomics experiments to check the integrity of the proteomic data as well as verify data quality. With PACOM, the user can detect sources of error that may have been introduced in any step of a proteomics workflow and that influence the final results. Data sets can be easily compared and integrated, and data quality and reproducibility can be visually assessed through a rich set of graphical representations of proteomics data features as well as a wide variety of data filters. Its flexibility and easy-to-use interface make PACOM a unique tool for daily use in a proteomics laboratory. PACOM is available at https://github.com/smdb21/pacom.

  8. HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.

    Science.gov (United States)

    Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J

    2016-06-03

    Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu .

  9. Out-coupling membrane for large-size organic light-emitting panels with high efficiency and improved uniformity

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Lei, E-mail: dinglei@sust.edu.cn [College of Electrical and Information Engineering, Shaanxi University of Science and Technology, Xi’an, Shaanxi 710021 (China)]; Wang, Lu-Wei [College of Electrical and Information Engineering, Shaanxi University of Science and Technology, Xi’an, Shaanxi 710021 (China)]; Zhou, Lei, E-mail: zhzhlei@gmail.com [Faculty of Mathematics and Physics, Huaiyin Institute of Technology, Huai’an 223003 (China)]; Zhang, Fang-hui [College of Electrical and Information Engineering, Shaanxi University of Science and Technology, Xi’an, Shaanxi 710021 (China)]

    2016-12-15

    Highlights: • An out-coupling membrane embedded with a scattering film of SiO{sub 2} spheres and polyethylene terephthalate (PET) plastic was successfully developed for 150 × 150 mm{sup 2} OLEDs. • Remarkable enhancement in efficiency was achieved from the OLEDs with the out-coupling membrane. • The uniformity of the large-size green OLED (GOLED) lighting panel is remarkably improved. - Abstract: An out-coupling membrane embedded with a scattering film of SiO{sub 2} spheres and polyethylene terephthalate (PET) plastic was successfully developed for 150 × 150 mm{sup 2} green OLEDs. Compared with a reference OLED panel, an approximately 1-fold enhancement in the forward emission was obtained with an out-coupling membrane adhered to the surface of the external glass substrate of the panel. Moreover, it was verified that the emission color at different viewing angles can be stabilized without apparent spectral distortion. In particular, the uniformity of the large-area OLEDs was greatly improved. Theoretical calculation clarified that the improved performance of the lighting panels is primarily attributed to the effect of particle scattering.

  10. The definition of basic parameters of the set of small-sized equipment for preparation of dry mortar for various applications

    Directory of Open Access Journals (Sweden)

    Emelyanova Inga

    2017-01-01

    Full Text Available Based on the conducted information retrieval and review of the scientific literature, unsolved issues have been identified in the process of preparing dry construction mixtures at a construction site. The designs of existing technological complexes for the production of dry construction mixtures are considered, and their main drawbacks are identified with regard to application at a construction site. On the basis of the conducted research, designs of technological sets of small-sized equipment for the preparation of dry construction mixtures at the construction site are proposed. It was found that the basis for the proposed technological kits is a set of new designs of concrete mixers operating in cascade mode. A technique for calculating the main parameters of the technological sets of equipment is proposed, depending on the base machine of the kit.

  11. Blocking sets in Desarguesian planes

    NARCIS (Netherlands)

    Blokhuis, A.; Miklós, D.; Sós, V.T.; Szönyi, T.

    1996-01-01

    We survey recent results concerning the size of blocking sets in Desarguesian projective and affine planes, and implications of these results, and of the techniques used to prove them, for related problems, such as the size of maximal partial spreads, small complete arcs, and small strong representative systems.

  12. Set of CAMAC modules on the base of large integrated circuits for an accelerator synchronization system

    International Nuclear Information System (INIS)

    Glejbman, Eh.M.; Pilyar, N.V.

    1986-01-01

    Parameters of functional modules in the CAMAC standard developed for an accelerator synchronization system are presented. They comprise BZN-8K and BZ-8K digital delay circuits, a timing circuit and a pulse selection circuit. Each module uses three large-scale integrated circuits of the KR 580 VI53 programmable-timer type, circuits interfacing the system bus with the crate bus, data-recording control circuits, two peripheral storage devices, initial-state setting circuits, input and output shapers, and circuits for setting and removing blocking in the channels.

  13. On the Relationship between Pollen Size and Genome Size

    Directory of Open Access Journals (Sweden)

    Charles A. Knight

    2010-01-01

    Full Text Available Here we test whether genome size is a predictor of pollen size. If it were, inferences of ancient genome size would be possible using the abundant paleo-palynological record. We performed regression analyses of pollen width against genome size across 464 species. We found a significant positive trend. However, regression analysis using phylogenetically independent contrasts did not support the correlated evolution of these traits. Instead, a large split between angiosperms and gymnosperms for both pollen width and genome size was revealed. Sister taxa were not more likely to show a positive contrast when compared to deeper nodes. However, significantly more congeneric species had a positive trend than expected by chance. These results may reflect the strong selection pressure for pollen to be small. Also, because pollen grains are not metabolically active when measured, their biology differs from that of other cells whose size has been shown to be strongly related to genome size, such as guard cells. Our findings contrast with previously published research. It was our hope that pollen size could be used as a proxy for inferring the genome size of ancient species. However, our results suggest pollen is not a good candidate for such endeavors.

  14. Implementation of Lifestyle Modification Program Focusing on Physical Activity and Dietary Habits in a Large Group, Community-Based Setting

    Science.gov (United States)

    Stoutenberg, Mark; Falcon, Ashley; Arheart, Kris; Stasi, Selina; Portacio, Francia; Stepanenko, Bryan; Lan, Mary L.; Castruccio-Prince, Catarina; Nackenson, Joshua

    2017-01-01

    Background: Lifestyle modification programs improve several health-related behaviors, including physical activity (PA) and nutrition. However, few of these programs have been expanded to impact a large number of individuals in one setting at one time. Therefore, the purpose of this study was to determine whether a PA- and nutrition-based lifestyle…

  15. Missing portion sizes in FFQ

    DEFF Research Database (Denmark)

    Køster-Rasmussen, Rasmus; Siersma, Volkert Dirk; Halldorson, Thorhallur I.

    2015-01-01

    k-nearest neighbours (KNN) were compared with a reference based on self-reported portion sizes (quantified by a photographic food atlas embedded in the FFQ). Setting: The Danish Health Examination Survey 2007–2008. Subjects: The study included 3728 adults with complete portion size data. Results: Compared
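The KNN imputation strategy named in this record can be sketched generically: impute a missing portion size as the mean portion among the k respondents most similar on known covariates. Everything below (field names, the single `energy` covariate, the Euclidean distance) is illustrative, not the study's actual implementation.

```python
def knn_impute_portion(target, donors, k=3):
    """Impute a missing portion size for `target` (a dict of known
    covariates) as the mean portion among the k nearest donors.
    Covariate names and the distance metric are illustrative."""
    def dist(donor):
        # Euclidean distance over the covariates present in `target`.
        return sum((target[key] - donor[key]) ** 2 for key in target) ** 0.5
    nearest = sorted(donors, key=dist)[:k]
    return sum(d['portion'] for d in nearest) / k

# Hypothetical donors with complete portion-size data.
donors = [
    {'energy': 1.0, 'portion': 100.0},
    {'energy': 2.0, 'portion': 110.0},
    {'energy': 10.0, 'portion': 300.0},
    {'energy': 11.0, 'portion': 310.0},
]
```

A target resembling the first two donors is imputed from their portions; one resembling the last two is imputed from theirs.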

  16. Portion size

    Science.gov (United States)

    ... of cards One 3-ounce (84 grams) serving of fish is a checkbook One-half cup (40 grams) ... for the smallest size. By eating a small hamburger instead of a large, you will save about 150 calories. ...

  17. A Domain-Specific Language for Regular Sets of Strings and Trees

    DEFF Research Database (Denmark)

    Schwartzbach, Michael Ignatieff; Klarlund, Nils

    1999-01-01

    We propose a new high-level programming notation, called FIDO, that we have designed to concisely express regular sets of strings or trees. In particular, it can be viewed as a domain-specific language for the expression of finite-state automata on large alphabets (of sometimes astronomical size......, called the Monadic Second-order Logic (M2L) on trees. FIDO is translated first into pure M2L via suitable encodings, and finally into finite-state automata through the MONA tool.
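Notations like FIDO ultimately denote finite-state automata. As a generic illustration (not FIDO's or MONA's actual representation), membership of a string in a regular set can be decided by a table-driven DFA:

```python
def dfa_accepts(transitions, start, accepting, string):
    """Run a deterministic finite automaton: `transitions` maps
    (state, symbol) -> state; missing entries reject the string."""
    state = start
    for symbol in string:
        key = (state, symbol)
        if key not in transitions:
            return False
        state = transitions[key]
    return state in accepting

# Regular set: binary strings with an even number of 1s.
# States: 'e' = even count so far, 'o' = odd count so far.
even_ones = {('e', '0'): 'e', ('e', '1'): 'o',
             ('o', '0'): 'o', ('o', '1'): 'e'}
```

For example, "1001" (two 1s) is accepted from start state 'e' with accepting set {'e'}, while "10" is not.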

  18. Size matter!

    DEFF Research Database (Denmark)

    Hansen, Pelle Guldborg; Jespersen, Andreas Maaløe; Skov, Laurits Rhoden

    2015-01-01

    trash bags according to size of plates and weighed in bulk. Results Those eating from smaller plates (n=145) left significantly less food to waste (avg. 14.8 g) than participants eating from standard plates (n=75) (avg. 20 g), amounting to a reduction of 25.8%. Conclusions Our field experiment tests...... the hypothesis that a decrease in the size of food plates may lead to significant reductions in food waste from buffets. It supports and extends the set of circumstances in which a recent experiment found that reduced dinner plates in a hotel chain led to reduced quantities of leftovers.

  19. Large superconducting conductors and joints for fusion magnets: From conceptual design to test at full size scale

    International Nuclear Information System (INIS)

    Ciazynski, D.; Duchateau, J.L.; Decool, P.; Libeyre, P.; Turck, B.

    2001-01-01

    A new kind of superconducting conductor, using the so-called cable-in-conduit concept, is emerging, mainly in fusion activity. It is to be noted that at the present time no large Nb3Sn magnet in the world is operating using this concept. The difficulty of this technology, which has now been studied for 20 years, is that it has to integrate major advances in multiple interconnected new fields such as: large numbers (1000) of superconducting strands, high-current conductors (50 kA), forced-flow cryogenics, Nb3Sn technology, low-loss conductors in pulsed operation, high-current connections, high-voltage insulation (10 kV), and economical and industrial feasibility. CEA was deeply involved during the last 10 years in this development, which took place in the frame of the NET and ITER technological programs. One major milestone was reached in 1998-1999 with the successful tests by our Association of three full-size conductor and connection samples in the Sultan facility (Villigen, Switzerland). (author)

  20. Chemical Characterization and Source Apportionment of Size Fractionated Atmospheric Aerosols, and, Evaluating Student Attitudes and Learning in Large Lecture General Chemistry Classes

    Science.gov (United States)

    Allen, Gregory Harold

    Chemical speciation and source apportionment of size-fractionated atmospheric aerosols were investigated using laser desorption time-of-flight mass spectrometry (LD TOF-MS), and source apportionment was carried out using carbon-14 accelerator mass spectrometry (14C AMS). Sample collection was carried out using the Davis Rotating-drum Unit for Monitoring impact analyzer in Davis, Colfax, and Yosemite, CA. Ambient atmospheric aerosols collected during the winters of 2010/11 and 2011/12 showed a significant difference in the types of compounds found in the small and large particle size ranges. The difference was due to the increased number of oxidized carbon species found in the small particle size ranges but not in the large ones. Overall, the ambient atmospheric aerosols collected during the winter in Davis, CA had an average fraction modern of F14C = 0.753 +/- 0.006, indicating that the majority of the size-fractionated particles originated from biogenic sources. Samples collected during the King Fire in Colfax, CA were used to determine the contribution of biomass burning (wildfire) aerosols. Factor analysis was used to reduce the ions found in the LD TOF-MS analysis of the King Fire samples. The final factor analysis generated a total of four factors that explained an overall 83% of the variance in the data set. Two of the factors correlated heavily with increased smoke events during the sample period. The increased smoke events produced a large number of highly oxidized organic aerosols (OOA2) and aromatic compounds that are indicative of biomass-burning organic aerosols (WBOA). The signal intensities of the factors generated from the King Fire data were investigated in samples collected in Yosemite and Davis, CA to look at the impact of biomass burning on ambient atmospheric aerosols. In both comparison sample collections, the OOA2 and WBOA factors increased during biomass burning events located near the sampling sites.
The correlation

  1. Beauty, body size and wages: Evidence from a unique data set.

    Science.gov (United States)

    Oreffice, Sonia; Quintana-Domeque, Climent

    2016-09-01

    We analyze how attractiveness rated at the start of the interview in the German General Social Survey is related to weight, height, and body mass index (BMI), separately by gender and accounting for interviewers' characteristics or fixed effects. We show that height, weight, and BMI all strongly contribute to male and female attractiveness when attractiveness is rated by opposite-sex interviewers, and that anthropometric characteristics are irrelevant to male interviewers when assessing male attractiveness. We also assess whether, controlling for beauty, body size measures are related to hourly wages. We find that anthropometric attributes play a significant role in wage regressions in addition to attractiveness, showing that body size cannot be dismissed as a simple component of beauty. Our findings are robust to controlling for health status and accounting for selection into working. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Preparation and validation of a large size dried spike: Batch SAL-9951

    International Nuclear Information System (INIS)

    Doubek, N.; Jammet, G.; Zoigner, A.

    1991-02-01

    To determine uranium and plutonium concentrations using isotope dilution mass spectrometry, weighed aliquands of a synthetic mixture containing about 2 mg of Pu (with a 239Pu abundance of about 98%) and 37 mg of U (with a 235U enrichment of about 19%) have been prepared by IAEA-SAL and verified by three analytical laboratories: NMCC-SAL, OEFZS, and IAEA-SAL; they will be used to spike samples of concentrated spent fuel solutions with a high burnup and a low 235U enrichment. Certified Reference Materials Pu-NBL-126, natural U-NBL-112A and 93% enriched U-NBL-116 were used to prepare a stock solution containing about 3.2 mg/ml of Pu and 64.3 mg/ml of 18.7% enriched U. Before shipment to the reprocessing plant, aliquands of the stock solution are dried to give Large Size Dried (LSD) Spikes which resist the shocks encountered during transportation, so that they can readily be recovered quantitatively at the plant. This paper describes the preparation and validation of a fifth batch of LSD spike, which is intended to be used as a common spike by the plant operator and the national and IAEA inspectorates. 7 refs, 6 tabs

  3. DNMT1 is associated with cell cycle and DNA replication gene sets in diffuse large B-cell lymphoma.

    Science.gov (United States)

    Loo, Suet Kee; Ab Hamid, Suzina Sheikh; Musa, Mustaffa; Wong, Kah Keng

    2018-01-01

    Dysregulation of DNA (cytosine-5)-methyltransferase 1 (DNMT1) is associated with the pathogenesis of various types of cancer. It has previously been shown that DNMT1 is frequently expressed in diffuse large B-cell lymphoma (DLBCL); however, its functions in the disease remain to be elucidated. In this study, we compared the gene expression profiles (GEPs) of a germinal center B-cell-like DLBCL (GCB-DLBCL)-derived cell line (HT) treated with shRNA targeting DNMT1 (shDNMT1) against HT cells treated with non-silencing (control) shRNA. Independent gene set enrichment analysis (GSEA) performed using the GEPs of shRNA-treated HT cells and of primary GCB-DLBCL cases derived from two publicly-available datasets (i.e. GSE10846 and GSE31312) produced three separate lists of enriched gene sets for each gene set collection from the Molecular Signatures Database (MSigDB). Subsequent Venn analysis identified 268, 145 and six consensus gene sets from analyzing gene sets in the C2 collection (curated gene sets), the C5 sub-collection [gene sets from gene ontology (GO) biological process ontology] and the Hallmark collection, respectively, to be enriched in positive correlation with DNMT1 expression profiles in shRNA-treated HT cells and in the GSE10846 and GSE31312 datasets [false discovery rate (FDR) < 0.05]. Cell cycle and DNA replication genes were strongly positively correlated (r > 0.8) with DNMT1 expression and significantly downregulated (log fold-change < -1.35; p < 0.05) following DNMT1 silencing in HT cells. These results suggest the involvement of DNMT1 in the activation of cell cycle and DNA replication in DLBCL cells. Copyright © 2017 Elsevier GmbH. All rights reserved.

  4. Chunking of Large Multidimensional Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Rotem, Doron; Otoo, Ekow J.; Seshadri, Sridhar

    2007-02-28

    Data-intensive scientific computations as well as on-line analytical processing applications are done on very large datasets that are modeled as k-dimensional arrays. The storage organization of such arrays on disks is done by partitioning the large global array into fixed-size hyper-rectangular sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic workloads on real life data sets, show that our chunking is much more efficient than the existing approximate solutions.
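The chunk-shape question can be made concrete with the standard approximation that a query of extent q placed uniformly along an axis whose chunks have extent c touches about (q - 1)/c + 1 chunks. The brute-force search below is a sketch under that approximation, not the paper's steepest-descent or geometric-programming solution: it picks, among chunk shapes of a fixed volume, the one minimizing the expected number of chunks a typical query retrieves.

```python
from itertools import product
from math import prod

def expected_chunks(query, chunk):
    """Expected number of chunks a uniformly placed query touches:
    along each axis, a query of length q over chunks of length c
    overlaps (q - 1) / c + 1 chunks on average."""
    return prod((q - 1) / c + 1 for q, c in zip(query, chunk))

def best_chunk_shape(query, volume, candidates):
    """Brute-force the chunk shape of a given volume (elements per
    chunk) that minimizes the expected chunks retrieved per query."""
    shapes = [s for s in product(candidates, repeat=len(query))
              if prod(s) == volume]
    return min(shapes, key=lambda s: expected_chunks(query, s))
```

For a workload dominated by 64 x 4 queries and 256-element chunks, the search favors chunks elongated along the long query axis, matching the intuition that chunk shape should follow query shape.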

  5. Plant Size and Competitive Dynamics along Nutrient Gradients.

    Science.gov (United States)

    Goldberg, Deborah E; Martina, Jason P; Elgersma, Kenneth J; Currie, William S

    2017-08-01

    Resource competition theory in plants has focused largely on resource acquisition traits that are independent of size, such as traits of individual leaves or roots or proportional allocation to different functions. However, plants also differ in maximum potential size, which could outweigh differences in module-level traits. We used a community ecosystem model called mondrian to investigate whether larger size inevitably increases competitive ability and how size interacts with nitrogen supply. Contrary to the conventional wisdom that bigger is better, we found that invader success and competitive ability are unimodal functions of maximum potential size, such that plants that are too large (or too small) are disproportionately suppressed by competition. Optimal size increases with nitrogen supply, even when plants compete for nitrogen only in a size-symmetric manner, although adding size-asymmetric competition for light does substantially increase the advantage of larger size at high nitrogen. These complex interactions of plant size and nitrogen supply lead to strong nonlinearities such that small differences in nitrogen can result in large differences in plant invasion success and the influence of competition along productivity gradients.

  6. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    Science.gov (United States)

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance vary with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
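The resampling design described here is easy to mimic: draw bootstrap samples of various sizes from a fixed data set and watch how the spread of the effect-size (here R-squared) estimates shrinks as n grows. The synthetic "population" below is hypothetical, standing in for the 360-patient data set; only the bootstrap logic reflects the study's procedure.

```python
import random
import statistics

def r_squared(pairs):
    """Proportion of variance in y explained by x (simple regression)."""
    xs, ys = zip(*pairs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

def bootstrap_spread(population, n, draws, rng):
    """Standard deviation of effect-size estimates across bootstrap
    resamples of size n drawn with replacement."""
    estimates = [r_squared([rng.choice(population) for _ in range(n)])
                 for _ in range(draws)]
    return statistics.stdev(estimates)

rng = random.Random(0)
# Hypothetical data: deficit weakly driven by lesion load (small true effect).
population = [(x := rng.gauss(0, 1), 0.3 * x + rng.gauss(0, 1))
              for _ in range(360)]
```

Small resamples (n = 30) produce far more variable effect-size estimates than resamples of the full size (n = 360), which is exactly the over- and under-estimation the study documents.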

  7. Design and performance of large-pixel-size high-fill-fraction TES arrays for future X-ray astrophysics missions

    International Nuclear Information System (INIS)

    Figueroa-Feliciano, E.; Bandler, S.R.; Chervenak, J.; Finkbeiner, F.; Iyomoto, N.; Kelley, R.L.; Kilbourne, C.A.; Porter, F.S.; Saab, T.; Sadleir, J.; White, J.

    2006-01-01

    We have designed, modeled, fabricated and tested a 600μm high-fill-fraction microcalorimeter array that will be a good match to the requirements of future X-ray missions. Our devices use transition-edge sensors coupled to overhanging bismuth/copper absorbers to produce arrays with 97% or higher fill fraction. An extensive modeling effort was undertaken in order to accommodate large pixel sizes (500-1000μm) and maintain the best energy resolution possible. The finite thermalization time of the large absorber and the associated position dependence of the pulse shape on absorption position constrain the time constants of the system given a desired energy-resolution performance. We show the results of our analysis and our new pixel design, consisting of a novel TES-on-the-side architecture which creates a controllable TES-absorber conductance

  8. HOW THE ROCKY FLATS ENVIRONMENTAL TECHNOLOGY SITE DEVELOPED A NEW WASTE PACKAGE USING A POLYUREA COATING THAT IS SAFELY AND ECONOMICALLY ELIMINATING SIZE REDUCTION OF LARGE ITEMS

    International Nuclear Information System (INIS)

    Dorr, Kent A.; Hogue, Richard S.; Kimokeo, Margaret K.

    2003-01-01

    One of the major challenges involved in closing the Rocky Flats Environmental Technology Site (RFETS) is the disposal of extremely large pieces of contaminated production equipment and building debris. Past practice has been to size reduce the equipment into pieces small enough to fit into approved, standard waste containers. Size reducing this equipment is extremely expensive, and exposes workers to high-risk tasks, including significant industrial, chemical, and radiological hazards. RFETS has developed a waste package using a Polyurea coating for shipping large contaminated objects. The cost and schedule savings have been significant

  9. Interaction between numbers and size during visual search

    OpenAIRE

    Krause, Florian; Bekkering, Harold; Pratt, Jay; Lindemann, Oliver

    2016-01-01

    The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers. The relative numerical size of the digits was varied, such that the target item was either among the numerically large or small numbers in the search display and the relation between numeric...

  10. Is ‘fuzzy theory’ an appropriate tool for large size problems?

    CERN Document Server

    Biswas, Ranjit

    2016-01-01

    The work in this book is based on philosophical as well as logical views on the subject of decoding the ‘progress’ of the decision-making process in the cognition system of a decision maker (be it a human, an animal, a bird, or any living thing which has a brain) while evaluating the membership value µ(x) in a fuzzy set, an intuitionistic fuzzy set, any such soft computing set model, or a crisp set. A new theory is introduced, called the “Theory of CIFS”. The following two hypotheses are hidden facts in fuzzy computing or in any soft computing process: Fact-1: A decision maker (intelligent agent) can never use or apply ‘fuzzy theory’ or any soft-computing set theory without an intuitionistic fuzzy system. Fact-2: Fact-1 does not necessarily require that a fuzzy decision maker (or a crisp ordinary decision maker, or a decision maker with any other soft theory models, or a decision maker like an animal/bird which has a brain, etc.) must be aware or knowledgeable about IFS theory! The “Theor...

  11. Long-term clinical evaluation of a 800-nm long-pulsed diode laser with a large spot size and vacuum-assisted suction for hair removal.

    Science.gov (United States)

    Ibrahimi, Omar A; Kilmer, Suzanne L

    2012-06-01

    The long-pulsed diode (800-810-nm) laser is one of the most commonly used and effective lasers for hair removal. Limitations of currently available devices include a small treatment spot size, treatment-associated pain, and the need for skin cooling. To evaluate the long-term hair reduction capabilities of a long-pulsed diode laser with a large spot size and vacuum-assisted suction, thirty-five subjects were enrolled in a prospective, self-controlled, single-center study of axillary hair removal. The study consisted of three treatments using a long-pulsed diode laser with a large spot size and vacuum-assisted suction at 4- to 6-week intervals, with follow-up visits 6 and 15 months after the last treatment. Hair clearance was quantified using macro hair-count photographs taken at baseline and at the 6- and 15-month follow-up visits. Changes in hair thickness and color, levels of treatment-associated pain, and adverse events were additional study endpoints. There was statistically significant hair clearance at the 6-month (54%) and 15-month (42%) follow-up visits. Remaining hairs were thinner and lighter at the 15-month follow-up visit, and the majority of subjects reported at most mild to moderate pain during treatment without the use of pretreatment anesthesia or skin cooling. A long-pulsed diode laser with a large spot size and vacuum-assisted suction is safe and effective for long-term hair removal. This is the largest prospective study to evaluate long-term hair removal and the first to quantify decreases in hair thickness and darkness with treatment. © 2012 by the American Society for Dermatologic Surgery, Inc. Published by Wiley Periodicals, Inc.

  12. Molecular basis sets - a general similarity-based approach for representing chemical spaces.

    Science.gov (United States)

    Raghavendra, Akshay S; Maggiora, Gerald M

    2007-01-01

    A new method, based on generalized Fourier analysis, is described that utilizes the concept of "molecular basis sets" to represent chemical space within an abstract vector space. The basis vectors in this space are abstract molecular vectors. Inner products among the basis vectors are determined using an ansatz that associates molecular similarities between pairs of molecules with their corresponding inner products. Moreover, the fact that similarities between pairs of molecules are, in essentially all cases, nonzero implies that the abstract molecular basis vectors are nonorthogonal, but since the similarity of a molecule with itself is unity, the molecular vectors are normalized to unity. A symmetric orthogonalization procedure, which optimally preserves the character of the original set of molecular basis vectors, is used to construct appropriate orthonormal basis sets. Molecules can then be represented, in general, by sets of orthonormal "molecule-like" basis vectors within a proper Euclidean vector space. However, the dimension of the space can become quite large. Thus, the work presented here assesses the effect of basis set size on a number of properties, including the average squared error and average norm of molecular vectors represented in the space; the results clearly show the expected reduction in average squared error and increase in average norm as the basis set size is increased. Several distance-based statistics are also considered. These include the distribution of distances and their differences with respect to basis sets of differing size, and several comparative distance measures such as Spearman rank correlation and Kruskal stress. All of the measures show that, even though the dimension can be high, the chemical spaces they represent, nonetheless, behave in a well-controlled and reasonable manner. Other abstract vector spaces analogous to that described here can also be constructed providing that the appropriate inner products can be directly
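The symmetric orthogonalization step described in this record is Löwdin's S^(-1/2) construction: starting from the Gram (similarity) matrix S of the nonorthogonal molecular basis vectors, the transform X = S^(-1/2) yields an orthonormal set that stays as close as possible to the originals. A minimal sketch with numpy, assuming S is symmetric positive definite with unit diagonal (the similarity values below are illustrative):

```python
import numpy as np

def symmetric_orthogonalization(S):
    """Return X = S^(-1/2); the transformed basis C' = C X has Gram
    matrix X S X = I, i.e. it is orthonormal."""
    w, V = np.linalg.eigh(S)            # eigendecomposition of symmetric S
    return V @ np.diag(w ** -0.5) @ V.T

# Pairwise molecular similarities used as inner products (illustrative).
S = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
X = symmetric_orthogonalization(S)
```

Sanity check: the Gram matrix of the transformed vectors, X S X, equals the identity, confirming orthonormality of the new basis.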

  13. On sets of vectors of a finite vector space in which every subset of basis size is a basis II

    OpenAIRE

    Ball, Simeon; De Beule, Jan

    2012-01-01

    This article contains a proof of the MDS conjecture for k ≤ 2p − 2. That is, if S is a set of vectors of a k-dimensional vector space over F_q in which every subset of S of size k is a basis, where q = p^h, p is prime and q is not prime, and k ≤ 2p − 2, then |S| ≤ q + 1. It also contains a short proof of the same fact for k ≤ p, for all q.

  14. Window Size Impact in Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Oresti Banos

    2014-04-01

    Full Text Available Signal segmentation is a crucial stage in the activity recognition process; however, this has been rarely and vaguely characterized so far. Windowing approaches are normally used for segmentation, but no clear consensus exists on which window size should be preferably employed. In fact, most designs normally rely on figures used in previous works, but with no strict studies that support them. Intuitively, decreasing the window size allows for a faster activity detection, as well as reduced resources and energy needs. On the contrary, large data windows are normally considered for the recognition of complex activities. In this work, we present an extensive study to fairly characterize the windowing procedure, to determine its impact within the activity recognition process and to help clarify some of the habitual assumptions made during the recognition system design. To that end, some of the most widely used activity recognition procedures are evaluated for a wide range of window sizes and activities. From the evaluation, the interval 1–2 s proves to provide the best trade-off between recognition speed and accuracy. The study, specifically intended for on-body activity recognition systems, further provides designers with a set of guidelines devised to facilitate the system definition and configuration according to the particular application requirements and target activities.
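The windowing procedure evaluated in that study can be sketched as a plain sliding-window segmenter. The function name, the overlap choice, and the fake accelerometer stream below are illustrative assumptions, not part of the original work:

```python
import numpy as np

def sliding_windows(signal, fs, win_s=2.0, overlap=0.5):
    """Segment a 1-D sensor stream into fixed-size windows.
    win_s: window length in seconds (the study above favors 1-2 s);
    overlap: fraction of overlap between consecutive windows."""
    win = int(win_s * fs)
    step = max(1, int(win * (1.0 - overlap)))
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]

# 10 s of fake accelerometer data sampled at 50 Hz
x = np.random.randn(500)
wins = sliding_windows(x, fs=50, win_s=2.0, overlap=0.5)
print(len(wins), len(wins[0]))  # 9 100
```

Each window would then be fed to feature extraction and a classifier; shrinking `win_s` reduces detection latency at the cost of less context per window, which is the trade-off the study quantifies.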

  15. Knowledge and theme discovery across very large biological data sets using distributed queries: a prototype combining unstructured and structured data.

    Directory of Open Access Journals (Sweden)

    Uma S Mudunuri

    Full Text Available As the discipline of biomedical science continues to apply new technologies capable of producing unprecedented volumes of noisy and complex biological data, it has become evident that available methods for deriving meaningful information from such data are simply not keeping pace. In order to achieve useful results, researchers require methods that consolidate, store and query combinations of structured and unstructured data sets efficiently and effectively. As we move towards personalized medicine, the need to combine unstructured data, such as medical literature, with large amounts of highly structured and high-throughput data such as human variation or expression data from very large cohorts, is especially urgent. For our study, we investigated a likely biomedical query using the Hadoop framework. We ran queries using native MapReduce tools we developed as well as other open source and proprietary tools. Our results suggest that the available technologies within the Big Data domain can reduce the time and effort needed to utilize and apply distributed queries over large datasets in practical clinical applications in the life sciences domain. The methodologies and technologies discussed in this paper set the stage for a more detailed evaluation that investigates how various data structures and data models are best mapped to the proper computational framework.
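As a toy, single-process illustration of the MapReduce pattern the study evaluates (not the authors' Hadoop code), a distributed word count over unstructured text chunks can be phrased as a map step producing per-chunk counts and a reduce step merging them:

```python
from collections import Counter
from functools import reduce

def mapper(chunk):
    """Map step: count terms within one text chunk."""
    return Counter(chunk.lower().split())

def reducer(acc, partial):
    """Reduce step: merge partial counts into the accumulator."""
    acc.update(partial)
    return acc

# Hypothetical literature snippets standing in for distributed input splits
chunks = ["BRCA1 variant reported", "variant pathogenic BRCA1 BRCA1"]
counts = reduce(reducer, map(mapper, chunks), Counter())
print(counts["brca1"])  # 3
```

In a real Hadoop or Spark deployment the `map` and `reduce` calls run on different nodes over HDFS splits; the program structure, however, is the same.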

  16. Synthesis of a large-sized mesoporous phosphosilicate thin film through evaporation-induced polymeric micelle assembly.

    Science.gov (United States)

    Li, Yunqi; Bastakoti, Bishnu Prasad; Imura, Masataka; Suzuki, Norihiro; Jiang, Xiangfen; Ohki, Shinobu; Deguchi, Kenzo; Suzuki, Madoka; Arai, Satoshi; Yamauchi, Yusuke

    2015-01-01

    A triblock copolymer, poly(styrene-b-2-vinyl pyridine-b-ethylene oxide) (PS-b-P2VP-b-PEO) was used as a soft template to synthesize large-sized mesoporous phosphosilicate thin films. The kinetically frozen PS core stabilizes the micelles. The strong interaction of the inorganic precursors with the P2VP shell enables the fabrication of highly robust walls of phosphosilicate and the PEO helps orderly packing of the micelles during solvent evaporation. The molar ratio of phosphoric acid and tetraethyl orthosilicate is crucial to achieve the final mesostructure. The insertion of phosphorus species into the siloxane network is studied by ²⁹Si and ³¹P MAS NMR spectra. The mesoporous phosphosilicate films exhibit steady cell adhesion properties and show great promise as excellent materials in bone-growth engineering applications. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Effect of crowd size on patient volume at a large, multipurpose, indoor stadium.

    Science.gov (United States)

    De Lorenzo, R A; Gray, B C; Bennett, P C; Lamparella, V J

    1989-01-01

    A prediction of patient volume expected at "mass gatherings" is desirable in order to provide optimal on-site emergency medical care. While several methods of predicting patient loads have been suggested, a reliable technique has not been established. This study examines the frequency of medical emergencies at the Syracuse University Carrier Dome, a 50,500-seat indoor stadium. Patient volume and level of care at collegiate basketball and football games, as well as rock concerts, over a 7-year period were examined and tabulated. This information was analyzed using simple regression and nonparametric statistical methods to determine the level of correlation between crowd size and patient volume. These analyses demonstrated no statistically significant increase in patient volume with increasing crowd size for basketball and football events. There was a small but statistically significant increase in patient volume with increasing crowd size for concerts. A comparison of similar crowd sizes for each of the three event types showed that patient frequency is greatest for concerts and smallest for basketball. The study suggests that crowd size alone has only a minor influence on patient volume at any given event. Structuring medical services based solely on expected crowd size, without considering other influences such as event type and duration, may give poor results.
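The simple-regression part of such an analysis can be illustrated in a few lines. The event numbers below are invented for the sketch, not the study's data:

```python
import numpy as np

# Hypothetical events: crowd size vs. number of patient contacts
crowd = np.array([18000, 25000, 32000, 41000, 50000], dtype=float)
patients = np.array([4.0, 5.0, 5.0, 6.0, 7.0])

# Simple linear regression: patients = a + b * crowd
b, a = np.polyfit(crowd, patients, 1)
r = np.corrcoef(crowd, patients)[0, 1]
print(f"slope per 10,000 attendees: {b * 1e4:.2f}, r^2 = {r ** 2:.2f}")
```

A slope that is small relative to the scatter, as the study found for sporting events, is exactly the case where crowd size alone is a poor predictor of patient volume.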

  18. Emerging Cyber Infrastructure for NASA's Large-Scale Climate Data Analytics

    Science.gov (United States)

    Duffy, D.; Spear, C.; Bowen, M. K.; Thompson, J. H.; Hu, F.; Yang, C. P.; Pierce, D.

    2016-12-01

    The resolution of NASA climate and weather simulations has grown dramatically over the past few years, with the highest-fidelity models reaching down to 1.5 km global resolution. With each doubling of the resolution, the resulting data sets grow by a factor of eight in size. As the climate and weather models push the envelope even further, a new infrastructure to store data and provide large-scale data analytics is necessary. The NASA Center for Climate Simulation (NCCS) has deployed the Data Analytics Storage Service (DASS) that combines scalable storage with the ability to perform in-situ analytics. Within this system, large, commonly used data sets are stored in a POSIX file system (write once/read many); examples of data stored include Landsat, MERRA2, observing system simulation experiments, and high-resolution downscaled reanalysis. The total size of this repository is on the order of 15 petabytes of storage. In addition to the POSIX file system, the NCCS has deployed file system connectors to enable emerging analytics built on top of the Hadoop File System (HDFS) to run on the same storage servers within the DASS. Coupled with a custom spatiotemporal indexing approach, users can now run emerging analytical operations built on MapReduce and Spark on the same data files stored within the POSIX file system without having to make additional copies. This presentation will discuss the architecture of this system and present benchmark performance measurements from traditional TeraSort and Wordcount to large-scale climate analytical operations on NetCDF data.

  19. The Molecule Cloud - compact visualization of large collections of molecules

    Directory of Open Access Journals (Sweden)

    Ertl Peter

    2012-07-01

    Full Text Available Abstract Background Analysis and visualization of large collections of molecules is one of the most frequent challenges cheminformatics experts in the pharmaceutical industry are facing. Various sophisticated methods are available to perform this task, including clustering, dimensionality reduction or scaffold frequency analysis. In any case, however, viewing and analyzing large tables with molecular structures is necessary. We present a new visualization technique, providing basic information about the composition of molecular data sets at a single glance. Summary A method is presented here allowing visual representation of the most common structural features of chemical databases in the form of a cloud diagram. The frequency of molecules containing a particular substructure is indicated by the size of the respective structural image. The method is useful to quickly perceive the most prominent structural features present in the data set. This approach was inspired by popular word cloud diagrams that are used to visualize textual information in a compact form. Therefore we call this approach “Molecule Cloud”. The method also supports visualization of additional information, for example biological activity of molecules containing a given scaffold or the protein target class typical for particular scaffolds, by color coding. A detailed description of the algorithm is provided, allowing easy implementation of the method by any cheminformatics toolkit. The layout algorithm is available as open source Java code. Conclusions Visualization of large molecular data sets using the Molecule Cloud approach allows scientists to get information about the composition of molecular databases and their most frequent structural features easily. The method may be used in the areas where analysis of large molecular collections is needed, for example processing of high throughput screening results, virtual screening or compound purchasing. Several example visualizations of large

  20. Annotating gene sets by mining large literature collections with protein networks.

    Science.gov (United States)

    Wang, Sheng; Ma, Jianzhu; Yu, Michael Ku; Zheng, Fan; Huang, Edward W; Han, Jiawei; Peng, Jian; Ideker, Trey

    2018-01-01

    Analysis of patient genomes and transcriptomes routinely recognizes new gene sets associated with human disease. Here we present an integrative natural language processing system which infers common functions for a gene set through automatic mining of the scientific literature with biological networks. This system links genes with associated literature phrases and combines these links with protein interactions in a single heterogeneous network. Multiscale functional annotations are inferred based on network distances between phrases and genes and then visualized as an ontology of biological concepts. To evaluate this system, we predict functions for gene sets representing known pathways and find that our approach achieves substantial improvement over the conventional text-mining baseline method. Moreover, our system discovers novel annotations for gene sets or pathways without previously known functions. Two case studies demonstrate how the system is used in discovery of new cancer-related pathways with ontological annotations.

  1. Facile synthesis of uniform large-sized InP nanocrystal quantum dots using tris(tert-butyldimethylsilyl)phosphine

    Science.gov (United States)

    2012-01-01

    Colloidal III-V semiconductor nanocrystal quantum dots [NQDs] have attracted interest because they have reduced toxicity compared with II-VI compounds. However, the study and application of III-V semiconductor nanocrystals are limited by difficulties in their synthesis. In particular, it is difficult to control nucleation because the molecular bonds in III-V semiconductors are highly covalent. A synthetic approach to InP NQDs is presented using newly synthesized organometallic phosphorus [P] precursors with different functional moieties while preserving the P-Si bond. Introducing bulky side chains in our study improved the stability while facilitating InP formation with strong confinement in a relatively low temperature regime (210°C to 300°C). Further shell coating with ZnS resulted in highly luminescent core-shell materials. The design and synthesis of P precursors for high-quality InP NQDs were conducted for the first time, and we were able to control the nucleation by varying the reactivity of P precursors, therefore achieving uniform large-sized InP NQDs. This opens the way for the large-scale production of high-quality Cd-free nanocrystal quantum dots. PMID:22289352

  2. Large Scale EOF Analysis of Climate Data

    Science.gov (United States)

    Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.

    2016-12-01

    We present a distributed approach towards extracting EOFs from 3D climate data. We implement the method in Apache Spark, and process multi-TB sized datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CFSR, a 2.2 terabyte-sized data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean, at 6 hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare to the EOFs computed simply on the surface temperature field. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation including the ENSO and PDO that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that exist below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to analysis of 3D atmospheric climatic data sets, including multiple variables. Because the atmosphere changes on a quicker time-scale than the ocean, we expect that the results will demonstrate an even greater advantage to computing 3D EOFs in lieu of 2D EOFs.
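On a single node, the latitude-weighted EOF computation described above reduces to an SVD of the centered, weighted data matrix. This toy sketch (function name and random test field are illustrative; the distributed Spark implementation is not reproduced) shows the essential steps:

```python
import numpy as np

def eofs(field, lat, n_modes=3):
    """Compute EOFs of a (time, lat, lon) field via SVD.
    Rows are centered in time and latitude-weighted by sqrt(cos(lat)),
    the usual equal-area weighting."""
    t, ny, nx = field.shape
    w = np.sqrt(np.cos(np.deg2rad(lat)))[:, None]      # (ny, 1) area weights
    X = (field - field.mean(axis=0)) * w               # weighted anomalies
    X = X.reshape(t, ny * nx)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]                 # principal components
    patterns = Vt[:n_modes].reshape(n_modes, ny, nx)   # spatial EOF patterns
    var_frac = s[:n_modes] ** 2 / np.sum(s ** 2)       # explained variance
    return pcs, patterns, var_frac

rng = np.random.default_rng(0)
sst = rng.standard_normal((120, 10, 20))   # 120 months on a 10x20 grid
lat = np.linspace(-45, 45, 10)
pcs, patterns, var_frac = eofs(sst, lat)
print(pcs.shape, patterns.shape)  # (120, 3) (3, 10, 20)
```

Extending this to 3D simply means flattening (depth, lat, lon) instead of (lat, lon) into the spatial dimension, with an additional weight per level, which is the depth-weighting question the abstract raises.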

  3. Size matters: bigger is faster.

    Science.gov (United States)

    Sereno, Sara C; O'Donnell, Patrick J; Sereno, Margaret E

    2009-06-01

    A largely unexplored aspect of lexical access in visual word recognition is "semantic size"--namely, the real-world size of an object to which a word refers. A total of 42 participants performed a lexical decision task on concrete nouns denoting either big or small objects (e.g., bookcase or teaspoon). Items were matched pairwise on relevant lexical dimensions. Participants' reaction times were reliably faster to semantically "big" versus "small" words. The results are discussed in terms of possible mechanisms, including more active representations for "big" words, due to the ecological importance attributed to large objects in the environment and the relative speed of neural responses to large objects.

  4. A new database sub-system for grain-size analysis

    Science.gov (United States)

    Suckow, Axel

    2013-04-01

    Detailed grain-size analyses of large depth profiles for palaeoclimate studies create large amounts of data. For instance, Novothny et al. (2011) presented a depth profile of grain-size analyses with 2 cm resolution and a total depth of more than 15 m, where each sample was measured with 5 repetitions on a Beckman Coulter LS13320 with 116 channels. This adds up to a total of more than four million numbers. Such amounts of data are not easily post-processed by spreadsheets or standard software; MS Access databases would also face serious performance problems. The poster describes a database sub-system dedicated to grain-size analyses. It expands the LabData database and laboratory management system published by Suckow and Dumke (2001). Compatibility with this very flexible database system makes it easy to import the grain-size data, and provides the overall infrastructure for storing geographic context and for organizing content, for example by grouping several samples into one set or project. It also allows easy export and direct plot generation of final data in MS Excel. The sub-system allows automated import of raw data from the Beckman Coulter LS13320 Laser Diffraction Particle Size Analyzer. During post-processing, MS Excel is used as a data display, but no number crunching is implemented in Excel. Raw grain-size spectra can be exported and checked as number, surface, and volume fractions, while single spectra can be locked for further post-processing. From the spectra the usual statistical values (e.g. mean, median) can be computed, as well as fractions larger than a grain size, smaller than a grain size, fractions between any two grain sizes, or any ratio of such values. These deduced values can be easily exported into Excel for one or more depth profiles. However, such reprocessing of large amounts of data also allows new display possibilities: normally depth profiles of grain-size data are displayed only with summarized parameters like the clay
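The deduced statistics mentioned above (median grain size, fractions between two grain sizes) can be computed from a binned spectrum roughly as follows. This is a generic sketch, not the LabData sub-system's code, and the 8-channel toy spectrum stands in for the instrument's 116 channels:

```python
import numpy as np

def percentile_diameters(diam_um, vol_frac, ps=(10, 50, 90)):
    """Interpolate Dx percentiles (e.g. D50 = median grain size) from a
    volume-fraction spectrum binned over increasing channel diameters."""
    cum = np.cumsum(vol_frac) / np.sum(vol_frac)
    return {f"D{p}": float(np.interp(p / 100.0, cum, diam_um)) for p in ps}

def fraction_between(diam_um, vol_frac, lo, hi):
    """Fraction of total volume with lo <= diameter < hi (e.g. the silt band)."""
    mask = (diam_um >= lo) & (diam_um < hi)
    return float(vol_frac[mask].sum() / vol_frac.sum())

# Toy 8-channel spectrum: channel diameters (um) and volume per channel (%)
d = np.array([0.5, 1, 2, 4, 8, 16, 31, 63], dtype=float)
v = np.array([2, 5, 10, 18, 25, 20, 12, 8], dtype=float)
print(percentile_diameters(d, v))
print(fraction_between(d, v, 2, 63))  # silt fraction (2-63 um); clay is < 2 um
```

Running such reductions in the database layer rather than in Excel is what makes whole-profile reprocessing of millions of numbers practical.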

  5. Easy and General Synthesis of Large-Sized Mesoporous Rare-Earth Oxide Thin Films by 'Micelle Assembly'.

    Science.gov (United States)

    Li, Yunqi; Bastakoti, Bishnu Prasad; Imura, Masataka; Dai, Pengcheng; Yamauchi, Yusuke

    2015-12-01

    Large-sized (ca. 40 nm) mesoporous Er2O3 thin films are synthesized by using a triblock copolymer poly(styrene-b-2-vinyl pyridine-b-ethylene oxide) (PS-b-P2VP-b-PEO) as a pore directing agent. Each block makes different contributions and the molar ratio of PVP/Er³⁺ is crucial to guide the resultant mesoporous structure. An easy and general method is proposed and used to prepare a series of mesoporous rare-earth oxide (Sm2O3, Dy2O3, Tb2O3, Ho2O3, Yb2O3, and Lu2O3) thin films with potential uses in electronics and optical devices. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. PENGARUH LABA BERSIH, ARUS KAS OPERASIONAL, INVESTMENT OPPORTUNITY SET DAN FIRM SIZE TERHADAP DIVIDEN KAS (Studi Kasus Pada Perusahaan Manufaktur di Bursa Efek Indonesia Tahun 2010 – 2012

    Directory of Open Access Journals (Sweden)

    Luluk Muhimatul Ifada

    2014-12-01

    Full Text Available This study aimed to investigate the influence of net profit, operating cash flow, investment opportunity set, and firm size on cash dividends. The sample of this research is manufacturing companies listed on the Indonesia Stock Exchange (BEI) in the period 2010-2012, published by www.idx.co.id and listed in the Indonesia Capital Market Directory (ICMD). There are 28 companies that meet the specified criteria. The analysis method uses multiple regression analysis with a significance level of 5%, and conclusions are based on the t-statistic results. The results prove that net profit has a significantly positive influence on cash dividends, and operating cash flow has a significantly positive influence on cash dividends. The investment opportunity set has a negative but insignificant influence on cash dividends, while firm size has a positive but insignificant influence on cash dividends. This study aims to analyze the influence of net profit, operating cash flow, investment opportunity set, and firm size on cash dividends. The sample is manufacturing companies listed on the Indonesia Stock Exchange in the period 2010-2012. The data used are the financial statements of each sampled company, published via the website www.idx.co.id and contained in the Indonesia Capital Market Directory (ICMD). 28 companies met the research criteria. This study uses statistical testing with a multiple linear regression approach at a 5% significance level. Conclusions are drawn from the t-statistic results. These tests show that net profit has a positive and significant effect on cash dividends, and that operating cash flow has a positive and significant effect on cash dividends. The test of the investment

  7. Sediment size of surface floodplain sediments along a large lowland river

    Science.gov (United States)

    Swanson, K. M.; Day, G.; Dietrich, W. E.

    2007-12-01

    Data on the size distribution of surface sediment across a floodplain should place important constraints on the modeling of floodplain deposition. Diffusive or advective models would predict that, generally, grain size should decrease away from channel banks. Variations in grain size downstream along floodplains may depend on downstream fining of river bed material, the exchange rate with river banks, and net deposition onto the floodplain. Here we report detailed grain size analyses taken from 17 floodplain transects along a 450 km (along-channel distance) reach of the middle Fly River, Papua New Guinea. Field studies have documented a systematic downstream change in floodplain characteristics, from forested, more topographically elevated terrain bounded by an actively shifting mainstem channel to downstream swamp grass and low-elevation topography along which the river meanders are currently stagnant. The frequency and duration of flooding increase downstream. Flooding occurs both by overbank flows and by injections of floodwaters up tributary and tie channels connected to the mainstem. Previous studies show that about 40% of the total discharge of water passes across the floodplain and, correspondingly, about 40% of the total load is deposited on the plain, decreasing exponentially away from the channel bank. We find that floodplain sediment is most sandy at the channel bank. Grain size rapidly declines away from the bank, but surprisingly two further trends were observed. A relatively short distance from the bank the surface material is finest, but with further distance from the bank (out to greater than 1 km from the 250 m wide channel) clay content decreases and silt content increases. The changes are small but repeated at most of the transects. The second trend is that bank material fines downstream, corresponding to downstream-fining bed material, but once away from the bank there is only a weak tendency, at a given distance from the bank, for the floodplain surface deposits to

  8. Preparation and provisional validation of a large size dried spike: Batch SAL-9934

    International Nuclear Information System (INIS)

    Jammet, G.; Zoigner, A.; Doubek, N.; Aigner, H.; Deron, S.; Bagliano, G.

    1990-05-01

    To determine uranium and plutonium concentration using isotope dilution mass spectrometry, weighed aliquands of a synthetic mixture containing about 2 mg of Pu (with a ²³⁹Pu abundance of about 98%) and 40 mg of U (with a ²³⁵U enrichment of about 19%) have been prepared and verified by SAL to be used to spike samples of concentrated spent fuel solutions with a high burn-up and a low ²³⁵U enrichment. Certified Reference Materials Pu-NBL-126, natural U-NBS-960 and 93% enriched U-NBL-116 were used to prepare a stock solution containing 3.2 mg/ml of Pu and 64.3 mg/ml of 18.8% enriched U. Before shipment to the Reprocessing Plant, aliquands of the stock solution are dried to give Large Size Dried (LSD) Spikes which resist shocks encountered during transportation, so that they can readily be recovered quantitatively at the plant. This paper describes the preparation and the validation of a third batch of LSD-Spike which is intended to be used as a common spike by the plant operator, the national and the IAEA inspectorates. 6 refs, 6 tabs

  9. Risk Management of Large Component in Decommissioning

    International Nuclear Information System (INIS)

    Nah, Kyung Ku; Kim, Tae Ryong

    2014-01-01

    The need for energy, especially electric energy, has been increasing dramatically in Korea. A rapid growth in nuclear power development has therefore been achieved, with nuclear plants providing about 30% of electric power production. However, such large-scale nuclear power generation has been producing a significant amount of radioactive waste along with other concerns, such as safety issues. In addition, owing to the severe accidents at Fukushima in Japan, public concerns regarding NPPs and radiation hazards have greatly increased. In Korea, KORI 1 is scheduled to reach the end of its lifetime in several years, and Wolsong 1 is under review for life extension. This is why preparation for nuclear power plant decommissioning is significant at this time. Decommissioning is the final phase in the life-cycle of a nuclear facility, and one of the most important management questions during decommissioning is how to deal with disused large components. Therefore, in this study, the risks associated with large components in decommissioning are identified, and the key risk factor is analyzed so that the decommissioning process can be prepared for safely and efficiently. Developing dedicated acceptance criteria for large components at the disposal site was identified as a key factor. Acceptance criteria applied to large components, such as what size they should be and how they are to be handled during the disposal process, strongly affect other major works. For example, if the size of large components is not set at the disposal site, dismantling work in decommissioning cannot be conducted. Therefore, considering the insufficient time left for decommissioning of some NPPs, it is absolutely imperative that those criteria be laid down.

  10. Risk Management of Large Component in Decommissioning

    Energy Technology Data Exchange (ETDEWEB)

    Nah, Kyung Ku; Kim, Tae Ryong [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2014-10-15

    The need for energy, especially electric energy, has been increasing dramatically in Korea. A rapid growth in nuclear power development has therefore been achieved, with nuclear plants providing about 30% of electric power production. However, such large-scale nuclear power generation has been producing a significant amount of radioactive waste along with other concerns, such as safety issues. In addition, owing to the severe accidents at Fukushima in Japan, public concerns regarding NPPs and radiation hazards have greatly increased. In Korea, KORI 1 is scheduled to reach the end of its lifetime in several years, and Wolsong 1 is under review for life extension. This is why preparation for nuclear power plant decommissioning is significant at this time. Decommissioning is the final phase in the life-cycle of a nuclear facility, and one of the most important management questions during decommissioning is how to deal with disused large components. Therefore, in this study, the risks associated with large components in decommissioning are identified, and the key risk factor is analyzed so that the decommissioning process can be prepared for safely and efficiently. Developing dedicated acceptance criteria for large components at the disposal site was identified as a key factor. Acceptance criteria applied to large components, such as what size they should be and how they are to be handled during the disposal process, strongly affect other major works. For example, if the size of large components is not set at the disposal site, dismantling work in decommissioning cannot be conducted. Therefore, considering the insufficient time left for decommissioning of some NPPs, it is absolutely imperative that those criteria be laid down.

  11. Coordination of size-control, reproduction and generational memory in freshwater planarians

    Science.gov (United States)

    Yang, Xingbo; Kaj, Kelson J.; Schwab, David J.; Collins, Eva-Maria S.

    2017-06-01

    Uncovering the mechanisms that control size, growth, and division rates of organisms reproducing through binary division means understanding basic principles of their life cycle. Recent work has focused on how division rates are regulated in bacteria and yeast, but this question has not yet been addressed in more complex, multicellular organisms. We have, over the course of several years, assembled a unique large-scale data set on the growth and asexual reproduction of two freshwater planarian species, Dugesia japonica and Girardia tigrina, which reproduce by transverse fission and succeeding regeneration of head and tail pieces into new planarians. We show that generation-dependent memory effects in planarian reproduction need to be taken into account to accurately capture the experimental data. To achieve this, we developed a new additive model that mixes multiple size control strategies based on planarian size, growth, and time between divisions. Our model quantifies the proportions of each strategy in the mixed dynamics, revealing the ability of the two planarian species to utilize different strategies in a coordinated manner for size control. Additionally, we found that head and tail offspring of both species employ different mechanisms to monitor and trigger their reproduction cycles. Thus, we find a diversity of strategies not only between species but between heads and tails within species. Our additive model provides two advantages over existing 2D models that fit a multivariable splitting rate function to the data for size control: firstly, it can be fit to relatively small data sets and can thus be applied to systems where available data is limited. Secondly, it enables new biological insights because it explicitly shows the contributions of different size control strategies for each offspring type.

  12. Heritability of body size in the polar bears of Western Hudson Bay.

    Science.gov (United States)

    Malenfant, René M; Davis, Corey S; Richardson, Evan S; Lunn, Nicholas J; Coltman, David W

    2018-04-18

    Among polar bears (Ursus maritimus), fitness is dependent on body size through males' abilities to win mates, females' abilities to provide for their young and all bears' abilities to survive increasingly longer fasting periods caused by climate change. In the Western Hudson Bay subpopulation (near Churchill, Manitoba, Canada), polar bears have declined in body size and condition, but nothing is known about the genetic underpinnings of body size variation, which may be subject to natural selection. Here, we combine a 4449-individual pedigree and an array of 5,433 single nucleotide polymorphisms (SNPs) to provide the first quantitative genetic study of polar bears. We used animal models to estimate heritability (h²) among polar bears handled between 1966 and 2011, obtaining h² estimates of 0.34-0.48 for strictly skeletal traits and 0.18 for axillary girth (which is also dependent on fatness). We genotyped 859 individuals with the SNP array to test for marker-trait association and combined p-values over genetic pathways using gene-set analysis. Variation in all traits appeared to be polygenic, but we detected one region of moderately large effect size in body length near a putative noncoding RNA in an unannotated region of the genome. Gene-set analysis suggested that variation in body length was associated with genes in the regulatory cascade of cyclin expression, which has previously been associated with body size in mice. A greater understanding of the genetic architecture of body size variation will be valuable in understanding the potential for adaptation in polar bear populations challenged by climate change. © 2018 John Wiley & Sons Ltd.

  13. Prototyping a large field size IORT applicator for a mobile linear accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Janssen, Rogier W J; Dries, Wim J F [Catharina-Hospital Eindhoven, PO Box 1350, 5602 ZA, Eindhoven (Netherlands); Faddegon, Bruce A [University of California San Francisco Comprehensive Cancer Center, 1600 Divisadero Street, San Francisco, CA 94115-1708 (United States)], E-mail: rogier.janssen@mac.com

    2008-04-21

    The treatment of large tumors such as sarcomas with intra-operative radiotherapy using a Mobetron® is often complicated because of the limited field size of the primary collimator and the available applicators (max Ø100 mm). To circumvent this limitation a prototype rectangular applicator of 80 × 150 mm² was designed and built featuring an additional scattering foil located at the top of the applicator. Because of its proven accuracy in modeling linear accelerator components the design was based on the EGSnrc Monte Carlo simulation code BEAMnrc. First, the Mobetron® treatment head was simulated both without an applicator and with a standard 100 mm applicator. Next, this model was used to design an applicator foil consisting of a rectangular Al base plate covering the whole beam and a pyramid of four stacked cylindrical slabs of different diameters centered on top of it. This foil was mounted on top of a plain rectangular Al tube. A prototype was built and tested with diode dosimetry in a water tank. Here, the prototype showed clinically acceptable 80 × 150 mm² dose distributions for 4 MeV, 6 MeV and 9 MeV, obviating the use of complicated multiple irradiations with abutting field techniques. In addition, the measurements agreed well with the MC simulations, typically within 2%/1 mm.

  14. Prototyping a large field size IORT applicator for a mobile linear accelerator

    International Nuclear Information System (INIS)

    Janssen, Rogier W J; Dries, Wim J F; Faddegon, Bruce A

    2008-01-01

    The treatment of large tumors such as sarcomas with intra-operative radiotherapy using a Mobetron® is often complicated because of the limited field size of the primary collimator and the available applicators (max Ø100 mm). To circumvent this limitation a prototype rectangular applicator of 80 × 150 mm² was designed and built featuring an additional scattering foil located at the top of the applicator. Because of its proven accuracy in modeling linear accelerator components the design was based on the EGSnrc Monte Carlo simulation code BEAMnrc. First, the Mobetron® treatment head was simulated both without an applicator and with a standard 100 mm applicator. Next, this model was used to design an applicator foil consisting of a rectangular Al base plate covering the whole beam and a pyramid of four stacked cylindrical slabs of different diameters centered on top of it. This foil was mounted on top of a plain rectangular Al tube. A prototype was built and tested with diode dosimetry in a water tank. Here, the prototype showed clinically acceptable 80 × 150 mm² dose distributions for 4 MeV, 6 MeV and 9 MeV, obviating the use of complicated multiple irradiations with abutting field techniques. In addition, the measurements agreed well with the MC simulations, typically within 2%/1 mm.

  15. Optimal set of grid size and angular increment for practical dose calculation using the dynamic conformal arc technique: a systematic evaluation of the dosimetric effects in lung stereotactic body radiation therapy

    International Nuclear Information System (INIS)

    Park, Ji-Yeon; Kim, Siyong; Park, Hae-Jin; Lee, Jeong-Woo; Kim, Yeon-Sil; Suh, Tae-Suk

    2014-01-01

    To recommend the optimal plan parameter set of grid size and angular increment for dose calculations in treatment planning for lung stereotactic body radiation therapy (SBRT) using dynamic conformal arc therapy (DCAT), considering both accuracy and computational efficiency. Dose variations with varying grid sizes (2, 3, and 4 mm) and angular increments (2°, 4°, 6°, and 10°) were analyzed in a thorax phantom for 3 spherical target volumes and in 9 patient cases. A 2-mm grid size and 2° angular increment were assumed sufficient to serve as reference values. The dosimetric effect was evaluated using dose–volume histograms, monitor units (MUs), and dose to organs at risk (OARs) for a definite volume corresponding to the dose–volume constraint in lung SBRT. The times required for dose calculations using each parameter set were compared for clinical practicality. Larger grid sizes caused a dose increase to the structures and required higher MUs to achieve the target coverage. The discrete beam arrangements at each angular increment led to over- and under-estimated OAR doses due to the undulating dose distribution. When a 2° angular increment was used in both studies, a 4-mm grid size changed the dose variation by up to 3–4% (50 cGy) for the heart and the spinal cord, while a 3-mm grid size produced a dose difference of <1% (12 cGy) in all tested OARs. When a 3-mm grid size was employed, angular increments of 6° and 10° caused maximum dose variations of 3% (23 cGy) and 10% (61 cGy) in the spinal cord, respectively, while a 4° increment resulted in a dose difference of <1% (8 cGy) in all cases except for that of one patient. The 3-mm grid size and 4° angular increment enabled a 78% savings in computation time without making any critical sacrifices to dose accuracy. A parameter set with a 3-mm grid size and a 4° angular increment is found to be appropriate for predicting patient dose distributions with a dose difference below 1% while reducing the computation time.

  16. Large explosive basaltic eruptions at Katla volcano, Iceland: Fragmentation, grain size and eruption dynamics

    Science.gov (United States)

    Schmith, Johanne; Höskuldsson, Ármann; Holm, Paul Martin; Larsen, Guðrún

    2018-04-01

    Katla volcano in Iceland produces hazardous large explosive basaltic eruptions on a regular basis, but very little quantitative data for future hazard assessments exist. Here details on fragmentation mechanism and eruption dynamics are derived from a study of deposit stratigraphy with detailed granulometry and grain morphology analysis, granulometric modeling, componentry and the new quantitative regularity index model of fragmentation mechanism. We show that magma/water interaction is important in the ash generation process, but to a variable extent. By investigating the large explosive basaltic eruptions from 1755 and 1625, we document that eruptions of similar size and magma geochemistry can have very different fragmentation dynamics. Our models show that fragmentation in the 1755 eruption was a combination of magmatic degassing and magma/water-interaction with the most magma/water-interaction at the beginning of the eruption. The fragmentation of the 1625 eruption was initially also a combination of both magmatic and phreatomagmatic processes, but magma/water-interaction diminished progressively during the later stages of the eruption. However, intense magma/water interaction was reintroduced during the final stages of the eruption dominating the fine fragmentation at the end. This detailed study of fragmentation changes documents that subglacial eruptions have highly variable interaction with the melt water showing that the amount and access to melt water changes significantly during eruptions. While it is often difficult to reconstruct the progression of eruptions that have no quantitative observational record, this study shows that integrating field observations and granulometry with the new regularity index can form a coherent model of eruption evolution.

  17. Materialised Ideals Sizes and Beauty

    Directory of Open Access Journals (Sweden)

    Kirsi Laitala

    2011-04-01

    Today’s clothing industry is based on a system where clothes are made in ready-to-wear sizes and meant to fit most people. Studies have pointed out that consumers are discontent with the use of these systems: size designations are not accurate enough to find clothing that fits, and different sizes are poorly available. This article discusses in depth who these consumers are, and which consumer groups are the most dissatisfied with today’s sizing systems. Results are based on a web survey where 2834 Nordic consumers responded, complemented with eight in-depth interviews, market analysis on clothing sizes and in-store trouser size measurements. Results indicate that higher shares of the consumers who have a body out of touch with the existing beauty ideals express discontentment with the sizing systems and the poor selection available. In particular, large women, very large men, and thin, short men are those who experience less priority in clothing stores and have more difficulties in finding clothes that fit. Consumers tend to blame themselves when the clothes do not fit their bodies, while our study points out that the industry is to blame as they do not produce clothing for all customers.

  18. Is family size related to adolescence mental hospitalization?

    Science.gov (United States)

    Kylmänen, Paula; Hakko, Helinä; Räsänen, Pirkko; Riala, Kaisa

    2010-05-15

    The aim of this study was to investigate the association between family size and psychiatric disorders of underage adolescent psychiatric inpatients. The study sample consisted of 508 adolescents (age 12-17) admitted to psychiatric inpatient care between April 2001 and March 2006. Diagnostic and Statistical Manual of Mental Disorders, fourth edition-based psychiatric diagnoses and variables measuring family size were obtained from the Schedule for Affective Disorders and Schizophrenia for School-Age Children, Present and Lifetime version (K-SADS-PL). The family size of the general Finnish population was used as a reference. There was a significant difference between the family size of the inpatient adolescents and the general population: 17.0% of adolescents came from large families (with 6 or more children), while the percentage in the general population was 3.3%. A girl from a large family had an approximately 4-fold risk of psychosis other than schizophrenia. However, large family size was not associated with a risk for schizophrenia. Large family size was overrepresented among underage adolescents admitted for psychiatric hospitalization in Northern Finland. Copyright 2009 Elsevier Ltd. All rights reserved.

  19. Mesenchymal stem cells-seeded bio-ceramic construct for bone regeneration in large critical-size bone defect in rabbit

    Directory of Open Access Journals (Sweden)

    Maiti SK

    2016-11-01

    Bone marrow derived mesenchymal stem cells (BMSC) represent an attractive cell population for tissue engineering purposes. The objective of this study was to determine whether the addition of recombinant human bone morphogenetic protein (rhBMP-2) and insulin-like growth factor (IGF-1) to a silica-coated calcium hydroxyapatite (HASi)-rabbit bone marrow derived mesenchymal stem cell (rBMSC) construct promoted bone healing in a large segmental bone defect beyond standard critical-size radial defects (15 mm) in rabbits. An extensively large 30 mm long radial ostectomy was performed unilaterally in thirty rabbits divided equally into five groups. Defects were filled with a HASi scaffold only (group B); a HASi scaffold seeded with rBMSC (group C); and a HASi scaffold seeded with rBMSC along with rhBMP-2 and IGF-1 (groups D and E, respectively). The same number of rBMSC (five million cells) and concentrations of the growth factors rhBMP-2 (50 µg) and IGF-1 (50 µg) were again injected at the site of the bone defect 15 days after surgery in the respective groups. An empty defect served as the control (group A). Radiographically, bone healing was evaluated at 7, 15, 30, 45, 60 and 90 days post implantation. Qualitative histological analysis with micro-CT (µ-CT), haematoxylin and eosin (H&E) and Masson’s trichrome staining was performed 90 days after implantation. All rhBMP-2-added constructs induced the formation of well-differentiated mineralized woven bone surrounding the HASi scaffolds and bridging bone/implant interfaces as early as eight weeks after surgery. Bone regeneration appeared to develop earlier with the rhBMP-2 constructs than with the IGF-1 construct. Constructs without rhBMP-2 or IGF-1 showed osteoconductive properties limited to the bone junctions, without bone ingrowth within the implantation site. In conclusion, the addition of rhBMP-2 to a HASi scaffold could promote bone regeneration in a large critical-size defect.

  20. From mantle to critical zone: A review of large and giant sized deposits of the rare earth elements

    Directory of Open Access Journals (Sweden)

    M.P. Smith

    2016-05-01

    The rare earth elements are unusual when defining giant-sized ore deposits, as resources are often quoted as total rare earth oxide, but the importance of a deposit may be related to the grade for an individual element, or a limited group of the elements. Taking the total REE resource, only one currently known deposit (Bayan Obo) would class as giant (>1.7 × 10⁷ tonnes of contained metal), but a range of others classify as large (>1.7 × 10⁶ tonnes). With the exception of unclassified resource estimates from the Olympic Dam IOCG deposit, all of these deposits are related to alkaline igneous activity – either carbonatites or agpaitic nepheline syenites. The total resource in these deposits must relate to the scale of the primary igneous source, but the grade is a complex function of igneous source, magmatic crystallisation, hydrothermal modification and supergene enrichment during weathering. Isotopic data suggest that the sources conducive to the formation of large REE deposits are developed in subcontinental lithospheric mantle, enriched in trace elements either by plume activity or by previous subduction. The reactivation of such enriched mantle domains in relatively restricted geographical areas may have played a role in the formation of some of the largest deposits (e.g. Bayan Obo). Hydrothermal activity involving fluids from magmatic to meteoric sources may result in the redistribution of the REE and increases in grade, depending on primary mineralogy and the availability of ligands. Weathering and supergene enrichment of carbonatite has played a role in the formation of the highest grade deposits at Mount Weld (Australia) and Tomtor (Russia). For the individual REE with the current highest economic value (Nd and the HREE), the boundaries for the large and giant size classes are two orders of magnitude lower, and deposits enriched in these metals (agpaitic systems, ion absorption deposits) may have significant economic impact in the near future.

  1. Prognostic significance of tumor size of small lung adenocarcinomas evaluated with mediastinal window settings on computed tomography.

    Directory of Open Access Journals (Sweden)

    Yukinori Sakao

    BACKGROUND: We aimed to clarify that the size of the lung adenocarcinoma evaluated using the mediastinal window on computed tomography is an important and useful modality for predicting invasiveness, lymph node metastasis and prognosis in small adenocarcinoma. METHODS: We evaluated 176 patients with small lung adenocarcinomas (diameter, 1–3 cm) who underwent standard surgical resection. Tumours were examined using computed tomography with thin section conditions (1.25 mm thick on high-resolution computed tomography), with tumour dimensions evaluated under two settings: lung window and mediastinal window. We also determined the patient age, gender, preoperative nodal status, tumour size, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and pathological status (lymphatic vessel, vascular vessel or pleural invasion). Recurrence-free survival was used for prognosis. RESULTS: Lung window, mediastinal window, tumour disappearance ratio and preoperative nodal status were significant predictive factors for recurrence-free survival in univariate analyses. Areas under the receiver operator curves for recurrence were 0.76, 0.73 and 0.65 for mediastinal window, tumour disappearance ratio and lung window, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant predictive factors for lymph node metastasis in univariate analyses; areas under the receiver operator curves were 0.61, 0.76, 0.72 and 0.66 for lung window, mediastinal window, tumour disappearance ratio and preoperative serum carcinoembryonic antigen levels, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant factors for lymphatic vessel, vascular vessel or pleural invasion in univariate analyses; areas under the receiver operator curves were 0.60, 0.81, 0

  2. Prognostic Significance of Tumor Size of Small Lung Adenocarcinomas Evaluated with Mediastinal Window Settings on Computed Tomography

    Science.gov (United States)

    Sakao, Yukinori; Kuroda, Hiroaki; Mun, Mingyon; Uehara, Hirofumi; Motoi, Noriko; Ishikawa, Yuichi; Nakagawa, Ken; Okumura, Sakae

    2014-01-01

    Background We aimed to clarify that the size of the lung adenocarcinoma evaluated using mediastinal window on computed tomography is an important and useful modality for predicting invasiveness, lymph node metastasis and prognosis in small adenocarcinoma. Methods We evaluated 176 patients with small lung adenocarcinomas (diameter, 1–3 cm) who underwent standard surgical resection. Tumours were examined using computed tomography with thin section conditions (1.25 mm thick on high-resolution computed tomography) with tumour dimensions evaluated under two settings: lung window and mediastinal window. We also determined the patient age, gender, preoperative nodal status, tumour size, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and pathological status (lymphatic vessel, vascular vessel or pleural invasion). Recurrence-free survival was used for prognosis. Results Lung window, mediastinal window, tumour disappearance ratio and preoperative nodal status were significant predictive factors for recurrence-free survival in univariate analyses. Areas under the receiver operator curves for recurrence were 0.76, 0.73 and 0.65 for mediastinal window, tumour disappearance ratio and lung window, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant predictive factors for lymph node metastasis in univariate analyses; areas under the receiver operator curves were 0.61, 0.76, 0.72 and 0.66, for lung window, mediastinal window, tumour disappearance ratio and preoperative serum carcinoembryonic antigen levels, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant factors for lymphatic vessel, vascular vessel or pleural invasion in univariate analyses; areas under the receiver operator curves were 0.60, 0.81, 0

  3. Long-term resource variation and group size: A large-sample field test of the Resource Dispersion Hypothesis

    Directory of Open Access Journals (Sweden)

    Morecroft Michael D

    2001-07-01

    Background: The Resource Dispersion Hypothesis (RDH) proposes a mechanism for the passive formation of social groups where resources are dispersed, even in the absence of any benefits of group living per se. Despite supportive modelling, it lacks empirical testing. The RDH predicts that, rather than Territory Size (TS) increasing monotonically with Group Size (GS) to account for increasing metabolic needs, TS is constrained by the dispersion of resource patches, whereas GS is independently limited by their richness. We conducted multiple-year tests of these predictions using data from the long-term study of badgers Meles meles in Wytham Woods, England. The study has long failed to identify direct benefits from group living and, consequently, alternative explanations for their large group sizes have been sought. Results: TS was not consistently related to resource dispersion, nor was GS consistently related to resource richness. Results differed according to data groupings and whether territories were mapped using minimum convex polygons or traditional methods. Habitats differed significantly in resource availability, but there was also evidence that food resources may be spatially aggregated within habitat types as well as between them. Conclusions: This is, we believe, the largest ever test of the RDH and builds on the long-term project that initiated part of the thinking behind the hypothesis. Support for the predictions was mixed and depended on year and the method used to map territory borders. We suggest that within-habitat patchiness, as well as model assumptions, should be further investigated for improved tests of the RDH in the future.

  4. The potential of natural gas use including cogeneration in large-sized industry and commercial sector in Peru

    International Nuclear Information System (INIS)

    Gonzales Palomino, Raul; Nebra, Silvia A.

    2012-01-01

    In recent years there have been several discussions on a greater use of natural gas nationwide. Moreover, there have been several announcements by the private and public sectors regarding the construction of new pipelines to supply natural gas to the Peruvian southern and central-north markets. This paper presents future scenarios for the use of natural gas in the large-sized industrial and commercial sectors of the country based on different hypotheses on developments in the natural gas industry, national economic growth, energy prices, technological changes and investment decisions. First, the paper estimates the market potential and characterizes the energy consumption. Then it makes a selection of technological alternatives for the use of natural gas, and it makes an energetic and economic analysis and economic feasibility. Finally, the potential use of natural gas is calculated through nine different scenarios. The natural gas use in cogeneration systems is presented as an alternative to contribute to the installed power capacity of the country. Considering the introduction of the cogeneration in the optimistic–advanced scenario and assuming that all of their conditions would be put into practice, in 2020, the share of the cogeneration in electricity production in Peru would be 9.9%. - Highlights: ► This paper presents future scenarios for the use of natural gas in the large-sized industrial and commercial sectors of Peru. ► The potential use of natural gas is calculated through nine different scenarios. ► The scenarios were based on different hypotheses on developments in the natural gas industry, national economic growth, energy prices, technological changes and investment decisions. ► We estimated the market potential and characterized the energy consumption, and made a selection of technological alternatives for the use of natural gas.

  5. Rhesus monkeys (Macaca mulatta) show robust primacy and recency in memory for lists from small, but not large, image sets.

    Science.gov (United States)

    Basile, Benjamin M; Hampton, Robert R

    2010-02-01

    The combination of primacy and recency produces a U-shaped serial position curve typical of memory for lists. In humans, primacy is often thought to result from rehearsal, but there is little evidence for rehearsal in nonhumans. To further evaluate the possibility that rehearsal contributes to primacy in monkeys, we compared memory for lists of familiar stimuli (which may be easier to rehearse) to memory for unfamiliar stimuli (which are likely difficult to rehearse). Six rhesus monkeys saw lists of five images drawn from either large, medium, or small image sets. After presentation of each list, memory for one item was assessed using a serial probe recognition test. Across four experiments, we found robust primacy and recency with lists drawn from small and medium, but not large, image sets. This finding is consistent with the idea that familiar items are easier to rehearse and that rehearsal contributes to primacy, warranting further study of the possibility of rehearsal in monkeys. However, alternative interpretations are also viable and are discussed. Copyright 2009 Elsevier B.V. All rights reserved.

  6. Measurement of liquid mixing characteristics in large-sized ion exchange column for isotope separation by stepwise response method

    International Nuclear Information System (INIS)

    Fujine, Sachio; Saito, Keiichiro; Iwamoto, Kazumi; Itoi, Toshiaki.

    1981-07-01

    Liquid mixing in a large-sized ion exchange column for isotope separation was measured by the stepwise response method, using NaCl solution as a tracer. A 50 cm diameter column was packed with an ion exchange resin of 200 μm mean diameter. Experiments were carried out for several types of distributor and collector, which were attached to each end of the column. The smallest mixing was observed for the perforated-plate type of collector, coupled with a minimum stagnant volume above the ion exchange resin bed. The 50 cm diameter column exhibited better liquid mixing characteristics than the 2 cm diameter column, for which good lithium isotope separation performance had already been confirmed. These results indicate that a large increase in throughput is attainable by scaling up the column diameter while retaining the same isotope separation performance as the 2 cm diameter column. (author)

  7. Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors

    KAUST Repository

    Sang, Huiyan

    2011-12-01

    This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
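
    The two-part covariance approximation described above (a reduced-rank term for large-scale dependence plus a sparse correction for the small-scale residual) can be sketched numerically. The exponential kernel, knot placement and taper radius below are arbitrary choices for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense "true" covariance on n spatial sites from an exponential kernel.
n = 200
coords = rng.uniform(0, 10, size=(n, 1))
d = np.abs(coords - coords.T)
C = np.exp(-d / 2.0)

# Reduced-rank part: project onto m knots (predictive-process style).
m = 15
knots = np.linspace(0, 10, m).reshape(-1, 1)
C_nk = np.exp(-np.abs(coords - knots.T) / 2.0)   # n x m cross-covariance
C_kk = np.exp(-np.abs(knots - knots.T) / 2.0)    # m x m knot covariance
low_rank = C_nk @ np.linalg.solve(C_kk, C_nk.T)

# Sparse correction: keep only the near-diagonal part of the residual,
# which the reduced-rank term systematically underestimates.
resid = C - low_rank
taper = (d < 0.5).astype(float)
approx = low_rank + resid * taper

err_lr = np.linalg.norm(C - low_rank) / np.linalg.norm(C)
err_fs = np.linalg.norm(C - approx) / np.linalg.norm(C)
print(err_fs < err_lr)  # → True: the correction improves on pure reduced rank
```

A block-diagonal structure in the correction term, as the paper highlights, additionally makes the matrix cheap to factor and invert.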

  8. Selecting Sustainability Indicators for Small to Medium Sized Urban Water Systems Using Fuzzy-ELECTRE.

    Science.gov (United States)

    Chhipi-Shrestha, Gyan; Hewage, Kasun; Sadiq, Rehan

    2017-03-01

    Urban water systems (UWSs) are increasingly challenged from a sustainability perspective. Certain sustainability limitations of centralized UWSs and of decentralized household-level wastewater treatment can be overcome by managing UWSs at an intermediate scale, referred to as small to medium sized UWSs (SMUWSs). SMUWSs differ from large UWSs mainly in terms of smaller infrastructure, data limitations, smaller service areas, and institutional constraints. Moreover, sustainability assessment systems that evaluate an entire UWS are very limited and confined to large UWSs. This research addressed this gap by developing a set of 38 applied sustainability performance indicators (SPIs), selected using the fuzzy Elimination and Choice Translating Reality (ELECTRE) I outranking method, to assess the sustainability of SMUWSs. The developed set of SPIs can be applied to existing and new SMUWSs and also provides the flexibility to include additional SPIs in the future based on the same selection criteria.
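
    The crisp core of the ELECTRE I outranking step (before any fuzzification) reduces to a weighted concordance test between alternatives. The indicators, scores, weights and threshold below are invented for illustration, not the paper's criteria.

```python
# Three candidate indicators scored on three criteria (higher is better).
scores = {
    "A": [0.8, 0.6, 0.7],
    "B": [0.5, 0.9, 0.6],
    "C": [0.4, 0.5, 0.9],
}
weights = [0.5, 0.3, 0.2]  # criterion weights, summing to 1

def concordance(a, b):
    """Share of criterion weight on which alternative a is at least as good as b."""
    return sum(w for sa, sb, w in zip(scores[a], scores[b], weights) if sa >= sb)

names = list(scores)
threshold = 0.6  # concordance cut-off for declaring an outranking relation
outranks = {(a, b) for a in names for b in names
            if a != b and concordance(a, b) >= threshold}
print(sorted(outranks))  # → [('A', 'B'), ('A', 'C'), ('B', 'C')]
```

The fuzzy variant replaces the crisp scores and the hard `>=` comparison with membership functions, but the concordance-and-threshold logic is the same.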

  9. Preferential enrichment of large-sized very low density lipoprotein populations with transferred cholesteryl esters

    International Nuclear Information System (INIS)

    Eisenberg, S.

    1985-01-01

    The effect of lipid transfer proteins on the exchange and transfer of cholesteryl esters from rat plasma HDL2 to human very low (VLDL) and low density (LDL) lipoprotein populations was studied. The use of a combination of radiochemical and chemical methods allowed separate assessment of [³H]cholesteryl ester exchange and of cholesteryl ester transfer. VLDL-I was the preferred acceptor for transferred cholesteryl esters, followed by VLDL-II and VLDL-III. LDL did not acquire cholesteryl esters. The contribution of exchange of [³H]cholesteryl esters to total transfer was highest for LDL and decreased in reverse order along the VLDL density range. Inactivation of lecithin:cholesterol acyltransferase (LCAT) and heating the HDL2 for 60 min at 56 °C accelerated transfer and exchange of [³H]cholesteryl esters. Addition of lipid transfer proteins increased cholesterol esterification in all systems. The data demonstrate that large-sized, triglyceride-rich VLDL particles are preferred acceptors for transferred cholesteryl esters. It is suggested that enrichment of very low density lipoproteins with cholesteryl esters reflects the triglyceride content of the particles.

  10. Biases in the OSSOS Detection of Large Semimajor Axis Trans-Neptunian Objects

    Science.gov (United States)

    Gladman, Brett; Shankman, Cory; OSSOS Collaboration

    2017-10-01

    The accumulating but small set of large semimajor axis trans-Neptunian objects (TNOs) shows an apparent clustering in the orientations of their orbits. This clustering must either be representative of the intrinsic distribution of these TNOs, or else have arisen as a result of observation biases and/or statistically expected variations for such a small set of detected objects. The clustered TNOs were detected across different and independent surveys, which has led to claims that the detections are therefore free of observational bias. This apparent clustering has led to the so-called “Planet 9” hypothesis that a super-Earth currently resides in the distant solar system and causes this clustering. The Outer Solar System Origins Survey (OSSOS) is a large program that ran on the Canada-France-Hawaii Telescope from 2013 to 2017, discovering more than 800 new TNOs. One of the primary design goals of OSSOS was the careful determination of observational biases that would manifest within the detected sample. We demonstrate the striking and non-intuitive biases that exist for the detection of TNOs with large semimajor axes. The eight large semimajor axis OSSOS detections are an independent data set, of comparable size to the conglomerate samples used in previous studies. We conclude that the orbital distribution of the OSSOS sample is consistent with being detected from a uniform underlying angular distribution.

  11. Chemical and electrochemical synthesis of nano-sized TiO2 anatase for large-area photon conversion

    International Nuclear Information System (INIS)

    Babasaheb, Raghunath Sankapal; Shrikrishna, Dattatraya Sartale; Lux-Steiner, M.Ch.; Ennaoui, A.

    2006-01-01

    We report on the synthesis of nanocrystalline titanium dioxide thin films and powders by chemical and electrochemical deposition methods. Both methods are simple, inexpensive and suitable for large-scale production. Air-annealing of the films and powders at T = 500 °C leads to densely packed, nanometer-sized anatase TiO₂ particles. The obtained layers are characterized by different methods such as X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS) and atomic force microscopy (AFM). The titanium dioxide TiO₂ (anatase) phase with (101) preferred orientation has been obtained for films deposited on glass, indium-doped tin oxide (ITO) and quartz substrates. The powder obtained as a byproduct consists of TiO₂ with the anatase phase as well. (authors)

  12. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Science.gov (United States)

    Erbe, Malena; Gredler, Birgit; Seefried, Franz Reinhold; Bapst, Beat; Simianer, Henner

    2013-01-01

    Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.
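The deterministic equation of Daetwyler et al. (2010) referenced above is commonly quoted as r = sqrt(N*h2 / (N*h2 + Me)), with N the training set size, h2 the reliability of the phenotypes, and Me the number of independent chromosome segments. The exact modified form, and the value of the weighting factor w, sit behind the "[Formula: see text]" placeholders, so the sketch below is an assumption: it simply scales the standard Daetwyler term by a hypothetical w.

```python
# Hedged sketch of a Daetwyler-style accuracy curve. The modified
# equation used in the study is elided in the abstract; h2, Me and w
# below are illustrative placeholders only.
import math

def expected_accuracy(n_train, h2, Me, w=1.0):
    """Standard Daetwyler (2010) form, scaled by a hypothetical factor w."""
    return w * math.sqrt(n_train * h2 / (n_train * h2 + Me))

# accuracy grows with training set size but saturates below w
for n in (500, 2000, 8000):
    print(n, round(expected_accuracy(n, h2=0.9, Me=1000, w=0.9), 3))
```

This reproduces the qualitative behaviour the abstract describes: accuracy rises with training set size and is capped by the maximum achievable accuracy encoded in w.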

  13. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Directory of Open Access Journals (Sweden)

    Malena Erbe

Full Text Available Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.

  14. Evidence from a Large Sample on the Effects of Group Size and Decision-Making Time on Performance in a Marketing Simulation Game

    Science.gov (United States)

    Treen, Emily; Atanasova, Christina; Pitt, Leyland; Johnson, Michael

    2016-01-01

    Marketing instructors using simulation games as a way of inducing some realism into a marketing course are faced with many dilemmas. Two important quandaries are the optimal size of groups and how much of the students' time should ideally be devoted to the game. Using evidence from a very large sample of teams playing a simulation game, the study…

  15. Determination of size distribution of small DNA fragments by polyacrylamide gel electrophoresis

    International Nuclear Information System (INIS)

    Lau How Mooi

    1998-01-01

The size distribution of DNA fragments can normally be determined by agarose gel electrophoresis, including analysis of the usual DNA banding pattern. However, this method is only suitable for large DNA, i.e. fragments in the kilobase-pair to megabase-pair range. DNA shorter than a kilobase pair is difficult to quantify by the agarose gel method. Polyacrylamide gel electrophoresis, however, can be used to quantify DNA fragments shorter than a kilobase pair, down to fewer than ten base pairs. The method is also suitable for quantifying smaller DNA, single-stranded polymers, or even some proteins, provided known standards are available. In this report, the preparation of the polyacrylamide gel and the experimental set-up are described in detail. Possible uses of the method, and a comparison with DNA size standards, are also shown. The method was used to determine the amount of fragmented DNA after calf-thymus DNA had been exposed to various types and doses of radiation, with the standards used to determine the fragment sizes. The higher the dose, the larger the measured amount of small-sized DNA.

  16. Interaction between numbers and size during visual search

    NARCIS (Netherlands)

    Krause, F.; Bekkering, H.; Pratt, J.; Lindemann, O.

    2017-01-01

    The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit

  17. MOCUS, Minimal Cut Sets and Minimal Path Sets from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.; Henry, E.B.; Marshall, N.H.

    1976-01-01

1 - Description of problem or function: From a description of the Boolean failure logic of a system, called a fault tree, and control parameters specifying the minimal cut set length to be obtained, MOCUS determines the system failure modes, or minimal cut sets, and the system success modes, or minimal path sets. 2 - Method of solution: MOCUS uses direct resolution of the fault tree into the cut and path sets. The algorithm starts with the main failure of interest, the top event, and proceeds to basic independent component failures, called primary events, to resolve the fault tree into the minimal sets. A key point of the algorithm is that an AND gate alone always increases the size of cut sets (and the number of path sets), while an OR gate alone always increases the number of cut sets (and the size of path sets). Other types of logic gates must be described in terms of AND and OR gates. 3 - Restrictions on the complexity of the problem: Output from MOCUS can include minimal cut and path sets for up to 20 gates.
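The top-down resolution described in the method section can be sketched as follows. The gate names and the example tree are hypothetical, and real MOCUS also produces path sets and enforces cut-set length limits; this sketch only shows the cut-set side.

```python
# Minimal sketch of MOCUS-style top-down resolution of a fault tree
# into minimal cut sets. AND gates enlarge a cut set in place; OR gates
# multiply the number of candidate cut sets.

def mocus_cut_sets(gates, top):
    """gates: {name: ("AND"|"OR", [inputs])}; other names are basic events."""
    rows = [[top]]
    while True:
        # find any row still containing an unresolved gate
        target = next(((i, g) for i, row in enumerate(rows)
                       for g in row if g in gates), None)
        if target is None:
            break
        i, g = target
        kind, inputs = gates[g]
        rest = [e for e in rows[i] if e != g]
        if kind == "AND":                      # AND: grow this cut set
            rows[i] = rest + inputs
        else:                                  # OR: one new row per input
            rows[i:i + 1] = [rest + [inp] for inp in inputs]
    # minimize: drop any cut set that strictly contains another
    sets_ = [set(r) for r in rows]
    minimal = {frozenset(s) for s in sets_
               if not any(t < s for t in sets_)}
    return sorted(sorted(s) for s in minimal)

# hypothetical tree: TOP = AND(G1, G2); G1 = OR(A, B); G2 = OR(A, C)
tree = {"TOP": ("AND", ["G1", "G2"]),
        "G1": ("OR", ["A", "B"]),
        "G2": ("OR", ["A", "C"])}
print(mocus_cut_sets(tree, "TOP"))  # [['A'], ['B', 'C']]
```

Note how the OR expansions first produce four candidate rows, after which minimization collapses them to the two minimal cut sets {A} and {B, C}.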

  18. Sauropod dinosaurs evolved moderately sized genomes unrelated to body size.

    Science.gov (United States)

    Organ, Chris L; Brusatte, Stephen L; Stein, Koen

    2009-12-22

    Sauropodomorph dinosaurs include the largest land animals to have ever lived, some reaching up to 10 times the mass of an African elephant. Despite their status defining the upper range for body size in land animals, it remains unknown whether sauropodomorphs evolved larger-sized genomes than non-avian theropods, their sister taxon, or whether a relationship exists between genome size and body size in dinosaurs, two questions critical for understanding broad patterns of genome evolution in dinosaurs. Here we report inferences of genome size for 10 sauropodomorph taxa. The estimates are derived from a Bayesian phylogenetic generalized least squares approach that generates posterior distributions of regression models relating genome size to osteocyte lacunae volume in extant tetrapods. We estimate that the average genome size of sauropodomorphs was 2.02 pg (range of species means: 1.77-2.21 pg), a value in the upper range of extant birds (mean = 1.42 pg, range: 0.97-2.16 pg) and near the average for extant non-avian reptiles (mean = 2.24 pg, range: 1.05-5.44 pg). The results suggest that the variation in size and architecture of genomes in extinct dinosaurs was lower than the variation found in mammals. A substantial difference in genome size separates the two major clades within dinosaurs, Ornithischia (large genomes) and Saurischia (moderate to small genomes). We find no relationship between body size and estimated genome size in extinct dinosaurs, which suggests that neutral forces did not dominate the evolution of genome size in this group.

  19. Size matters: the interplay between sensing and size in aquatic environments

    Science.gov (United States)

    Wadhwa, Navish; Martens, Erik A.; Lindemann, Christian; Jacobsen, Nis S.; Andersen, Ken H.; Visser, Andre

    2015-11-01

Sensing the presence or absence of other organisms in the surroundings is critical for the survival of any aquatic organism. This is achieved via the use of various sensory modes such as chemosensing, mechanosensing, vision, hearing, and echolocation. We ask how the size of an organism determines which sensory modes are available to it and which are not. We investigate this by examining the physical laws governing signal generation, transmission, and reception, together with the limits set by physiology. Hydrodynamics plays an important role in sensing; in particular, chemosensing and mechanosensing are constrained by the physics of fluid motion at various scales. Through our analysis, we find a hierarchy of sensing modes determined by body size. We theoretically predict the body size limits for various sensory modes, which align well with size ranges found in the literature. Our analysis of all ocean life, from unicellular organisms to whales, demonstrates how body size determines the available sensing modes, and thereby acts as a major structuring factor of aquatic life. The Centre for Ocean Life is a VKR center of excellence supported by the Villum Foundation.

  20. Size-Dictionary Interpolation for Robot's Adjustment

    Directory of Open Access Journals (Sweden)

Morteza Daneshmand

    2015-05-01

Full Text Available This paper describes the classification and size-dictionary interpolation of three-dimensional laser-scanner data for use in a realistic virtual fitting room. The system automatically activates the chosen mannequin robot, while several mannequin robots of different genders and sizes are simultaneously connected to the same computer, so that the robot can mimic body shapes and sizes instantly. The classification process consists of two layers, dealing with gender and size, respectively. The interpolation procedure seeks the set of positions of the biologically-inspired actuators that makes the activated mannequin robot resemble the scanned person's body shape as closely as possible. It linearly maps the distances between subsequent size-dictionary templates to the corresponding actuator position sets, and then calculates control measures that maintain the same distance proportions; minimizing the Euclidean distance between the size-dictionary template vectors and the vector of desired body sizes determines the mathematical description. The experimental results of implementing the proposed method on Fits.me's mannequin robots are visually illustrated, and the remaining steps towards completing the whole realistic online fitting package are explained.
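The selection-and-interpolation step described above can be sketched as follows. The template vectors and actuator positions are invented placeholders, not Fits.me's actual size dictionary, and the inverse-distance blend of the two nearest templates is our reading of the "same distance proportions" idea.

```python
# Illustrative sketch: pick the nearest size-dictionary templates by
# Euclidean distance and linearly blend their actuator position sets.
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def interpolate_actuators(body, templates, actuators):
    """templates: list of size vectors; actuators: matching position sets.
    Blends the two nearest templates with inverse-distance weights."""
    order = sorted(range(len(templates)),
                   key=lambda i: euclid(body, templates[i]))
    i, j = order[0], order[1]
    di, dj = euclid(body, templates[i]), euclid(body, templates[j])
    if di == 0:                      # exact template match
        return list(actuators[i])
    wi = dj / (di + dj)              # closer template gets more weight
    return [wi * p + (1 - wi) * q
            for p, q in zip(actuators[i], actuators[j])]

# hypothetical chest/waist/hip templates (cm) and actuator positions
templates = [(90, 70, 95), (100, 80, 105)]
actuators = [(10, 12, 11), (20, 22, 21)]
print(interpolate_actuators((95, 75, 100), templates, actuators))
# → [15.0, 17.0, 16.0]
```

A body exactly halfway between the two templates yields actuator positions exactly halfway between the two stored sets, which is the proportion-preserving behaviour the abstract describes.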

  1. Hydrophobic polymers modification of mesoporous silica with large pore size for drug release

    Energy Technology Data Exchange (ETDEWEB)

    Zhu Shenmin, E-mail: smzhu@sjtu.edu.c [Shanghai Jiao Tong University, State Key Lab of Metal Matrix Composites (China); Zhang Di; Yang Na [Fudan University, Ministry of Education, Key Lab of Molecular Engineering of Polymers (China)

    2009-04-15

    Mesostructure cellular foam (MCF) materials were modified with hydrophobic polyisoprene (PI) through free radical polymerization in the pores network, and the resulting materials (MCF-PI) were investigated as matrices for drug storage. The successful synthesis of PI inside MCF was characterized by Fourier transform infrared (FT-IR), hydrogen nuclear magnetic resonance ({sup 1}H NMR), X-ray diffraction patterns (XRD) and nitrogen adsorption/desorption measurements. It was interesting to find the resultant system held a relatively large pore size (19.5 nm) and pore volume (1.02 cm{sup 3} g{sup -1}), which would benefit for drug storage. Ibuprofen (IBU) and vancomycin were selected as model drugs and loaded onto unmodified MCF and modified MCF (MCF-PI). The adsorption capacities of these model drugs on MCF-PI were observed increase as compared to that of on pure MCF, due to the trap effects induced by polyisoprene chains inside the pores. The delivery system of MCF-PI was found to be more favorable for the adsorption of IBU (31 wt%, IBU/silica), possibly attributing to the hydrophobic interaction between IBU and PI formed on the internal surface of MCF matrix. The release of drug through the porous network was investigated by measuring uptake and release of IBU.

  2. Green Lot-Sizing

    NARCIS (Netherlands)

    M. Retel Helmrich (Mathijn Jan)

    2013-01-01

    textabstractThe lot-sizing problem concerns a manufacturer that needs to solve a production planning problem. The producer must decide at which points in time to set up a production process, and when he/she does, how much to produce. There is a trade-off between inventory costs and costs associated

  3. Scaling of the Urban Water Footprint: An Analysis of 65 Mid- to Large-Sized U.S. Metropolitan Areas

    Science.gov (United States)

    Mahjabin, T.; Garcia, S.; Grady, C.; Mejia, A.

    2017-12-01

    Scaling laws have been shown to be relevant to a range of disciplines including biology, ecology, hydrology, and physics, among others. Recently, scaling was shown to be important for understanding and characterizing cities. For instance, it was found that urban infrastructure (water supply pipes and electrical wires) tends to scale sublinearly with city population, implying that large cities are more efficient. In this study, we explore the scaling of the water footprint of cities. The water footprint is a measure of water appropriation that considers both the direct and indirect (virtual) water use of a consumer or producer. Here we compute the water footprint of 65 mid- to large-sized U.S. metropolitan areas, accounting for direct and indirect water uses associated with agricultural and industrial commodities, and residential and commercial water uses. We find that the urban water footprint, computed as the sum of the water footprint of consumption and production, exhibits sublinear scaling with an exponent of 0.89. This suggests the possibility of large cities being more water-efficient than small ones. To further assess this result, we conduct additional analysis by accounting for international flows, and the effects of green water and city boundary definition on the scaling. The analysis confirms the scaling and provides additional insight about its interpretation.
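The sublinear exponent reported above comes from fitting a power law Y = c·N^β, which is linear in log space: log Y = log c + β log N. A minimal sketch of such a fit follows; the city values are synthetic placeholders that obey the law exactly, not the study's actual water footprints.

```python
# Hedged sketch: estimating a scaling exponent beta in Y = c * N**beta
# by ordinary least squares on log-transformed data.
import math

def scaling_exponent(pop, footprint):
    xs = [math.log(p) for p in pop]
    ys = [math.log(f) for f in footprint]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # OLS slope of log(footprint) on log(population)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# synthetic cities obeying Y = 2 * N**0.89 exactly
pops = [1e5, 5e5, 1e6, 5e6, 1e7]
fps = [2 * p ** 0.89 for p in pops]
print(round(scaling_exponent(pops, fps), 2))  # 0.89
```

An exponent below 1 recovered this way is what the abstract interprets as large cities being more water-efficient per capita than small ones.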

  4. Efficient motif finding algorithms for large-alphabet inputs

    Directory of Open Access Journals (Sweden)

    Pavlovic Vladimir

    2010-10-01

Full Text Available Abstract Background We consider the problem of identifying motifs, recurring or conserved patterns, in biological sequence data sets. To solve this task, we present a new deterministic algorithm for finding patterns that are embedded as exact or inexact instances in all or most of the input strings. Results The proposed algorithm (1) improves search efficiency compared to existing algorithms, and (2) scales well with the size of the alphabet. On a synthetic planted DNA motif finding problem our algorithm is over 10× more efficient than MITRA, PMSPrune, and RISOTTO for long motifs. Improvements are orders of magnitude higher in the same setting with large alphabets. On benchmark TF-binding site problems (FNP, CRP, LexA) we observed a reduction in running time of over 12×, with high detection accuracy. The algorithm was also successful in rapidly identifying protein motifs in Lipocalin, Zinc metallopeptidase, and supersecondary structure motifs for the Cadherin and Immunoglobin families. Conclusions Our algorithm reduces the computational complexity of current motif finding algorithms and demonstrates strong running time improvements over existing exact algorithms, especially in the important and difficult case of large-alphabet sequences.
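For orientation, the planted (l, d) motif problem benchmarked above asks for length-l patterns that occur in every input string with at most d mismatches. The sketch below is a naive baseline of our own, not the paper's algorithm: it is far slower than MITRA, PMSPrune or RISOTTO, and it only considers candidates that appear verbatim in the first sequence.

```python
# Naive baseline for the planted (l, d) motif problem: keep the l-mers
# of the first sequence that occur within Hamming distance d in every
# sequence. Real tools prune this search space far more cleverly.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def occurs(seq, motif, d):
    l = len(motif)
    return any(hamming(seq[i:i + l], motif) <= d
               for i in range(len(seq) - l + 1))

def naive_motifs(seqs, l, d):
    cands = {seqs[0][i:i + l] for i in range(len(seqs[0]) - l + 1)}
    return sorted(m for m in cands
                  if all(occurs(s, m, d) for s in seqs))

seqs = ["ACGTACGT", "CCGTACGA", "TTACGTAC"]
print(naive_motifs(seqs, 4, 1))  # ['ACGT', 'CGTA', 'GTAC', 'TACG']
```

The alphabet size enters through the candidate space (a full search enumerates all |Σ|^l patterns, or their d-neighborhoods), which is why scaling with alphabet size is the hard part the abstract emphasizes.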

  5. Body size limits dim-light foraging activity in stingless bees (Apidae: Meliponini).

    Science.gov (United States)

    Streinzer, Martin; Huber, Werner; Spaethe, Johannes

    2016-10-01

Stingless bees constitute a species-rich tribe of tropical and subtropical eusocial Apidae that act as important pollinators for flowering plants. Many foraging tasks rely on vision, e.g. spatial orientation and detection of food sources and nest entrances. Meliponini workers are usually small, which sets limits on eye morphology and thus quality of vision. Limitations are expected both on acuity, and thus on the ability to detect objects from a distance, and on sensitivity, and thus on the foraging time window at dusk and dawn. In this study, we determined light intensity thresholds for flight under dim light conditions in eight stingless bee species in relation to body size in a Neotropical lowland rainforest. Species varied in body size (0.8-1.7 mm thorax-width), and we found a strong negative correlation with light intensity thresholds (0.1-79 lx). Further, we measured eye size, ocelli diameter, ommatidia number, and facet diameter. All parameters significantly correlated with body size. A disproportionately low light intensity threshold in the minute Trigonisca pipioli, together with a large eye parameter (P_eye), suggests specific adaptations to circumvent the optical constraints imposed by the small body size. We discuss the implications of body size in bees on foraging behavior.

  6. The tempo and mode of evolution: body sizes of island mammals.

    Science.gov (United States)

    Raia, Pasquale; Meiri, Shai

    2011-07-01

The tempo and mode of body size evolution on islands are believed to be well known. It is thought that body size evolves relatively quickly on islands toward the mammalian modal value, thus generating extreme cases of size evolution and the island rule. Here, we tested both theories in a phylogenetically explicit context, using two different species-level mammalian phylogenetic hypotheses limited to sister clades dichotomizing into an exclusively insular and an exclusively mainland daughter node. Taken as a whole, mammals were found to show a largely punctuational mode of size evolution. We found that, accounting for this, and regardless of the phylogeny used, size evolution on islands is no faster than on the continents. We compared different selection regimes using a set of Ornstein-Uhlenbeck models to examine the effects of insularity on the mode of evolution. The models strongly supported clade-specific selection regimes. Under this regime, however, an evolutionary model allowing insular species to evolve differently from their mainland relatives performs worse than a model that ignores insularity as a factor. Thus, insular taxa do not experience statistically different selection from their mainland relatives. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.

  7. YBYRÁ facilitates comparison of large phylogenetic trees.

    Science.gov (United States)

    Machado, Denis Jacob

    2015-07-01

The number and size of tree topologies that are being compared by phylogenetic systematists is increasing due to technological advancements in high-throughput DNA sequencing. However, we still lack tools to facilitate comparison among phylogenetic trees with a large number of terminals. The "YBYRÁ" project integrates software solutions for data analysis in phylogenetics. It comprises tools for (1) topological distance calculation based on the number of shared splits or clades, (2) sensitivity analysis and automatic generation of sensitivity plots and (3) clade diagnoses based on different categories of synapomorphies. YBYRÁ also provides (4) an original framework to facilitate the search for potential rogue taxa based on how much they affect average matching split distances (using MSdist). YBYRÁ facilitates comparison of large phylogenetic trees and outperforms competing software in terms of usability and time efficiency, especially for large data sets. The programs that comprise this toolkit are written in Python, hence they do not require installation and have minimal dependencies. The entire project is available under an open-source licence at http://www.ib.usp.br/grant/anfibios/researchSoftware.html .
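The split-based distance mentioned in point (1) can be illustrated with a small sketch: each internal node of a tree induces a clade (the set of leaves below it), and two trees can be compared by counting clades present in one but not the other. This is the general Robinson-Foulds-style idea, not YBYRÁ's actual implementation; the tuple-based tree encoding is ours.

```python
# Illustrative sketch of a clade/split-based tree distance.
# Trees are nested tuples; leaves are strings.

def leaf_set(tree):
    if isinstance(tree, tuple):
        return [l for child in tree for l in leaf_set(child)]
    return [tree]

def clades(tree, out=None):
    """Collect the leaf set of every internal node (including the root)."""
    if out is None:
        out = set()
    if isinstance(tree, tuple):
        out.add(frozenset(leaf_set(tree)))
        for child in tree:
            clades(child, out)
    return out

def split_distance(t1, t2):
    # count clades found in exactly one of the two trees
    return len(clades(t1) ^ clades(t2))

t1 = ((("A", "B"), "C"), ("D", "E"))
t2 = ((("A", "C"), "B"), ("D", "E"))
print(split_distance(t1, t2))  # 2  (clade {A,B} vs clade {A,C})
```

The two trees above agree on every clade except one ({A,B} in the first, {A,C} in the second), so the symmetric difference contains two clades.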

  8. Aerodynamic Limits on Large Civil Tiltrotor Sizing and Efficiency

    Science.gov (United States)

    Acree, C W.

    2014-01-01

    The NASA Large Civil Tiltrotor (2nd generation, or LCTR2) is a useful reference design for technology impact studies. The present paper takes a broad view of technology assessment by examining the extremes of what aerodynamic improvements might hope to accomplish. Performance was analyzed with aerodynamically idealized rotor, wing, and airframe, representing the physical limits of a large tiltrotor. The analysis was repeated with more realistic assumptions, which revealed that increased maximum rotor lift capability is potentially more effective in improving overall vehicle efficiency than higher rotor or wing efficiency. To balance these purely theoretical studies, some practical limitations on airframe layout are also discussed, along with their implications for wing design. Performance of a less efficient but more practical aircraft with non-tilting nacelles is presented.

  9. Explaining religious differentials in family-size preference: Evidence from Nepal in 1996.

    Science.gov (United States)

    Pearce, Lisa D; Brauner-Otto, Sarah R; Ji, Yingchun

    2015-01-01

    We examine how religio-ethnic identity, individual religiosity, and family members' religiosity were related to preferred family size in Nepal in 1996. Analyses of survey data from the Chitwan Valley Family Study show that socio-economic characteristics and individual experiences can suppress, as well as largely account for, religio-ethnic differences in fertility preference. These religio-ethnic differentials are associated with variance in particularized theologies or general value orientations (like son preference) across groups. In addition, individual and family religiosity are both positively associated with preferred family size, seemingly because of their association with religious beliefs—beliefs that are likely to shape fertility strategies. These findings suggest the need for improvements in how we conceptualize and measure supra-individual religious influence in a variety of settings and for a range of demographically interesting outcomes.

  10. Explaining Religious Differentials in Family Size Preferences: Evidence from Nepal in 1996

    Science.gov (United States)

    Pearce, Lisa D.; Brauner-Otto, Sarah; Ji, Yingchun

    2015-01-01

    This paper presents an examination of how religio-ethnic identity, individual religiosity, and family members’ religiosity are related to preferred family size in Nepal. Analyses of survey data from the Chitwan Valley Family Study show that socioeconomic characteristics and individual experiences can suppress, as well as largely account for, religio-ethnic differences in fertility preferences. These religio-ethnic differentials are associated with variance in particularized religious theologies or general value orientations (like son preference) across groups. In addition, individual and family religiosity are both positively associated with preferred family size, seemingly because of their association with religious beliefs that are likely to shape fertility strategies. These findings suggest improvements in how we conceptualize and empirically measure supra-individual religious influence in a variety of settings and for a range of demographically interesting outcomes. PMID:25685878

  11. New Maximal Two-distance Sets

    DEFF Research Database (Denmark)

    Lisonek, Petr

    1996-01-01

A two-distance set in E^d is a point set X in the d-dimensional Euclidean space such that the distances between distinct points in X assume only two different non-zero values. Based on results from classical distance geometry, we develop an algorithm to classify, for a given dimension, all maximal (largest possible) two-distance sets in E^d. Using this algorithm we have completed the full classification for all dimensions less than or equal to 7, and we have found one set in E^8 whose maximality follows from Blokhuis' upper bound on sizes of s-distance sets. While in the dimensions less than or equal to 6...
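The defining property is easy to check computationally. The sketch below verifies it for the vertices of a regular pentagon (a classic two-distance set in E^2, with side length and diagonal as the two values); the example is ours, not from the abstract.

```python
# Check the two-distance property: exactly two distinct non-zero
# pairwise distances (up to a numerical tolerance).
import math
from itertools import combinations

def is_two_distance_set(points, tol=1e-9):
    dists = []
    for p, q in combinations(points, 2):
        d = math.dist(p, q)
        if not any(abs(d - e) < tol for e in dists):
            dists.append(d)
    return len(dists) == 2

pentagon = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
            for k in range(5)]
print(is_two_distance_set(pentagon))  # True
```

Three collinear, unevenly spaced points fail the test, since they generate three distinct distances.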

  12. Development of the quality control system of the readout electronics for the large size telescope of the Cherenkov Telescope Array observatory

    Energy Technology Data Exchange (ETDEWEB)

    Konno, Y.; Kubo, H.; Masuda, S. [Department of Physics, Graduate School of Science, Kyoto University, Kyoto (Japan); Paoletti, R.; Poulios, S. [SFTA Department, Physics Section, University of Siena and INFN, Siena (Italy); Rugliancich, A., E-mail: andrea.rugliancich@pi.infn.it [SFTA Department, Physics Section, University of Siena and INFN, Siena (Italy); Saito, T. [Department of Physics, Graduate School of Science, Kyoto University, Kyoto (Japan)

    2016-07-11

The Cherenkov Telescope Array (CTA) is the next-generation VHE γ-ray observatory, which will improve the currently available sensitivity by a factor of 10 in the range 100 GeV to 10 TeV. The array consists of different types of telescopes, called the large size telescope (LST), medium size telescope (MST) and small size telescope (SST). An LST prototype is currently being built and will be installed at the Observatorio Roque de los Muchachos on the island of La Palma, Canary Islands, Spain. The readout system for the LST prototype has been designed and around 300 readout boards will be produced in the coming months. In this note we describe an automated quality control system able to measure basic performance parameters and quickly identify faulty boards. - Highlights: • The Dragon Board is part of the DAQ of the LST Cherenkov telescope prototype. • We developed an automated quality control system for the Dragon Board. • We check pedestal, linearity, pulse shape and crosstalk values. • The quality control test can be performed on the production line.

  13. Evolution of body size in Galapagos marine iguanas.

    Science.gov (United States)

    Wikelski, Martin

    2005-10-07

    Body size is one of the most important traits of organisms and allows predictions of an individual's morphology, physiology, behaviour and life history. However, explaining the evolution of complex traits such as body size is difficult because a plethora of other traits influence body size. Here I review what we know about the evolution of body size in a group of island reptiles and try to generalize about the mechanisms that shape body size. Galapagos marine iguanas occupy all 13 larger islands in this Pacific archipelago and have maximum island body weights between 900 and 12 000g. The distribution of body sizes does not match mitochondrial clades, indicating that body size evolves independently of genetic relatedness. Marine iguanas lack intra- and inter-specific food competition and predators are not size-specific, discounting these factors as selective agents influencing body size. Instead I hypothesize that body size reflects the trade-offs between sexual and natural selection. We found that sexual selection continuously favours larger body sizes. Large males establish display territories and some gain over-proportional reproductive success in the iguanas' mating aggregations. Females select males based on size and activity and are thus responsible for the observed mating skew. However, large individuals are strongly selected against during El Niño-related famines when dietary algae disappear from the intertidal foraging areas. We showed that differences in algae sward ('pasture') heights and thermal constraints on large size are causally responsible for differences in maximum body size among populations. I hypothesize that body size in many animal species reflects a trade-off between foraging constraints and sexual selection and suggest that future research could focus on physiological and genetic mechanisms determining body size in wild animals. Furthermore, evolutionary stable body size distributions within populations should be analysed to better

  14. Can interface features affect aggression resulting from violent video game play? An examination of realistic controller and large screen size.

    Science.gov (United States)

    Kim, Ki Joon; Sundar, S Shyam

    2013-05-01

    Aggressiveness attributed to violent video game play is typically studied as a function of the content features of the game. However, can interface features of the game also affect aggression? Guided by the General Aggression Model (GAM), we examine the controller type (gun replica vs. mouse) and screen size (large vs. small) as key technological aspects that may affect the state aggression of gamers, with spatial presence and arousal as potential mediators. Results from a between-subjects experiment showed that a realistic controller and a large screen display induced greater aggression, presence, and arousal than a conventional mouse and a small screen display, respectively, and confirmed that trait aggression was a significant predictor of gamers' state aggression. Contrary to GAM, however, arousal showed no effects on aggression; instead, presence emerged as a significant mediator.

  15. Rectocele--does the size matter?

    Science.gov (United States)

    Carter, Dan; Gabel, Marc Beer

    2012-07-01

Large rectoceles (>2 cm) are believed to be associated with difficulty in evacuation, constipation, rectal pain, and rectal bleeding. The aim of our study was to determine whether rectocele size is related to patients' symptoms or defecatory parameters. We conducted a retrospective study of data collected on patients referred to our clinic for the evaluation of evacuation disorders. All patients were questioned for constipation, fecal incontinence, and irritable bowel syndrome and were assessed with dynamic perineal ultrasonography and conventional anorectal manometry. Four hundred eighty-seven women were included in our study. Rectocele was diagnosed in 106 (22%) women, and a rectocele diameter >2 cm in 93 (87%) of them. Rectocele size was not significantly related to demographic data, parity, or patients' symptoms. The severity of the symptoms was not correlated with the size or the position of the rectocele, nor was a diagnosis of irritable bowel syndrome related to rectocele size. Rectocele location and the occurrence of enterocele and intussusception were likewise unrelated to rectocele size. Full evacuation was more common in small rectoceles (79% vs. 24%, p = 0.0001), and no evacuation was more common in large rectoceles (37% vs. 0, p = 0.01). Rectal hyposensitivity and anismus were not related to rectocele size. In conclusion, only the evacuation of rectoceles was correlated with their size, and this had no clinical significance; other clinical and anatomical factors were also unassociated with size. Rectocele size alone may therefore not be an indication for surgery.

  16. Perception-action dissociation generalizes to the size-inertia illusion.

    Science.gov (United States)

    Platkiewicz, Jonathan; Hayward, Vincent

    2014-04-01

    Two objects of similar visual aspects and of equal mass, but of different sizes, generally do not elicit the same percept of heaviness in humans. The larger object is consistently felt to be lighter than the smaller, an effect known as the "size-weight illusion." When participants were asked to repeatedly lift the two objects, their grip forces were observed to adapt rapidly to the true object weight while the size-weight illusion persisted, a phenomenon interpreted as a dissociation between perception and action. We investigated whether the same phenomenon can be observed if the mass of an object is available to participants through inertial rather than gravitational cues and if the number and statistics of the stimuli are such that participants cannot remember each individual stimulus. We compared the responses of 10 participants in 2 experimental conditions, where they manipulated 33 objects having uncorrelated masses and sizes, supported by a frictionless, air-bearing slide that could be oriented vertically or horizontally. We also analyzed the participants' anticipatory motor behavior by measuring the grip force before motion onset. We found that the perceptual illusory effect was quantitatively the same in the two conditions and observed that both visual size and haptic mass had a negligible effect on the anticipatory gripping control of the participants in the gravitational and inertial conditions, despite the enormous differences in the mechanics of the two conditions and the large set of uncorrelated stimuli.

  17. Body size mediates social and environmental effects on nest building behaviour in a fish with paternal care.

    Science.gov (United States)

    Lehtonen, Topi K; Lindström, Kai; Wong, Bob B M

    2015-07-01

    Body size, social setting, and the physical environment can all influence reproductive behaviours, but their interactions are not well understood. Here, we investigated how male body size, male-male competition, and water turbidity influence nest-building behaviour in the sand goby (Pomatoschistus minutus), a marine fish with exclusive paternal care. We found that environmental and social factors affected the nest characteristics of small and large males differently. In particular, the association between male size and the level of nest elaboration (i.e. the amount of sand piled on top of the nest) was positive only under clear water conditions. Similarly, male size and nest entrance size were positively associated only in the absence of competition. Such interactions may, in turn, help to explain the persistence of variation in reproductive behaviours, which, due to their importance in offspring survival, are otherwise expected to be under strong balancing selection.

  18. Why large cells dominate estuarine phytoplankton

    Science.gov (United States)

    Cloern, James E.

    2018-01-01

    Surveys across the world oceans have shown that phytoplankton biomass and production are dominated by small cells (picoplankton) where nutrient concentrations are low, but large cells (microplankton) dominate when nutrient-rich deep water is mixed to the surface. I analyzed phytoplankton size structure in samples collected over 25 yr in San Francisco Bay, a nutrient-rich estuary. Biomass was dominated by large cells because their biomass selectively grew during blooms. Large-cell dominance appears to be a characteristic of ecosystems at the land–sea interface, and these places may therefore function as analogs to oceanic upwelling systems. Simulations with a size-structured NPZ model showed that runs of positive net growth rate persisted long enough for biomass of large, but not small, cells to accumulate. Model experiments showed that small cells would dominate in the absence of grazing, at lower nutrient concentrations, and at elevated (+5°C) temperatures. Underlying these results are two fundamental scaling laws: (1) large cells are grazed more slowly than small cells, and (2) grazing rate increases with temperature faster than growth rate. The model experiments suggest testable hypotheses about phytoplankton size structure at the land–sea interface: (1) anthropogenic nutrient enrichment increases cell size; (2) this response varies with temperature and only occurs at mid-high latitudes; (3) large-cell blooms can only develop when temperature is below a critical value, around 15°C; (4) cell size diminishes along temperature gradients from high to low latitudes; and (5) large-cell blooms will diminish or disappear where planetary warming increases temperature beyond their critical threshold.
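The two scaling laws invoked in this abstract can be made concrete with a toy calculation. The sketch below is not Cloern's NPZ model; it merely encodes (1) slower grazing on large cells and (2) grazing rising with temperature faster than growth, with invented coefficients (mu0, the Q10 values, g0) chosen so the large-cell sign change falls near the ~15 °C threshold mentioned in the abstract:

```python
def net_growth(temp_c, size="large",
               mu0=1.0, q10_growth=1.9, q10_grazing=2.8, t_ref=10.0):
    """Illustrative net specific growth rate (1/day) for one phytoplankton
    size class. All parameter values are invented for illustration."""
    # Growth and grazing both rise with temperature, grazing faster (law 2).
    growth = mu0 * q10_growth ** ((temp_c - t_ref) / 10.0)
    g0 = 0.82 if size == "large" else 1.1  # large cells grazed more slowly (law 1)
    grazing = g0 * q10_grazing ** ((temp_c - t_ref) / 10.0)
    return growth - grazing

print(net_growth(10.0, "large") > 0)   # True: cool water permits a large-cell bloom
print(net_growth(25.0, "large") > 0)   # False: grazing overtakes growth when warm
```

With these toy numbers the large-cell net rate changes sign between 14 °C and 16 °C, illustrating how a warming threshold can suppress large-cell blooms while small cells remain grazer-controlled throughout.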

  19. Analysis for preliminary evaluation of discrete fracture flow and large-scale permeability in sedimentary rocks

    International Nuclear Information System (INIS)

    Kanehiro, B.Y.; Lai, C.H.; Stow, S.H.

    1987-05-01

    Conceptual models for sedimentary rock settings that could be used in future evaluation and suitability studies are being examined through the DOE Repository Technology Program. One area of concern for the hydrologic aspects of these models is discrete fracture flow analysis as related to the estimation of the size of the representative elementary volume, evaluation of the appropriateness of continuum assumptions and estimation of the large-scale permeabilities of sedimentary rocks. A basis for preliminary analysis of flow in fracture systems of the types that might be expected to occur in low permeability sedimentary rocks is presented. The approach used involves numerical modeling of discrete fracture flow for the configuration of a large-scale hydrologic field test directed at estimation of the size of the representative elementary volume and large-scale permeability. Analysis of fracture data on the basis of this configuration is expected to provide a preliminary indication of the scale at which continuum assumptions can be made

  20. Estimation of body-size traits by photogrammetry in large mammals to inform conservation.

    Science.gov (United States)

    Berger, Joel

    2012-10-01

    Photography, including remote imagery and camera traps, has contributed substantially to conservation. However, the potential to use photography to understand demography and inform policy is limited. To have practical value, remote assessments must be reasonably accurate and widely deployable. Prior efforts to develop noninvasive methods of estimating trait size have been motivated by a desire to answer evolutionary questions, measure physiological growth, or, in the case of illegal trade, assess economics of horn sizes; but rarely have such methods been directed at conservation. Here I demonstrate a simple, noninvasive photographic technique and address how knowledge of values of individual-specific metrics bears on conservation policy. I used 10 years of data on juvenile moose (Alces alces) to examine whether body size and probability of survival are positively correlated in cold climates. I investigated whether the presence of mothers improved juvenile survival. The latter, posited relation is relevant to policy because harvest of adult females has been permitted in some Canadian and American jurisdictions under the assumption that probability of survival of young is independent of maternal presence. The accuracy of estimates of head sizes made from photographs exceeded 98%. The estimates revealed that overwinter juvenile survival had no relation to the juvenile's estimated mass (p < 0.64) and was more strongly associated with maternal presence (p < 0.02) than winter snow depth (p < 0.18). These findings highlight the effects on survival of a social dynamic (the mother-young association) rather than body size and suggest a change in harvest policy will increase survival. Furthermore, photographic imaging of growth of individual juvenile muskoxen (Ovibos moschatus) over 3 Arctic winters revealed annual variability in size, which supports the idea that noninvasive monitoring may allow one to detect how some environmental conditions ultimately affect body growth.
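The core photogrammetric step, scaling a pixel measurement by an in-frame reference of known size, reduces to a one-line ratio. The sketch below is a hypothetical simplification, not Berger's calibrated protocol; the trait, reference object, and numbers are invented:

```python
def estimate_trait_size(trait_px, ref_px, ref_true_cm):
    """Scale a trait's length in pixels by a reference object of known
    size photographed in (approximately) the same plane as the trait."""
    return trait_px * (ref_true_cm / ref_px)

# Hypothetical frame: a 40 cm reference spans 800 px, the trait spans 1300 px.
print(estimate_trait_size(1300, 800, 40.0))  # 65.0 cm
```

In practice the accuracy the abstract reports (>98%) depends on the reference and trait sharing a plane roughly perpendicular to the optical axis; out-of-plane geometry breaks the simple ratio.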

  1. Leaf transpiration plays a role in phosphorus acquisition among a large set of chickpea genotypes.

    Science.gov (United States)

    Pang, Jiayin; Zhao, Hongxia; Bansal, Ruchi; Bohuon, Emilien; Lambers, Hans; Ryan, Megan H; Siddique, Kadambot H M

    2018-01-09

    Low availability of inorganic phosphorus (P) is considered a major constraint for crop productivity worldwide. A unique set of 266 chickpea (Cicer arietinum L.) genotypes, originating from 29 countries and with diverse genetic background, was used to study P-use efficiency. Plants were grown in pots containing sterilized river sand supplied with P at a rate of 10 μg P g⁻¹ soil as FePO₄, a poorly soluble form of P. The results showed large genotypic variation in plant growth, shoot P content, physiological P-use efficiency, and P-utilization efficiency in response to low P supply. Further investigation of a subset of 100 chickpea genotypes with contrasting growth performance showed significant differences in photosynthetic rate and photosynthetic P-use efficiency. A positive correlation was found between leaf P concentration and transpiration rate of the young fully expanded leaves. For the first time, our study has suggested a role of leaf transpiration in P acquisition, consistent with transpiration-driven mass flow in chickpea grown in low-P sandy soils. The identification of 6 genotypes with high plant growth, P-acquisition, and P-utilization efficiency suggests that the chickpea reference set can be used in breeding programmes to improve both P-acquisition and P-utilization efficiency under low-P conditions. © 2018 John Wiley & Sons Ltd.

  2. How do low dispersal species establish large range sizes? The case of the water beetle Graphoderus bilineatus

    DEFF Research Database (Denmark)

    Iversen, Lars Lønsmann; Rannap, Riinu; Thomsen, Philip Francis

    2013-01-01

    important than species phylogeny or local spatial attributes. In this study we used the water beetle Graphoderus bilineatus, a philopatric species of conservation concern in Europe, as a model to explain large range size and to support effective conservation measures for such species that also have limited...... systems and wetlands which used to be highly connected throughout the central plains of Europe. Our data suggest that a broad habitat niche can prevent landscape elements from becoming barriers for species like G. bilineatus. Therefore, we question the usefulness of site protection as conservation...... measures for G. bilineatus and similar philopatric species. Instead, conservation actions should be focused at the landscape level to ensure a long-term viability of such species across their range....

  3. Helium ion distributions in a 4 kJ plasma focus device by 1 mm-thick large-size polycarbonate detectors

    Energy Technology Data Exchange (ETDEWEB)

    Sohrabi, M., E-mail: dr_msohrabi@yahoo.com; Habibi, M.; Ramezani, V.

    2014-11-14

    Helium ion beam profile, angular and iso-ion beam distributions in the 4 kJ Amirkabir plasma focus (APF) device were effectively observed by the unaided eye and studied in single 1 mm-thick large-diameter (20 cm) polycarbonate track detectors (PCTD). The PCTDs were processed by 50 Hz–HV electrochemical etching using a large-size ECE chamber. The results show that helium ions produced in the APF device have a ring-shaped angular distribution peaked at an angle of ∼±60° with respect to the top of the anode. Some information on the helium ion energy and distributions is also provided. The method is highly effective for ion beam studies. - Highlights: • Helium iso-ion beam profile and angular distributions were studied in the 4 kJ APF device. • Large-area 1 mm-thick polycarbonate detectors were processed by 50 Hz-HV ECE. • Helium ion beam profile and distributions were observed by the unaided eye in a single detector. • Helium ion profile has ring-shaped distributions with energies lower at the ring location. • Helium iso-ion track density, diameter and energy distributions are estimated.

  4. Hierarchical Cantor set in the large scale structure with torus geometry

    Energy Technology Data Exchange (ETDEWEB)

    Murdzek, R. [Physics Department, ' Al. I. Cuza' University, Blvd. Carol I, Nr. 11, Iassy 700506 (Romania)], E-mail: rmurdzek@yahoo.com

    2008-12-15

    The formation of large scale structures is considered within a model with a string on a toroidal space-time. Firstly, the space-time geometry is presented. In this geometry, the Universe is represented by a string describing a torus surface. Thereafter, the large scale structure of the Universe is derived from the string oscillations. The results are in agreement with the cellular structure of the large scale distribution and with the theory of a Cantorian space-time.

  5. Criticality studies of fast assemblies with the new 27-group cross-section set

    International Nuclear Information System (INIS)

    Garg, S.B.; Shukla, V.K.

    1976-01-01

    A test of the 27-group cross-section set (Garg-set), recently derived from the ENDF/B library, has been carried out in criticality studies of Pu-239, U-235 and U-233 based metal, oxide and carbide fuelled fast critical assemblies. A total of twenty fast critical assemblies of different sizes and varying neutron spectra have been selected for analysis. Based on these analyses it has been observed that the Garg-set predicts well the criticality of uranium and plutonium based hard-spectra assemblies. In the soft-spectra systems it underpredicts criticality for the following reasons: (a) It makes use of the higher capture cross-sections of structural and coolant elements given in the ENDF/B Version IV library. (b) It does not account for the resonance self-shielding effects of cross-sections. It has also been observed that the Garg-set gives better results than the MABBN-set for dense and dilute plutonium-based and the hard uranium-based assemblies. This superior trend of the Garg-set is slightly lost in the uranium-based dilute systems because of large differences in the capture cross-sections of structural elements of these two sets. (author)

  6. Efficacy of formative evaluation using a focus group for a large classroom setting in an accelerated pharmacy program.

    Science.gov (United States)

    Nolette, Shaun; Nguyen, Alyssa; Kogan, David; Oswald, Catherine; Whittaker, Alana; Chakraborty, Arup

    2017-07-01

    Formative evaluation is a process utilized to improve communication between students and faculty. This evaluation method allows the ability to address pertinent issues in a timely manner; however, implementation of formative evaluation can be a challenge, especially in a large classroom setting. Using mediated formative evaluation, the purpose of this study is to determine if a student-based focus group is a viable option to improve efficacy of communication between an instructor and students, as well as time management, in a large classroom setting. Out of 140 total students, six students were selected to form a focus group, one from each of six total sections of the classroom. Each focus group representative was responsible for collecting all the questions from students of their corresponding sections and submitting them to the instructor two to three times a day. Responses from the instructor were either passed back to pertinent students by the focus group representatives or addressed directly with students by the instructor. This study was conducted using a fifteen-question survey after the focus group model was utilized for one month. A printed copy of the survey was distributed in the class by student investigators. Questions were of varying types, including Likert scale, yes/no, and open-ended response. One hundred forty surveys were administered, and 90 complete responses were collected. Surveys showed that 93.3% of students found that use of the focus group made them more likely to ask questions for understanding. The surveys also showed 95.5% of students found utilizing the focus group for questions allowed for better understanding of difficult concepts. General open-ended answer portions of the survey showed that most students found the focus group allowed them to ask questions more easily since they did not feel intimidated by asking in front of the whole class. No correlation was found between demographic characteristics and survey responses.

  7. Quench protection and design of large high-current-density superconducting magnets

    International Nuclear Information System (INIS)

    Green, M.A.

    1981-03-01

    Although most large superconducting magnets have been designed using the concept of cryostability, there is increased need for large magnets which operate at current densities above the cryostable limit (greater than 10⁸ A m⁻²). Large high current density superconducting magnets are chosen for the following reasons: reduced mass, reduced coil thickness or size, and reduced cost. The design of large high current density, adiabatically stable, superconducting magnets requires a very different set of design rules than either large cryostable superconducting magnets or small self-protected high current density magnets. The problems associated with large high current density superconducting magnets fall into three categories: (a) quench protection, (b) stress and training, and (c) cryogenic design. The three categories must be considered simultaneously. The paper discusses quench protection and its implication for magnets of large stored energies (this includes strings of smaller magnets). Training and its relationship to quench protection and magnetic strain are discussed. Examples of magnets, built at the Lawrence Berkeley Laboratory and elsewhere using the design guidelines given in this report, are presented.

  8. Novel Visualization of Large Health Related Data Sets

    Science.gov (United States)

    2015-03-01

    [Abstract garbled in the source record. Legible fragments mention lower all-cause mortality; large cross-sectional studies of populations such as the National Health and Nutrition Examination Survey; impaired renal and hepatic metabolism; decreased dietary intake related to anorexia or nausea; falsely low HbA1c secondary to uremia; a citation to Renal Nutrition 2009;19(1):33-37; and the 2014 Workshop on Visual Analytics in Healthcare.]

  9. Energetic constraints, size gradients, and size limits in benthic marine invertebrates.

    Science.gov (United States)

    Sebens, Kenneth P

    2002-08-01

    Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provides a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.
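The energetic optimal size argument in this abstract can be sketched numerically: choose an intake curve that scales sublinearly with size, a cost curve that scales faster, find the size maximizing the difference, and compare it with the mechanical maximum. The exponents and coefficients below are hypothetical, picked only to yield an interior optimum; this is not Sebens' parameterization:

```python
def net_energy(s, c_in=10.0, a=0.67, c_out=2.0, b=1.0):
    """Net energy at body size s: intake c_in * s**a (surface-limited
    feeding, a < 1) minus metabolic cost c_out * s**b. Values illustrative."""
    return c_in * s**a - c_out * s**b

sizes = [0.5 * k for k in range(1, 201)]   # candidate sizes on a coarse grid
s_opt = max(sizes, key=net_energy)         # energetic optimal size
s_max_mech = 20.0                          # hypothetical wave-dislodgment limit

# Per the abstract: if s_opt lies well below the mechanical limit, energetics
# dominate the observed size gradient; if s_opt exceeds it, mechanical
# dislodgment mortality does.
regime = "energetics-limited" if s_opt < s_max_mech else "mechanics-limited"
print(s_opt, regime)
```

With these toy constants the energetic optimum (about 39 size units) exceeds the assumed mechanical limit, so dislodgment would truncate the population below its energetic optimum, the second regime described in the abstract.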

  10. Development of a composite large-size SiPM (assembled matrix) based modular detector cluster for MAGIC

    Science.gov (United States)

    Hahn, A.; Mazin, D.; Bangale, P.; Dettlaff, A.; Fink, D.; Grundner, F.; Haberer, W.; Maier, R.; Mirzoyan, R.; Podkladkin, S.; Teshima, M.; Wetteskind, H.

    2017-02-01

    The MAGIC collaboration operates two 17 m diameter Imaging Atmospheric Cherenkov Telescopes (IACTs) on the Canary Island of La Palma. Each of the two telescopes is currently equipped with a photomultiplier tube (PMT) based imaging camera. Due to the advances in the development of Silicon Photomultipliers (SiPMs), they are becoming a widely used alternative to PMTs in many research fields including gamma-ray astronomy. Within the Otto-Hahn group at the Max Planck Institute for Physics, Munich, we are developing a SiPM based detector module for a possible upgrade of the MAGIC cameras and also for future experiments such as the Large Size Telescopes (LST) of the Cherenkov Telescope Array (CTA). Because of the small size of individual SiPM sensors (6 mm×6 mm) with respect to the 1-inch diameter PMTs currently used in MAGIC, we use a custom-made matrix of SiPMs to cover the same detection area. We developed an electronic circuit to actively sum up and amplify the SiPM signals. Existing non-imaging hexagonal light concentrators (Winston cones) used in MAGIC have been modified for the angular acceptance of the SiPMs by using C++ based ray tracing simulations. The first prototype detector module includes seven channels and was installed in the MAGIC camera in May 2015. We present the results of the first prototype and its performance as well as the status of the project and discuss its challenges.

  11. Silk elasticity as a potential constraint on spider body size.

    Science.gov (United States)

    Rodríguez-Gironés, Miguel A; Corcobado, Guadalupe; Moya-Laraño, Jordi

    2010-10-07

    Silk is known for its strength and extensibility and has played a key role in the radiation of spiders. Individual spiders use different glands to produce silk types with unique sets of proteins. Most research has studied the properties of major ampullate and capture spiral silks and their ecological implications, while little is known about minor ampullate silk, the type used by those spider species studied to date for bridging displacements. A biomechanical model parameterised with available data shows that the minimum radius of silk filaments required for efficient bridging grows with the square root of the spider's body mass, faster than the radius of minor ampullate silk filaments actually produced by spiders. Because the morphology of spiders adapted to walking along or under silk threads is ill-suited for moving on a solid surface, for these species there is a negative relationship between body mass and displacement ability. As it stands, the model suggests that spiders that use silk for their displacements are prevented from attaining a large body size if they must track their resources in space. In particular, silk elasticity would favour sexual size dimorphism because males that must use bridging lines to search for females cannot grow large. © 2010 Elsevier Ltd. All rights reserved.
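The constraint follows from two scaling exponents: the radius needed for efficient bridging grows as the square root of body mass, while the radius of silk the spider actually spins grows more slowly. The sketch below uses invented constants (k_req, k_prod, b) purely to show how a crossover mass emerges; it is not the authors' fitted model:

```python
import math

def required_radius(mass_g, k_req=0.010):
    """Minimum silk radius (mm) for efficient bridging: ~ sqrt(body mass),
    per the biomechanical model. Constant is hypothetical."""
    return k_req * math.sqrt(mass_g)

def produced_radius(mass_g, k_prod=0.012, b=0.30):
    """Radius of minor ampullate silk actually produced: grows with mass
    more slowly than sqrt (exponent b < 0.5). Constants are hypothetical."""
    return k_prod * mass_g**b

# Crossover mass above which produced silk is thinner than required:
# solve k_prod * m**b = k_req * m**0.5 for m.
m_cross = (0.012 / 0.010) ** (1 / (0.5 - 0.30))
print(round(m_cross, 2))  # 2.49: beyond this toy mass, bridging becomes inefficient
```

Because the required exponent (0.5) exceeds the produced exponent (b), the two curves always cross once, so some ceiling on bridging body mass exists regardless of the particular constants chosen.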

  12. Investigation of Low-Cost Surface Processing Techniques for Large-Size Multicrystalline Silicon Solar Cells

    Directory of Open Access Journals (Sweden)

    Yuang-Tung Cheng

    2010-01-01

    Full Text Available The subject of the present work is to develop a simple and effective method of enhancing conversion efficiency in large-size solar cells using multicrystalline silicon (mc-Si) wafers. In this work, industrial-type mc-Si solar cells with an area of 125×125 mm² were acid etched, followed by POCl3 emitter formation and silicon nitride deposition by plasma-enhanced chemical vapor deposition (PECVD). The surface morphology and reflectivity of the different etched mc-Si surfaces are also discussed. Using our optimal acid etching solution ratio, we are able to fabricate mc-Si solar cells of 16.34% conversion efficiency with a double-layer silicon nitride (Si3N4) coating. From our experiment, we find that depositing a double-layer silicon nitride coating on mc-Si solar cells gives the optimal performance parameters: the open-circuit voltage (Voc) is 616 mV, the short-circuit current density (Jsc) is 34.1 mA/cm², and the minority carrier diffusion length is 474.16 μm. The isotropic texturing and silicon nitride coating approach contributes to lowering cost and achieving high efficiency in mass production.

  13. Solution approach for a large scale personnel transport system for a large company in Latin America

    Energy Technology Data Exchange (ETDEWEB)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-07-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The proposed routing system can be applied to many cities in the Latin-American region. Design/methodology/approach: The system was modelled as a VRP, considering the use of real-world transit times and the fact that routes start at the farthest point from the destination center. Experiments were performed on sets of service points of different sizes. As the size of the instances was increased, the performance of the heuristic method was assessed against the results of an exact algorithm, and the two remained very close. When the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm still provided a feasible solution. Supported by validation on smaller-scale instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of cities in Latin America is unlike that of other regions of the world. The general layout of the large cities in this region includes a small, usually antique, town center and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. Combining a heuristic VRP with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.

  14. Solution approach for a large scale personnel transport system for a large company in Latin America

    International Nuclear Information System (INIS)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-01-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The proposed routing system can be applied to many cities in the Latin-American region. Design/methodology/approach: The system was modelled as a VRP, considering the use of real-world transit times and the fact that routes start at the farthest point from the destination center. Experiments were performed on sets of service points of different sizes. As the size of the instances was increased, the performance of the heuristic method was assessed against the results of an exact algorithm, and the two remained very close. When the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm still provided a feasible solution. Supported by validation on smaller-scale instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of cities in Latin America is unlike that of other regions of the world. The general layout of the large cities in this region includes a small, usually antique, town center and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. Combining a heuristic VRP with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.

  15. Solution approach for a large scale personnel transport system for a large company in Latin America

    Directory of Open Access Journals (Sweden)

    Eduardo-Arturo Garzón-Garnica

    2017-10-01

    Full Text Available Purpose: The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The proposed routing system can be applied to many cities in the Latin-American region. Design/methodology/approach: The system was modelled as a VRP, considering the use of real-world transit times and the fact that routes start at the farthest point from the destination center. Experiments were performed on sets of service points of different sizes. As the size of the instances was increased, the performance of the heuristic method was assessed against the results of an exact algorithm, and the two remained very close. When the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm still provided a feasible solution. Supported by validation on smaller-scale instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of cities in Latin America is unlike that of other regions of the world. The general layout of the large cities in this region includes a small, usually antique, town center and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. Combining a heuristic VRP with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.
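The route construction described in these records can be caricatured in a few lines: start each route at the service point farthest from the destination center and extend it greedily. The sketch below is a generic nearest-neighbour heuristic on made-up coordinates, not the authors' algorithm, but it shows the kind of feasible solution a VRP heuristic returns when an exact solver becomes too slow:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def farthest_first_route(depot, stops):
    """Greedy route sketch: begin at the stop farthest from the destination
    center (as in the paper's setup), then repeatedly visit the nearest
    unvisited stop, finishing at the depot. A generic nearest-neighbour
    construction, not the authors' exact method."""
    remaining = list(stops)
    current = max(remaining, key=lambda p: dist(p, depot))  # farthest start
    route = [current]
    remaining.remove(current)
    while remaining:
        current = min(remaining, key=lambda p: dist(p, current))
        route.append(current)
        remaining.remove(current)
    route.append(depot)  # the route terminates at the destination center
    return route

stops = [(0, 5), (1, 1), (4, 4), (6, 1), (9, 8)]
route = farthest_first_route((0, 0), stops)
print(route[0])  # (9, 8): the stop farthest from the depot
```

Such a construction runs in O(n²) time, so it scales to hundreds of service points; an exact VRP solver, by contrast, faces exponential worst-case growth, which is why the paper falls back on the heuristic for full-scale instances.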

  16. Simulation and analysis of the soot particle size distribution in a turbulent nonpremixed flame

    KAUST Repository

    Lucchesi, Marco

    2017-02-05

    A modeling framework based on Direct Simulation Monte Carlo (DSMC) is employed to simulate the evolution of the soot particle size distribution in turbulent sooting flames. The stochastic reactor describes the evolution of soot in fluid parcels following Lagrangian trajectories in a turbulent flow field. The trajectories are sampled from a Direct Numerical Simulation (DNS) of an n-heptane turbulent nonpremixed flame. The DSMC method is validated against experimentally measured size distributions in laminar premixed flames and found to reproduce quantitatively the experimental results, including the appearance of the second mode at large aggregate sizes and the presence of a trough at mobility diameters in the range 3–8 nm. The model is then applied to the simulation of soot formation and growth in simplified configurations featuring a constant concentration of soot precursors and the evolution of the size distribution in time is found to depend on the intensity of the nucleation rate. Higher nucleation rates lead to a higher peak in number density and to the size distribution attaining its second mode sooner. The ensemble-averaged PSDF in the turbulent flame is computed from individual samples of the PSDF from large sets of Lagrangian trajectories. This statistical measure is equivalent to time-averaged scanning mobility particle sizer (SMPS) measurements in turbulent flames. Although individual trajectories display strong bimodality as in laminar flames, the ensemble-average PSDF possesses only one mode and a long, broad tail, which implies significant polydispersity induced by turbulence. Our results agree very well with SMPS measurements available in the literature. Conditioning on key features of the trajectory, such as mixture fraction or radial locations does not reduce the scatter in the size distributions and the ensemble-averaged PSDF remains broad. The results highlight and explain the important role of turbulence in broadening the size distribution of
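The stochastic population-balance idea behind a DSMC soot solver can be illustrated with a toy nucleation-coagulation process. The sketch below uses a constant collision kernel and omits surface growth, oxidation, and the DNS coupling entirely; every rate and count is invented for illustration:

```python
import random

def dsmc_coagulation(n_events=20000, p_nucleation=0.5, seed=0):
    """Minimal DSMC-style sketch of a soot population balance: at each event
    either a new size-1 particle nucleates, or two randomly chosen particles
    coagulate (constant kernel). Illustrates only the nucleation/coagulation
    competition described in the abstract, not the paper's full model."""
    rng = random.Random(seed)
    particles = [1, 1]                      # particle sizes in monomer units
    for _ in range(n_events):
        if rng.random() < p_nucleation or len(particles) < 2:
            particles.append(1)             # inception of a new particle
        else:
            i, j = rng.sample(range(len(particles)), 2)
            particles[i] += particles[j]    # coagulation merges the pair
            del particles[j]
    return particles

psd = dsmc_coagulation()
print(len(psd), max(psd))  # number of particles vs. largest aggregate size
```

Raising p_nucleation floods the population with fresh size-1 particles, pushing the number density up, the qualitative trend the abstract reports for higher nucleation rates; averaging many such runs (analogous to many Lagrangian trajectories) smooths the per-run bimodality into one broad distribution.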

  17. The potential distributions, and estimated spatial requirements and population sizes, of the medium to large-sized mammals in the planning domain of the Greater Addo Elephant National Park project

    Directory of Open Access Journals (Sweden)

    A.F. Boshoff

    2002-12-01

    Full Text Available The Greater Addo Elephant National Park project (GAENP) involves the establishment of a mega biodiversity reserve in the Eastern Cape, South Africa. Conservation planning in the GAENP planning domain requires systematic information on the potential distributions, estimated spatial requirements and population sizes of the medium to large-sized mammals. The potential distribution of each species is based on a combination of a literature survey, a review of their ecological requirements, and consultation with conservation scientists and managers. Spatial requirements were estimated within 21 Mammal Habitat Classes derived from 43 Land Classes delineated by expert-based vegetation and river mapping procedures. These estimates were derived from spreadsheet models based on forage availability estimates and the metabolic requirements of the respective mammal species, and incorporate modifications of the agriculture-based Large Stock Unit approach. The potential population size of each species was calculated by multiplying its density estimate by the area of suitable habitat. Population sizes were calculated for pristine, or near pristine, habitats alone, and then for these habitats together with potentially restorable habitats for two park planning domain scenarios. These data will enable (a) the measurement of the effectiveness of the GAENP in achieving predetermined demographic, genetic and evolutionary targets for mammals that can potentially occur in selected park sizes and configurations, (b) decisions regarding acquisition of additional land to achieve these targets to be informed, (c) the identification of species for which targets can only be met through metapopulation management, (d) park managers to be guided regarding the re-introduction of appropriate species, and (e) the application of realistic stocking rates. Where possible, the model predictions were tested by comparison with empirical data, which in general corroborated them.
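The core calculation, potential population size as density estimate times area of suitable habitat summed over habitat classes, is simple enough to sketch. The habitat classes, densities and areas below are hypothetical placeholders, not figures from the study.

```python
def potential_population(densities, areas):
    """Potential population size: sum over habitat classes of density * suitable area."""
    return sum(densities[h] * areas.get(h, 0.0) for h in densities)

# Hypothetical inputs for one species (illustration only, not the study's figures):
density_per_km2 = {"thicket": 2.5, "grassland": 0.8}      # animals per km^2
suitable_area_km2 = {"thicket": 400.0, "grassland": 150.0}

total = potential_population(density_per_km2, suitable_area_km2)  # 1000 + 120 animals
```

Running the same function over pristine-only versus pristine-plus-restorable area tables gives the two planning-domain scenarios the abstract mentions.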

  18. Distribution of genotype network sizes in sequence-to-structure genotype-phenotype maps.

    Science.gov (United States)

    Manrubia, Susanna; Cuesta, José A

    2017-04-01

    An essential quantity to ensure evolvability of populations is the navigability of the genotype space. Navigability, understood as the ease with which alternative phenotypes are reached, relies on the existence of sufficiently large and mutually attainable genotype networks. The size of genotype networks (e.g. the number of RNA sequences folding into a particular secondary structure or the number of DNA sequences coding for the same protein structure) is astronomically large in all functional molecules investigated: an exhaustive experimental or computational study of all RNA folds or all protein structures becomes impossible even for moderately long sequences. Here, we analytically derive the distribution of genotype network sizes for a hierarchy of models which successively incorporate features of increasingly realistic sequence-to-structure genotype-phenotype maps. The main feature of these models is the characterization of each phenotype through a prototypical sequence whose sites admit a variable fraction of letters of the alphabet. Our models interpolate between two limit distributions: a power-law distribution, when the ordering of sites in the prototypical sequence is strongly constrained, and a lognormal distribution, as suggested for RNA, when different orderings of the same set of sites yield different phenotypes. Our main result is the qualitative and quantitative identification of those features of sequence-to-structure maps that lead to different distributions of genotype network sizes. © 2017 The Author(s).
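The lognormal limit has a simple intuition that can be sketched: if each site of the prototypical sequence independently admits some number of letters, the network size is the product of the per-site counts, so its logarithm is a sum of independent terms and tends to a normal distribution by the central limit theorem. The uniform choice of 1-4 admissible letters per site below is an illustrative assumption, not one of the paper's models.

```python
import math
import random

def network_size(site_choices):
    """Genotype network size: product over sites of the number of admissible letters."""
    size = 1
    for c in site_choices:
        size *= c
    return size

ALPHABET = 4   # e.g. RNA
LENGTH = 50    # sequence length (assumed)
rng = random.Random(1)

log_sizes = []
for _ in range(5000):
    # Each site of the prototypical sequence admits 1..ALPHABET letters (assumed uniform).
    sites = [rng.randint(1, ALPHABET) for _ in range(LENGTH)]
    log_sizes.append(math.log(network_size(sites)))

# log-size is a sum of independent per-site terms -> approximately normal (CLT),
# i.e. the network sizes themselves are approximately lognormally distributed.
mean = sum(log_sizes) / len(log_sizes)
var = sum((x - mean) ** 2 for x in log_sizes) / len(log_sizes)
```

A histogram of `log_sizes` is close to Gaussian; obtaining the power-law limit instead requires the strongly constrained site orderings the abstract refers to, which this sketch does not model.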

  19. Reduced clot debris size using standing waves formed via high intensity focused ultrasound

    Science.gov (United States)

    Guo, Shifang; Du, Xuan; Wang, Xin; Lu, Shukuan; Shi, Aiwei; Xu, Shanshan; Bouakaz, Ayache; Wan, Mingxi

    2017-09-01

    The feasibility of utilizing high intensity focused ultrasound (HIFU) to induce thrombolysis has been demonstrated previously. However, clinical concerns remain that the clot debris produced by fragmentation of the original clot may be too large, occluding downstream vessels and causing hazardous emboli. This study investigates the use of standing wave fields formed via HIFU to disintegrate the thrombus while achieving a reduced clot debris size in vitro. The results showed that the average diameter of the clot debris, calculated by volume percentage, was smaller in the standing wave mode than in the travelling wave mode at identical ultrasound thrombolysis settings. Furthermore, the inertial cavitation dose was shown to be lower in the standing wave mode, while the estimated cavitation bubble size distribution was similar in both modes. These results show that the reduction of the clot debris size with standing waves may be attributed to particle trapping in the acoustic potential wells, which promoted particle fragmentation.
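The trapping mechanism invoked above can be illustrated with the textbook standing-wave field: two counter-propagating waves superpose to p(x, t) = 2 p0 cos(kx) cos(wt), and particles with positive acoustic contrast collect near the pressure nodes, spaced half a wavelength apart. The 1.1 MHz frequency and 1500 m/s sound speed below are generic assumed values, not the study's settings.

```python
import math

def standing_wave_pressure(x, t, p0=1.0, f=1.1e6, c=1500.0):
    """Superposition of counter-propagating waves: p(x, t) = 2 p0 cos(k x) cos(w t)."""
    k = 2 * math.pi * f / c   # wavenumber
    w = 2 * math.pi * f       # angular frequency
    return 2 * p0 * math.cos(k * x) * math.cos(w * t)

def pressure_nodes(f=1.1e6, c=1500.0, n_nodes=3):
    """Locations where cos(k x) = 0: candidate trapping sites, half a wavelength apart."""
    lam = c / f
    return [lam / 4 + n * lam / 2 for n in range(n_nodes)]

nodes = pressure_nodes()  # first node ~0.34 mm from x = 0 for these assumed values
```

In a travelling-wave mode there are no such fixed nodes, which is consistent with the abstract attributing the smaller debris size specifically to the standing-wave configuration.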

  20. FR-type radio sources in COSMOS: relation of radio structure to size, accretion modes and large-scale environment

    Science.gov (United States)

    Vardoulaki, Eleni; Faustino Jimenez Andrade, Eric; Delvecchio, Ivan; Karim, Alexander; Smolčić, Vernesa; Magnelli, Benjamin; Bertoldi, Frank; Schinnener, Eva; Sargent, Mark; Finoguenov, Alexis; VLA COSMOS Team

    2018-01-01

    The radio sources associated with active galactic nuclei (AGN) can exhibit a variety of radio structures, from simple to more complex, giving rise to a variety of classification schemes. The question that remains open, as deeper surveys reveal new populations of radio sources, is whether this plethora of radio structures can be attributed to the physical properties of the host or to the environment. Here we present an analysis of the radio structure of radio-selected AGN from the VLA-COSMOS Large Project at 3 GHz (JVLA-COSMOS; Smolčić et al.) in relation to: 1) their linear projected size, 2) the Eddington ratio, and 3) the environment their hosts lie within. We classify these as FRI (jet-like) and FRII (lobe-like) based on the FR-type classification scheme, and compare them to a sample of jet-less radio AGN in JVLA-COSMOS. We measure their linear projected sizes using a semi-automatic machine learning technique. Their Eddington ratios are calculated from X-ray data available for COSMOS. As environmental probes we take the X-ray groups (hundreds of kpc) and the density fields (~Mpc scale) in COSMOS. We find that FRII radio sources are on average larger than FRIs, which agrees with the literature. But contrary to past studies, we find no dichotomy in FR objects in JVLA-COSMOS given their Eddington ratios, as on average they exhibit similar values. Furthermore, our results show that the large-scale environment does not explain the observed dichotomy in lobe- and jet-like FR-type objects, as both types are found in similar environments; it does, however, affect the shape of the radio structure, introducing bends for objects closer to the centre of an X-ray group.
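Converting a measured angular size into the linear projected size discussed above requires the angular-diameter distance at the source redshift. A minimal flat LCDM sketch follows; H0 = 70 km/s/Mpc and Omega_m = 0.3 are assumed illustrative values, not necessarily those used by the authors.

```python
import math

C_KMS = 299792.458  # speed of light in km/s

def angular_diameter_distance_mpc(z, h0=70.0, om=0.3, n_steps=10000):
    """Flat LCDM: D_A = D_C / (1 + z), with D_C = (c / H0) * integral of dz' / E(z')."""
    def e(zp):  # dimensionless Hubble rate E(z) = sqrt(Om (1+z)^3 + OL)
        return math.sqrt(om * (1 + zp) ** 3 + (1 - om))
    dz = z / n_steps
    integral = sum(1.0 / e((i + 0.5) * dz) for i in range(n_steps)) * dz  # midpoint rule
    return (C_KMS / h0) * integral / (1 + z)

def projected_size_kpc(theta_arcsec, z):
    """Linear projected size = theta [rad] * D_A, converted from Mpc to kpc."""
    theta_rad = theta_arcsec * math.pi / (180.0 * 3600.0)
    return theta_rad * angular_diameter_distance_mpc(z) * 1000.0

size = projected_size_kpc(5.0, 1.0)  # a 5-arcsec source at z = 1 spans roughly 40 kpc
```

The kpc-per-arcsec scale is nearly flat beyond z ~ 1, so uncertainties in the measured angular extent, rather than in the cosmology, usually dominate the linear-size error.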