WorldWideScience

Sample records for large set size

  1. Determining an Estimate of an Equivalence Relation for Moderate and Large Sized Sets

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2017-01-01

    This paper presents two approaches to determining estimates of an equivalence relation on the basis of pairwise comparisons with random errors. Obtaining such an estimate requires the solution of a discrete programming problem which minimizes the sum of the differences between the form of the relation and the comparisons. The problem is NP-hard and can be solved with exact algorithms only for sets of moderate size, i.e. about 50 elements. In the case of larger sets, i.e. at least 200 comparisons for each element, it is necessary to apply heuristic algorithms. The paper presents results (a statistical preprocessing) which enable us to determine the optimal or a near-optimal solution at acceptable computational cost. They include the development of a statistical procedure producing comparisons with low probabilities of errors, and a heuristic algorithm based on such comparisons. The proposed approach guarantees the applicability of such estimators for sets of any size.
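
    The discrete programming problem described above can be sketched by brute force for a tiny set (a hypothetical illustration, not the paper's exact algorithm): enumerate candidate partitions and pick the one with the fewest disagreements with the noisy pairwise comparisons.

```python
from itertools import product

def best_partition(n, same):
    """Estimate an equivalence relation on {0..n-1} from noisy pairwise
    comparisons: same[(i, j)] = 1 if the comparison says i ~ j.
    Minimizes the number of disagreements by exhaustive enumeration,
    which is feasible only for very small n (the problem is NP-hard)."""
    best, best_cost = None, float("inf")
    # Enumerate every partition as a label vector (labels 0..n-1).
    for labels in product(range(n), repeat=n):
        cost = sum(
            (labels[i] == labels[j]) != bool(same[(i, j)])
            for i in range(n) for j in range(i + 1, n)
        )
        if cost < best_cost:
            best, best_cost = labels, cost
    return best, best_cost

# Comparisons for {0,1,2,3}: truly 0~1 and 2~3, plus one erroneous report (0~2).
same = {(0, 1): 1, (0, 2): 1, (0, 3): 0, (1, 2): 0, (1, 3): 0, (2, 3): 1}
labels, cost = best_partition(4, same)
```

    The recovered partition groups 0 with 1 and 2 with 3, absorbing the single erroneous comparison as the one remaining disagreement.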

  2. Large Data Set Mining

    NARCIS (Netherlands)

    Leemans, I.B.; Broomhall, Susan

    2017-01-01

    Digital emotion research has yet to make history. Until now large data set mining has not been a very active field of research in early modern emotion studies. This is indeed surprising since first, the early modern field has such rich, copyright-free, digitized data sets and second, emotion studies

  3. Large litter sizes

    DEFF Research Database (Denmark)

    Sandøe, Peter; Rutherford, K.M.D.; Berg, Peer

    2012-01-01

    This paper presents some key results and conclusions from a review (Rutherford et al. 2011) undertaken regarding the ethical and welfare implications of breeding for large litter size in the domestic pig and about different ways of dealing with these implications. Focus is primarily on the direct...... possible to achieve a drop in relative piglet mortality and the related welfare problems. However, there will be a growing problem with the need to use foster or nurse sows which may have negative effects on both sows and piglets. This gives rise to new challenges for management....

  4. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
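
    The fallacy can be put in numbers with a minimal sketch (illustrative values, not data from the article): with a very large sample, a trivial standardized effect still produces an extreme test statistic.

```python
import math

def z_and_d(mean_diff, sd, n):
    """Two-sample z statistic and Cohen's d for equal groups of size n
    with common standard deviation sd."""
    d = mean_diff / sd                        # standardized effect size
    z = mean_diff / (sd * math.sqrt(2 / n))   # test statistic grows with n
    return z, d

# A mean difference of 0.05 SD is practically trivial (d = 0.05),
# yet with n = 100,000 per group the z statistic exceeds 11.
z, d = z_and_d(mean_diff=0.05, sd=1.0, n=100_000)
```

    The effect size d is fixed by the data, while z (and hence the p-value) can be driven to any extreme simply by increasing n, which is exactly why both should be reported.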

  5. Metastrategies in large-scale bargaining settings

    NARCIS (Netherlands)

    Hennes, D.; Jong, S. de; Tuyls, K.; Gal, Y.

    2015-01-01

    This article presents novel methods for representing and analyzing a special class of multiagent bargaining settings that feature multiple players, large action spaces, and a relationship among players' goals, tasks, and resources. We show how to reduce these interactions to a set of bilateral

  6. Set size and culture influence children's attention to number.

    Science.gov (United States)

    Cantrell, Lisa; Kuwabara, Megumi; Smith, Linda B

    2015-03-01

    Much research evidences a system in adults and young children for approximately representing quantity. Here we provide evidence that the bias to attend to discrete quantity versus other dimensions may be mediated by set size and culture. Preschool-age English-speaking children in the United States and Japanese-speaking children in Japan were tested in a match-to-sample task where number was pitted against cumulative surface area in both large and small numerical set comparisons. Results showed that children from both cultures were biased to attend to the number of items for small sets. Large set responses also showed a general attention to number when ratio difficulty was easy. However, relative to the responses for small sets, attention to number decreased for both groups; moreover, both U.S. and Japanese children showed a significant bias to attend to total amount for difficult numerical ratio distances, although Japanese children shifted attention to total area at relatively smaller set sizes than U.S. children. These results add to our growing understanding of how quantity is represented and how such representation is influenced by context, both cultural and perceptual. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Large size space construction for space exploitation

    Science.gov (United States)

    Kondyurin, Alexey

    2016-07-01

    Space exploitation is impossible without large space structures. We need sufficiently large volumes of pressurized protective frames for crew, passengers, space processing equipment, etc. We have to be unlimited in space. At present, the size and mass of space constructions are limited by the capacity of the launch vehicle, which limits our future in the human exploitation of space and in the development of space industry. Large-size space constructions can be made using the curing technology of fiber-filled composites with a reactionable matrix, applied directly in free space. For curing, the fabric impregnated with a liquid matrix (prepreg) is prepared in terrestrial conditions and shipped in a container to orbit. In due time the prepreg is unfolded by inflating. After the polymerization reaction, the durable construction can be fitted out with air, apparatus and life support systems. Our experimental studies of the curing processes in a simulated free-space environment showed that curing of composites in free space is possible, and large-size space constructions can be developed. Projects for a space station, Moon base, Mars base, mining station, interplanetary space ship, telecommunication station, space observatory, space factory, antenna dish, radiation shield and solar sail are proposed and overviewed. The study was supported by the Humboldt Foundation, ESA (contract 17083/03/NL/SFe), the NASA stratospheric balloon program and RFBR grants (05-08-18277, 12-08-00970 and 14-08-96011).

  8. Data Programming: Creating Large Training Sets, Quickly

    Science.gov (United States)

    Ratner, Alexander; De Sa, Christopher; Wu, Sen; Selsam, Daniel; Ré, Christopher

    2018-01-01

    Large labeled training sets are the critical building blocks of supervised learning methods and are key enablers of deep learning techniques. For some applications, creating labeled training sets is the most time-consuming and expensive part of applying machine learning. We therefore propose a paradigm for the programmatic creation of training sets called data programming in which users express weak supervision strategies or domain heuristics as labeling functions, which are programs that label subsets of the data, but that are noisy and may conflict. We show that by explicitly representing this training set labeling process as a generative model, we can “denoise” the generated training set, and establish theoretically that we can recover the parameters of these generative models in a handful of settings. We then show how to modify a discriminative loss function to make it noise-aware, and demonstrate our method over a range of discriminative models including logistic regression and LSTMs. Experimentally, on the 2014 TAC-KBP Slot Filling challenge, we show that data programming would have led to a new winning score, and also show that applying data programming to an LSTM model leads to a TAC-KBP score almost 6 F1 points over a state-of-the-art LSTM baseline (and into second place in the competition). Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable. PMID:29872252

  9. Spatial occupancy models for large data sets

    Science.gov (United States)

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

    Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108 000 km²) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.

  10. Computational scalability of large size image dissemination

    Science.gov (United States)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of image pyramid building needed for dissemination of very large image data. The sources of large images include high resolution microscopes and telescopes, remote sensing and airborne imaging, and high resolution scanners. The term 'large' is understood from a user perspective, meaning either larger than the display size or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150MB or about 5000x8000 pixels, with the total number of scans around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th centuries (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
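
    The pyramid-building step can be sketched with plain 2x2 block averaging (a simplification; the actual Seadragon tooling also tiles each level and handles odd dimensions):

```python
def halve(img):
    """Downsample a 2-D image (list of rows) by averaging 2x2 blocks."""
    h, w = len(img), len(img[0])
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) // 4
             for c in range(w // 2)] for r in range(h // 2)]

def pyramid(img, min_side=1):
    """Build all pyramid levels from full resolution down to min_side."""
    levels = [img]
    while len(levels[-1]) > min_side:
        levels.append(halve(levels[-1]))
    return levels

base = [[(r * 8 + c) % 256 for c in range(8)] for r in range(8)]
levels = pyramid(base)   # 8x8 -> 4x4 -> 2x2 -> 1x1
```

    Each level holds a quarter of the pixels of the previous one, so the whole pyramid costs only about a third more storage than the base image; the benchmarking question is how fast these levels can be generated and written out.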

  11. Multidimensional scaling for large genomic data sets

    Directory of Open Access Journals (Sweden)

    Lu Henry

    2008-04-01

    Background: Multi-dimensional scaling (MDS) aims to represent high-dimensional data in a low-dimensional space while preserving the similarities between data points. This reduction in dimensionality is crucial for analyzing and revealing the genuine structure hidden in the data. For noisy data, dimension reduction can effectively reduce the effect of noise on the embedded structure. For large data sets, dimension reduction can effectively reduce information retrieval complexity. Thus, MDS techniques are used in many applications of data mining and gene network research. However, although a number of studies have applied MDS techniques to genomics research, the number of analyzed data points was restricted by the high computational complexity of MDS. In general, a non-metric MDS method is faster than a metric MDS, but it does not preserve the true relationships. The computational complexity of most metric MDS methods is over O(N²), so it is difficult to process a data set with a large number of genes N, such as whole-genome microarray data. Results: We developed a new rapid metric MDS method with low computational complexity, making metric MDS applicable to large data sets. Computer simulation showed that the new split-and-combine MDS (SC-MDS) method is fast, accurate and efficient. Our empirical studies using microarray data on the yeast cell cycle showed that the performance of K-means in the reduced-dimensional space is similar to or slightly better than that of K-means in the original space, while the clustering results are obtained about three times faster. Our clustering results using SC-MDS are more stable than those in the original space. Hence, the proposed SC-MDS is useful for analyzing whole-genome data. Conclusion: Our new method reduces the computational complexity from O(N³) to O(N) when the dimension of the feature space is far less than the number of genes N, and it successfully
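
    Classical (Torgerson) metric MDS, the O(N³) baseline whose cost SC-MDS's split-and-combine strategy avoids, can be sketched on a toy distance matrix (this is not the SC-MDS algorithm itself; numpy is assumed):

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed points in k dimensions from a pairwise distance matrix D
    via double centering and eigendecomposition (O(N^3) in general)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]         # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Four points on a line at 0, 1, 2, 3: a 1-D embedding recovers the
# pairwise distances exactly (up to sign and translation).
X = np.array([[0.0], [1.0], [2.0], [3.0]])
D = np.abs(X - X.T)
Y = classical_mds(D, k=1)
```

    The eigendecomposition of the N×N matrix B is the expensive step; SC-MDS sidesteps it by embedding overlapping subsets and stitching the pieces together.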

  12. Looking at large data sets using binned data plots

    Energy Technology Data Exchange (ETDEWEB)

    Carr, D.B.

    1990-04-01

    This report addresses the monumental challenge of developing exploratory analysis methods for large data sets. The goals of the report are to increase awareness of large data set problems and to contribute simple graphical methods that address some of them. The graphical methods focus on two- and three-dimensional data and common tasks such as finding outliers and tail structure, assessing central structure, and comparing central structures. The methods handle large-sample-size problems through binning, incorporate information from statistical models, and adapt image processing algorithms. Examples demonstrate the application of the methods to a variety of publicly available large data sets. The most novel application addresses the "too many plots to examine" problem by using cognostics, computer guiding diagnostics, to prioritize plots. The particular application prioritizes views of computational fluid dynamics solution sets on the fly. That is, as each time step of a solution set is generated on a parallel processor, the cognostics algorithms assess virtual plots based on the previous time step. Work in such areas is in its infancy and the examples suggest numerous challenges that remain. 35 refs., 15 figs.
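
    The core binning step of a binned data plot can be sketched as plain 2-D counting (a simplification; the report's methods layer statistical-model information and image processing on top of counts like these):

```python
from collections import Counter

def bin2d(points, x0, y0, dx, dy):
    """Count points falling into a rectangular grid of cells: a binned
    plot then displays the counts instead of the raw points, so plotting
    cost depends on the number of cells, not the sample size."""
    counts = Counter()
    for x, y in points:
        counts[(int((x - x0) // dx), int((y - y0) // dy))] += 1
    return counts

pts = [(0.1, 0.1), (0.2, 0.3), (1.5, 0.2), (1.7, 1.9)]
counts = bin2d(pts, x0=0.0, y0=0.0, dx=1.0, dy=1.0)
```

    With millions of points, the binned representation is what makes outlier and tail structure visible at all, since overplotting would otherwise hide density differences.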

  13. Parallel clustering algorithm for large-scale biological data sets.

    Science.gov (United States)

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtime are required for cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction and the affinity propagation algorithm. A memory-shared architecture is used to construct the similarity matrix, and a distributed system is used for the affinity propagation algorithm because of its large memory size and great computing capacity. An appropriate scheme of data partition and reduction is designed in our method in order to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies.
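
    The memory-shared similarity-matrix stage can be sketched with a worker pool over row blocks (a hypothetical partitioning of my own; the paper's actual implementation and the affinity propagation iterations themselves are omitted):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def sim_block(X, rows):
    """Negative squared Euclidean similarities for a block of rows, a
    common choice of similarity for affinity propagation."""
    return -((X[rows, None, :] - X[None, :, :]) ** 2).sum(-1)

def similarity_matrix(X, n_workers=4):
    """Split the row index range across workers sharing the data array X,
    then stack the computed blocks back into the full N x N matrix."""
    blocks = np.array_split(np.arange(len(X)), n_workers)
    with ThreadPoolExecutor(n_workers) as ex:
        parts = list(ex.map(lambda b: sim_block(X, b), blocks))
    return np.vstack(parts)

X = np.random.default_rng(0).normal(size=(20, 3))
S = similarity_matrix(X)
```

    Because each worker only reads the shared array and writes its own block, no communication is needed until the final stack, which is the property the memory-shared stage exploits.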

  14. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
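
    The affine hull model underlying the set-to-set distance can be sketched as a least-squares projection (a simplified, hypothetical reading of the distance computation, not the paper's full EM/APG learning procedure; numpy is assumed):

```python
import numpy as np

def affine_hull_distance(q, S):
    """Distance from a query vector q to the affine hull of the rows of S.
    Centering on the set mean makes the affine constraint (coefficients
    summing to 1) automatic, leaving an ordinary least-squares projection."""
    mu = S.mean(axis=0)
    U = (S - mu).T                              # directions spanning the hull
    coef, *_ = np.linalg.lstsq(U, q - mu, rcond=None)
    return np.linalg.norm(q - mu - U @ coef)    # residual = orthogonal distance

S = np.array([[0.0, 0.0], [1.0, 0.0]])  # the hull of these two points is the x-axis
q = np.array([0.5, 2.0])
dist = affine_hull_distance(q, S)       # perpendicular distance from q to y = 0
```

    A margin-based classifier in this spirit would compare such distances to the nearest same-class set and the nearest different-class set, assigning the test set to the class giving the largest gap.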

  15. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan; Alzahrani, Majed A.; Gao, Xin

    2014-01-01

    In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.

  16. Large-scale ocean connectivity and planktonic body size

    KAUST Repository

    Villarino, Ernesto

    2018-01-04

    Global patterns of planktonic diversity are mainly determined by the dispersal of propagules with ocean currents. However, the role that abundance and body size play in determining spatial patterns of diversity remains unclear. Here we analyse spatial community structure - β-diversity - for several planktonic and nektonic organisms from prokaryotes to small mesopelagic fishes collected during the Malaspina 2010 Expedition. β-diversity was compared to surface ocean transit times derived from a global circulation model, revealing a significant negative relationship that is stronger than environmental differences. Estimated dispersal scales for different groups show a negative correlation with body size, where less abundant large-bodied communities have significantly shorter dispersal scales and larger species spatial turnover rates than more abundant small-bodied plankton. Our results confirm that the dispersal scale of planktonic and micro-nektonic organisms is determined by local abundance, which scales with body size, ultimately setting global spatial patterns of diversity.

  17. Large-scale ocean connectivity and planktonic body size

    KAUST Repository

    Villarino, Ernesto; Watson, James R.; Jönsson, Bror; Gasol, Josep M.; Salazar, Guillem; Acinas, Silvia G.; Estrada, Marta; Massana, Ramón; Logares, Ramiro; Giner, Caterina R.; Pernice, Massimo C.; Olivar, M. Pilar; Citores, Leire; Corell, Jon; Rodríguez-Ezpeleta, Naiara; Acuña, José Luis; Molina-Ramírez, Axayacatl; González-Gordillo, J. Ignacio; Cózar, Andrés; Martí, Elisa; Cuesta, José A.; Agusti, Susana; Fraile-Nuez, Eugenio; Duarte, Carlos M.; Irigoien, Xabier; Chust, Guillem

    2018-01-01

    Global patterns of planktonic diversity are mainly determined by the dispersal of propagules with ocean currents. However, the role that abundance and body size play in determining spatial patterns of diversity remains unclear. Here we analyse spatial community structure - β-diversity - for several planktonic and nektonic organisms from prokaryotes to small mesopelagic fishes collected during the Malaspina 2010 Expedition. β-diversity was compared to surface ocean transit times derived from a global circulation model, revealing a significant negative relationship that is stronger than environmental differences. Estimated dispersal scales for different groups show a negative correlation with body size, where less abundant large-bodied communities have significantly shorter dispersal scales and larger species spatial turnover rates than more abundant small-bodied plankton. Our results confirm that the dispersal scale of planktonic and micro-nektonic organisms is determined by local abundance, which scales with body size, ultimately setting global spatial patterns of diversity.

  18. Iterative dictionary construction for compression of large DNA data sets.

    Science.gov (United States)

    Kuruppu, Shanika; Beresford-Smith, Bryan; Conway, Thomas; Zobel, Justin

    2012-01-01

    Genomic repositories increasingly include individual as well as reference sequences, which tend to share long identical and near-identical strings of nucleotides. However, the sequential processing used by most compression algorithms, and the volumes of data involved, mean that these long-range repetitions are not detected. An order-insensitive, disk-based dictionary construction method can detect this repeated content and use it to compress collections of sequences. We explore a dictionary construction method that improves repeat identification in large DNA data sets. Our adaptation, COMRAD, of an existing disk-based method identifies exact repeated content in collections of sequences with similarities within and across the set of input sequences. COMRAD compresses the data over multiple passes, which is an expensive process, but allows COMRAD to compress large data sets within reasonable time and space. COMRAD allows for random access to individual sequences and subsequences without decompressing the whole data set. COMRAD has no competitor in terms of the size of data sets that it can compress (extending to many hundreds of gigabytes) and, even for smaller data sets, the results are competitive compared to alternatives; as an example, 39 S. cerevisiae genomes compressed to 0.25 bits per base.
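
    The dictionary idea can be sketched in miniature (a toy single-pass version of my own; COMRAD itself is disk-based, multi-pass, and works at far larger scale): find a frequent k-mer across the collection, replace it with a short code, and keep the dictionary so individual sequences can be decoded independently.

```python
from collections import Counter

def top_kmer(seqs, k):
    """Most frequent length-k substring across all sequences."""
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    return counts.most_common(1)[0][0]

seqs = ["ACGTACGTAA", "TTACGTACGT"]
repeat = top_kmer(seqs, k=4)                       # "ACGT" dominates both sequences
compressed = [s.replace(repeat, "#") for s in seqs]
decoded = [s.replace("#", repeat) for s in compressed]
```

    Because each compressed sequence can be decoded on its own given the dictionary entry, random access to individual sequences does not require decompressing the whole collection, which is the property the abstract highlights.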

  19. Large Pelagic Logbook Set Survey (Vessels)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains catch and effort for fishing trips that are taken by vessels with a Federal permit issued for the swordfish and sharks under the Highly...

  20. Modeling large data sets in marketing

    NARCIS (Netherlands)

    Balasubramanian, S; Gupta, S; Kamakura, W; Wedel, M

    In the last two decades, marketing databases have grown significantly in terms of size and richness of available information. The analysis of these databases raises several information-related and statistical issues. We aim at providing an overview of a selection of issues related to the analysis of

  1. CLUSTER DYNAMICS LARGELY SHAPES PROTOPLANETARY DISK SIZES

    Energy Technology Data Exchange (ETDEWEB)

    Vincke, Kirsten; Pfalzner, Susanne, E-mail: kvincke@mpifr-bonn.mpg.de [Max Planck Institute for Radio Astronomy, Auf dem Hügel 69, D-53121 Bonn (Germany)

    2016-09-01

    To what degree the cluster environment influences the sizes of protoplanetary disks surrounding young stars is still an open question. This is particularly true for the short-lived clusters typical of the solar neighborhood, in which the stellar density and therefore the influence of the cluster environment change considerably over the first 10 Myr. In previous studies, the effect of the gas on the cluster dynamics has often been neglected; this is remedied here. Using the code NBody6++, we study the stellar dynamics in different developmental phases (embedded, expulsion, and expansion) including the gas, and quantify the effect of fly-bys on the disk size. We concentrate on massive clusters (M_cl ≥ 10³–6 × 10⁴ M_☉), which are representative of clusters like the Orion Nebula Cluster (ONC) or NGC 6611. We find that not only the stellar density but also the duration of the embedded phase matters. The densest clusters react fastest to the gas expulsion and drop quickly in density; here 98% of relevant encounters happen before gas expulsion. By contrast, disks in sparser clusters are initially less affected, but because these clusters expand more slowly, 13% of disks are truncated after gas expulsion. For ONC-like clusters, we find that disks larger than 500 au are usually affected by the environment, which corresponds to the observation that 200 au-sized disks are common. For NGC 6611-like clusters, disk sizes are cut down on average to roughly 100 au. A testable hypothesis would be that the disks in the center of NGC 6611 should be on average ≈20 au and therefore considerably smaller than those in the ONC.

  2. Information overload or search-amplified risk? Set size and order effects on decisions from experience.

    Science.gov (United States)

    Hills, Thomas T; Noguchi, Takao; Gibbert, Michael

    2013-10-01

    How do changes in choice-set size influence information search and subsequent decisions? Moreover, does information overload influence information processing with larger choice sets? We investigated these questions by letting people freely explore sets of gambles before choosing one of them, with the choice sets either increasing or decreasing in number for each participant (from two to 32 gambles). Set size influenced information search, with participants taking more samples overall, but sampling a smaller proportion of gambles and taking fewer samples per gamble, when set sizes were larger. The order of choice sets also influenced search, with participants sampling from more gambles and taking more samples overall if they started with smaller as opposed to larger choice sets. Inconsistent with information overload, information processing appeared consistent across set sizes and choice order conditions, reliably favoring gambles with higher sample means. Despite the lack of evidence for information overload, changes in information search did lead to systematic changes in choice: People who started with smaller choice sets were more likely to choose gambles with the highest expected values, but only for small set sizes. For large set sizes, the increase in total samples increased the likelihood of encountering rare events at the same time that the reduction in samples per gamble amplified the effect of these rare events when they occurred, what we call search-amplified risk. This led to riskier choices for individuals whose choices most closely followed the sample mean.
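
    The search-amplified-risk mechanism can be sketched as a simulation (hypothetical gambles and parameters of my own, not the authors' materials): with a fixed total sampling budget, larger sets leave fewer samples per gamble, so a rare large outcome that does appear dominates that gamble's sample mean.

```python
import random

def sample_mean_choice(gambles, budget, rng):
    """Sample each gamble budget // len(gambles) times and choose the one
    with the highest sample mean, mimicking decisions from experience."""
    per = max(1, budget // len(gambles))   # fewer samples per gamble as sets grow
    means = [sum(g(rng) for _ in range(per)) / per for g in gambles]
    return means.index(max(means))

rare = lambda rng: 100 if rng.random() < 0.01 else 0   # risky: EV = 1
safe = lambda rng: 2                                    # safe:  EV = 2

# With a small per-gamble sample, one lucky draw of 100 makes the risky
# gamble's sample mean far exceed the safe gamble's, despite its lower EV.
choice = sample_mean_choice([safe, rare], budget=40, rng=random.Random(7))
```

    Whether the risky gamble is chosen hinges entirely on whether the rare event happens to appear in its few samples, which is exactly the amplification the authors describe.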

  3. Time series clustering in large data sets

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2011-01-01

    The clustering of time series is a widely researched area, and there are many methods for dealing with this task. We are currently using the Self-Organizing Map (SOM) with an unsupervised learning algorithm for clustering of time series. After the first experiment (Fejfar, Weinlichová, Šťastný, 2009) it seems that the whole concept of the clustering algorithm is correct, but we have to perform time series clustering on a much larger dataset to obtain more accurate results and to find the correlation between configured parameters and results more precisely. The second requirement arose from the need for a well-defined evaluation of results. It seems useful to use sound recordings as instances of time series again; there are many recordings to use in digital libraries, and many interesting features and patterns can be found in this area. In this experiment we search for recordings with a similar development of information density, which can be used for musical form investigation, cover song detection and many other applications. The objective of the presented paper is to compare clustering results obtained with different parameters of the feature vectors and of the SOM itself. We describe time series in a simplistic way, evaluating standard deviations for separate parts of recordings. The resulting feature vectors are clustered with the SOM in batch training mode with different topologies, varying from a few neurons to large maps. Other algorithms usable for finding similarities between time series are discussed, and conclusions for further research are presented. We also present an overview of the related literature and projects.
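
    The feature extraction described (standard deviations of separate parts of a recording) can be sketched directly; the SOM clustering step itself is omitted here:

```python
import math

def segment_stds(series, n_segments):
    """Split a series into equal segments and return the standard
    deviation of each segment as a fixed-length feature vector."""
    size = len(series) // n_segments
    feats = []
    for i in range(n_segments):
        seg = series[i * size:(i + 1) * size]
        mean = sum(seg) / len(seg)
        feats.append(math.sqrt(sum((x - mean) ** 2 for x in seg) / len(seg)))
    return feats

quiet = [0.0] * 50 + [0.1, -0.1] * 25   # low variance throughout
loud  = [0.0] * 50 + [5.0, -5.0] * 25   # variance jumps in the second half
f1, f2 = segment_stds(quiet, 2), segment_stds(loud, 2)
```

    Recordings with similar variance profiles over time yield nearby feature vectors, which is what lets the SOM group them regardless of the recordings' absolute lengths once segment counts are fixed.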

  4. Large-N in Volcano Settings: Volcanosri

    Science.gov (United States)

    Lees, J. M.; Song, W.; Xing, G.; Vick, S.; Phillips, D.

    2014-12-01

    We seek a paradigm shift in the approach we take on volcano monitoring, where the compromise from high fidelity to large numbers of sensors is used to increase coverage and resolution. Accessibility, danger and the risk of equipment loss require that we develop systems that are independent and inexpensive. Furthermore, rather than simply record data on hard disk for later analysis, we desire a system that will work autonomously, capitalizing on wireless technology and in-field network analysis. To this end we are currently producing a low-cost seismic array which will incorporate, at the most basic level, seismological tools for first-cut analysis of a volcano in crisis mode. At the advanced end we expect to perform tomographic inversions in the network in near real time. Geophone (4 Hz) sensors connected to a low-cost recording system will be installed on an active volcano, where triggering, earthquake location and velocity analysis will take place independent of human interaction. Stations are designed to be inexpensive and possibly disposable. In one of the first implementations, the seismic nodes consist of an Arduino Due processor board with an attached Seismic Shield. The Arduino Due processor board contains an Atmel SAM3X8E ARM Cortex-M3 CPU. This 32-bit, 84 MHz processor can filter and perform coarse seismic event detection on a 1600-sample signal in fewer than 200 milliseconds. The Seismic Shield contains a GPS module, 900 MHz high power mesh network radio, SD card, seismic amplifier, and 24-bit ADC. External sensors can be attached either to this 24-bit ADC or to the internal multichannel 12-bit ADC on the Arduino Due processor board. This allows the node to support attachment of multiple sensors. By utilizing a high-speed 32-bit processor, complex signal processing tasks can be performed simultaneously on multiple sensors. Using a 10 W solar panel, a second system being developed can run autonomously and collect data on 3 channels at 100 Hz for 6 months.
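The abstract does not spell out the on-node detection algorithm, so the sketch below uses a classic short-term/long-term average (STA/LTA) ratio trigger, a common choice for coarse seismic event detection on low-power hardware. All names, window lengths and thresholds here are illustrative, not taken from the paper.

```python
import random

def sta_lta(signal, sta_len=20, lta_len=200):
    """Ratio of short-term to long-term average of |signal|.
    A common coarse event detector; illustrative stand-in only."""
    rect = [abs(s) for s in signal]
    ratios = []
    for i in range(lta_len, len(rect)):
        sta = sum(rect[i - sta_len:i]) / sta_len   # trailing short window
        lta = sum(rect[i - lta_len:i]) / lta_len   # trailing long window
        ratios.append(sta / max(lta, 1e-12))       # guard against /0
    return ratios  # ratios[k] corresponds to sample k + lta_len

# Synthetic 1600-sample trace: Gaussian noise with a burst ("event").
random.seed(1)
trace = [random.gauss(0, 1) for _ in range(1600)]
for i in range(800, 850):
    trace[i] += random.gauss(0, 20)
ratios = sta_lta(trace)
peak = ratios.index(max(ratios)) + 200   # sample index of the peak ratio
```

A fixed threshold on the ratio (e.g. 3) would then declare a trigger; a real on-node detector would also handle de-triggering and coincidence across stations.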

  5. Analyzing ROC curves using the effective set-size model

    Science.gov (United States)

    Samuelson, Frank W.; Abbey, Craig K.; He, Xin

    2018-03-01

    The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical
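The maximum-of-M* rating rule described above yields closed-form ROC coordinates: with Φ the standard normal CDF, FPF(t) = 1 − Φ(t)^M* and TPF(t) = 1 − Φ(t)^(M*−1) Φ(t − d′). The paper fits the parameters by ordinal regression; the sketch below only traces the model curve for assumed parameter values.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def roc_point(t, d_prime, m):
    """One (FPF, TPF) point of the effective set-size ROC: the rating
    is the max over m locations; on signal-present images one location
    has detectability d'."""
    fpf = 1.0 - phi(t) ** m
    tpf = 1.0 - (phi(t) ** (m - 1)) * phi(t - d_prime)
    return fpf, tpf

# Sweep the decision threshold to trace the curve and estimate AUC.
ts = [-5.0 + 0.01 * k for k in range(1001)]
curve = [roc_point(t, d_prime=2.0, m=10) for t in ts]  # assumed d', M*
# trapezoidal AUC (curve runs from (1,1) toward (0,0) as t increases)
auc = sum((f1 - f2) * (t1 + t2) / 2.0
          for (f1, t1), (f2, t2) in zip(curve, curve[1:]))
```

Increasing M* at fixed d′ pushes the curve toward chance, which is how the model separates search (location) uncertainty from detection uncertainty.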

  6. Large-sized seaweed monitoring based on MODIS

    Science.gov (United States)

    Ma, Long; Li, Ying; Lan, Guo-xin; Li, Chuan-long

    2009-10-01

    In recent years, large-sized seaweed such as Ulva lactuca has bloomed frequently in coastal waters in China, threatening the marine eco-environment. Taking effective measures requires operational surveillance. A case of large-sized seaweed blooming (Enteromorpha) that occurred in June 2008 in the sea near Qingdao is studied. The bloom is dynamically monitored using the Moderate Resolution Imaging Spectroradiometer (MODIS). After analyzing the imaging spectral characteristics of Enteromorpha, MODIS bands 1 and 2 are used to create a band-ratio algorithm for detecting and mapping large-sized seaweed blooms. In addition, chlorophyll-α concentration is retrieved with an empirical model developed for MODIS. Chlorophyll-α concentration maps are derived from multitemporal MODIS data, and the change in chlorophyll-α concentration is analyzed. Results show that the presented methods are useful for obtaining the dynamic distribution and growth of large-sized seaweed, and can support contingency planning.
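The abstract names MODIS band 1 (red) and band 2 (NIR) but not the exact form of the ratio. An NDVI-style normalized difference with an illustrative threshold is one plausible shape for such a band-ratio detector, since floating algae reflect strongly in the NIR; both the index choice and the threshold below are assumptions.

```python
def ndvi(red, nir):
    """Normalized difference of NIR (MODIS band 2) and red (band 1).
    One plausible band-ratio index; the paper's exact formula may differ."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def seaweed_mask(band1, band2, threshold=0.1):   # threshold is illustrative
    """Flag pixels whose index exceeds the threshold as algae."""
    return [[ndvi(r, n) > threshold for r, n in zip(row1, row2)]
            for row1, row2 in zip(band1, band2)]

# Toy 2x3 scene: water reflects more red than NIR, algae the reverse.
band1 = [[0.05, 0.05, 0.04], [0.05, 0.04, 0.05]]  # red reflectance
band2 = [[0.03, 0.09, 0.03], [0.02, 0.10, 0.11]]  # NIR reflectance
mask = seaweed_mask(band1, band2)
```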

  7. Experimental study on propagation properties of large size TEM antennas

    International Nuclear Information System (INIS)

    Zhang Guowei; Wang Haiyang; Chen Weiqing; Wang Wei; Zhu Xiangqin; Xie Linshen

    2014-01-01

    The propagation properties of large size TEM antennas were studied experimentally. The TEM antennas measure 60 m × 20 m × 10 m and have a characteristic impedance of 120 Ω. A dielectric foil switch integrated compactly with the TEM antennas was designed, which can generate a double-exponential waveform with an amplitude of 10 kV and a rise time of 1.2 ns. The radiated field distribution was measured. The relationships between rise time/amplitude and distance are provided, and the propagation properties of large size TEM antennas are summarized. (authors)

  8. Estimated spatial requirements of the medium- to large-sized ...

    African Journals Online (AJOL)

    Conservation planning in the Cape Floristic Region (CFR) of South Africa, a recognised world plant diversity hotspot, required information on the estimated spatial requirements of selected medium- to large-sized mammals within each of 102 Broad Habitat Units (BHUs) delineated according to key biophysical parameters.

  9. Word length, set size, and lexical factors: Re-examining what causes the word length effect.

    Science.gov (United States)

    Guitard, Dominic; Gabel, Andrew J; Saint-Aubin, Jean; Surprenant, Aimée M; Neath, Ian

    2018-04-19

    The word length effect, the better recall of lists of short (fewer syllables) than long (more syllables) words, has been termed a benchmark effect of working memory. Despite this, experiments on the word length effect can yield quite different results depending on set size and stimulus properties. Seven experiments are reported that address these 2 issues. Experiment 1 replicated the finding of a preserved word length effect under concurrent articulation for large stimulus sets, which contrasts with the abolition of the word length effect by concurrent articulation for small stimulus sets. Experiment 2, however, demonstrated that when the short and long words are equated on more dimensions, concurrent articulation abolishes the word length effect for large stimulus sets as well. Experiment 3 shows a standard word length effect when output time is equated, but Experiments 4-6 show no word length effect when short and long words are equated on increasingly more dimensions that previous demonstrations have overlooked. Finally, Experiment 7 compared recall of small- and large-neighborhood words that were equated on all the dimensions used in Experiment 6 (except those directly related to neighborhood size), and a neighborhood size effect was still observed. We conclude that lexical factors, rather than word length per se, are better predictors of when the word length effect will occur. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. Large exon size does not limit splicing in vivo.

    Science.gov (United States)

    Chen, I T; Chasin, L A

    1994-03-01

    Exon sizes in vertebrate genes are, with a few exceptions, limited to less than 300 bases. It has been proposed that this limitation may derive from the exon definition model of splice site recognition. In this model, a downstream donor site enhances splicing at the upstream acceptor site of the same exon. This enhancement may require contact between factors bound to each end of the exon; an exon size limitation would promote such contact. To test the idea that proximity was required for exon definition, we inserted random DNA fragments from Escherichia coli into a central exon in a three-exon dihydrofolate reductase minigene and tested whether the expanded exons were efficiently spliced. DNA from a plasmid library of expanded minigenes was used to transfect a CHO cell deletion mutant lacking the dhfr locus. PCR analysis of DNA isolated from the pooled stable cotransfectant populations displayed a range of DNA insert sizes from 50 to 1,500 nucleotides. A parallel analysis of the RNA from this population by reverse transcription followed by PCR showed a similar size distribution. Central exons as large as 1,400 bases could be spliced into mRNA. We also tested individual plasmid clones containing exon inserts of defined sizes. The largest exon included in mRNA was 1,200 bases in length, well above the 300-base limit implied by the survey of naturally occurring exons. We conclude that a limitation in exon size is not part of the exon definition mechanism.

  11. Accelerated EM-based clustering of large data sets

    NARCIS (Netherlands)

    Verbeek, J.J.; Nunnink, J.R.J.; Vlassis, N.

    2006-01-01

    Motivated by the poor performance (linear complexity) of the EM algorithm in clustering large data sets, and inspired by the successful accelerated versions of related algorithms like k-means, we derive an accelerated variant of the EM algorithm for Gaussian mixtures that: (1) offers speedups that
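The record is truncated before the details, but the general flavor of such accelerations is to run EM on cached sufficient statistics of groups of points (e.g. kd-tree cells) instead of on every point. A minimal 1-D sketch under that assumption follows; it is not the paper's actual algorithm, and all numbers are synthetic.

```python
import math
import random

def em_on_summaries(cells, mus, sigmas, weights, iters=50):
    """EM for a 1-D Gaussian mixture run on per-cell sufficient
    statistics (count, sum, sum of squares) instead of raw points."""
    for _ in range(iters):
        # E-step: responsibilities evaluated once per cell (at its mean)
        stats = [[0.0, 0.0, 0.0] for _ in mus]        # N_k, S1_k, S2_k
        for n, s1, s2 in cells:
            x = s1 / n
            dens = [w * math.exp(-(x - m) ** 2 / (2 * sg * sg)) / sg
                    for m, sg, w in zip(mus, sigmas, weights)]
            z = sum(dens)
            for k, d in enumerate(dens):
                r = d / z
                stats[k][0] += r * n
                stats[k][1] += r * s1
                stats[k][2] += r * s2
        # M-step from the accumulated statistics
        total = sum(s[0] for s in stats)
        weights = [s[0] / total for s in stats]
        mus = [s[1] / s[0] for s in stats]
        sigmas = [max(math.sqrt(max(s[2] / s[0] - m * m, 1e-6)), 1e-3)
                  for s, m in zip(stats, mus)]
    return mus, sigmas, weights

random.seed(2)
data = [random.gauss(-3, 1) for _ in range(2000)] + \
       [random.gauss(3, 1) for _ in range(2000)]
data.sort()
# summarize consecutive runs of 40 points into one cell each
cells = [(40.0, sum(c), sum(x * x for x in c))
         for c in (data[i:i + 40] for i in range(0, len(data), 40))]
mus, sigmas, weights = em_on_summaries(cells, [-1.0, 1.0], [2.0, 2.0],
                                       [0.5, 0.5])
```

Each iteration now costs O(#cells · k) rather than O(n · k), which is the source of the speedup for large n.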

  12. Accuracy of the photogrametric measuring system for large size elements

    Directory of Open Access Journals (Sweden)

    M. Grzelka

    2011-04-01

    Full Text Available The aim of this paper is to present methods of estimating and guidelines for verifying the accuracy of optical photogrammetric measuring systems used for measurement of large size elements. Measuring systems applied to measure workpieces of a large size, which often reach more than 10000 mm, require the use of appropriate standards. Those standards provided by the manufacturer of photogrammetric systems are certified and are inspected annually. To make sure that these systems work properly, a special standard, VDI/VDE 2634 "Optical 3D measuring systems. Imaging systems with point-by-point probing," was developed. According to the recommendations described in this standard, research on the accuracy of a photogrammetric measuring system was conducted using K class gauge blocks dedicated to calibrating and testing the accuracy of classic CMMs. The paper presents results of research estimating the actual error of indication for size measurement MPEE for the photogrammetric coordinate measuring system TRITOP.
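The VDI/VDE 2634 verification described above reduces to checking that every length-measurement error stays within a stated maximum permissible error. A schematic check is sketched below; the MPE value and the measured lengths are made-up illustrations, not TRITOP data.

```python
def passes_verification(measured, reference, mpe_e):
    """VDI/VDE 2634-style acceptance check: the error of indication
    E = measured - reference must satisfy |E| <= MPE_E for every
    gauge-block measurement. All values here are illustrative."""
    return all(abs(m - r) <= mpe_e for m, r in zip(measured, reference))

ref  = [100.000, 300.000, 500.000]   # calibrated gauge block lengths, mm
meas = [100.012, 299.985, 500.020]   # hypothetical photogrammetric results, mm
ok = passes_verification(meas, ref, mpe_e=0.030)
```

In practice the standard prescribes specific artefact positions and orientations in the measuring volume; this sketch only shows the pass/fail arithmetic.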

  13. Processing and properties of large-sized ceramic slabs

    Energy Technology Data Exchange (ETDEWEB)

    Raimondo, M.; Dondi, M.; Zanelli, C.; Guarini, G.; Gozzi, A.; Marani, F.; Fossa, L.

    2010-07-01

    Large-sized ceramic slabs with dimensions up to 360x120 cm{sup 2} and thickness down to 2 mm are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability in a rather wide window of firing schedules. The phase composition and compact microstructure of fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performances as well as mechanical and tribological properties conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestow on the slab a certain degree of flexibility, which is emphasized in ceramic-fiberglass composites. These outstanding performances make the large-sized slabs suitable to be used in novel applications: building and construction (new floorings without dismantling the previous paving, ventilated facades, tunnel coverings, insulating panelling), indoor furnitures (table tops, doors), support for photovoltaic ceramic panels. (Author) 24 refs.

  14. Processing and properties of large-sized ceramic slabs

    International Nuclear Information System (INIS)

    Raimondo, M.; Dondi, M.; Zanelli, C.; Guarini, G.; Gozzi, A.; Marani, F.; Fossa, L.

    2010-01-01

    Large-sized ceramic slabs with dimensions up to 360x120 cm 2 and thickness down to 2 mm are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability in a rather wide window of firing schedules. The phase composition and compact microstructure of fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performances as well as mechanical and tribological properties conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestow on the slab a certain degree of flexibility, which is emphasized in ceramic-fiberglass composites. These outstanding performances make the large-sized slabs suitable to be used in novel applications: building and construction (new floorings without dismantling the previous paving, ventilated facades, tunnel coverings, insulating panelling), indoor furnitures (table tops, doors), support for photovoltaic ceramic panels. (Author) 24 refs.

  15. Technological Aspects of Creating Large-size Optical Telescopes

    Directory of Open Access Journals (Sweden)

    V. V. Sychev

    2015-01-01

    Full Text Available The concept behind creating a telescope depends, first of all, on the choice of optical scheme, to form optical radiation and images with minimum losses of energy and information, and on the choice of design, to meet requirements for strength, stiffness, and stabilization characteristics in real telescope operating conditions. The concept of creating large-size telescopes therefore necessarily involves the use of adaptive optics methods and means. The successful development of large-size optical telescopes is defined in many respects by the level of technological capability to realize scientific and engineering ideas. All developers pursue the same aim: to raise the amount of information gathered by increasing the diameter of the telescope's main mirror. The article analyses the adaptive telescope designs developed in our country. Using the domestic ACT-25 telescope as an example, it considers the creation of large-size optical telescopes in terms of technological aspects. It also describes features of the telescope creation concept which allow reaching marginally possible characteristics to ensure a maximum amount of information. The article compares a wide range of large-size telescope projects. It shows that the domestic project to create the adaptive ACT-25 super-telescope surpasses its foreign counterparts, and that there is no sense in implementing the Euro50 (50 m) and OWL (100 m) projects. The considered material gives a clear understanding of the role of technological aspects in the development of such complicated optoelectronic complexes as large-size optical telescopes. The technological assessment criteria offered in the article, namely the specific informational content of the telescope, its specific mass, and specific cost, allow us to reveal weaknesses in project development and define a reserve for further improvement of the telescope. The analysis of results and their judgment have shown that improvement of optical large-size telescopes in terms of their maximum

  16. Reducing Information Overload in Large Seismic Data Sets

    Energy Technology Data Exchange (ETDEWEB)

    HAMPTON,JEFFERY W.; YOUNG,CHRISTOPHER J.; MERCHANT,BION J.; CARR,DORTHE B.; AGUILAR-CHANG,JULIO

    2000-08-02

    Event catalogs for seismic data can become very large. Furthermore, as researchers collect multiple catalogs and reconcile them into a single catalog that is stored in a relational database, the reconciled set becomes even larger. The sheer number of these events makes searching for relevant events to compare with events of interest problematic. Information overload in this form can lead to the data sets being under-utilized and/or used incorrectly or inconsistently. Thus, efforts have been initiated to research techniques and strategies for helping researchers to make better use of large data sets. In this paper, the authors present their efforts to do so in two ways: (1) the Event Search Engine, which is a waveform correlation tool, and (2) some content analysis tools, which are a combination of custom-built and commercial off-the-shelf tools for accessing, managing, and querying seismic data stored in a relational database. The current Event Search Engine is based on a hierarchical clustering tool known as the dendrogram tool, which is written as a MatSeis graphical user interface. The dendrogram tool allows the user to build dendrogram diagrams for a set of waveforms by controlling phase windowing, down-sampling, filtering, enveloping, and the clustering method (e.g. single linkage, complete linkage, flexible method). It also allows the clustering to be based on two or more stations simultaneously, which is important to bridge gaps in the sparsely recorded event sets anticipated in such a large reconciled event set. Current efforts are focusing on tools to help the researcher winnow the clusters defined using the dendrogram tool down to the minimum optimal identification set. This will become critical as the number of reference events in the reconciled event set continually grows. The dendrogram tool is part of the MatSeis analysis package, which is available on the Nuclear Explosion Monitoring Research and Engineering Program Web Site. As part of the research
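The clustering rule underlying dendrogram tools of this kind can be sketched compactly: given pairwise waveform dissimilarities (e.g. 1 minus the maximum cross-correlation), single linkage merges any events connected by a distance below a cut threshold. The implementation below is a generic illustration, not MatSeis code; the distance matrix is invented.

```python
def single_linkage(dist, threshold):
    """Group items whose pairwise distance falls below `threshold`,
    via single linkage (union-find). `dist` is a symmetric matrix,
    e.g. 1 - max waveform cross-correlation."""
    n = len(dist)
    labels = list(range(n))
    def find(i):                      # union-find with path halving
        while labels[i] != i:
            labels[i] = labels[labels[i]]
            i = labels[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if dist[i][j] < threshold:
                labels[find(i)] = find(j)
    return [find(i) for i in range(n)]

# 4 events: 0 and 1 have nearly identical waveforms, as do 2 and 3.
dist = [[0.0, 0.1, 0.9, 0.8],
        [0.1, 0.0, 0.85, 0.9],
        [0.9, 0.85, 0.0, 0.15],
        [0.8, 0.9, 0.15, 0.0]]
clusters = single_linkage(dist, threshold=0.3)
```

Cutting the dendrogram at different thresholds corresponds to varying `threshold` here; the actual tool also supports complete linkage and multi-station distances.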

  17. Using SETS to find minimal cut sets in large fault trees

    International Nuclear Information System (INIS)

    Worrell, R.B.; Stack, D.W.

    1978-01-01

    An efficient algebraic algorithm for finding the minimal cut sets for a large fault tree was defined and a new procedure which implements the algorithm was added to the Set Equation Transformation System (SETS). The algorithm includes the identification and separate processing of independent subtrees, the coalescing of consecutive gates of the same kind, the creation of additional independent subtrees, and the derivation of the fault tree stem equation in stages. The computer time required to determine the minimal cut sets using these techniques is shown to be substantially less than the computer time required to determine the minimal cut sets when these techniques are not employed. It is shown for a given example that the execution time required to determine the minimal cut sets can be reduced from 7,686 seconds to 7 seconds when all of these techniques are employed
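The core of any minimal-cut-set computation is the expansion of AND/OR gates into products and sums of basic events, with absorption applied so that only minimal sets survive. The sketch below shows that core idea on a toy tree; the SETS-specific optimizations the abstract describes (independent subtrees, gate coalescing, staged stem equations) are omitted.

```python
def minimal_cut_sets(gate, gates):
    """Expand a fault tree into its minimal cut sets.
    `gates` maps a gate name to ('AND'|'OR', children); any name not
    in `gates` is a basic event. Absorption keeps only minimal sets."""
    if gate not in gates:
        return [frozenset([gate])]            # basic event
    op, children = gates[gate]
    child_sets = [minimal_cut_sets(c, gates) for c in children]
    if op == 'OR':                            # union of children's cut sets
        combined = [s for sets in child_sets for s in sets]
    else:                                     # AND: cross-product
        combined = [frozenset()]
        for sets in child_sets:
            combined = [a | b for a in combined for b in sets]
    # absorption: drop any set that contains another cut set
    minimal = []
    for s in combined:
        if not any(t <= s for t in minimal):
            minimal = [t for t in minimal if not s < t] + [s]
    return minimal

# TOP = (A OR (B AND C)) AND (A OR B)  -- a toy tree
tree = {
    'TOP': ('AND', ['G1', 'G2']),
    'G1':  ('OR',  ['A', 'G3']),
    'G3':  ('AND', ['B', 'C']),
    'G2':  ('OR',  ['A', 'B']),
}
cuts = minimal_cut_sets('TOP', tree)
```

Naive expansion like this blows up combinatorially on large trees, which is exactly why the techniques in the abstract (independent-subtree processing, staged derivation) cut the runtime from thousands of seconds to seconds.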

  18. Higher albedos and size distribution of large transneptunian objects

    Science.gov (United States)

    Lykawka, Patryk Sofia; Mukai, Tadashi

    2005-11-01

    Transneptunian objects (TNOs) orbit beyond Neptune and offer important clues about the formation of our solar system. Although observations have been increasing the number of discovered TNOs and improving their orbital elements, very little is known about elementary physical properties such as sizes, albedos and compositions. Due to TNOs' large distances (>40 AU) and observational limitations, reliable physical information can be obtained only from brighter objects (supposedly larger bodies). According to the size and albedo measurements available, it is evident that the traditionally assumed albedo p=0.04 cannot hold for all TNOs, especially those with absolute magnitudes H⩽5.5. That is, the largest TNOs possess higher albedos (generally >0.04) that strongly appear to increase as a function of size. Using a compilation of published data, we derived empirical relations which provide estimates of diameters and albedos as a function of absolute magnitude. The calculations result in more accurate size/albedo estimates for TNOs with H⩽5.5 than just assuming p=0.04. Nevertheless, considering the low statistics, the value p=0.04 still appears convenient for H>5.5 non-binary TNOs as a group. We also discuss the physical processes (e.g., collisions, intrinsic activity and the presence of tenuous atmospheres) responsible for the increase of albedo among large bodies. Currently, all big TNOs (>700 km) would be capable of sustaining thin atmospheres or icy frosts composed of CH4, CO or N2, even for body bulk densities as low as 0.5 g cm⁻³. A size-dependent albedo has important consequences for the TNO size distribution, cumulative luminosity function and total mass estimates. According to our analysis, the latter can be reduced by up to 50% if higher albedos are common among large bodies. Lastly, we analyze the orbital properties of classical TNOs (42 AU < a < 48 AU). For both populations, distinct absolute magnitude distributions are maximized for an inclination threshold
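The albedo-size coupling the abstract discusses rests on the standard conversion between absolute magnitude H, geometric albedo p and diameter D, namely D = (1329 km / √p) · 10^(−H/5). The paper's own empirical H-albedo relations are not reproduced here; the sketch just shows how strongly the assumed albedo changes the inferred size.

```python
import math

def diameter_km(H, albedo):
    """Standard asteroid/TNO size conversion:
    D = (1329 km / sqrt(p)) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(albedo) * 10.0 ** (-H / 5.0)

# Traditional p = 0.04 vs. a higher albedo for the same bright object:
d_low  = diameter_km(3.0, 0.04)   # classical assumption
d_high = diameter_km(3.0, 0.16)   # a 4x higher albedo halves the diameter
```

Since D scales as p^(-1/2), quadrupling the albedo halves the diameter, and the inferred mass (∝ D³) drops by a factor of 8, which is why size-dependent albedos shrink total mass estimates so strongly.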

  19. Phased array inspection of large size forged steel parts

    Science.gov (United States)

    Dupont-Marillia, Frederic; Jahazi, Mohammad; Belanger, Pierre

    2018-04-01

    High strength forged steel requires uncompromising quality to warrant advance performance for numerous critical applications. Ultrasonic inspection is commonly used in nondestructive testing to detect cracks and other defects. In steel blocks of relatively small dimensions (at least two directions not exceeding a few centimetres), phased array inspection is a trusted method to generate images of the inside of the blocks and therefore identify and size defects. However, casting of large size forged ingots introduces changes of mechanical parameters such as grain size, the Young's modulus, the Poisson's ratio, and the chemical composition. These heterogeneities affect the wave propagation, and consequently, the reliability of ultrasonic inspection and the imaging capabilities for these blocks. In this context, a custom phased array transducer designed for a 40-ton bainitic forged ingot was investigated. Following a previous study that provided local mechanical parameters for a similar block, two-dimensional simulations were made to compute the optimal transducer parameters including the pitch, width and number of elements. It appeared that depending on the number of elements, backwall reconstruction can generate high amplitude artefacts. Indeed, the large dimensions of the simulated block introduce numerous constructive interferences from backwall reflections which may lead to important artefacts. To increase image quality, the reconstruction algorithm was adapted and promising results were observed and compared with the scattering cone filter method available in the CIVA software.

  20. Reliable pipeline repair system for very large pipe size

    Energy Technology Data Exchange (ETDEWEB)

    Charalambides, John N.; Sousa, Alexandre Barreto de [Oceaneering International, Inc., Houston, TX (United States)

    2004-07-01

    The oil and gas industry worldwide has mainly depended on the long-term reliability of rigid pipelines to ensure the transportation of hydrocarbons, crude oil, gas, fuel, etc. Many other methods are also utilized onshore and offshore (e.g. flexible lines, FPSOs, etc.), but when it comes to the underwater transportation of very high volumes of oil and gas, the industry commonly uses large size rigid pipelines (i.e. steel pipes). Oil and gas operators learned to depend on the long-lasting integrity of these very large pipelines, and many times they forget or disregard that even steel pipelines degrade over time and, more often than not, are also susceptible to various forms of damage (minor or major, environmental or external, etc.). Over recent years the industry has recognized the need to implement an 'emergency repair plan' to account for such unforeseen events, and oil and gas operators have become 'smarter' by being 'pro-active' in order to ensure 'flow assurance'. When we consider very large diameter steel pipelines such as 42" and 48" nominal pipe size (NPS), the industry worldwide does not provide 'ready-made', 'off-the-shelf' repair hardware that can be easily shipped to the offshore location to effect a major repair within acceptable time frames and avoid substantial profit losses due to 'down-time' in production. The typical time required to establish a solid repair system for large pipe diameters could be as long as six or more months (depending on the availability of raw materials). This paper will present in detail the Emergency Pipeline Repair Systems (EPRS) that Oceaneering successfully designed, manufactured, tested and provided to two major oil and gas operators, located in two different continents (Gulf of Mexico, U.S.A. and Arabian Gulf, U.A.E.), for two different very large pipe sizes (42" and 48" Nominal Pipe Sizes

  1. Polish Phoneme Statistics Obtained On Large Set Of Written Texts

    Directory of Open Access Journals (Sweden)

    Bartosz Ziółko

    2009-01-01

    Full Text Available The phonetic statistics were collected from several Polish corpora. The paper is a summary of the data, which are phoneme n-grams, and of some phenomena in the statistics. Triphone statistics employ context-dependent speech units, which play an important role in speech recognition systems and had never been calculated for a large set of Polish written texts. The standard phonetic alphabet for Polish, SAMPA, and methods of providing phonetic transcriptions are described.
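Collecting triphone statistics amounts to counting sliding windows of three phonemes over a transcribed corpus. A minimal sketch follows; the SAMPA-style symbols in the toy sequence are illustrative, not drawn from the paper's corpora.

```python
def triphone_counts(phonemes):
    """Count phoneme trigrams (triphones) in a phoneme sequence --
    the core statistic tabulated for context-dependent speech units."""
    counts = {}
    for i in range(len(phonemes) - 2):
        tri = tuple(phonemes[i:i + 3])
        counts[tri] = counts.get(tri, 0) + 1
    return counts

# A toy SAMPA-style phoneme sequence (illustrative symbols only).
seq = ['t', 'S', '1', 't', 'S', '1']
counts = triphone_counts(seq)
```

Run over a full corpus and normalized, these counts become the triphone n-gram probabilities a recognizer's language/pronunciation model consumes.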

  2. Shortest triplet clustering: reconstructing large phylogenies using representative sets

    Directory of Open Access Journals (Sweden)

    Sy Vinh Le

    2005-04-01

    Full Text Available Abstract Background Understanding the evolutionary relationships among species based on their genetic information is one of the primary objectives in phylogenetic analysis. Reconstructing phylogenies for large data sets is still a challenging task in Bioinformatics. Results We propose a new distance-based clustering method, the shortest triplet clustering algorithm (STC), to reconstruct phylogenies. The main idea is the introduction of a natural definition of so-called k-representative sets. Based on k-representative sets, shortest triplets are reconstructed and serve as building blocks for the STC algorithm to agglomerate sequences for tree reconstruction in O(n²) time for n sequences. Simulations show that STC gives better topological accuracy than other tested methods that also build a first starting tree. STC appears to be a very good method to start the tree reconstruction. However, all tested methods give similar results if balanced nearest neighbor interchange (BNNI) is applied as a post-processing step. BNNI leads to an improvement in all instances. The program is available at http://www.bi.uni-duesseldorf.de/software/stc/. Conclusion The results demonstrate that the new approach efficiently reconstructs phylogenies for large data sets. We found that BNNI boosts the topological accuracy of all methods including STC; therefore, one should use BNNI as a post-processing step to get better topological accuracy.

  3. Size matters: large objects capture attention in visual search.

    Science.gov (United States)

    Proulx, Michael J

    2010-12-23

    Can objects or events ever capture one's attention in a purely stimulus-driven manner? A recent review of the literature set out the criteria required to find stimulus-driven attentional capture independent of goal-directed influences, and concluded that no published study has satisfied that criteria. Here visual search experiments assessed whether an irrelevantly large object can capture attention. Capture of attention by this static visual feature was found. The results suggest that a large object can indeed capture attention in a stimulus-driven manner and independent of displaywide features of the task that might encourage a goal-directed bias for large items. It is concluded that these results are either consistent with the stimulus-driven criteria published previously or alternatively consistent with a flexible, goal-directed mechanism of saliency detection.

  4. Large Time Asymptotics for a Continuous Coagulation-Fragmentation Model with Degenerate Size-Dependent Diffusion

    KAUST Repository

    Desvillettes, Laurent

    2010-01-01

    We study a continuous coagulation-fragmentation model with constant kernels for reacting polymers (see [M. Aizenman and T. Bak, Comm. Math. Phys., 65 (1979), pp. 203-230]). The polymers are set to diffuse within a smooth bounded one-dimensional domain with no-flux boundary conditions. In particular, we consider size-dependent diffusion coefficients, which may degenerate for small and large cluster-sizes. We prove that the entropy-entropy dissipation method applies directly in this inhomogeneous setting. We first show the necessary basic a priori estimates in dimension one, and second we show faster-than-polynomial convergence toward global equilibria for diffusion coefficients which vanish not faster than linearly for large sizes. This extends the previous results of [J.A. Carrillo, L. Desvillettes, and K. Fellner, Comm. Math. Phys., 278 (2008), pp. 433-451], which assumes that the diffusion coefficients are bounded below. © 2009 Society for Industrial and Applied Mathematics.
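For orientation, the constant-kernel (Aizenman-Bak) reaction-diffusion system the abstract refers to is usually written as below; this is the standard form from the cited literature, reproduced as a reference sketch rather than taken verbatim from the paper.

```latex
% c(t,x,y): concentration of polymer clusters of size y > 0 at x in (0,1),
% with size-dependent diffusion a(y) and no-flux boundary conditions.
\partial_t c - a(y)\,\partial_{xx} c
  = \underbrace{\int_0^y c(y')\,c(y-y')\,\mathrm{d}y'}_{\text{coagulation gain}}
  - \underbrace{2\,c(y)\int_0^\infty c(y')\,\mathrm{d}y'}_{\text{coagulation loss}}
  + \underbrace{2\int_y^\infty c(y')\,\mathrm{d}y'}_{\text{fragmentation gain}}
  - \underbrace{y\,c(y)}_{\text{fragmentation loss}},
\qquad \partial_x c\big|_{x\in\{0,1\}} = 0 .
```

The entropy-entropy dissipation argument of the abstract controls the distance of $c$ to the global equilibrium of this system even when $a(y)$ degenerates for small and large $y$.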

  5. Classification of large-sized hyperspectral imagery using fast machine learning algorithms

    Science.gov (United States)

    Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira

    2017-07-01

    We present a framework of fast machine learning algorithms for large-sized hyperspectral image classification, from the theoretical to the practical viewpoint. In particular, we assess the performance of random forest (RF), rotation forest (RoF), and extreme learning machine (ELM), as well as ensembles of RF and ELM. These classifiers are applied to two large-sized hyperspectral images and compared to support vector machines. For a quantitative analysis, we focus on comparing these methods when working with high input dimensions and a limited/sufficient training set. Moreover, other important issues such as the computational cost and robustness against noise are also discussed.
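One simple way to combine the per-pixel predictions of several base classifiers (e.g. an RF and an ELM, as in the abstract) is majority voting. The paper's exact fusion scheme may differ; the sketch below only illustrates the ensemble rule, with made-up class labels.

```python
from collections import Counter

def majority_vote(*label_lists):
    """Fuse per-pixel class predictions from several classifiers by
    majority vote (ties broken by first-seen label)."""
    return [Counter(labels).most_common(1)[0][0]
            for labels in zip(*label_lists)]

# Hypothetical predictions for 4 pixels from three classifiers.
rf_pred  = ['water', 'urban', 'veg',   'veg']
elm_pred = ['water', 'veg',   'veg',   'urban']
rof_pred = ['water', 'urban', 'urban', 'veg']
fused = majority_vote(rf_pred, elm_pred, rof_pred)
```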

  6. Optimizing distance-based methods for large data sets

    Science.gov (United States)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have received increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n²). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
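The constant-memory idea can be illustrated simply: the distance-based indices above all summarize the distribution of pairwise distances, and that distribution can be accumulated into a fixed-size histogram while streaming over pairs, instead of materializing the n x n distance matrix. The paper's actual algorithm is more elaborate; this is a minimal sketch.

```python
import math

def distance_histogram(points, bin_width, n_bins):
    """Histogram of all pairwise distances with O(n_bins) extra memory:
    pairs are streamed, never stored as an n x n matrix."""
    hist = [0] * n_bins
    n = len(points)
    for i in range(n):
        xi, yi = points[i]
        for j in range(i + 1, n):
            xj, yj = points[j]
            b = int(math.hypot(xi - xj, yi - yj) / bin_width)
            if b < n_bins:
                hist[b] += 1
    return hist

pts = [(0, 0), (3, 4), (0, 8), (6, 8)]      # toy firm locations
hist = distance_histogram(pts, bin_width=5.0, n_bins=3)
```

Running time is still O(n²), but memory no longer grows with n, which is what makes million-firm data sets feasible.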

  7. Querying Large Physics Data Sets Over an Information Grid

    CERN Document Server

    Baker, N; Kovács, Z; Le Goff, J M; McClatchey, R

    2001-01-01

    Optimising use of the Web (WWW) for LHC data analysis is a complex problem and illustrates the challenges arising from the integration of and computation across massive amounts of information distributed worldwide. Finding the right piece of information can, at times, be extremely time-consuming, if not impossible. So-called Grids have been proposed to facilitate LHC computing and many groups have embarked on studies of data replication, data migration and networking philosophies. Other aspects such as the role of 'middleware' for Grids are emerging as requiring research. This paper positions the need for appropriate middleware that enables users to resolve physics queries across massive data sets. It identifies the role of meta-data for query resolution and the importance of Information Grids for high-energy physics analysis rather than just Computational or Data Grids. This paper identifies software that is being implemented at CERN to enable the querying of very large collaborating HEP data-sets, initially...

  8. A homeostatic clock sets daughter centriole size in flies

    Science.gov (United States)

    Aydogan, Mustafa G.; Steinacker, Thomas L.; Novak, Zsofia A.; Baumbach, Janina; Muschalik, Nadine

    2018-01-01

    Centrioles are highly structured organelles whose size is remarkably consistent within any given cell type. New centrioles are born when Polo-like kinase 4 (Plk4) recruits Ana2/STIL and Sas-6 to the side of an existing “mother” centriole. These two proteins then assemble into a cartwheel, which grows outwards to form the structural core of a new daughter. Here, we show that in early Drosophila melanogaster embryos, daughter centrioles grow at a linear rate during early S-phase and abruptly stop growing when they reach their correct size in mid- to late S-phase. Unexpectedly, the cartwheel grows from its proximal end, and Plk4 determines both the rate and period of centriole growth: the more active the centriolar Plk4, the faster centrioles grow, but the faster centriolar Plk4 is inactivated and growth ceases. Thus, Plk4 functions as a homeostatic clock, establishing an inverse relationship between growth rate and period to ensure that daughter centrioles grow to the correct size. PMID:29500190
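
The inverse rate/period relationship can be illustrated with a toy calculation (purely hypothetical parameters, not from the study): if higher Plk4 activity both raises the growth rate and shortens the growth period proportionally, the final size is rate-independent:

```python
def final_centriole_size(growth_rate, s_target=1.0, dt=0.001):
    """Toy homeostatic-clock model: the growth period is inversely set
    by the growth rate, so the final size is rate-independent and equals
    the set point s_target. All names and values are illustrative."""
    period = s_target / growth_rate   # faster growth -> earlier stop
    size, t = 0.0, 0.0
    while t < period:
        size += growth_rate * dt      # linear growth during the period
        t += dt
    return size
```

Doubling or halving `growth_rate` leaves the final size at the set point, mirroring the paper's observation that rate and period trade off.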

  9. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses ... on avoiding redundancy for users working on the same task. While this improves the effectiveness of the user work process, the underlying query processing engine is typically considered a "black box" and left unchanged. Research in multiple query processing, on the other hand, ignores the application ... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems.

  10. Vertebral Adaptations to Large Body Size in Theropod Dinosaurs.

    Directory of Open Access Journals (Sweden)

    John P Wilson

    Full Text Available Rugose projections on the anterior and posterior aspects of vertebral neural spines appear throughout Amniota and result from the mineralization of the supraspinous and interspinous ligaments via metaplasia, the process of permanent tissue-type transformation. In mammals, this metaplasia is generally pathological or stress induced, but is a normal part of development in some clades of birds. Such structures, though phylogenetically sporadic, appear throughout the fossil record of non-avian theropod dinosaurs, yet their physiological and adaptive significance has remained unexamined. Here we show novel histologic and phylogenetic evidence that neural spine projections were a physiological response to biomechanical stress in large-bodied theropod species. Metaplastic projections also appear to vary between immature and mature individuals of the same species, with immature animals either lacking them or exhibiting smaller projections, supporting the hypothesis that these structures develop through ontogeny as a result of increasing bending stress on the spinal column. Metaplastic mineralization of spinal ligaments would likely affect the flexibility of the spinal column, increasing passive support for body weight. A stiff spinal column would also provide biomechanical support for the primary hip flexors and, therefore, may have played a role in locomotor efficiency and mobility in large-bodied species. This new association of interspinal ligament metaplasia in Theropoda with large body size contributes additional insight to our understanding of the diverse biomechanical coping mechanisms developed throughout Dinosauria, and stresses the significance of phylogenetic methods when testing for biological trends, evolutionary or not.

  11. Vertebral Adaptations to Large Body Size in Theropod Dinosaurs.

    Science.gov (United States)

    Wilson, John P; Woodruff, D Cary; Gardner, Jacob D; Flora, Holley M; Horner, John R; Organ, Chris L

    2016-01-01

    Rugose projections on the anterior and posterior aspects of vertebral neural spines appear throughout Amniota and result from the mineralization of the supraspinous and interspinous ligaments via metaplasia, the process of permanent tissue-type transformation. In mammals, this metaplasia is generally pathological or stress induced, but is a normal part of development in some clades of birds. Such structures, though phylogenetically sporadic, appear throughout the fossil record of non-avian theropod dinosaurs, yet their physiological and adaptive significance has remained unexamined. Here we show novel histologic and phylogenetic evidence that neural spine projections were a physiological response to biomechanical stress in large-bodied theropod species. Metaplastic projections also appear to vary between immature and mature individuals of the same species, with immature animals either lacking them or exhibiting smaller projections, supporting the hypothesis that these structures develop through ontogeny as a result of increasing bending stress on the spinal column. Metaplastic mineralization of spinal ligaments would likely affect the flexibility of the spinal column, increasing passive support for body weight. A stiff spinal column would also provide biomechanical support for the primary hip flexors and, therefore, may have played a role in locomotor efficiency and mobility in large-bodied species. This new association of interspinal ligament metaplasia in Theropoda with large body size contributes additional insight to our understanding of the diverse biomechanical coping mechanisms developed throughout Dinosauria, and stresses the significance of phylogenetic methods when testing for biological trends, evolutionary or not.

  12. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    Science.gov (United States)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
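
The pattern-generation step the abstract describes (each pattern obtained by solving a single large object placement problem) is essentially the pricing problem of column generation. A minimal sketch, assuming integer lengths and given dual prices (all names illustrative, not the paper's implementation):

```python
def best_pattern(stock_len, item_lens, duals):
    """Pricing step of column generation for 1D cutting stock: find the
    cutting pattern (item counts) that maximizes total dual value within
    one stock length, via an unbounded-knapsack dynamic program."""
    best = [0.0] * (stock_len + 1)     # best dual value for each capacity
    choice = [None] * (stock_len + 1)  # item chosen at each capacity
    for L in range(1, stock_len + 1):
        for i, l in enumerate(item_lens):
            if l <= L and best[L - l] + duals[i] > best[L]:
                best[L] = best[L - l] + duals[i]
                choice[L] = i
    # recover the pattern by walking back along the DP choices
    pattern = [0] * len(item_lens)
    L = stock_len
    while L > 0 and choice[L] is not None:
        pattern[choice[L]] += 1
        L -= item_lens[choice[L]]
    return pattern, best[stock_len]
```

For a stock length of 10 with items of length 3 and 4 and dual prices 2 and 3, the best pattern cuts two 3s and one 4 for a dual value of 7.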

  13. BAND STRUCTURE OF NON-STOICHIOMETRIC LARGE-SIZED NANOCRYSTALLITES

    Directory of Open Access Journals (Sweden)

    I.V.Kityk

    2004-01-01

    Full Text Available A band structure of large-sized (from 20 to 35 nm) non-stoichiometric nanocrystallites (NC) of Si2-xCx (1.04 < x < 1.10) has been investigated using different band energy approaches and a modified Car-Parrinello molecular dynamics structure optimization of the NC interfaces. The non-stoichiometric excess of carbon favors the appearance of a thin, prevailingly carbon-containing layer (about 1 nm thick) covering the crystallites. As a consequence, one can observe a substantial structural reconstruction of the boundary SiC crystalline layers. Numerical modeling has shown that these NC can be considered as reconstructed SiC crystalline films, about 2 nm thick, covering the SiC crystallites. The observed data are considered within different one-electron band structure methods. It was shown that the nano-sized carbon sheet plays a key role in the modified band structure. An independent manifestation of the important role played by the reconstructed confined layers is the experimentally discovered excitonic-like resonances. Low-temperature absorption measurements confirm the existence of sharp absorption resonances originating from the reconstructed layers.

  14. Visualization of diversity in large multivariate data sets.

    Science.gov (United States)

    Pham, Tuan; Hess, Rob; Ju, Crystal; Zhang, Eugene; Metoyer, Ronald

    2010-01-01

    Understanding the diversity of a set of multivariate objects is an important problem in many domains, including ecology, college admissions, investing, machine learning, and others. However, to date, very little work has been done to help users achieve this kind of understanding. Visual representation is especially appealing for this task because it offers the potential to allow users to efficiently observe the objects of interest in a direct and holistic way. Thus, in this paper, we attempt to formalize the problem of visualizing the diversity of a large (more than 1000 objects), multivariate (more than 5 attributes) data set as one worth deeper investigation by the information visualization community. In doing so, we contribute a precise definition of diversity, a set of requirements for diversity visualizations based on this definition, and a formal user study design intended to evaluate the capacity of a visual representation for communicating diversity information. Our primary contribution, however, is a visual representation, called the Diversity Map, for visualizing diversity. An evaluation of the Diversity Map using our study design shows that users can judge elements of diversity consistently and as or more accurately than when using the only other representation specifically designed to visualize diversity.

  15. An alternative method for determining particle-size distribution of forest road aggregate and soil with large-sized particles

    Science.gov (United States)

    Hakjun Rhee; Randy B. Foltz; James L. Fridley; Finn Krogstad; Deborah S. Page-Dumroese

    2014-01-01

    Measurement of particle-size distribution (PSD) of soil with large-sized particles (e.g., 25.4 mm diameter) requires a large sample and numerous particle-size analyses (PSAs). A new method is needed that would reduce time, effort, and cost for PSAs of the soil and aggregate material with large-sized particles. We evaluated a nested method for sampling and PSA by...

  16. New Sequences with Low Correlation and Large Family Size

    Science.gov (United States)

    Zeng, Fanxin

    In direct-sequence code-division multiple-access (DS-CDMA) communication systems and direct-sequence ultra-wideband (DS-UWB) radios, sequences with low correlation and large family size are important for reducing multiple access interference (MAI) and accepting more active users, respectively. In this paper, a new collection of families of sequences of length p^n-1, comprising three constructions, is proposed. The maximum number of cyclically distinct families without GMW sequences in each construction is φ(p^n-1)/n · φ(p^m-1)/m, where p is a prime number, n is an even number, and n=2m; these sequences can be binary or polyphase depending upon the choice of the parameter p. In Construction I, there are p^n distinct sequences within each family and the new sequences have at most d+2 nontrivial periodic correlation values {-p^m-1, -1, p^m-1, 2p^m-1, …, dp^m-1}. In Construction II, the new sequences have large family size p^2n and possibly take nontrivial correlation values in {-p^m-1, -1, p^m-1, 2p^m-1, …, (3d-4)p^m-1}. In Construction III, the new sequences possess the largest family size p^((d-1)n) and have at most 2d correlation levels {-p^m-1, -1, p^m-1, 2p^m-1, …, (2d-2)p^m-1}. The three constructions are near-optimal with respect to the Welch bound because the values of their Welch ratios are moderate: WR ≈ d, WR ≈ 3d-4 and WR ≈ 2d-2, respectively. Each family in Constructions I, II and III contains a GMW sequence. In addition, Helleseth sequences and Niho sequences are special cases of Constructions I and III, and their restriction conditions on the integers m and n, p^m ≢ 2 (mod 3) and n ≡ 0 (mod 4), respectively, are removed in our sequences. Our sequences in Construction III also include the sequences with Niho-type decimation 3·2^m-2. Finally, some open questions are pointed out and an example illustrating the performance of these sequences is given.
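
To make the correlation notion concrete, the sketch below computes the periodic autocorrelation of a +/-1 sequence. The length-7 m-sequence used is a standard textbook example (not one of the paper's constructions) with the ideal two-level autocorrelation {7, -1}:

```python
def periodic_autocorrelation(seq):
    """Periodic (cyclic) autocorrelation of a +/-1 sequence at all shifts."""
    n = len(seq)
    return [sum(seq[t] * seq[(t + s) % n] for t in range(n))
            for s in range(n)]

bits = [1, 1, 1, 0, 1, 0, 0]           # one period of a length-7 m-sequence
m_seq = [(-1) ** b for b in bits]      # map 0 -> +1, 1 -> -1
```

At shift 0 the correlation equals the length (7); at every nontrivial shift it is -1, the ideal value for a binary sequence of this length.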

  17. Sizing the star cluster population of the Large Magellanic Cloud

    Science.gov (United States)

    Piatti, Andrés E.

    2018-04-01

    The number of star clusters that populate the Large Magellanic Cloud (LMC) at deprojected distances … knowledge of the LMC cluster formation and dissolution histories, we closely revisited such a compilation of objects and found that only ~35 per cent of the previously known catalogued clusters have been included. The remaining entries are likely related to stellar overdensities of the LMC composite star field: there is a remarkable enhancement of objects with assigned ages older than log(t yr-1) ~ 9.4, which contrasts with the existence of the LMC cluster age gap; the assumption of a cluster formation rate similar to that of the LMC star field does not help to reconcile such a large number of clusters either; and nearly 50 per cent of them come from cluster search procedures known to produce more than 90 per cent false detections. The lack of further analyses confirming that the identified overdensities are genuine star clusters also casts doubt on those results. We maintain that the actual size of the LMC main-body cluster population is close to that previously known.

  18. Processing and properties of large-sized ceramic slabs

    Directory of Open Access Journals (Sweden)

    Fossa, L.

    2010-10-01

    Full Text Available Large-sized ceramic slabs – with dimensions up to 360x120 cm and thickness down to 2 mm – are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability in a rather wide window of firing schedules. The phase composition and compact microstructure of fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performances as well as mechanical and tribological properties conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestows on the slab a certain degree of flexibility, which is emphasized in ceramic-fiberglass composites. These outstanding performances make the large-sized slabs suitable for novel applications: building and construction (new floorings laid without dismantling the previous paving, ventilated façades, tunnel coverings, insulating panelling), indoor furniture (table tops, doors), and supports for photovoltaic ceramic panels.

    Large-format slabs have been manufactured, with dimensions up to 360x120 cm and thickness below 2 mm, using innovative fabrication methods, starting from porcelain stoneware compositions and employing wet ball milling, spray drying, low-speed die-less pressing, fast single-stage drying and firing, and a finishing stage that includes bonding fiberglass to the ceramic backing and rectifying the final piece. …

  19. An effective filter for IBD detection in large data sets.

    KAUST Repository

    Huang, Lin

    2014-03-25

    Identity by descent (IBD) inference is the task of computationally detecting genomic segments that are shared between individuals by means of common familial descent. Accurate IBD detection plays an important role in various genomic studies, ranging from mapping disease genes to exploring ancient population histories. The majority of recent work in the field has focused on improving the accuracy of inference, targeting shorter genomic segments that originate from a more ancient common ancestor. The accuracy of these methods, however, is achieved at the expense of high computational cost, resulting in a prohibitively long running time when applied to large cohorts. To enable the study of large cohorts, we introduce SpeeDB, a method that facilitates fast IBD detection in large unphased genotype data sets. Given a target individual and a database of individuals that potentially share IBD segments with the target, SpeeDB applies an efficient opposite-homozygous filter, which excludes chromosomal segments from the database that are highly unlikely to be IBD with the corresponding segments from the target individual. The remaining segments can then be evaluated by any IBD detection method of choice. When examining simulated individuals sharing 4 cM IBD regions, SpeeDB filtered out 99.5% of genomic regions from consideration while retaining 99% of the true IBD segments. Applying the SpeeDB filter prior to detecting IBD in simulated fourth cousins resulted in an overall running time that was 10,000x faster than inferring IBD without the filter and retained 99% of the true IBD segments in the output.

  20. An effective filter for IBD detection in large data sets.

    KAUST Repository

    Huang, Lin; Bercovici, Sivan; Rodriguez, Jesse M; Batzoglou, Serafim

    2014-01-01

    Identity by descent (IBD) inference is the task of computationally detecting genomic segments that are shared between individuals by means of common familial descent. Accurate IBD detection plays an important role in various genomic studies, ranging from mapping disease genes to exploring ancient population histories. The majority of recent work in the field has focused on improving the accuracy of inference, targeting shorter genomic segments that originate from a more ancient common ancestor. The accuracy of these methods, however, is achieved at the expense of high computational cost, resulting in a prohibitively long running time when applied to large cohorts. To enable the study of large cohorts, we introduce SpeeDB, a method that facilitates fast IBD detection in large unphased genotype data sets. Given a target individual and a database of individuals that potentially share IBD segments with the target, SpeeDB applies an efficient opposite-homozygous filter, which excludes chromosomal segments from the database that are highly unlikely to be IBD with the corresponding segments from the target individual. The remaining segments can then be evaluated by any IBD detection method of choice. When examining simulated individuals sharing 4 cM IBD regions, SpeeDB filtered out 99.5% of genomic regions from consideration while retaining 99% of the true IBD segments. Applying the SpeeDB filter prior to detecting IBD in simulated fourth cousins resulted in an overall running time that was 10,000x faster than inferring IBD without the filter and retained 99% of the true IBD segments in the output.
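
A minimal sketch of an opposite-homozygous filter of the kind SpeeDB applies (the genotype encoding and threshold here are illustrative assumptions, not the paper's exact specification):

```python
def passes_oh_filter(target, candidate, max_oh=0):
    """Opposite-homozygous filter for IBD candidate segments (sketch).

    Genotypes are minor-allele counts (0, 1, 2). At a site where the
    target is homozygous for one allele (0) and the candidate is
    homozygous for the other (2), or vice versa, the pair cannot share
    an allele by descent, so segments with more than max_oh such sites
    are excluded before any expensive IBD inference is run.
    """
    oh = sum(1 for t, c in zip(target, candidate)
             if (t == 0 and c == 2) or (t == 2 and c == 0))
    return oh <= max_oh
```

Segments that pass the filter would then be handed to a full IBD detection method; segments with opposite-homozygous sites are discarded cheaply.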

  1. High voltage distribution scheme for large size GEM detector

    International Nuclear Information System (INIS)

    Saini, J.; Kumar, A.; Dubey, A.K.; Negi, V.S.; Chattopadhyay, S.

    2016-01-01

    Gas Electron Multiplier (GEM) detectors will be used for muon tracking in the Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR) at Darmstadt, Germany. The detector modules in the muon chambers measure on the order of 1 metre x 0.5 metre. Three GEM foils are used per chamber, each made from two-layer, 50 μm thin kapton foil. Each GEM foil has millions of holes in it. In such large-scale manufacturing of the foils, even with stringent quality controls, some holes may still have defects, or defects may develop over time under operating conditions. These defects may short-circuit the entire GEM foil; a short in even a single hole makes the entire foil unusable. To reduce such occurrences, high-voltage (HV) segmentation within the foils has been introduced. These segments are powered either by an individual HV supply per segment or through an active HV distribution that manages the large number of segments across the foil. Individual supplies, apart from being costly, are highly complex to implement. Additionally, CBM will have a high intensity of particles bombarding the detector, so the current in the resistive chain feeding the GEM detector changes with the particle intensity. This leads to voltage fluctuations across the foil, resulting in gain variation with particle intensity. Hence, a low-cost active HV distribution was designed to address these issues.

  2. A hybrid adaptive large neighborhood search algorithm applied to a lot-sizing problem

    DEFF Research Database (Denmark)

    Muller, Laurent Flindt; Spoorendonk, Simon

    This paper presents a hybrid of a general heuristic framework that has been successfully applied to vehicle routing problems and a general purpose MIP solver. The framework uses local search and an adaptive procedure which chooses between a set of large neighborhoods to be searched. A mixed integer … of a solution and to investigate the feasibility of elements in such a neighborhood. The hybrid heuristic framework is applied to the multi-item capacitated lot sizing problem with dynamic lot sizes, where experiments have been conducted on a series of instances from the literature. On average the heuristic…
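
The adaptive neighborhood-selection loop can be sketched as follows. This is a generic illustration on an arbitrary cost function, not the paper's lot-sizing implementation, and all names are made up:

```python
import random

def alns_minimize(cost, initial, neighborhoods, iters=500, seed=0):
    """Minimal adaptive large neighborhood search skeleton (illustrative).

    `neighborhoods` is a list of functions mapping a solution to a
    neighbor. Selection weights are adapted online: a neighborhood that
    produced an improvement is rewarded and chosen more often.
    """
    rng = random.Random(seed)
    weights = [1.0] * len(neighborhoods)
    best = cur = initial
    for _ in range(iters):
        k = rng.choices(range(len(neighborhoods)), weights=weights)[0]
        cand = neighborhoods[k](cur)
        if cost(cand) < cost(cur):    # accept improving moves only
            cur = cand
            weights[k] += 1.0         # reward the successful neighborhood
        if cost(cur) < cost(best):
            best = cur
    return best
```

Real ALNS implementations also use destroy/repair operator pairs, an acceptance criterion such as simulated annealing, and periodic weight decay; those refinements are omitted here.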

  3. Large-size space debris flyby in low earth orbits

    Science.gov (United States)

    Baranov, A. A.; Grishko, D. A.; Razoumny, Y. N.

    2017-09-01

    The analysis of the NORAD catalogue of space objects, carried out with respect to the overall sizes of upper stages and last stages of carrier rockets, allows the classification of five groups of large-size space debris (LSSD). These groups are defined according to the proximity of the orbital inclinations of the involved objects. The orbits within a group have various values of deviation in the Right Ascension of the Ascending Node (RAAN). It is proposed to use the RAAN deviations' evolution portrait to clarify the relative spatial distribution of the orbital planes in a group, where the RAAN deviations are calculated with respect to the precessing orbital plane of a particular object. In the case of the first three groups (inclinations i = 71°, i = 74°, i = 81°), the straight lines of the relative RAAN deviations almost do not intersect each other, so a simple, successive flyby of a group's elements is effective, but a significant total ΔV is required to form the drift orbits. In the case of the fifth group (Sun-synchronous orbits), these straight lines intersect each other chaotically many times, owing to the noticeable differences in the values of the semi-major axes and orbital inclinations. The existence of intersections makes it possible to create a flyby sequence for an LSSD group in which the orbit of one LSSD object simultaneously serves as the drift orbit to attain another LSSD object. This flyby scheme, which requires less ΔV, was called "diagonal". The RAAN deviations' evolution portrait built for the fourth group (studied in this paper) contains both types of lines, so a simultaneous combination of the diagonal and successive flyby schemes is possible. The total ΔV and the time costs were calculated to cover all the elements of the 4th group. The article also presents results for the flyby problem in the case of all five LSSD groups, and general recommendations are given concerning the required reserve of total ΔV…
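
The RAAN drift such flyby schemes exploit comes from the standard J2 nodal-precession formula; a small sketch with Earth constants from standard references (the function name is illustrative):

```python
import math

# Earth constants (standard reference values)
MU = 3.986004418e14   # gravitational parameter, m^3/s^2
RE = 6378137.0        # equatorial radius, m
J2 = 1.08262668e-3    # second zonal harmonic

def raan_rate(a, e, i):
    """dRAAN/dt (rad/s) from the J2 perturbation for an orbit with
    semi-major axis a (m), eccentricity e and inclination i (rad)."""
    n = math.sqrt(MU / a ** 3)        # mean motion
    p = a * (1.0 - e ** 2)            # semi-latus rectum
    return -1.5 * J2 * (RE / p) ** 2 * n * math.cos(i)
```

Two debris objects in nearly the same plane but with slightly different semi-major axis or inclination precess at different rates, so their RAAN deviation grows secularly; for a prograde orbit (e.g. i = 71°) the node regresses, while for a Sun-synchronous retrograde orbit it advances by roughly 0.986 deg/day.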

  4. Visual exposure to large and small portion sizes and perceptions of portion size normality: Three experimental studies

    OpenAIRE

    Robinson, Eric; Oldham, Melissa; Cuckson, Imogen; Brunstrom, Jeffrey M.; Rogers, Peter J.; Hardman, Charlotte A.

    2016-01-01

    Portion sizes of many foods have increased in recent times. In three studies we examined the effect that repeated visual exposure to larger versus smaller food portion sizes has on perceptions of what constitutes a normal-sized food portion and measures of portion size selection. In studies 1 and 2 participants were visually exposed to images of large or small portions of spaghetti bolognese, before making evaluations about an image of an intermediate sized portion of the same food. In study ...

  5. Effects of Group Size on Students Mathematics Achievement in Small Group Settings

    Science.gov (United States)

    Enu, Justice; Danso, Paul Amoah; Awortwe, Peter K.

    2015-01-01

    An ideal group size is hard to obtain in small group settings; hence there are groups with more members than others. The purpose of the study was to find out whether group size has any effects on students' mathematics achievement in small group settings. Two third year classes of the 2011/2012 academic year were selected from two schools in the…

  6. Visual exposure to large and small portion sizes and perceptions of portion size normality: Three experimental studies.

    Science.gov (United States)

    Robinson, Eric; Oldham, Melissa; Cuckson, Imogen; Brunstrom, Jeffrey M; Rogers, Peter J; Hardman, Charlotte A

    2016-03-01

    Portion sizes of many foods have increased in recent times. In three studies we examined the effect that repeated visual exposure to larger versus smaller food portion sizes has on perceptions of what constitutes a normal-sized food portion and measures of portion size selection. In studies 1 and 2 participants were visually exposed to images of large or small portions of spaghetti bolognese, before making evaluations about an image of an intermediate sized portion of the same food. In study 3 participants were exposed to images of large or small portions of a snack food before selecting a portion size of snack food to consume. Across the three studies, visual exposure to larger as opposed to smaller portion sizes resulted in participants considering a normal portion of food to be larger than a reference intermediate sized portion. In studies 1 and 2 visual exposure to larger portion sizes also increased self-reported ideal meal size. In study 3 visual exposure to larger portion sizes of a snack food did not affect how much of that food participants subsequently served themselves and ate. Visual exposure to larger portion sizes may adjust visual perceptions of what constitutes a 'normal' sized portion. However, we did not find evidence that visual exposure to larger portions altered snack food intake. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers … a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm … applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large-size protein data set. The suggested methodology is fairly general…
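
For intuition, the simplest member of the ABC family, rejection sampling, can be sketched as follows. The paper uses ABC-MCMC, which replaces the independent prior draws below with a Markov chain; all names and the toy model here are illustrative:

```python
import random

def abc_rejection(observed, simulate, prior_sample, tol, n_samples, seed=0):
    """Plain ABC rejection sampling (a simpler cousin of ABC-MCMC):
    keep parameter draws whose simulated summary statistic lands
    within tol of the observed one."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed) < tol:
            accepted.append(theta)
    return accepted

# Toy model: the summary statistic is theta plus small Gaussian noise;
# infer theta from an observed summary of 2.0 under a flat prior.
post = abc_rejection(
    observed=2.0,
    simulate=lambda th, rng: th + rng.gauss(0, 0.1),
    prior_sample=lambda rng: rng.uniform(-5, 5),
    tol=0.2, n_samples=200)
```

The accepted draws concentrate around the true parameter; shrinking `tol` tightens the approximation at the cost of a lower acceptance rate, which is exactly the trade-off ABC-MCMC is designed to manage more efficiently.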

  8. Improving a Lecture-Size Molecular Model Set by Repurposing Used Whiteboard Markers

    Science.gov (United States)

    Dragojlovic, Veljko

    2015-01-01

    Preparation of an inexpensive model set from whiteboard markers and either an HGS molecular model set or atoms made of wood is described. The model set is relatively easy to prepare and is sufficiently large to be suitable as an instructor set for use in lectures.

  9. Embrittlement and decrease of apparent strength in large-sized ...

    Indian Academy of Sciences (India)


    In fact, the dimensional disparity between tensile stress σ ([F][L]−2) and … they work only in a limited range. This is the case of the …

  10. A scalable method for identifying frequent subtrees in sets of large phylogenetic trees.

    Science.gov (United States)

    Ramu, Avinash; Kahveci, Tamer; Burleigh, J Gordon

    2012-10-03

    We consider the problem of finding the maximum frequent agreement subtrees (MFASTs) in a collection of phylogenetic trees. Existing methods for this problem often do not scale beyond datasets with around 100 taxa. Our goal is to address this problem for datasets with over a thousand taxa and hundreds of trees. We develop a heuristic solution that aims to find MFASTs in sets of many, large phylogenetic trees. Our method works in multiple phases. In the first phase, it identifies small candidate subtrees from the set of input trees which serve as the seeds of larger subtrees. In the second phase, it combines these small seeds to build larger candidate MFASTs. In the final phase, it performs a post-processing step that ensures that we find a frequent agreement subtree that is not contained in a larger frequent agreement subtree. We demonstrate that this heuristic can easily handle data sets with 1000 taxa, greatly extending the estimation of MFASTs beyond current methods. Although this heuristic does not guarantee to find all MFASTs or the largest MFAST, it found the MFAST in all of our synthetic datasets where we could verify the correctness of the result. It also performed well on large empirical data sets. Its performance is robust to the number and size of the input trees. Overall, this method provides a simple and fast way to identify strongly supported subtrees within large phylogenetic hypotheses.
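
The first (seed) phase can be illustrated by counting clade frequencies across the input trees. This is a simplified analogue, not the paper's algorithm, and the tree encoding (each tree as a collection of clades, i.e. frozensets of taxa) is an assumption made for the sketch:

```python
from collections import Counter

def frequent_clades(trees, min_freq):
    """Seed-phase analogue of the MFAST heuristic (illustrative):
    count how often each clade appears across the trees and keep
    those meeting the frequency threshold."""
    counts = Counter()
    for tree in trees:
        for clade in set(tree):   # count each clade once per tree
            counts[clade] += 1
    return {c for c, k in counts.items() if k >= min_freq}
```

Frequent small clades found this way would serve as seeds to be combined into larger candidate agreement subtrees in later phases.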

  11. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan

    2011-10-10

    Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n 3) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.

  12. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan; Huang, Jianhua Z.

    2011-01-01

    Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n³) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.

  13. Efficient One-click Browsing of Large Trajectory Sets

    DEFF Research Database (Denmark)

    Krogh, Benjamin Bjerre; Andersen, Ove; Lewis-Kelham, Edwin

    2014-01-01

    presents a novel query type called sheaf, where users can browse trajectory data sets using a single mouse click. Sheaves are very versatile and can be used for location-based advertising, travel-time analysis, intersection analysis, and reachability analysis (isochrones). A novel in-memory trajectory...... index compresses the data by a factor of 12.4 and enables execution of sheaf queries in 40 ms. This is up to 2 orders of magnitude faster than existing work. We demonstrate the simplicity, versatility, and efficiency of sheaf queries using a real-world trajectory set consisting of 2.7 million...

  14. The higher order flux mapping method in large size PHWRs

    International Nuclear Information System (INIS)

    Kulkarni, A.K.; Balaraman, V.; Purandare, H.D.

    1997-01-01

    A new higher-order method is proposed for obtaining the flux map using a single set of expansion modes. In this procedure, the difference between the predicted and actual detector readings is used to determine the strength of the local fluxes around each detector site. These local fluxes arise from constant perturbation changes (both extrinsic and intrinsic) taking place in the reactor. (author)

  15. Multivariate modeling of complications with data driven variable selection: Guarding against overfitting and effects of data set size

    International Nuclear Information System (INIS)

    Schaaf, Arjen van der; Xu Chengjian; Luijk, Peter van; Veld, Aart A. van’t; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    Purpose: Multivariate modeling of complications after radiotherapy is frequently used in conjunction with data driven variable selection. This study quantifies the risk of overfitting in a data driven modeling method using bootstrapping for data with typical clinical characteristics, and estimates the minimum amount of data needed to obtain models with relatively high predictive power. Materials and methods: To facilitate repeated modeling and cross-validation with independent datasets for the assessment of true predictive power, a method was developed to generate simulated data with statistical properties similar to real clinical data sets. Characteristics of three clinical data sets from radiotherapy treatment of head and neck cancer patients were used to simulate data with set sizes between 50 and 1000 patients. A logistic regression method using bootstrapping and forward variable selection was used for complication modeling, resulting for each simulated data set in a selected number of variables and an estimated predictive power. The true optimal number of variables and true predictive power were calculated using cross-validation with very large independent data sets. Results: For all simulated data set sizes the number of variables selected by the bootstrapping method was on average close to the true optimal number of variables, but showed considerable spread. Bootstrapping is more accurate in selecting the optimal number of variables than the AIC and BIC alternatives, but this did not translate into a significant difference of the true predictive power. The true predictive power asymptotically converged toward a maximum predictive power for large data sets, and the estimated predictive power converged toward the true predictive power. More than half of the potential predictive power is gained after approximately 200 samples. 
Our simulations demonstrated severe overfitting (a predictive power lower than that of predicting 50% probability) in a number of small
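The bootstrap component of the selection procedure can be illustrated with a toy criterion. The setup below is an assumption for illustration (pick the predictor with the highest absolute correlation in each resample), not the paper's full logistic forward selection:

```python
# Minimal sketch of bootstrap-based variable selection: a variable is trusted
# when it is selected in a large fraction of bootstrap resamples.
import random

random.seed(1)

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (sx * sy)

# Simulated data: only variable 0 carries signal for the outcome y.
n = 200
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(n)]
y = [row[0] + random.gauss(0, 0.5) for row in X]

B = 100
picks = [0, 0, 0]
for _ in range(B):
    idx = [random.randrange(n) for _ in range(n)]     # bootstrap resample
    best = max(range(3), key=lambda j: abs(
        corr([X[i][j] for i in idx], [y[i] for i in idx])))
    picks[best] += 1

freq = [p / B for p in picks]
print(freq[0] > 0.9)  # the true predictor dominates the selections
```

With far fewer samples the selection frequencies become unstable, which is the "considerable spread" the abstract reports for small data sets.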

  16. Simultaneous identification of long similar substrings in large sets of sequences

    Directory of Open Access Journals (Sweden)

    Wittig Burghardt

    2007-05-01

    Background: Sequence comparison faces new challenges today, with many complete genomes and large libraries of transcripts known. Gene annotation pipelines match these sequences in order to identify genes and their alternative splice forms. However, the software currently available cannot simultaneously compare sets of sequences as large as necessary, especially if errors must be considered. Results: We therefore present a new algorithm for the identification of almost perfectly matching substrings in very large sets of sequences. Its implementation, called ClustDB, is considerably faster and can handle 16 times more data than VMATCH, the most memory-efficient exact program known today. ClustDB simultaneously generates large sets of exactly matching substrings of a given minimum length as seeds for a novel method of match extension with errors. It generates alignments of maximum length with at most a given number of errors within each overlapping window of a given size. Such alignments are not optimal in the usual sense but are faster to calculate and often more appropriate than traditional alignments for genomic sequence comparisons, EST and full-length cDNA matching, and genomic sequence assembly. The method is used to check the overlaps and to reveal possible assembly errors for 1377 Medicago truncatula BAC-size sequences published at http://www.medicago.org/genome/assembly_table.php?chr=1. Conclusion: The program ClustDB proves that window alignment is an efficient way to find long sequence sections of homogeneous alignment quality, as expected in the case of random errors, and to detect systematic errors resulting from sequence contaminations. Such inserts are systematically overlooked in long alignments controlled only by tuning penalties for mismatches and gaps. ClustDB is freely available for academic use.
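The seed stage described above (exactly matching substrings of a minimum length shared across sequences) can be sketched with a toy k-mer index. This is a simplified assumption for illustration, not ClustDB's data structures:

```python
# Toy sketch of seed generation: collect length-k substrings shared by all
# sequences, as seeds for later extension with errors.

def kmer_index(seqs, k):
    """Map each length-k substring to the set of sequences containing it."""
    index = {}
    for sid, s in enumerate(seqs):
        for i in range(len(s) - k + 1):
            index.setdefault(s[i:i + k], set()).add(sid)
    return index

seqs = ["ACGTACGTGA", "TTACGTACGA", "GGACGTACGT"]
shared = {w for w, ids in kmer_index(seqs, 6).items() if len(ids) == len(seqs)}
print(sorted(shared))  # ['ACGTAC', 'CGTACG']
```

Overlapping seeds like these two would then be chained and extended within error-bounded windows.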

  17. Aerodynamic Limits on Large Civil Tiltrotor Sizing and Efficiency

    Science.gov (United States)

    Acree, C W.

    2014-01-01

    The NASA Large Civil Tiltrotor (2nd generation, or LCTR2) is a useful reference design for technology impact studies. The present paper takes a broad view of technology assessment by examining the extremes of what aerodynamic improvements might hope to accomplish. Performance was analyzed with aerodynamically idealized rotor, wing, and airframe, representing the physical limits of a large tiltrotor. The analysis was repeated with more realistic assumptions, which revealed that increased maximum rotor lift capability is potentially more effective in improving overall vehicle efficiency than higher rotor or wing efficiency. To balance these purely theoretical studies, some practical limitations on airframe layout are also discussed, along with their implications for wing design. Performance of a less efficient but more practical aircraft with non-tilting nacelles is presented.

  18. Development of large size NC trepanning and horning machine

    International Nuclear Information System (INIS)

    Wada, Yoshiei; Aono, Fumiaki; Siga, Toshihiko; Sudo, Eiichi; Takasa, Seiju; Fukuyama, Masaaki; Sibukawa, Koichi; Nakagawa, Hirokatu

    2010-01-01

    Due to the recent increase in world energy demand, construction of a considerable number of nuclear and fossil power plants is under way, with more planned. High-capacity plants require large forged components, such as monoblock turbine rotor shafts, whose dimensions tend to increase. Some of these components have a center bore for material testing, NDE and other uses. To cope with the increased production of these large forgings with center bores, a new trepanning machine dedicated to boring deep holes was developed at JSW, drawing on accumulated experience and expert know-how. The machine is the world's largest 400 t numerically controlled trepanning and horning machine and offers many advantages in safety, machining precision, machining efficiency, operability, labor saving and energy saving. Furthermore, transfer of technical skill became easier through a centralized monitoring system based on numerically analysed expert know-how. (author)

  19. Remote aerosol testing of large size HEPA filter banks

    International Nuclear Information System (INIS)

    Franklin, B.; Pasha, M.; Bronger, C.A.

    1987-01-01

    Different methods of testing HEPA filter banks are described. Difficulties in remote testing of large banks of HEPA filters in series with minimum distances between banks, and with no available access upstream and downstream of the filter house, are discussed. Modifications incorporated to make the filter system suitable for remote testing without personnel re-entry into the filter house are described for a 51,000 m³/hr filter unit at the WIPP site.

  20. Large Sets in Boolean and Non-Boolean Groups and Topology

    Directory of Open Access Journals (Sweden)

    Ol’ga V. Sipacheva

    2017-10-01

    Various notions of large sets in groups, including the classical notions of thick, syndetic, and piecewise syndetic sets and the new notion of vast sets in groups, are studied with emphasis on the interplay between such sets in Boolean groups. Natural topologies closely related to vast sets are considered; as a byproduct, interesting relations between vast sets and ultrafilters are revealed.

  1. Data Mining and Visualization of Large Human Behavior Data Sets

    DEFF Research Database (Denmark)

    Cuttone, Andrea

    and credit card transactions – have provided us new sources for studying our behavior. In particular smartphones have emerged as new tools for collecting data about human activity, thanks to their sensing capabilities and their ubiquity. This thesis investigates the question of what we can learn about human...... behavior from this rich and pervasive mobile sensing data. In the first part, we describe a large-scale data collection deployment collecting high-resolution data for over 800 students at the Technical University of Denmark using smartphones, including location, social proximity, calls and SMS. We provide...... an overview of the technical infrastructure, the experimental design, and the privacy measures. The second part investigates the usage of this mobile sensing data for understanding personal behavior. We describe two large-scale user studies on the deployment of self-tracking apps, in order to understand...

  2. The large size straw drift chambers of the COMPASS experiment

    CERN Document Server

    Bychkov, V N; Dünnweber, W; Faessler, Martin A; Fischer, H; Franz, J; Geyer, R; Gousakov, Yu V; Grünemaier, A; Heinsius, F H; Ilgner, C; Ivanchenko, I M; Kekelidze, G D; Königsmann, K C; Livinski, V V; Lysan, V M; Marzec, J; Matveev, D A; Mishin, S V; Mialkovski, V V; Novikov, E A; Peshekhonov, V D; Platzer, K; San, M; Schmid, T; Shokin, V I; Sissakian, A N; Viriasov, K S; Wiedner, U; Zaremba, K; Zhukov, I A; Zlobin, Y L; Zvyagin, A

    2005-01-01

    Straw drift chambers are used for the Large Area Tracking (LAT) of the Common Muon and Proton Apparatus for Structure and Spectroscopy (COMPASS) at CERN. An active area of 130 m² in total is covered by 12 440 straw tubes, which are arranged in 15 double layers. The design has been optimized with respect to spatial resolution, rate capability, low material budget and compactness of the detectors. Mechanical and electrical design considerations of the chambers are discussed as well as new production techniques. The mechanical precision of the chambers has been determined using a CCD X-ray scanning apparatus. Results about the performance during data taking in COMPASS are described.

  3. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
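The sample-size effect noted above can be shown with a worked example: the same mild skewness that passes a normality check at small n is decisively rejected at large n, because the z statistic for skewness grows like sqrt(n). The data pattern below is an arbitrary assumption for illustration:

```python
# Why large samples fail normality tests: z statistic for sample skewness.
import math

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def z_skew(xs):
    # large-sample z statistic: SE of sample skewness is about sqrt(6/n)
    return skewness(xs) / math.sqrt(6 / len(xs))

base = [1, 2, 3, 4, 7]       # a mildly right-skewed pattern
small = base * 4             # n = 20, same skewness as base
large = base * 2000          # n = 10000, same skewness as base

print(round(z_skew(small), 2), round(z_skew(large), 2))  # 1.26 28.28
```

The skewness itself is identical in both samples; only the test's power changes, which is why graphical checks are recommended for large n.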

  4. Novel Visualization of Large Health Related Data Sets

    Science.gov (United States)

    2015-03-01

    lower all-cause mortality. While large cross-sectional studies of populations such as the National Health and Nutrition Examination Survey find a... due to impaired renal and hepatic metabolism, decreased dietary intake related to anorexia or nausea, and falsely low HbA1c secondary to uremia or... Renal Nutrition. 2009;19(1):33-37. 2014 Workshop on Visual Analytics in Healthcare.

  5. From Visualisation to Data Mining with Large Data Sets

    CERN Document Server

    Adelmann, Andreas; Shalf, John M; Siegerist, Cristina

    2005-01-01

    In 3D particle simulations, the generated 6D phase space data can be very large due to the need for accurate statistics, sufficient noise attenuation in the field solver and tracking of many turns in ring machines or accelerators. There is a need for distributed applications that allow users to peruse these extremely large remotely located datasets with the same ease as locally downloaded data. This paper will show concepts and a prototype tool to extract useful physical information out of 6D raw phase space data. ParViT allows the user to project 6D data into 3D space by selecting which dimensions will be represented spatially and which dimensions are represented as particle attributes, and the construction of complex transfer functions for representing the particle attributes. It also allows management of time-series data. An HDF5-based parallel-I/O library, with C++, C and Fortran bindings simplifies the interface with a variety of codes. A number of hooks in ParVit will allow it to connect with a para...

  6. Processing large remote sensing image data sets on Beowulf clusters

    Science.gov (United States)

    Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Schmidt, Gail

    2003-01-01

    High-performance computing is often concerned with the speed at which floating-point calculations can be performed. The architectures of many parallel computers and/or their network topologies are based on these investigations. Often, benchmarks resulting from these investigations are compiled with little regard to how a large dataset would move about in these systems. This part of the Beowulf study addresses that concern by looking at specific applications software and system-level modifications. Applications include an implementation of a smoothing filter for time-series data, a parallel implementation of the decision tree algorithm used in the Landcover Characterization project, a parallel Kriging algorithm used to fit point data collected in the field on invasive species to a regular grid, and modifications to the Beowulf project's resampling algorithm to handle larger, higher resolution datasets at a national scale. Systems-level investigations include a feasibility study on Flat Neighborhood Networks and modifications of that concept with Parallel File Systems.

  7. Small size modular fast reactors in large scale nuclear power

    International Nuclear Information System (INIS)

    Zrodnikov, A.V.; Toshinsky, G.I.; Komlev, O.G.; Dragunov, U.G.; Stepanov, V.S.; Klimov, N.N.; Kopytov, I.I.; Krushelnitsky, V.N.

    2005-01-01

    The report presents an innovative nuclear power technology (NPT) based on the use of modular fast reactors (FR) (SVBR-75/100) with a heavy liquid metal coolant (HLMC), i.e. the eutectic lead-bismuth alloy mastered for the reactors of Russian nuclear submarines (NS). Use of this NPT makes it possible to eliminate the conflict between safety and economic requirements peculiar to traditional reactors. The physical features of FRs, the integral design of the reactor and its small power (100 MWe), as well as the natural properties of the lead-bismuth coolant, ensure the realization of inherent safety properties. This made it possible to eliminate many of the safety systems necessary for the reactor installations (RI) of operating NPPs and to design a modular NPP whose technical and economic parameters are competitive not only with those of NPPs based on light water reactors (LWR) but also with those of steam-gas electric power plants. Multipurpose use of transportable, entirely factory-manufactured SVBR-75/100 reactor modules allows their production in large quantities, which reduces their fabrication costs. The proposed NPT provides an economically expedient changeover to the closed nuclear fuel cycle (NFC). When uranium-plutonium fuel is used, the breeding ratio is over one. Use of the proposed NPT makes it possible to considerably increase the investment attractiveness of nuclear power (NP) with fast neutron reactors even today, at low costs of natural uranium. (authors)

  8. Optimizing Distributed Machine Learning for Large Scale EEG Data Set

    Directory of Open Access Journals (Sweden)

    M Bilal Shaikh

    2017-06-01

    Distributed Machine Learning (DML) has gained more importance than ever in this era of Big Data. There are many challenges in scaling machine learning techniques on distributed platforms. When it comes to scalability, improving the processor technology for high-level computation of data is at its limit; however, increasing the number of machine nodes and distributing data along with computation looks like a viable solution. Different frameworks and platforms are available to solve DML problems. These platforms provide automated random data distribution of datasets, which misses the power of user-defined intelligent data partitioning based on domain knowledge. We have conducted an empirical study which uses an EEG data set collected through the P300 Speller component of an ERP (Event Related Potential), which is widely used in BCI problems; it helps in translating the intention of a subject while performing any cognitive task. EEG data contains noise due to waves generated by other activities in the brain, which contaminates the true P300 Speller signal. Use of machine learning techniques could help in detecting errors made by the P300 Speller. We solve this classification problem by partitioning data into different chunks and preparing distributed models using an Elastic CV Classifier. To present a case of optimizing distributed machine learning, we propose an intelligent user-defined data partitioning approach that could impact the accuracy of distributed machine learners on average. Our results show a better average AUC as compared to the average AUC obtained after applying random data partitioning, which gives no control to the user over data partitioning. It improves the average accuracy of the distributed learner due to the domain-specific intelligent partitioning by the user. Our customized approach achieves 0.66 AUC on individual sessions and 0.75 AUC on mixed sessions, whereas random/uncontrolled data distribution records 0.63 AUC.
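The contrast between automated random partitioning and user-defined, domain-informed partitioning can be sketched as follows. The record structure and session key are illustrative assumptions, not the study's actual pipeline:

```python
# Random vs. session-aware partitioning of trial records across worker nodes.
import random

random.seed(0)
records = [{"session": s, "trial": t} for s in range(4) for t in range(6)]

def random_partition(recs, k):
    """Automated partitioning: shuffle, then deal records round-robin."""
    shuffled = recs[:]
    random.shuffle(shuffled)
    return [shuffled[i::k] for i in range(k)]

def session_partition(recs, k):
    """Domain-informed partitioning: whole sessions stay on one node."""
    parts = [[] for _ in range(k)]
    for r in recs:
        parts[r["session"] % k].append(r)
    return parts

parts = session_partition(records, 2)
sessions_per_part = [{r["session"] for r in p} for p in parts]
print(sessions_per_part)  # [{0, 2}, {1, 3}] - no session is split across nodes
```

Random partitioning would scatter each session's trials across nodes, losing the within-session structure that the domain-informed scheme preserves.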

  9. Large size self-assembled quantum rings: quantum size effect and modulation on the surface diffusion.

    Science.gov (United States)

    Tong, Cunzhu; Yoon, Soon Fatt; Wang, Lijun

    2012-09-24

    We demonstrate experimentally submicron-size self-assembled (SA) GaAs quantum rings (QRs) produced by the quantum size effect (QSE). An ultrathin In0.1Ga0.9As layer of varying thickness is deposited on the GaAs to modulate the surface nucleus diffusion barrier, and then the SA QRs are grown. It is found that the density of the QRs is affected significantly by the thickness of the inserted In0.1Ga0.9As, and that the diffusion barrier modulation acts mainly within the first five monolayers. The physical mechanism behind this is discussed. Further analysis shows that a decrease of about 160 meV in the diffusion barrier can be achieved, which allows SA QRs with a density as low as one QR per 6 μm². Finally, QRs with diameters of 438 nm and outer diameters of 736 nm are fabricated using the QSE.

  10. Explicit Constructions and Bounds for Batch Codes with Restricted Size of Reconstruction Sets

    OpenAIRE

    Thomas, Eldho K.; Skachek, Vitaly

    2017-01-01

    Linear batch codes and codes for private information retrieval (PIR) with a query size $t$ and a restricted size $r$ of the reconstruction sets are studied. New bounds on the parameters of such codes are derived for small values of $t$ or of $r$ by providing corresponding constructions. By building on the ideas of Cadambe and Mazumdar, a new bound in a recursive form is derived for batch codes and PIR codes.

  11. Determinants of Awareness, Consideration, and Choice Set Size in University Choice.

    Science.gov (United States)

    Dawes, Philip L.; Brown, Jennifer

    2002-01-01

    Developed and tested a model of students' university "brand" choice using five individual-level variables (ethnic group, age, gender, number of parents going to university, and academic ability) and one situational variable (duration of search) to explain variation in the sizes of awareness, consideration, and choice decision sets. (EV)

  12. Combining the role of convenience and consideration set size in explaining fish consumption in Norway.

    Science.gov (United States)

    Rortveit, Asbjorn Warvik; Olsen, Svein Ottar

    2009-04-01

    The purpose of this study is to explore how convenience orientation, perceived product inconvenience and consideration set size are related to attitudes towards fish and fish consumption. The authors present a structural equation model (SEM) based on the integration of two previous studies. The results of a SEM analysis using Lisrel 8.72 on data from a Norwegian consumer survey (n=1630) suggest that convenience orientation and perceived product inconvenience have a negative effect on both consideration set size and consumption frequency. Attitude towards fish has the greatest impact on consumption frequency. The results also indicate that perceived product inconvenience is a key variable since it has a significant impact on attitude, and on consideration set size and consumption frequency. Further, the analyses confirm earlier findings suggesting that the effect of convenience orientation on consumption is partially mediated through perceived product inconvenience. The study also confirms earlier findings suggesting that the consideration set size affects consumption frequency. Practical implications drawn from this research are that the seafood industry would benefit from developing and positioning products that change beliefs about fish as an inconvenient product. Future research for other food categories should be done to enhance the external validity.

  13. The gradient boosting algorithm and random boosting for genome-assisted evaluation in large data sets.

    Science.gov (United States)

    González-Recio, O; Jiménez-Montero, J A; Alenda, R

    2013-01-01

    In the next few years, with the advent of high-density single nucleotide polymorphism (SNP) arrays and genome sequencing, genomic evaluation methods will need to deal with a large number of genetic variants and an increasing sample size. The boosting algorithm is a machine-learning technique that may alleviate the drawbacks of dealing with such large data sets. This algorithm combines different predictors in a sequential manner with some shrinkage on them; each predictor is applied consecutively to the residuals from the committee formed by the previous ones to form a final prediction based on a subset of covariates. Here, a detailed description is provided and examples using a toy data set are included. A modification of the algorithm called "random boosting" was proposed to increase predictive ability and decrease computation time of genome-assisted evaluation in large data sets. Random boosting uses a random selection of markers to add a subsequent weak learner to the predictive model. These modifications were applied to a real data set composed of 1,797 bulls genotyped for 39,714 SNP. Deregressed proofs of 4 yield traits and 1 type trait from January 2009 routine evaluations were used as dependent variables. A 2-fold cross-validation scenario was implemented. Sires born before 2005 were used as a training sample (1,576 and 1,562 for production and type traits, respectively), whereas younger sires were used as a testing sample to evaluate predictive ability of the algorithm on yet-to-be-observed phenotypes. Comparison with the original algorithm was provided. The predictive ability of the algorithm was measured as Pearson correlations between observed and predicted responses. Further, estimated bias was computed as the average difference between observed and predicted phenotypes. 
The results showed that the modification of the original boosting algorithm could be run in 1% of the time used with the original algorithm and with negligible differences in accuracy
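The residual-fitting committee with a random marker subset per round can be sketched numerically. The setup below is an assumed toy version (squared-error loss, one-variable linear weak learners, shrinkage nu), not the paper's logistic genomic-evaluation implementation:

```python
# Toy sketch of "random boosting": each round fits a weak learner to the
# current residuals using only a random subset of predictors ("markers").
import random

random.seed(2)

n, p = 120, 20
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [2.0 * r[0] + r[1] - r[2] + random.gauss(0, 0.3) for r in X]

def slope(j, resid):
    """Least-squares slope of the residuals on predictor j (weak learner)."""
    num = sum(X[i][j] * resid[i] for i in range(n))
    den = sum(X[i][j] ** 2 for i in range(n))
    return num / den

pred = [0.0] * n
nu = 0.1                                   # shrinkage on each weak learner
for _ in range(300):
    resid = [y[i] - pred[i] for i in range(n)]
    subset = random.sample(range(p), 5)    # random subset of markers
    j = max(subset, key=lambda k: abs(slope(k, resid)))
    b = slope(j, resid)
    for i in range(n):
        pred[i] += nu * b * X[i][j]

sse = sum((y[i] - pred[i]) ** 2 for i in range(n))
sst = sum((v - sum(y) / n) ** 2 for v in y)
print(round(1 - sse / sst, 3))  # training R^2 of the boosted committee
```

Restricting each round to a marker subset is what cuts the per-iteration cost; with shrinkage, the committee still converges toward the full least-squares fit.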

  14. Details Matter: Noise and Model Structure Set the Relationship between Cell Size and Cell Cycle Timing

    Directory of Open Access Journals (Sweden)

    Felix Barber

    2017-11-01

    Organisms across all domains of life regulate the size of their cells. However, the means by which this is done is poorly understood. We study two abstracted “molecular” models for size regulation: inhibitor dilution and initiator accumulation. We apply the models to two settings: bacteria, like Escherichia coli, that grow fully before they set a division plane and divide into two equally sized cells, and cells that form a bud early in the cell division cycle, confine new growth to that bud, and divide at the connection between that bud and the mother cell, like the budding yeast Saccharomyces cerevisiae. In budding cells, delaying cell division until buds reach the same size as their mother leads to very weak size control, with average cell size and standard deviation of cell size increasing over time and saturating up to 100-fold higher than those values for cells that divide when the bud is still substantially smaller than its mother. In budding yeast, both the inhibitor dilution and initiator accumulation models are consistent with the observation that the daughters of diploid cells add a constant volume before they divide. This “adder” behavior has also been observed in bacteria. We find that in bacteria an inhibitor dilution model produces adder correlations that are not robust to noise in the timing of DNA replication initiation or in the timing from initiation of DNA replication to cell division (the C+D period). In contrast, in bacteria an initiator accumulation model yields robust adder correlations in the regime where noise in the timing of DNA replication initiation is much greater than noise in the C+D period, as reported previously (Ho and Amir, 2015). In bacteria, division into two equally sized cells does not broaden the size distribution.
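The "adder" behavior discussed above has a simple quantitative consequence that can be simulated: if each cycle a cell adds a fixed volume (plus noise) and divides in half, birth size converges to that added volume. The parameters below are illustrative assumptions, not the paper's fitted values:

```python
# Minimal simulation of adder size control with symmetric division:
# s_birth(next) = (s_birth + Delta + noise) / 2, fixed point at s = Delta.
import random

random.seed(3)

delta = 1.0
s = 3.0                      # start far from the fixed point
births = []
for _ in range(2000):
    added = delta + random.gauss(0, 0.1)   # constant added volume + noise
    s = (s + added) / 2.0                  # divide into two equal halves
    births.append(s)

late = births[100:]          # discard the transient
mean_birth = sum(late) / len(late)
print(round(mean_birth, 2))  # ≈ 1.0, i.e. the added volume Delta
```

Because each division halves any deviation from the fixed point, size errors are corrected geometrically, which is why equal division does not broaden the size distribution.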

  15. The influence of spatial grain size on the suitability of the higher-taxon approach in continental priority-setting

    DEFF Research Database (Denmark)

    Larsen, Frank Wugt; Rahbek, Carsten

    2005-01-01

    The higher-taxon approach may provide a pragmatic surrogate for the rapid identification of priority areas for conservation. To date, no continent-wide study has examined the use of higher-taxon data to identify complementarity-based networks of priority areas, nor has the influence of spatial gr...... grain size been assessed. We used data obtained from 939 sub-Saharan mammals to analyse the performance of higher-taxon data for continental priority-setting and to assess the influence of spatial grain sizes in terms of the size of selection units (1°× 1°, 2°× 2° and 4°× 4° latitudinal...... as effectively as species-based priority areas, genus-based areas perform considerably less effectively than species-based areas for the 1° and 2° grain size. Thus, our results favour the higher-taxon approach for continental priority-setting only when large grain sizes (= 4°) are used.

  16. Calculations of safe collimator settings and β^{*} at the CERN Large Hadron Collider

    Directory of Open Access Journals (Sweden)

    R. Bruce

    2015-06-01

    The first run of the Large Hadron Collider (LHC) at CERN was very successful and resulted in important physics discoveries. One way of increasing the luminosity in a collider, which gave a very significant contribution to the LHC performance in the first run and can be used even if the beam intensity cannot be increased, is to decrease the transverse beam size at the interaction points by reducing the optical function β^{*}. However, when doing so, the beam becomes larger in the final focusing system, which could expose its aperture to beam losses. For the LHC, which is designed to store beams with a total energy of 362 MJ, this is critical, since the loss of even a small fraction of the beam could cause a magnet quench or even damage. Therefore, the machine aperture has to be protected by the collimation system. The settings of the collimators constrain the maximum beam size that can be tolerated and therefore impose a lower limit on β^{*}. In this paper, we present calculations to determine safe collimator settings and the resulting limit on β^{*}, based on available aperture and operational stability of the machine. Our model was used to determine the LHC configurations in 2011 and 2012 and it was found that β^{*} could be decreased significantly compared to the conservative model used in 2010. The gain in luminosity resulting from the decreased margins between collimators was more than a factor 2, and a further contribution from the use of realistic aperture estimates based on measurements was almost as large. This has played an essential role in the rapid and successful accumulation of experimental data in the LHC.

  17. Calculations of safe collimator settings and β* at the CERN Large Hadron Collider

    Science.gov (United States)

    Bruce, R.; Assmann, R. W.; Redaelli, S.

    2015-06-01

    The first run of the Large Hadron Collider (LHC) at CERN was very successful and resulted in important physics discoveries. One way of increasing the luminosity in a collider, which gave a very significant contribution to the LHC performance in the first run and can be used even if the beam intensity cannot be increased, is to decrease the transverse beam size at the interaction points by reducing the optical function β*. However, when doing so, the beam becomes larger in the final focusing system, which could expose its aperture to beam losses. For the LHC, which is designed to store beams with a total energy of 362 MJ, this is critical, since the loss of even a small fraction of the beam could cause a magnet quench or even damage. Therefore, the machine aperture has to be protected by the collimation system. The settings of the collimators constrain the maximum beam size that can be tolerated and therefore impose a lower limit on β*. In this paper, we present calculations to determine safe collimator settings and the resulting limit on β*, based on available aperture and operational stability of the machine. Our model was used to determine the LHC configurations in 2011 and 2012 and it was found that β* could be decreased significantly compared to the conservative model used in 2010. The gain in luminosity resulting from the decreased margins between collimators was more than a factor 2, and a further contribution from the use of realistic aperture estimates based on measurements was almost as large. This has played an essential role in the rapid and successful accumulation of experimental data in the LHC.

  18. Large- and small-size advantages in sneaking behaviour in the dusky frillgoby Bathygobius fuscus

    OpenAIRE

    Takegaki, Takeshi; Kaneko, Takashi; Matsumoto, Yukio

    2012-01-01

    Sneaking tactic, a male alternative reproductive tactic involving sperm competition, is generally adopted by small individuals because of its inconspicuousness. However, large size has an advantage when competition occurs between sneakers for fertilization of eggs. Here, we suggest that both large- and small-size advantages of sneaker males are present within the same species. Large sneaker males of the dusky frillgoby Bathygobius fuscus showed a high success rate in intruding into spawning n...

  19. Determination of size-specific exposure settings in dental cone-beam CT

    International Nuclear Information System (INIS)

    Pauwels, Ruben; Jacobs, Reinhilde; Bogaerts, Ria; Bosmans, Hilde; Panmekiate, Soontra

    2017-01-01

    To estimate the possible reduction of tube output as a function of head size in dental cone-beam computed tomography (CBCT). A 16 cm PMMA phantom, containing a central and six peripheral columns filled with PMMA, was used to represent an average adult male head. The phantom was scanned using CBCT, with 0-6 peripheral columns having been removed in order to simulate varying head sizes. For five kV settings (70-90 kV), the mAs required to reach a predetermined image noise level was determined, and corresponding radiation doses were derived. Results were expressed as a function of head size, age, and gender, based on growth reference charts. The use of 90 kV consistently resulted in the largest relative dose reduction. A potential mAs reduction ranging from 7 % to 50 % was seen for the different simulated head sizes, showing an exponential relation between head size and mAs. An optimized exposure protocol based on head circumference or age/gender is proposed. A considerable dose reduction, through reduction of the mAs rather than the kV, is possible for small-sized patients in CBCT, including children and females. Size-specific exposure protocols should be clinically implemented. (orig.)

  20. Determination of size-specific exposure settings in dental cone-beam CT

    Energy Technology Data Exchange (ETDEWEB)

    Pauwels, Ruben [Chulalongkorn University, Department of Radiology, Faculty of Dentistry, Patumwan, Bangkok (Thailand); University of Leuven, OMFS-IMPATH Research Group, Department of Imaging and Pathology, Biomedical Sciences Group, Leuven (Belgium); Jacobs, Reinhilde [University of Leuven, OMFS-IMPATH Research Group, Department of Imaging and Pathology, Biomedical Sciences Group, Leuven (Belgium); Bogaerts, Ria [University of Leuven, Laboratory of Experimental Radiotherapy, Department of Oncology, Biomedical Sciences Group, Leuven (Belgium); Bosmans, Hilde [University of Leuven, Medical Physics and Quality Assessment, Department of Imaging and Pathology, Biomedical Sciences Group, Leuven (Belgium); Panmekiate, Soontra [Chulalongkorn University, Department of Radiology, Faculty of Dentistry, Patumwan, Bangkok (Thailand)

    2017-01-15

    To estimate the possible reduction of tube output as a function of head size in dental cone-beam computed tomography (CBCT). A 16 cm PMMA phantom, containing a central and six peripheral columns filled with PMMA, was used to represent an average adult male head. The phantom was scanned using CBCT, with 0-6 peripheral columns having been removed in order to simulate varying head sizes. For five kV settings (70-90 kV), the mAs required to reach a predetermined image noise level was determined, and corresponding radiation doses were derived. Results were expressed as a function of head size, age, and gender, based on growth reference charts. The use of 90 kV consistently resulted in the largest relative dose reduction. A potential mAs reduction ranging from 7 % to 50 % was seen for the different simulated head sizes, showing an exponential relation between head size and mAs. An optimized exposure protocol based on head circumference or age/gender is proposed. A considerable dose reduction, through reduction of the mAs rather than the kV, is possible for small-sized patients in CBCT, including children and females. Size-specific exposure protocols should be clinically implemented. (orig.)

  1. Auditory proactive interference in monkeys: the roles of stimulus set size and intertrial interval.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2013-09-01

    We conducted two experiments to examine the influences of stimulus set size (the number of stimuli that are used throughout the session) and intertrial interval (ITI, the elapsed time between trials) in auditory short-term memory in monkeys. We used an auditory delayed matching-to-sample task wherein the animals had to indicate whether two sounds separated by a 5-s retention interval were the same (match trials) or different (nonmatch trials). In Experiment 1, we randomly assigned stimulus set sizes of 2, 4, 8, 16, 32, 64, or 192 (trial-unique) for each session of 128 trials. Consistent with previous visual studies, overall accuracy was consistently lower when smaller stimulus set sizes were used. Further analyses revealed that these effects were primarily caused by an increase in incorrect "same" responses on nonmatch trials. In Experiment 2, we held the stimulus set size constant at four for each session and alternately set the ITI at 5, 10, or 20 s. Overall accuracy improved when the ITI was increased from 5 to 10 s, but it was the same across the 10- and 20-s conditions. As in Experiment 1, the overall decrease in accuracy during the 5-s condition was caused by a greater number of false "match" responses on nonmatch trials. Taken together, Experiments 1 and 2 showed that auditory short-term memory in monkeys is highly susceptible to proactive interference caused by stimulus repetition. Additional analyses of the data from Experiment 1 suggested that monkeys may make same-different judgments on the basis of a familiarity criterion that is adjusted by error-related feedback.
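
The proposed mechanism — residual familiarity from recent trials driving false "match" responses — can be sketched as a toy simulation (the model and all parameters are assumptions for illustration, not the authors'):

```python
import random
random.seed(0)

def false_match_rate(set_size, trials=2000, decay=0.6, criterion=0.25):
    """Toy familiarity-criterion model: respond "match" whenever the test
    stimulus's residual familiarity from earlier trials exceeds a criterion.
    Small stimulus sets repeat stimuli often, so familiarity stays high and
    nonmatch trials draw false "match" responses (proactive interference)."""
    fam = [0.0] * set_size
    errors = nonmatch_trials = 0
    for _ in range(trials):
        sample = random.randrange(set_size)
        is_match = random.random() < 0.5
        test = sample if is_match else random.choice(
            [s for s in range(set_size) if s != sample])
        if not is_match:
            nonmatch_trials += 1
            if fam[test] > criterion:  # lingering familiarity -> false "match"
                errors += 1
        # decay all familiarity, then refresh the stimuli just presented
        fam = [f * decay for f in fam]
        fam[sample] = fam[test] = 1.0
    return errors / nonmatch_trials

print(false_match_rate(2), false_match_rate(192))  # small sets err far more
```

The simulation reproduces the qualitative pattern of Experiment 1: error rates on nonmatch trials fall steeply as the stimulus set grows toward trial-unique.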

  2. Are large farms more efficient? Tenure security, farm size and farm efficiency: evidence from northeast China

    Science.gov (United States)

    Zhou, Yuepeng; Ma, Xianlei; Shi, Xiaoping

    2017-04-01

    How to increase production efficiency, guarantee grain security, and increase farmers' income using the limited farmland is a great challenge that China is facing. Although theory predicts that secure property rights and moderate scale management of farmland can increase land productivity, reduce farm-related costs, and raise farmers' income, empirical studies on the size and magnitude of these effects are scarce. A number of studies have examined the impacts of land tenure or farm size on productivity or efficiency, respectively. There are also a few studies linking farm size, land tenure and efficiency together. However, to the best of our knowledge, there are no studies considering tenure security and farm efficiency together for different farm scales in China. In addition, few studies have analyzed the profit frontier. In this study, we particularly focus on the impacts of land tenure security and farm size on farm profit efficiency, using farm level data collected from 23 villages, 811 households in Liaoning in 2015. Seven different farm scales have been identified to further represent small farms, median farms, moderate-scale farms, and large farms. Technical efficiency is analyzed with a stochastic frontier production function. The profit efficiency is regressed on a set of explanatory variables which includes farm size dummies, land tenure security indexes, and household characteristics. We found that: 1) The technical efficiency scores for production efficiency (average score = 0.998) indicate that it is already very close to the production frontier, and thus there is little room to improve production efficiency. However, there is greater scope to raise profit efficiency (average score = 0.768) by investing more on farm size expansion, seed, hired labor, pesticide, and irrigation. 2) Farms between 50-80 mu are most efficient from the viewpoint of profit efficiency. The so-called moderate-scale farms (100-150 mu) according to the governmental guideline show no

  3. Technical trends of large-size photomasks for flat panel displays

    Science.gov (United States)

    Yoshida, Koichiro

    2017-06-01

    Currently, flat panel displays (FPDs) are one of the main components of information technology devices and sets. From the 1990s to the 2000s, liquid crystal displays (LCDs) and plasma displays were the mainstream FPDs. In the middle of the 2000s, demand for plasma displays declined and organic light emitting diodes (OLEDs) newly entered the FPD market. Today, the major FPD technologies are LCDs and OLEDs; especially for mobile devices, the penetration of OLEDs is remarkable. In FPD panel production, photolithography is the key technology, just as in LSI. Photomasks for FPDs are used not only as original masters of circuit patterns, but also as tools to form other functional structures of FPDs. Photomasks for FPDs are called "Large Size Photomasks (LSPMs)", since their remarkable feature is size, which reaches over one meter square and over 100 kg. In this report, we discuss three LSPM technical topics together with the technical transition and trends of FPDs: the first topic is the upsizing of LSPMs, the second is the challenge of higher-resolution patterning, and the last is the "Multi-Tone Mask" for "Half-Tone Exposure".

  4. Set size influences the relationship between ANS acuity and math performance: a result of different strategies?

    Science.gov (United States)

    Dietrich, Julia Felicitas; Nuerk, Hans-Christoph; Klein, Elise; Moeller, Korbinian; Huber, Stefan

    2017-08-29

    Previous research has proposed that the approximate number system (ANS) constitutes a building block for later mathematical abilities. Therefore, numerous studies investigated the relationship between ANS acuity and mathematical performance, but results are inconsistent. Properties of the experimental design have been discussed as a potential explanation of these inconsistencies. In the present study, we investigated the influence of set size and presentation duration on the association between non-symbolic magnitude comparison and math performance. Moreover, we focused on strategies reported as an explanation for these inconsistencies. In particular, we employed a non-symbolic magnitude comparison task and asked participants how they solved the task. We observed that set size was a significant moderator of the relationship between non-symbolic magnitude comparison and math performance, whereas presentation duration of the stimuli did not moderate this relationship. This supports the notion that specific design characteristics contribute to the inconsistent results. Moreover, participants reported different strategies including numerosity-based, visual, counting, calculation-based, and subitizing strategies. Frequencies of these strategies differed between different set sizes and presentation durations. However, we found no specific strategy, which alone predicted arithmetic performance, but when considering the frequency of all reported strategies, arithmetic performance could be predicted. Visual strategies made the largest contribution to this prediction. To conclude, the present findings suggest that different design characteristics contribute to the inconsistent findings regarding the relationship between non-symbolic magnitude comparison and mathematical performance by inducing different strategies and additional processes.
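
Testing whether set size moderates the acuity-performance relationship amounts to testing an interaction term in a regression. A minimal sketch on synthetic data (the effect size, sample, and variables are invented for illustration, not the study's):

```python
import numpy as np
rng = np.random.default_rng(42)

# Moderation analysis: y = b0 + b1*acuity + b2*set_size + b3*acuity*set_size.
# A nonzero b3 means the acuity -> performance slope differs by set size.
n = 300
acuity = rng.normal(0, 1, n)
set_size = rng.choice([0.0, 1.0], n)   # 0 = small sets, 1 = large sets
math_perf = 0.1 * acuity + 0.4 * acuity * set_size + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), acuity, set_size, acuity * set_size])
beta, *_ = np.linalg.lstsq(X, math_perf, rcond=None)
print(f"interaction coefficient b3 = {beta[3]:.2f}")  # near 0.4 by construction
```

In the study's terms, a significant b3 corresponds to set size moderating the link between non-symbolic magnitude comparison and math performance.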

  5. Large- and small-size advantages in sneaking behaviour in the dusky frillgoby Bathygobius fuscus

    Science.gov (United States)

    Takegaki, Takeshi; Kaneko, Takashi; Matsumoto, Yukio

    2012-04-01

    Sneaking tactic, a male alternative reproductive tactic involving sperm competition, is generally adopted by small individuals because of its inconspicuousness. However, large size has an advantage when competition occurs between sneakers for fertilization of eggs. Here, we suggest that both large- and small-size advantages of sneaker males are present within the same species. Large sneaker males of the dusky frillgoby Bathygobius fuscus showed a high success rate in intruding into spawning nests because of their advantage in competition among sneaker males in keeping a suitable position to sneak, whereas small sneakers had few chances to sneak. However, small sneaker males were able to stay in the nests longer than large sneaker males when they succeeded in sneak intrusion. This suggests the possibility of an increase in their paternity. The findings of these size-specific behavioural advantages may be important in considering the evolution of size-related reproductive traits.

  6. Fiber-chip edge coupler with large mode size for silicon photonic wire waveguides.

    Science.gov (United States)

    Papes, Martin; Cheben, Pavel; Benedikovic, Daniel; Schmid, Jens H; Pond, James; Halir, Robert; Ortega-Moñux, Alejandro; Wangüemert-Pérez, Gonzalo; Ye, Winnie N; Xu, Dan-Xia; Janz, Siegfried; Dado, Milan; Vašinek, Vladimír

    2016-03-07

    Fiber-chip edge couplers are extensively used in integrated optics for coupling of light between planar waveguide circuits and optical fibers. In this work, we report on a new fiber-chip edge coupler concept with large mode size for silicon photonic wire waveguides. The coupler allows direct coupling with conventional cleaved optical fibers with large mode size while circumventing the need for lensed fibers. The coupler is designed for 220 nm silicon-on-insulator (SOI) platform. It exhibits an overall coupling efficiency exceeding 90%, as independently confirmed by 3D Finite-Difference Time-Domain (FDTD) and fully vectorial 3D Eigenmode Expansion (EME) calculations. We present two specific coupler designs, namely for a high numerical aperture single mode optical fiber with 6 µm mode field diameter (MFD) and a standard SMF-28 fiber with 10.4 µm MFD. An important advantage of our coupler concept is the ability to expand the mode at the chip edge without leading to high substrate leakage losses through the buried oxide (BOX), which in our design is set to 3 µm. This remarkable feature is achieved by implementing in the SiO2 upper cladding thin high-index Si3N4 layers. The Si3N4 layers increase the effective refractive index of the upper cladding near the facet. The index is controlled along the taper by subwavelength refractive index engineering to facilitate adiabatic mode transformation to the silicon wire waveguide while the Si-wire waveguide is inversely tapered along the coupler. The mode overlap optimization at the chip facet is carried out with a full vectorial mode solver. The mode transformation along the coupler is studied using 3D-FDTD simulations and with fully-vectorial 3D-EME calculations. The couplers are optimized for operating with transverse electric (TE) polarization and the operating wavelength is centered at 1.55 µm.
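
The motivation for expanding the on-chip mode can be seen from the standard overlap-integral result for two aligned Gaussian modes: the butt-coupling efficiency is eta = (2*w1*w2 / (w1^2 + w2^2))^2, with w the mode-field radius. A quick check (the sub-micron wire-mode size is an illustrative assumption):

```python
import math

# Butt-coupling efficiency of two aligned Gaussian modes of different
# mode-field diameters (standard overlap-integral result).
def gaussian_overlap(mfd1_um, mfd2_um):
    w1, w2 = mfd1_um / 2.0, mfd2_um / 2.0
    return (2.0 * w1 * w2 / (w1**2 + w2**2)) ** 2

# An un-expanded photonic-wire mode (~0.5 um, assumed) against each fiber:
for mfd in (6.0, 10.4):
    print(f"fiber MFD {mfd:4.1f} um: direct {gaussian_overlap(0.5, mfd)*100:4.1f}%, "
          f"mode-matched {gaussian_overlap(mfd, mfd)*100:.0f}%")
```

Without mode expansion the overlap is a few percent at best, which is why the taper and Si3N4 cladding layers are used to grow the mode to the fiber's MFD at the facet.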

  7. Small, medium, large or supersize? The development and evaluation of interventions targeted at portion size

    Science.gov (United States)

    Vermeer, W M; Steenhuis, I H M; Poelman, M P

    2014-01-01

    In the past decades, portion sizes of high-caloric foods and drinks have increased and can be considered an important environmental obesogenic factor. This paper describes a research project in which the feasibility and effectiveness of environmental interventions targeted at portion size was evaluated. The studies that we conducted revealed that portion size labeling, offering a larger variety of portion sizes, and proportional pricing (that is, a comparable price per unit regardless of the size) were considered feasible to implement according to both consumers and point-of-purchase representatives. Studies into the effectiveness of these interventions demonstrated that the impact of portion size labeling on the (intended) consumption of soft drinks was, at most, modest. Furthermore, the introduction of smaller portion sizes of hot meals in worksite cafeterias in addition to the existing size stimulated a moderate number of consumers to replace their large meals by a small meal. Elaborating on these findings, we advocate further research into communication and marketing strategies related to portion size interventions; the development of environmental portion size interventions as well as educational interventions that improve people's ability to deal with a ‘super-sized' environment; the implementation of regulation with respect to portion size labeling, and the use of nudges to stimulate consumers to select healthier portion sizes. PMID:25033959

  8. The welfare implications of large litter size in the domestic pig I

    DEFF Research Database (Denmark)

    Rutherford, K.M.D; Baxter, E.M.; D'Eath, R.B.

    2013-01-01

    Increasing litter size has long been a goal of pig breeders and producers, and may have implications for pig (Sus scrofa domesticus) welfare. This paper reviews the scientific evidence on biological factors affecting sow and piglet welfare in relation to large litter size. It is concluded that, i...

  9. Semi-empirical formula for large pore-size estimation from o-Ps annihilation lifetime

    International Nuclear Information System (INIS)

    Nguyen Duc Thanh; Tran Quoc Dung; Luu Anh Tuyen; Khuong Thanh Tuan

    2007-01-01

    The o-Ps annihilation rate in large pores was investigated using a semi-classical approach. A semi-empirical formula that simply correlates pore size with the o-Ps lifetime is proposed. The calculated results agree well with experiment over pore sizes ranging from a few angstroms to several tens of nanometers. (author)
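
The paper's own extended formula is not reproduced in the abstract. As a reference point, the classic Tao-Eldrup model it generalizes relates the o-Ps pick-off rate to the pore radius R (valid only for sub-nanometer pores, which is exactly the limitation such semi-empirical extensions address):

```python
import math

# Classic Tao-Eldrup relation for o-Ps pick-off annihilation in a
# spherical pore of radius R (nm):
#   lambda(R) = 2 ns^-1 * [1 - R/(R+dR) + sin(2*pi*R/(R+dR)) / (2*pi)]
DELTA_R = 0.166  # nm, empirical electron-layer thickness

def ops_lifetime_ns(radius_nm):
    """o-Ps lifetime (ns) predicted by the Tao-Eldrup model."""
    r0 = radius_nm + DELTA_R
    rate = 2.0 * (1.0 - radius_nm / r0
                  + math.sin(2.0 * math.pi * radius_nm / r0) / (2.0 * math.pi))
    return 1.0 / rate

for r in (0.3, 0.5, 1.0):
    print(f"R = {r} nm -> tau(o-Ps) ~ {ops_lifetime_ns(r):.2f} ns")
```

The lifetime grows monotonically with pore radius, which is what makes o-Ps lifetime spectroscopy usable as a pore-size probe; extended models replace the intrinsic 142 ns vacuum decay and thermal excitation effects that dominate in the multi-nanometer range.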

  10. Fission gas release during post irradiation annealing of large grain size fuels from Hinkley point B

    International Nuclear Information System (INIS)

    Killeen, J.C.

    1997-01-01

    A series of post-irradiation anneals has been carried out on fuel taken from an experimental stringer from Hinkley Point B AGR. The stringer was part of an experimental programme in the reactor to study the effect of large grain size fuel. Three differing fuel types were present in separate pins in the stringer. One variant of large grain size fuel had been prepared by using an MgO dopant during fuel manufacture, a second by high temperature sintering of standard fuel, and the third was a reference 12μm grain size fuel. Both large grain size variants had similar grain sizes, around 35μm. The present experiments took fuel samples from highly rated pins from the stringer, with local burn-up in excess of 25GWd/tU, and annealed these at temperatures of up to 1535 deg. C under reducing conditions to allow a comparison of fission gas behaviour at high release levels. The results demonstrate the beneficial effect of large grain size on the release rate of 85 Kr following interlinkage. At low temperatures and release rates there was no difference between the fuel types, but at temperatures in excess of 1400 deg. C the release rate was found to be inversely dependent on the fuel grain size. The experiments showed some differences between the doped and undoped large grain size fuel, in that the former became interlinked at a lower temperature, releasing fission gas at an increased rate at this temperature. At higher temperatures the grain size effect was dominant. The temperature dependence for fission gas release was determined over a narrow range of temperature and found to be similar for all three fuel types and for both pre-interlinkage and post-interlinkage releases; the difference between the release rates is then seen to be controlled by grain size. (author). 4 refs, 7 figs, 3 tabs
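
The reported behaviour — no grain-size effect below interlinkage, an inverse grain-size dependence above it, and a common temperature dependence — can be captured in a toy model. All parameters here (activation energy, prefactor, threshold) are assumptions for illustration, not values fitted to the Hinkley Point B data:

```python
import math

# Toy release-rate model: Arrhenius temperature dependence for all fuels;
# above the interlinkage temperature the rate scales inversely with grain
# size d, normalized to the 12 um reference fuel.
R_GAS = 8.314              # J/(mol K)
E_ACT = 3.0e5              # J/mol, assumed activation energy
T_INTERLINK = 1400 + 273.15  # K, interlinkage threshold from the abstract

def release_rate(temp_c, grain_um, a0=1e8):
    """Relative fission gas release rate (arbitrary units)."""
    t = temp_c + 273.15
    rate = a0 * math.exp(-E_ACT / (R_GAS * t))
    if t >= T_INTERLINK:
        rate *= 12.0 / grain_um  # inverse grain-size dependence, post-interlinkage
    return rate

for d in (12, 35):
    print(f"d={d:2d} um: 1300 C -> {release_rate(1300, d):.3g}, "
          f"1500 C -> {release_rate(1500, d):.3g}")
```

Below 1400 deg. C the two grain sizes release identically; above it the 35 μm fuel releases roughly three times more slowly, mirroring the qualitative finding.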

  11. Fission gas release during post irradiation annealing of large grain size fuels from Hinkley point B

    Energy Technology Data Exchange (ETDEWEB)

    Killeen, J C [Nuclear Electric plc, Barnwood (United Kingdom)

    1997-08-01

    A series of post-irradiation anneals has been carried out on fuel taken from an experimental stringer from Hinkley Point B AGR. The stringer was part of an experimental programme in the reactor to study the effect of large grain size fuel. Three differing fuel types were present in separate pins in the stringer. One variant of large grain size fuel had been prepared by using an MgO dopant during fuel manufacture, a second by high temperature sintering of standard fuel, and the third was a reference 12{mu}m grain size fuel. Both large grain size variants had similar grain sizes, around 35{mu}m. The present experiments took fuel samples from highly rated pins from the stringer, with local burn-up in excess of 25GWd/tU, and annealed these at temperatures of up to 1535 deg. C under reducing conditions to allow a comparison of fission gas behaviour at high release levels. The results demonstrate the beneficial effect of large grain size on the release rate of {sup 85}Kr following interlinkage. At low temperatures and release rates there was no difference between the fuel types, but at temperatures in excess of 1400 deg. C the release rate was found to be inversely dependent on the fuel grain size. The experiments showed some differences between the doped and undoped large grain size fuel, in that the former became interlinked at a lower temperature, releasing fission gas at an increased rate at this temperature. At higher temperatures the grain size effect was dominant. The temperature dependence for fission gas release was determined over a narrow range of temperature and found to be similar for all three fuel types and for both pre-interlinkage and post-interlinkage releases; the difference between the release rates is then seen to be controlled by grain size. (author). 4 refs, 7 figs, 3 tabs.

  12. Operational Aspects of Dealing with the Large BaBar Data Set

    Energy Technology Data Exchange (ETDEWEB)

    Trunov, Artem G

    2003-06-13

    To date, the BaBar experiment has stored over 0.7PB of data in an Objectivity/DB database. Approximately half this data-set comprises simulated data of which more than 70% has been produced at more than 20 collaborating institutes outside of SLAC. The operational aspects of managing such a large data set and providing access to the physicists in a timely manner is a challenging and complex problem. We describe the operational aspects of managing such a large distributed data-set as well as importing and exporting data from geographically spread BaBar collaborators. We also describe problems common to dealing with such large datasets.

  13. Interlayer catalytic exfoliation realizing scalable production of large-size pristine few-layer graphene

    OpenAIRE

    Geng, Xiumei; Guo, Yufen; Li, Dongfang; Li, Weiwei; Zhu, Chao; Wei, Xiangfei; Chen, Mingliang; Gao, Song; Qiu, Shengqiang; Gong, Youpin; Wu, Liqiong; Long, Mingsheng; Sun, Mengtao; Pan, Gebo; Liu, Liwei

    2013-01-01

    Mass production of reduced graphene oxide and graphene nanoplatelets has recently been achieved. However, a great challenge still remains in realizing large-quantity and high-quality production of large-size thin few-layer graphene (FLG). Here, we create a novel route to solve the issue by employing one-time-only interlayer catalytic exfoliation (ICE) of salt-intercalated graphite. The typical FLG with a large lateral size of tens of microns and a thickness less than 2 nm have been obtained b...

  14. Zebrafish Expression Ontology of Gene Sets (ZEOGS): a tool to analyze enrichment of zebrafish anatomical terms in large gene sets.

    Science.gov (United States)

    Prykhozhij, Sergey V; Marsico, Annalisa; Meijsing, Sebastiaan H

    2013-09-01

    The zebrafish (Danio rerio) is an established model organism for developmental and biomedical research. It is frequently used for high-throughput functional genomics experiments, such as genome-wide gene expression measurements, to systematically analyze molecular mechanisms. However, the use of whole embryos or larvae in such experiments leads to a loss of the spatial information. To address this problem, we have developed a tool called Zebrafish Expression Ontology of Gene Sets (ZEOGS) to assess the enrichment of anatomical terms in large gene sets. ZEOGS uses gene expression pattern data from several sources: first, in situ hybridization experiments from the Zebrafish Model Organism Database (ZFIN); second, it uses the Zebrafish Anatomical Ontology, a controlled vocabulary that describes connected anatomical structures; and third, the available connections between expression patterns and anatomical terms contained in ZFIN. Upon input of a gene set, ZEOGS determines which anatomical structures are overrepresented in the input gene set. ZEOGS allows one for the first time to look at groups of genes and to describe them in terms of shared anatomical structures. To establish ZEOGS, we first tested it on random gene selections and on two public microarray datasets with known tissue-specific gene expression changes. These tests showed that ZEOGS could reliably identify the tissues affected, whereas only very few enriched terms to none were found in the random gene sets. Next we applied ZEOGS to microarray datasets of 24 and 72 h postfertilization zebrafish embryos treated with beclomethasone, a potent glucocorticoid. This analysis resulted in the identification of several anatomical terms related to glucocorticoid-responsive tissues, some of which were stage-specific. 
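
The core computation behind this kind of term-enrichment analysis is an overrepresentation test. A minimal sketch using the hypergeometric tail probability (ZEOGS's exact statistics and corrections are not specified in the abstract, so this is illustrative):

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """P(X >= k): with N genes total, K of them annotated with a term,
    and an input set of n genes, probability of seeing k or more annotated
    genes by chance. Small p => the term is enriched in the input set."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy numbers: 10000 genes, 200 annotated with an anatomical term,
# input set of 50 genes of which 8 carry the annotation.
p = hypergeom_enrichment_p(10000, 200, 50, 8)
print(f"P(X >= 8) = {p:.2e}")  # far below chance expectation of ~1 hit
```

In ZEOGS's setting, N would be all annotated zebrafish genes, K the genes linked (via ZFIN expression patterns) to a given Zebrafish Anatomical Ontology term, and the test is repeated per term with multiple-testing correction.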
Our studies highlight the ability of ZEOGS to extract spatial information from datasets derived from whole embryos, indicating that ZEOGS could be a useful tool to automatically analyze gene expression

  15. Zebrafish Expression Ontology of Gene Sets (ZEOGS): A Tool to Analyze Enrichment of Zebrafish Anatomical Terms in Large Gene Sets

    Science.gov (United States)

    Marsico, Annalisa

    2013-01-01

    Abstract The zebrafish (Danio rerio) is an established model organism for developmental and biomedical research. It is frequently used for high-throughput functional genomics experiments, such as genome-wide gene expression measurements, to systematically analyze molecular mechanisms. However, the use of whole embryos or larvae in such experiments leads to a loss of the spatial information. To address this problem, we have developed a tool called Zebrafish Expression Ontology of Gene Sets (ZEOGS) to assess the enrichment of anatomical terms in large gene sets. ZEOGS uses gene expression pattern data from several sources: first, in situ hybridization experiments from the Zebrafish Model Organism Database (ZFIN); second, it uses the Zebrafish Anatomical Ontology, a controlled vocabulary that describes connected anatomical structures; and third, the available connections between expression patterns and anatomical terms contained in ZFIN. Upon input of a gene set, ZEOGS determines which anatomical structures are overrepresented in the input gene set. ZEOGS allows one for the first time to look at groups of genes and to describe them in terms of shared anatomical structures. To establish ZEOGS, we first tested it on random gene selections and on two public microarray datasets with known tissue-specific gene expression changes. These tests showed that ZEOGS could reliably identify the tissues affected, whereas only very few enriched terms to none were found in the random gene sets. Next we applied ZEOGS to microarray datasets of 24 and 72 h postfertilization zebrafish embryos treated with beclomethasone, a potent glucocorticoid. This analysis resulted in the identification of several anatomical terms related to glucocorticoid-responsive tissues, some of which were stage-specific. Our studies highlight the ability of ZEOGS to extract spatial information from datasets derived from whole embryos, indicating that ZEOGS could be a useful tool to automatically analyze gene

  16. Large and abundant flowers increase indirect costs of corollas: a study of coflowering sympatric Mediterranean species of contrasting flower size.

    Science.gov (United States)

    Teixido, Alberto L; Valladares, Fernando

    2013-09-01

    Large floral displays receive more pollinator visits but involve higher production and maintenance costs. This can result in indirect costs which may negatively affect functions like reproductive output. In this study, we explored the relationship between floral display and indirect costs in two pairs of coflowering sympatric Mediterranean Cistus of contrasting flower size. We hypothesized that: (1) corolla production entails direct costs in dry mass, N and P, (2) corollas entail significant indirect costs in terms of fruit set and seed production, (3) indirect costs increase with floral display, (4) indirect costs are greater in larger-flowered sympatric species, and (5) local climatic conditions influence indirect costs. We compared fruit set and seed production of petal-removed flowers and unmanipulated control flowers and evaluated the influence of mean flower number and mean flower size on relative fruit and seed gain of petal-removed and control flowers. Fruit set and seed production were significantly higher in petal-removed flowers in all the studied species. A positive relationship was found between relative fruit gain and mean individual flower size within species. In one pair of species, fruit gain was higher in the large-flowered species, as was the correlation between fruit gain and mean number of open flowers. In the other pair, the correlation between fruit gain and mean flower size was also higher in the large-flowered species. These results reveal that Mediterranean environments impose significant constraints on floral display, counteracting advantages of large flowers from the pollination point of view with increased indirect costs of such flowers.

  17. Ssecrett and neuroTrace: Interactive visualization and analysis tools for large-scale neuroscience data sets

    KAUST Repository

    Jeong, Wonki; Beyer, Johanna; Hadwiger, Markus; Blue, Rusty; Law, Charles; Vá zquez Reina, Amelio; Reid, Rollie Clay; Lichtman, Jeff W M D; Pfister, Hanspeter

    2010-01-01

    Recent advances in optical and electron microscopy let scientists acquire extremely high-resolution images for neuroscience research. Data sets imaged with modern electron microscopes can range between tens of terabytes to about one petabyte. These large data sizes and the high complexity of the underlying neural structures make it very challenging to handle the data at reasonably interactive rates. To provide neuroscientists flexible, interactive tools, the authors introduce Ssecrett and NeuroTrace, two tools they designed for interactive exploration and analysis of large-scale optical- and electron-microscopy images to reconstruct complex neural circuits of the mammalian nervous system. © 2010 IEEE.

  19. Spin-torque oscillation in large size nano-magnet with perpendicular magnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Linqiang, E-mail: LL6UK@virginia.edu [Department of Physics, University of Virginia, Charlottesville, VA 22904 (United States); Kabir, Mehdi [Department of Electrical & Computer Engineering, University of Virginia, Charlottesville, VA 22904 (United States); Dao, Nam; Kittiwatanakul, Salinporn [Department of Materials Science & Engineering, University of Virginia, Charlottesville, VA 22904 (United States); Cyberey, Michael [Department of Electrical Engineering, University of Virginia, Charlottesville, VA 22904 (United States); Wolf, Stuart A. [Department of Physics, University of Virginia, Charlottesville, VA 22904 (United States); Department of Materials Science & Engineering, University of Virginia, Charlottesville, VA 22904 (United States); Institute of Defense Analyses, Alexandria, VA 22311 (United States); Stan, Mircea [Department of Electrical & Computer Engineering, University of Virginia, Charlottesville, VA 22904 (United States); Lu, Jiwei [Department of Materials Science & Engineering, University of Virginia, Charlottesville, VA 22904 (United States)

    2017-06-15

    Highlights: • A 500 nm nano-pillar device was fabricated by photolithography techniques. • A magnetic hybrid structure was achieved with perpendicular magnetic fields. • Spin-torque switching and oscillation were demonstrated in the large-sized device. • Micromagnetic simulations accurately reproduced the experimental results. • Simulations demonstrated the synchronization of magnetic inhomogeneities. - Abstract: DC current-induced magnetization reversal and magnetization oscillation were observed in 500 nm Co{sub 90}Fe{sub 10}/Cu/Ni{sub 80}Fe{sub 20} pillars. A perpendicular external field enhanced the coercive-field separation between the reference layer (Co{sub 90}Fe{sub 10}) and the free layer (Ni{sub 80}Fe{sub 20}) in the pseudo spin valve, allowing a large window of external magnetic field for exploring the free-layer reversal. A magnetic hybrid structure was achieved for the study of spin-torque oscillation by applying a perpendicular field >3 kOe. The magnetization precession was manifested as multiple peaks on the differential resistance curves. The regions of magnetic switching and magnetization precession on a dynamical stability diagram are discussed in detail as a function of bias current and applied field. Micromagnetic simulations are shown to be in good agreement with the experimental results and provide insight into the synchronization of inhomogeneities in large-sized devices. The ability to manipulate spin dynamics in large devices could prove useful for increasing the output power of spin-transfer nano-oscillators (STNOs).

  20. How large a training set is needed to develop a classifier for microarray data?

    Science.gov (United States)

    Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M

    2008-01-01

    A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.
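
    The authors' model-based calculation is not reproduced here, but the dependence on the same three quantities can be sketched with a standard two-sample normal approximation plus a Bonferroni correction across genes (a simplified stand-in for the paper's method; all numeric inputs below are illustrative):

```python
# Simplified stand-in for a training-set size calculation (NOT the authors'
# exact model-based method): two-sample normal approximation with a
# Bonferroni correction across n_genes features. Inputs mirror the three
# quantities named in the abstract; all example numbers are illustrative.
import math
from statistics import NormalDist

def training_set_size(delta, prevalence, n_genes, alpha=0.05, power=0.95):
    """Total samples N to detect a standardized fold change `delta`
    at family-wise level `alpha` with the given power."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / (2 * n_genes))  # two-sided, Bonferroni-adjusted
    z_beta = z(power)
    # classes of sizes N*p and N*(1-p) inflate the variance term
    var_factor = 1 / prevalence + 1 / (1 - prevalence)
    return math.ceil(var_factor * ((z_alpha + z_beta) / delta) ** 2)

# one pooled-SD fold change, balanced classes, 22,000 array features
print(training_set_size(1.0, 0.5, 22000))
```

    Smaller standardized fold changes or more unbalanced classes drive the required training set size up rapidly, which matches the qualitative conclusions above.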

  1. Galaxy Evolution Insights from Spectral Modeling of Large Data Sets from the Sloan Digital Sky Survey

    Energy Technology Data Exchange (ETDEWEB)

    Hoversten, Erik A. [Johns Hopkins Univ., Baltimore, MD (United States)

    2007-10-01

    This thesis centers on the use of spectral modeling techniques on data from the Sloan Digital Sky Survey (SDSS) to gain new insights into current questions in galaxy evolution. The SDSS provides a large, uniform, high quality data set which can be exploited in a number of ways. One avenue pursued here is to use the large sample size to measure precisely the mean properties of galaxies of increasingly narrow parameter ranges. The other route taken is to look for rare objects which open up for exploration new areas in galaxy parameter space. The crux of this thesis is revisiting the classical Kennicutt method for inferring the stellar initial mass function (IMF) from the integrated light properties of galaxies. A large data set (~10^5 galaxies) from the SDSS DR4 is combined with more in-depth modeling and quantitative statistical analysis to search for systematic IMF variations as a function of galaxy luminosity. Galaxy Hα equivalent widths are compared to a broadband color index to constrain the IMF. It is found that for the sample as a whole the best fitting IMF power law slope above 0.5 M☉ is Γ = 1.5 ± 0.1, with the error dominated by systematics. Galaxies brighter than around M_r,0.1 = -20 (including galaxies like the Milky Way, which has M_r,0.1 ~ -21) are well fit by a universal Γ ~ 1.4 IMF, similar to the classical Salpeter slope, and smooth, exponential star formation histories (SFH). Fainter galaxies prefer steeper IMFs, and the quality of the fits reveals that for these galaxies a universal IMF with smooth SFHs is actually a poor assumption. Related projects are also pursued. A targeted photometric search is conducted for strongly lensed Lyman break galaxies (LBG) similar to MS1512-cB58. The evolution of the photometric selection technique is described, as are the results of spectroscopic follow-up of the best targets. The serendipitous discovery of two interesting blue compact dwarf galaxies is reported.

  2. Teaching the Assessment of Normality Using Large Easily-Generated Real Data Sets

    Science.gov (United States)

    Kulp, Christopher W.; Sprechini, Gene D.

    2016-01-01

    A classroom activity is presented, which can be used in teaching students statistics with an easily generated, large, real world data set. The activity consists of analyzing a video recording of an object. The colour data of the recorded object can then be used as a data set to explore variation in the data using graphs including histograms,…

  3. Combining RP and SP data while accounting for large choice sets and travel mode

    DEFF Research Database (Denmark)

    Abildtrup, Jens; Olsen, Søren Bøye; Stenger, Anne

    2015-01-01

    set used for site selection modelling when the actual choice set considered is potentially large and unknown to the analyst. Easy access to forests also implies that around half of the visitors walk or bike to the forest. We apply an error-component mixed-logit model to simultaneously model the travel...

  4. The reference frame for encoding and retention of motion depends on stimulus set size.

    Science.gov (United States)

    Huynh, Duong; Tripathy, Srimant P; Bedell, Harold E; Öğmen, Haluk

    2017-04-01

    The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information.

  5. Damage threshold from large retinal spot size repetitive-pulse laser exposures.

    Science.gov (United States)

    Lund, Brian J; Lund, David J; Edsall, Peter R

    2014-10-01

    The retinal damage thresholds for large spot size, multiple-pulse exposures to a Q-switched, frequency doubled Nd:YAG laser (532 nm wavelength, 7 ns pulses) have been measured for 100 μm and 500 μm retinal irradiance diameters. The ED50, expressed as energy per pulse, varies only weakly with the number of pulses, n, for these extended spot sizes. The previously reported threshold for a multiple-pulse exposure for a 900 μm retinal spot size also shows the same weak dependence on the number of pulses. The multiple-pulse ED50 for an extended spot-size exposure does not follow the n dependence exhibited by small spot size exposures produced by a collimated beam. Curves derived by using probability-summation models provide a better fit to the data.
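
    The probability-summation idea can be illustrated with a toy dose-response model (the ED50, slope, and lognormal form below are assumptions for illustration, not the paper's fitted values): n independent pulses cause damage if any single pulse does, so the per-pulse ED50 is the energy at which 1 - (1 - p)^n = 0.5.

```python
# Toy probability-summation model for repetitive-pulse thresholds.
# Assumptions (not the paper's fitted values): a lognormal single-pulse
# dose-response with median ed50 and probit slope `slope`; n independent
# pulses damage the retina if any single pulse does.
import math
from statistics import NormalDist

def single_pulse_p(energy, ed50=1.0, slope=0.2):
    """P(damage) for one pulse of the given energy."""
    return NormalDist().cdf(math.log(energy / ed50) / slope)

def multi_pulse_ed50(n, ed50=1.0, slope=0.2):
    """Per-pulse energy at which n independent pulses give P(damage) = 0.5."""
    lo, hi = ed50 * 1e-3, ed50 * 1e3
    for _ in range(100):  # bisection; P_n is monotone in energy
        mid = math.sqrt(lo * hi)
        p_n = 1 - (1 - single_pulse_p(mid, ed50, slope)) ** n
        lo, hi = (lo, mid) if p_n > 0.5 else (mid, hi)
    return mid

# the per-pulse ED50 falls only slowly with n for a shallow slope,
# qualitatively matching the weak n-dependence reported above
print([round(multi_pulse_ed50(n), 3) for n in (1, 10, 100)])
```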

  6. The reconstruction of choice value in the brain: a look into the size of consideration sets and their affective consequences.

    Science.gov (United States)

    Kim, Hye-Young; Shin, Yeonsoon; Han, Sanghoon

    2014-04-01

    It has been proposed that choice utility exhibits an inverted U-shape as a function of the number of options in the choice set. However, most researchers have so far only focused on the "physically extant" number of options in the set while disregarding the more important psychological factor, the "subjective" number of options worth considering, that is, the size of the consideration set. To explore this previously ignored aspect, we examined how variations in the size of a consideration set can produce different affective consequences after making choices and investigated the underlying neural mechanism using fMRI. After rating their preferences for art posters, participants made a choice from a presented set and then reported on their level of satisfaction with their choice and the level of difficulty experienced in choosing it. Our behavioral results demonstrated that an enlarged assortment set can lead to greater choice satisfaction only when increases in both consideration set size and preference contrast are involved. Moreover, choice difficulty is determined based on the size of an individual's consideration set rather than on the size of the assortment set, and it decreases linearly as a function of the level of contrast among alternatives. The neuroimaging analysis of choice-making revealed that subjective consideration set size was encoded in the striatum, the dACC, and the insula. In addition, the striatum also represented variations in choice satisfaction resulting from alterations in the size of consideration sets, whereas a common neural specificity for choice difficulty and consideration set size was shown in the dACC. These results have theoretical and practical importance in that this is one of the first studies investigating the influence of the psychological attributes of choice sets on the value-based decision-making process.

  7. RADIOMETRIC NORMALIZATION OF LARGE AIRBORNE IMAGE DATA SETS ACQUIRED BY DIFFERENT SENSOR TYPES

    Directory of Open Access Journals (Sweden)

    S. Gehrke

    2016-06-01

    Full Text Available Generating seamless mosaics of aerial images is a particularly challenging task when the mosaic comprises a large number of images, collected over longer periods of time and with different sensors under varying imaging conditions. Such large mosaics typically consist of very heterogeneous image data, both spatially (different terrain types and atmosphere) and temporally (unstable atmospheric properties and even changes in land coverage). We present a new radiometric normalization or, respectively, radiometric aerial triangulation approach that takes advantage of our knowledge about each sensor’s properties. The current implementation supports medium and large format airborne imaging sensors of the Leica Geosystems family, namely the ADS line-scanner as well as DMC and RCD frame sensors. A hierarchical modelling – with parameters for the overall mosaic, the sensor type, different flight sessions, strips and individual images – allows for adaptation to each sensor’s geometric and radiometric properties. Additional parameters at different hierarchy levels can compensate for radiometric differences of various origins, addressing shortcomings of the preceding radiometric sensor calibration as well as BRDF and atmospheric corrections. The final, relative normalization is based on radiometric tie points in overlapping images, absolute radiometric control points and image statistics. It is computed in a global least squares adjustment for the entire mosaic by altering each image’s histogram using a location-dependent mathematical model. This model involves contrast and brightness corrections at radiometric fix points with bilinear interpolation for corrections in-between. The distribution of the radiometric fix points is adaptive to each image and generally increases with image size, hence enabling optimal local adaptation even for very long image strips as typically captured by a line-scanner sensor. The normalization approach is implemented in
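
    The per-image correction model described above can be sketched in simplified one-dimensional form (hypothetical gain/offset values; the actual approach uses bilinear interpolation over two-dimensional fix-point grids within a global least squares adjustment):

```python
# One-dimensional sketch of the location-dependent correction model
# described above: gain ("contrast") and offset ("brightness") defined at
# radiometric fix points, linearly interpolated in between. All numbers are
# hypothetical; the production system interpolates bilinearly in 2-D and
# solves for the fix-point values in a global least squares adjustment.
def correct_scanline(pixels, fixes):
    """fixes: list of (position, gain, offset), sorted, spanning the line."""
    out = []
    for i, value in enumerate(pixels):
        for (x0, g0, o0), (x1, g1, o1) in zip(fixes, fixes[1:]):
            if x0 <= i <= x1:  # interpolate within this fix-point segment
                t = (i - x0) / (x1 - x0)
                gain = g0 + t * (g1 - g0)
                offset = o0 + t * (o1 - o0)
                out.append(gain * value + offset)
                break
    return out

pixels = [100.0] * 5                       # flat input scanline
fixes = [(0, 1.0, 0.0), (4, 1.2, 10.0)]    # stronger correction to the right
print([round(v, 1) for v in correct_scanline(pixels, fixes)])
# → [100.0, 107.5, 115.0, 122.5, 130.0]
```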

  8. Genome size variation affects song attractiveness in grasshoppers: evidence for sexual selection against large genomes.

    Science.gov (United States)

    Schielzeth, Holger; Streitner, Corinna; Lampe, Ulrike; Franzke, Alexandra; Reinhold, Klaus

    2014-12-01

    Genome size is largely uncorrelated to organismal complexity and adaptive scenarios. Genetic drift as well as intragenomic conflict have been put forward to explain this observation. We here study the impact of genome size on sexual attractiveness in the bow-winged grasshopper Chorthippus biguttulus. Grasshoppers show particularly large variation in genome size due to the high prevalence of supernumerary chromosomes that are considered (mildly) selfish, as evidenced by non-Mendelian inheritance and fitness costs if present in high numbers. We ranked male grasshoppers by song characteristics that are known to affect female preferences in this species and scored genome sizes of attractive and unattractive individuals from the extremes of this distribution. We find that attractive singers have significantly smaller genomes, demonstrating that genome size is reflected in male courtship songs and that females prefer songs of males with small genomes. Such a genome size dependent mate preference effectively selects against selfish genetic elements that tend to increase genome size. The data therefore provide a novel example of how sexual selection can reinforce natural selection and can act as an agent in an intragenomic arms race. Furthermore, our findings indicate an underappreciated route of how choosy females could gain indirect benefits. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.

  9. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Total uncertainty budget evaluation of a determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)
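
    For illustration, independent relative uncertainty components are commonly combined in quadrature; the component names and values below are hypothetical, not the paper's budget:

```python
# Combining independent relative uncertainty components in quadrature.
# Component names and percentages are hypothetical, not the paper's budget.
import math

components = {               # relative standard uncertainties, % (1 s.d.)
    "counting statistics": 2.0,
    "peak-area fitting": 1.0,
    "neutron flux gradient": 1.5,
    "gamma self-attenuation": 1.0,
}
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
print(round(u_combined, 2))  # → 2.87
```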

  10. Timetable-based simulation method for choice set generation in large-scale public transport networks

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær; Anderson, Marie Karen; Nielsen, Otto Anker

    2016-01-01

    The composition and size of the choice sets are a key for the correct estimation of and prediction by route choice models. While existing literature has posed a great deal of attention towards the generation of path choice sets for private transport problems, the same does not apply to public...... transport problems. This study proposes a timetable-based simulation method for generating path choice sets in a multimodal public transport network. Moreover, this study illustrates the feasibility of its implementation by applying the method to reproduce 5131 real-life trips in the Greater Copenhagen Area...... and to assess the choice set quality in a complex multimodal transport network. Results illustrate the applicability of the algorithm and the relevance of the utility specification chosen for the reproduction of real-life path choices. Moreover, results show that the level of stochasticity used in choice set...

  11. Comparison of silicon strip tracker module size using large sensors from 6 inch wafers

    CERN Multimedia

    Honma, Alan

    1999-01-01

    Two large silicon strip sensors made from 6 inch wafers are placed next to each other to simulate the size of a CMS outer silicon tracker module. On the left is a prototype 2-sensor CMS inner endcap silicon tracker module made from 4 inch wafers.

  12. Synthesis of mesoporous carbon nanoparticles with large and tunable pore sizes

    Science.gov (United States)

    Liu, Chao; Yu, Meihua; Li, Yang; Li, Jiansheng; Wang, Jing; Yu, Chengzhong; Wang, Lianjun

    2015-07-01

    Mesoporous carbon nanoparticles (MCNs) with large and adjustable pores have been synthesized by using poly(ethylene oxide)-b-polystyrene (PEO-b-PS) as a template and resorcinol-formaldehyde (RF) as a carbon precursor. The resulting MCNs possess small diameters (100-126 nm) and high BET surface areas (up to 646 m2 g-1). By using home-designed block copolymers, the pore size of MCNs can be tuned in the range of 13-32 nm. Importantly, the pore size of 32 nm is the largest among the MCNs prepared by the soft-templating route. The formation mechanism and structure evolution of MCNs were studied by TEM and DLS measurements, based on which a soft-templating/sphere packing mechanism was proposed. Because of the large pores and small particle sizes, the resulting MCNs were excellent nano-carriers to deliver biomolecules into cancer cells. MCNs were further demonstrated with negligible toxicity. It is anticipated that this carbon material with large pores and small particle sizes may have excellent potential in drug/gene delivery.

  13. Small-size pedestrian detection in large scene based on fast R-CNN

    Science.gov (United States)

    Wang, Shengke; Yang, Na; Duan, Lianghua; Liu, Lu; Dong, Junyu

    2018-04-01

    Pedestrian detection is a canonical sub-problem of object detection that has been in high demand in recent years. Although recent deep learning object detectors such as Fast/Faster R-CNN have shown excellent performance for general object detection, they have limited success for small-size pedestrian detection in large-view scenes. We find that the insufficient resolution of feature maps leads to unsatisfactory accuracy when handling small instances. In this paper, we investigate issues involving Fast R-CNN for pedestrian detection. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection based on Fast R-CNN: employing the DPM detector to generate accurate proposals, and training a Fast R-CNN style network with skip connections that concatenate features from different layers to jointly optimize small-size pedestrian detection and mitigate the coarseness of the feature maps. Our approach improves accuracy for small-size pedestrian detection in real large scenes.

  14. Portfolio of automated trading systems: complexity and learning set size issues.

    Science.gov (United States)

    Raudys, Sarunas

    2013-03-01

    In this paper, we consider using profit/loss histories of multiple automated trading systems (ATSs) as N input variables in portfolio management. By means of multivariate statistical analysis and simulation studies, we analyze the influences of sample size (L) and input dimensionality on the accuracy of determining the portfolio weights. We find that degradation in portfolio performance due to inexact estimation of N means and N(N - 1)/2 correlations is proportional to N/L; however, estimation of N variances does not worsen the result. To reduce unhelpful sample size/dimensionality effects, we perform a clustering of N time series and split them into a small number of blocks. Each block is composed of mutually correlated ATSs. It generates an expert trading agent based on a nontrainable 1/N portfolio rule. To increase the diversity of the expert agents, we use training sets of different lengths for clustering. In the output of the portfolio management system, the regularized mean-variance framework-based fusion agent is developed in each walk-forward step of an out-of-sample portfolio validation experiment. Experiments with the real financial data (2003-2012) confirm the effectiveness of the suggested approach.
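
    The contrast between the nontrainable 1/N rule and estimated mean-variance weights can be sketched for N = 2 assets, where the inverse covariance has a closed form (synthetic returns and parameters below are illustrative only):

```python
# Sketch: nontrainable 1/N weights versus sample mean-variance weights
# w ∝ inv(Sigma) @ mu for N = 2 assets (closed-form 2x2 inverse). Returns
# are synthetic; means, volatilities, and lengths are illustrative only.
import random

def mean_variance_weights(r1, r2):
    """Normalized w = inv(sample Sigma) @ sample mu for two assets."""
    L = len(r1)
    m1, m2 = sum(r1) / L, sum(r2) / L
    s11 = sum((x - m1) ** 2 for x in r1) / (L - 1)
    s22 = sum((x - m2) ** 2 for x in r2) / (L - 1)
    s12 = sum((x - m1) * (y - m2) for x, y in zip(r1, r2)) / (L - 1)
    det = s11 * s22 - s12 ** 2
    w1 = (s22 * m1 - s12 * m2) / det   # row 1 of inv(Sigma) @ mu
    w2 = (s11 * m2 - s12 * m1) / det   # row 2
    return w1 / (w1 + w2), w2 / (w1 + w2)

random.seed(0)
L = 250  # short sample: estimation error pushes weights away from truth
r1 = [random.gauss(0.005, 0.01) for _ in range(L)]
r2 = [random.gauss(0.005, 0.01) for _ in range(L)]
print(mean_variance_weights(r1, r2))  # noisy estimate of the true (0.5, 0.5)
print((0.5, 0.5))                     # the 1/N rule: no estimation at all
```

    With identically distributed assets the true optimum is (0.5, 0.5); the estimated weights miss it by an amount that grows with N/L, which is the degradation the abstract quantifies.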

  15. A three-step algorithm for CANDECOMP/PARAFAC analysis of large data sets with multicollinearity

    NARCIS (Netherlands)

    Kiers, H.A.L.

    1998-01-01

    Fitting the CANDECOMP/PARAFAC model by the standard alternating least squares algorithm often requires very many iterations. One case in point is that of analysing data with mild to severe multicollinearity. If, in addition, the size of the data is large, the computation of one CANDECOMP/PARAFAC

  16. Generalization of some hidden subgroup algorithms for input sets of arbitrary size

    Science.gov (United States)

    Poslu, Damla; Say, A. C. Cem

    2006-05-01

    We consider the problem of generalizing some quantum algorithms so that they will work on input domains whose cardinalities are not necessarily powers of two. When analyzing the algorithms, we assume that it is possible to perfectly generate superpositions of arbitrary subsets of basis states whose cardinalities are not necessarily powers of two. We have taken Ballhysa's model as a template and have extended it to Chi, Kim and Lee's generalizations of the Deutsch-Jozsa algorithm and to Simon's algorithm. With perfectly equal superpositions over input sets of arbitrary size, Chi, Kim and Lee's generalized Deutsch-Jozsa algorithms, both for evenly-distributed and evenly-balanced functions, work with one-sided error. For Simon's algorithm, the success probability of the generalized algorithm is the same as that of the original for input sets of arbitrary cardinality with equiprobable superpositions, since the property that all measured strings have dot product zero with the string we search for, in the case where the function is 2-to-1, is not lost.

  17. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. up to the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5 % and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made from left-over materials used in laboratory intercomparisons. A design study for a large sample pool-side facility, handling plate-type volumes, had to be stopped because of a reduction in the human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world’s first laboratory intercomparison utilizing large samples. (author)

  18. The influence of negative training set size on machine learning-based virtual screening.

    Science.gov (United States)

    Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J

    2014-01-01

    The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of machine learning methods application was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluating parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of the dynamics of those variations allowed us to recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, IBk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.
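
    One reason the positive-to-negative ratio moves precision and MCC can be seen with plain arithmetic: hold classifier quality fixed (the recall and false-positive rate below are hypothetical) and grow the negative set:

```python
# Why the positive:negative ratio shifts precision and MCC even for a
# classifier of fixed quality: hold recall and false-positive rate constant
# (values below are hypothetical) and vary the size of the negative set.
import math

def precision_mcc(n_pos, n_neg, recall=0.8, fpr=0.05):
    tp = recall * n_pos
    fn = n_pos - tp
    fp = fpr * n_neg
    tn = n_neg - fp
    precision = tp / (tp + fp)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return precision, mcc

for n_neg in (100, 1000, 10000):  # 100 actives, growing decoy set
    p, m = precision_mcc(100, n_neg)
    print(f"negatives={n_neg:6d}  precision={p:.3f}  MCC={m:.3f}")
```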

  19. MiniWall Tool for Analyzing CFD and Wind Tunnel Large Data Sets

    Science.gov (United States)

    Schuh, Michael J.; Melton, John E.; Stremel, Paul M.

    2017-01-01

    It is challenging to review and assimilate large data sets created by Computational Fluid Dynamics (CFD) simulations and wind tunnel tests. Over the past 10 years, NASA Ames Research Center has developed and refined a software tool dubbed the MiniWall to increase productivity in reviewing and understanding large CFD-generated data sets. Under the recent NASA ERA project, the application of the tool expanded to enable rapid comparison of experimental and computational data. The MiniWall software is browser based so that it runs on any computer or device that can display a web page. It can also be used remotely and securely by using web server software such as the Apache HTTP server. The MiniWall software has recently been rewritten and enhanced to make it even easier for analysts to review large data sets and extract knowledge and understanding from these data sets. This paper describes the MiniWall software and demonstrates how the different features are used to review and assimilate large data sets.

  20. SAR matrices: automated extraction of information-rich SAR tables from large compound data sets.

    Science.gov (United States)

    Wassermann, Anne Mai; Haebel, Peter; Weskamp, Nils; Bajorath, Jürgen

    2012-07-23

    We introduce the SAR matrix data structure that is designed to elucidate SAR patterns produced by groups of structurally related active compounds, which are extracted from large data sets. SAR matrices are systematically generated and sorted on the basis of SAR information content. Matrix generation is computationally efficient and enables processing of large compound sets. The matrix format is reminiscent of SAR tables, and SAR patterns revealed by different categories of matrices are easily interpretable. The structural organization underlying matrix formation is more flexible than standard R-group decomposition schemes. Hence, the resulting matrices capture SAR information in a comprehensive manner.
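
    The matrix format can be pictured with a toy example (the cores, substituents, and activity values below are invented; actual SAR matrices are built by systematic structural decomposition of large compound sets, not hand-labeled strings):

```python
# Toy SAR-matrix-like structure: rows = shared cores, columns =
# substituents, cells = activities. All compounds and values below are
# invented; real SAR matrices come from systematic structural decomposition
# of large screening data sets, not hand-labeled strings.
from collections import defaultdict

compounds = [  # (core, substituent, pKi)
    ("quinazoline", "H", 6.1), ("quinazoline", "Cl", 7.4),
    ("quinazoline", "OMe", 8.0), ("pyrimidine", "H", 5.2),
    ("pyrimidine", "Cl", 5.9),
]

matrix = defaultdict(dict)
for core, sub, act in compounds:
    matrix[core][sub] = act

cols = sorted({sub for _, sub, _ in compounds})
print("core         " + "  ".join(f"{c:>5}" for c in cols))
for core, row in sorted(matrix.items()):
    cells = "  ".join(f"{row[c]:>5}" if c in row else "    -" for c in cols)
    print(f"{core:<13}" + cells)
```

    An empty cell (the missing pyrimidine/OMe analog) is exactly the kind of unexplored combination the matrix view is meant to expose.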

  1. Management of a Large Qualitative Data Set: Establishing Trustworthiness of the Data

    Directory of Open Access Journals (Sweden)

    Debbie Elizabeth White RN, PhD

    2012-07-01

    Full Text Available Health services research is multifaceted and impacted by the multiple contexts and stakeholders involved. Hence, large data sets are necessary to fully understand the complex phenomena (e.g., scope of nursing practice being studied. The management of these large data sets can lead to numerous challenges in establishing trustworthiness of the study. This article reports on strategies utilized in data collection and analysis of a large qualitative study to establish trustworthiness. Specific strategies undertaken by the research team included training of interviewers and coders, variation in participant recruitment, consistency in data collection, completion of data cleaning, development of a conceptual framework for analysis, consistency in coding through regular communication and meetings between coders and key research team members, use of N6™ software to organize data, and creation of a comprehensive audit trail with internal and external audits. Finally, we make eight recommendations that will help ensure rigour for studies with large qualitative data sets: organization of the study by a single person; thorough documentation of the data collection and analysis process; attention to timelines; the use of an iterative process for data collection and analysis; internal and external audits; regular communication among the research team; adequate resources for timely completion; and time for reflection and diversion. Following these steps will enable researchers to complete a rigorous, qualitative research study when faced with large data sets to answer complex health services research questions.

  2. Size Reduction Techniques for Large Scale Permanent Magnet Generators in Wind Turbines

    Science.gov (United States)

    Khazdozian, Helena; Hadimani, Ravi; Jiles, David

    2015-03-01

    Increased wind penetration is necessary to reduce U.S. dependence on fossil fuels, combat climate change and increase national energy security. The U.S. Department of Energy has recommended large scale and offshore wind turbines to achieve 20% wind electricity generation by 2030. Currently, geared doubly-fed induction generators (DFIGs) are typically employed in the drivetrain for conversion of mechanical to electrical energy. Yet, gearboxes account for the greatest downtime of wind turbines, decreasing reliability and contributing to loss of profit. Direct drive permanent magnet generators (PMGs) offer a reliable alternative to DFIGs by eliminating the gearbox. However, PMGs scale up in size and weight much more rapidly than DFIGs as rated power is increased, presenting significant challenges for large scale wind turbine applications. Thus, size reduction techniques are needed for viability of PMGs in large scale wind turbines. Two size reduction techniques are presented. It is demonstrated that 25% size reduction of a 10 MW PMG is possible with a high remanence theoretical permanent magnet. Additionally, the use of a Halbach cylinder in an outer rotor PMG is investigated to focus magnetic flux over the rotor surface in order to increase torque. This work was supported by the National Science Foundation under Grant No. 1069283 and a Barbara and James Palmer Endowment at Iowa State University.

  3. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    Science.gov (United States)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, such simulations often require excessively long simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it produces computationally more efficient results that are equivalent to those of the more costly radial distribution function approach.
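    The traditional route mentioned in the abstract, a running integral over the radial distribution function, can be sketched numerically. The snippet below is a minimal illustration using a synthetic g(r); the functional form, grid, and units are invented for illustration and are not taken from the paper:

    ```python
    import numpy as np

    # Synthetic pair correlation function g(r) on a radial grid (toy model,
    # chosen only so that g(r) -> 1 at large r, as a real RDF does).
    r = np.linspace(0.01, 3.0, 300)                  # distance (arbitrary units)
    g = 1.0 + np.exp(-2.0 * r) * np.cos(4.0 * np.pi * r)

    # Traditional route: running Kirkwood-Buff integral
    #   G(R) = 4*pi * integral_0^R (g(r) - 1) * r^2 dr
    dr = r[1] - r[0]
    G_running = 4.0 * np.pi * np.cumsum((g - 1.0) * r**2) * dr

    # The KB integral is read off where the running integral plateaus,
    # which in real simulations requires a well-converged large-box RDF.
    print(G_running[-1])
    ```

    The finite size scaling route instead estimates the same quantity from particle-number fluctuations in embedded sub-volumes, avoiding the need for a converged long-range RDF.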

  5. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  6. Study on external reactor vessel cooling capacity for advanced large size PWR

    International Nuclear Information System (INIS)

    Jin Di; Liu Xiaojing; Cheng Xu; Li Fei

    2014-01-01

    External reactor vessel cooling (ERVC) is widely adopted as part of in-vessel retention (IVR) in severe accident management strategies. In this paper, several flow parameters and boundary conditions, e.g., inlet and outlet area, water inlet temperature, heating power of the lower head, the annular gap size at the position of the lower head and flooding water level, were considered to qualitatively study their effect on the natural circulation capacity of external reactor vessel cooling for an advanced large size PWR, using the RELAP5 code. The calculation results provide a basis for analysis of the structure design and the subsequent transient response behavior of the system. (authors)

  7. Statistical characteristics and stability index (si) of large-sized landslide dams around the world

    International Nuclear Information System (INIS)

    Iqbal, J.; Dai, F.; Raja, I.A.

    2014-01-01

    In the last few decades, landslide dams have received greater attention from researchers, as they have caused losses of property and human lives. Over 261 large-sized landslide dams from different countries of the world, each with a volume greater than 1 × 10⁵ m³, have been reviewed for this study. The data collected for this study show that 58% of the catastrophic landslides were triggered by earthquakes and 21% by rainfall, revealing that earthquakes and rainfall are the two major triggers, accounting for 75% of large-sized landslide dams. These landslides were most frequent during the last two decades (1990-2010) throughout the world. The mean landslide dam volume of the studied cases was 53.39 × 10⁶ m³ with a mean dam height of 71.98 m, while the mean lake volume was found to be 156.62 × 10⁶ m³. Failure of these large landslide dams poses a severe threat to property and people living downstream, hence immediate attention is required to deal with this problem. A stability index (SI) has been derived on the basis of 59 large-sized landslide dams (out of the 261) with complete parametric information. (author)

  8. Effect of pore size on performance of monolithic tube chromatography of large biomolecules.

    Science.gov (United States)

    Podgornik, Ales; Hamachi, Masataka; Isakari, Yu; Yoshimoto, Noriko; Yamamoto, Shuichi

    2017-11-01

    Effect of pore size on the performance of ion-exchange monolith tube chromatography of large biomolecules was investigated. Radial flow 1 mL polymer-based monolith tubes of different pore sizes (1.5, 2, and 6 μm) were tested with model samples such as 20-mer poly-T DNA, basic proteins, and acidic proteins (molecular weight 14 000-670 000). Pressure drop, pH transient, the number of binding sites, dynamic binding capacity, and peak width were examined. Pressure drop-flow rate curves and dynamic binding capacity values were well correlated with the nominal pore size. While the duration of the pH transient curves depended on the pore size, it was found that the pH duration normalized by the estimated surface area was constant, indicating that the ligand density is the same. This was also confirmed by the number of binding sites being constant and independent of pore size. The peak width values were similar to those for axial flow monolith chromatography. These results showed that it is easy to scale up axial flow monolith chromatography to radial flow monolith tube chromatography by choosing the right pore size in terms of pressure drop and capacity. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. 13 CFR 121.412 - What are the size procedures for partial small business set-asides?

    Science.gov (United States)

    2010-01-01

    ... Requirements for Government Procurement § 121.412 What are the size procedures for partial small business set... portion of a procurement, and is not required to qualify as a small business for the unrestricted portion. ...

  10. Improving small RNA-seq by using a synthetic spike-in set for size-range quality control together with a set for data normalization.

    Science.gov (United States)

    Locati, Mauro D; Terpstra, Inez; de Leeuw, Wim C; Kuzak, Mateusz; Rauwerda, Han; Ensink, Wim A; van Leeuwen, Selina; Nehrdich, Ulrike; Spaink, Herman P; Jonker, Martijs J; Breit, Timo M; Dekker, Rob J

    2015-08-18

    There is an increasing interest in complementing RNA-seq experiments with small-RNA (sRNA) expression data to obtain a comprehensive view of a transcriptome. Currently, two main experimental challenges concerning sRNA-seq exist: how to check the size distribution of isolated sRNAs, given the sensitive size-selection steps in the protocol; and how to normalize data between samples, given the low complexity of sRNA types. We here present two separate sets of synthetic RNA spike-ins for monitoring size-selection and for performing data normalization in sRNA-seq. The size-range quality control (SRQC) spike-in set, consisting of 11 oligoribonucleotides (10-70 nucleotides), was tested by intentionally altering the size-selection protocol and verified via several comparative experiments. We demonstrate that the SRQC set is useful to reproducibly track down biases in the size-selection in sRNA-seq. The external reference for data-normalization (ERDN) spike-in set, consisting of 19 oligoribonucleotides, was developed for sample-to-sample normalization in differential-expression analysis of sRNA-seq data. Testing and applying the ERDN set showed that it can reproducibly detect differential expression over a dynamic range of 2¹⁸. Hence, biological variation in sRNA composition and content between samples is preserved while technical variation is effectively minimized. Together, both spike-in sets can significantly improve the technical reproducibility of sRNA-seq. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
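    A common way to use an external spike-in set for sample-to-sample normalization is to derive a per-sample size factor from the spike-in counts and divide the endogenous counts by it. The sketch below is a generic illustration of that idea; the counts, sample names, and single-miRNA example are invented, and this is not the paper's exact ERDN pipeline:

    ```python
    # Invented read counts: three spike-in oligos plus one endogenous sRNA
    # per sample. In practice there would be 19 ERDN spike-ins and many sRNAs.
    samples = {
        "ctrl":  {"spike": [100, 200, 50],  "mirna_x": 400},
        "treat": {"spike": [200, 400, 100], "mirna_x": 500},
    }

    # Size factor = a sample's total spike-in count relative to the mean
    # spike-in total across samples.
    totals = {s: sum(d["spike"]) for s, d in samples.items()}
    mean_total = sum(totals.values()) / len(totals)
    factors = {s: t / mean_total for s, t in totals.items()}

    # Dividing by the size factor removes technical (library/recovery)
    # variation while preserving biological differences.
    normalized = {s: samples[s]["mirna_x"] / factors[s] for s in samples}
    print(normalized)
    ```

    Here "treat" recovered twice as many spike-in reads as "ctrl", so its raw counts are scaled down accordingly before any differential-expression comparison.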

  11. Influences of large sets of environmental exposures on immune responses in healthy adult men.

    Science.gov (United States)

    Yi, Buqing; Rykova, Marina; Jäger, Gundula; Feuerecker, Matthias; Hörl, Marion; Matzel, Sandra; Ponomarev, Sergey; Vassilieva, Galina; Nichiporuk, Igor; Choukèr, Alexander

    2015-08-26

    Environmental factors have long been known to influence immune responses. In particular, clinical studies about the association between migration and increased risk of atopy/asthma have provided important information on the role that migration-associated large sets of environmental exposures play in the development of allergic diseases. However, investigations of environmental effects on immune responses are mostly limited to candidate environmental exposures, such as air pollution. The influences of large sets of environmental exposures on immune responses are still largely unknown. A simulated 520-d Mars mission provided an opportunity to investigate this topic. Six healthy males lived in a closed habitat simulating a spacecraft for 520 days. When they exited their "spacecraft" after the mission, the scenario was similar to that of migration, involving exposure to a new set of environmental pollutants and allergens. We measured multiple immune parameters in blood samples at chosen time points after the mission. At the early adaptation stage, highly enhanced cytokine responses were observed upon ex vivo antigen stimulation. For cell population frequencies, we found that the subjects displayed increased neutrophils. These results presumably represent the immune changes that occur in healthy humans upon migration, indicating that large sets of environmental exposures may trigger aberrant immune activity.

  12. Settings and artefacts relevant for Doppler ultrasound in large vessel vasculitis

    DEFF Research Database (Denmark)

    Terslev, L; Diamantopoulos, A P; Døhn, U Møller

    2017-01-01

    Ultrasound is used increasingly for diagnosing large vessel vasculitis (LVV). The application of Doppler in LVV is very different from that in arthritic conditions. This paper aims to explain the most important Doppler parameters, including spectral Doppler, and how the settings differ from those used in arthritic conditions.

  13. Large and small sets with respect to homomorphisms and products of groups

    Directory of Open Access Journals (Sweden)

    Riccardo Gusso

    2002-10-01

    Full Text Available We study the behaviour of large, small and medium subsets with respect to homomorphisms and products of groups. Then we introduce the definition of a P-small set in abelian groups and we investigate the relations between this kind of smallness and the previous one, giving some examples that distinguish them.

  14. Large data sets in finance and marketing: introduction by the special issue editor

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1998-01-01

    On December 18 and 19 of 1997, a small conference on the "Statistical Analysis of Large Data Sets in Business Economics" was organized by the Rotterdam Institute for Business Economic Studies. Eleven presentations were delivered in plenary sessions, which were attended by about 90 participants.

  15. Using Content-Specific Lyrics to Familiar Tunes in a Large Lecture Setting

    Science.gov (United States)

    McLachlin, Derek T.

    2009-01-01

    Music can be used in lectures to increase student engagement and help students retain information. In this paper, I describe my use of biochemistry-related lyrics written to the tune of the theme to the television show, The Flintstones, in a large class setting (400-800 students). To determine student perceptions, the class was surveyed several…

  16. Teaching Children to Organise and Represent Large Data Sets in a Histogram

    Science.gov (United States)

    Nisbet, Steven; Putt, Ian

    2004-01-01

    Although some bright students in primary school are able to organise numerical data into classes, most attend to the characteristics of individuals rather than the group, and "see the trees rather than the forest". How can teachers in upper primary and early high school teach students to organise large sets of data with widely varying…

  17. A conceptual analysis of standard setting in large-scale assessments

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1994-01-01

    Elements of arbitrariness in the standard setting process are explored, and an alternative to the use of cut scores is presented. The first part of the paper analyzes the use of cut scores in large-scale assessments, discussing three different functions: (1) cut scores define the qualifications used

  18. Large-sized and highly radioactive 3H and 109Cd Langmuir-Blodgett films

    International Nuclear Information System (INIS)

    Shibata, S.; Kawakami, H.; Kato, S.

    1994-02-01

    A device for the deposition of a radioactive Langmuir-Blodgett (LB) film was developed with the use of: (1) a modified horizontal lifting method, (2) an extremely shallow trough, and (3) a surface pressure-generating system without piston oil. It made a precious radioactive subphase solution repeatedly usable while keeping its radioactivity concentration as high as possible. Large thin films of any size can be prepared by simply changing the trough size. Two monomolecular layers of Y-type films of cadmium [³H]icosanoate and ¹⁰⁹Cd icosanoate were built up as ³H and ¹⁰⁹Cd β-sources for electron spectroscopy, with intensities of 1.5 GBq (40 mCi) and 7.4 MBq (200 μCi), respectively, and a size of 65 × 200 mm². Excellent uniformity of the distribution of deposited radioactivity was confirmed by autoradiography and photometry. (author)

  19. Detection of tiny amounts of fissile materials in large-sized containers with radioactive waste

    Science.gov (United States)

    Batyaev, V. F.; Skliarov, S. V.

    2018-01-01

    The paper is devoted to non-destructive control of tiny amounts of fissile materials in large-sized containers filled with radioactive waste (RAW). The aim of this work is to model an active neutron interrogation facility for detection of fissile materials inside NZK type containers with RAW and to determine the minimal detectable mass of U-235 as a function of various parameters: matrix type, nonuniformity of container filling, neutron generator parameters (flux, pulse frequency, pulse duration), and measurement time. As a result, the dependence of the minimal detectable mass on the location of fissile materials inside the container is shown. Nonuniformity of the thermal neutron flux inside a container is the main reason for the spatial heterogeneity of the minimal detectable mass inside a large-sized container. Our experiments with tiny amounts of uranium-235 (<1 g) confirm the detection of fissile materials in NZK containers by using the active neutron interrogation technique.

  20. Mock-up test of remote controlled dismantling apparatus for large-sized vessels (contract research)

    Energy Technology Data Exchange (ETDEWEB)

    Myodo, Masato; Miyajima, Kazutoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Okane, Shogo [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment

    2001-03-01

    The remote dismantling apparatus, which is equipped with multiple units for washing, cutting, collection of cut pieces and so on, has been constructed to dismantle the large-sized vessels in the JAERI's Reprocessing Test Facility (JRTF). The apparatus has five-axis movement capability and is operated remotely. Mock-up tests were performed to evaluate the applicability of the apparatus to actual dismantling activities, using mock-ups of LV-3 and LV-5 in the facility. It was confirmed that each unit functioned satisfactorily under remote operation. Efficient procedures for dismantling the large-sized vessels were studied and various data were obtained in the mock-up tests. The apparatus was found to be applicable to the actual dismantling activity in JRTF. (author)

  2. Influence Factors of Sports Bra Evaluation and Design Based on Large Size

    Directory of Open Access Journals (Sweden)

    Zhang Lingxi

    2016-01-01

    Full Text Available The purpose of this paper was to find the main factors influencing sports bra evaluation through subjective assessment of different styles of commercial sports bras, and to summarize the design elements of sports bras for large sizes. 10 women in large sizes (>C80) were chosen to evaluate 9 different sports bras. The main influence factors were extracted by factor analysis and all the product samples were classified by Q-cluster analysis. The conclusions show that breast stability, wearing comfort and bust modelling are the three key factors in sports bra evaluation. A classification-positioning model of sports bra products was also established. The findings can provide a theoretical basis and guidance for the research and design of sports bras, both for academia and for sportswear or underwear enterprises, and also provide reference value for women customers.

  3. Comparative analysis of non-destructive methods to control fissile materials in large-size containers

    Directory of Open Access Journals (Sweden)

    Batyaev V.F.

    2017-01-01

    Full Text Available An analysis of various non-destructive methods to control fissile materials (FM) in large-size containers filled with radioactive waste (RAW) has been carried out. The difficulty of applying passive gamma-neutron monitoring of FM in large containers filled with concreted RAW is shown. Selection of an active non-destructive assay technique depends on the container contents; in the case of a concrete or iron matrix with very-low-activity and low-activity RAW, the neutron radiation method appears to be preferable to the photonuclear one.

  5. Development of superconducting poloidal field coils for medium and large size tokamaks

    International Nuclear Information System (INIS)

    Dittrich, H.-G.; Forster, S.; Hofmann, A.

    1983-01-01

    Large long-pulse tokamak fusion experiments require the use of superconducting poloidal field (PF) coils. In the past not much attention has been paid to the development of such coils, so a development programme has recently been initiated at KfK. In this report we start by summarizing the relevant PF coil parameters of some medium and large size tokamaks presently under construction or design. The most important areas of research and development work are deduced from these parameters. Design considerations and first experimental results concerning low-loss conductors, cooling concepts and structural components are given.

  6. Scrum of scrums solution for large size teams using scrum methodology

    OpenAIRE

    Qurashi, Saja Al; Qureshi, M. Rizwan Jameel

    2014-01-01

    Scrum is a structured framework to support complex product development. However, Scrum methodology faces a challenge of managing large teams. To address this challenge, in this paper we propose a solution called Scrum of Scrums. In Scrum of Scrums, we divide the Scrum team into teams of the right size, and then organize them hierarchically into a Scrum of Scrums. The main goals of the proposed solution are to optimize communication between teams in Scrum of Scrums; to make the system work aft...

  7. A procedure to detect flaws inside large size marble blocks by ultrasound

    OpenAIRE

    Bramanti, Mauro; Bozzi, Edoardo

    1999-01-01

    In the stone and marble industry there is considerable interest in the possibility of using ultrasound diagnostic techniques for non-destructive testing of large-size blocks in order to detect internal flaws such as faults, cracks and fissures. In this paper some preliminary measurements are reported in order to acquire basic knowledge of the fundamental properties of ultrasound, such as propagation velocity and attenuation, in the media considered here. We then outline a particular diagnostic pr...

  8. Mechanical properties of duplex steel welded joints in large-size constructions

    OpenAIRE

    J. Nowacki

    2012-01-01

    Purpose: On the basis of the literature and our own experiments, an analysis of the mechanical properties, applications, and material and technological problems of ferritic-austenitic steel welding was carried out. The range of welding applications, particularly the welding of large-size structures, was shown using the example of FCAW welding of UNS S31803 duplex steel in the construction of chemical cargo ships. Design/methodology/approach: Welding tests were carried out for duple...

  9. DESIGN AND DEVELOPMENT OF A LARGE SIZE NON-TRACKING SOLAR COOKER

    Directory of Open Access Journals (Sweden)

    N. M. NAHAR

    2009-09-01

    Full Text Available A large size novel non-tracking solar cooker has been designed, developed and tested. The cooker has been designed so that the width-to-length ratio of the reflector and glass window is about 4, so that maximum radiation falls on the glass window. This eliminates the hourly azimuthal tracking towards the Sun that a simple hot box solar cooker requires, because in that design the width-to-length ratio of the reflector is 1. Stagnation temperatures were found to be 118.5 °C and 108 °C in the large size non-tracking solar cooker and the hot box solar cooker, respectively. Cooking takes about 2 h for soft food and 3 h for hard food, and the cooker can cook 4.0 kg of food at a time. The efficiency of the large size non-tracking solar cooker has been found to be 27.5%. The cooker saves 5175 MJ of energy per year. The cost of the cooker is Rs. 10000.00 (1.0 US$ = Rs. 50.50). The payback period has been calculated by considering 10% annual interest, 5% maintenance cost and 5% inflation in fuel prices and maintenance cost. The payback period is shortest, i.e. 1.58 yr, with respect to electricity and longest, i.e. 4.89 yr, with respect to kerosene. The payback periods increase in the order: electricity, coal, firewood, liquid petroleum gas, kerosene. The short payback periods suggest that use of the large size non-tracking solar cooker is economical.
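    A payback calculation of the kind described above can be sketched as a discounted cash-flow loop. The Rs. 10000 cost and the 10%/5%/5% rates come from the abstract, but the first-year fuel saving below is an assumed placeholder and the cash-flow structure is a generic sketch, not the paper's exact model:

    ```python
    # Payback period sketch: accumulate discounted net savings until the
    # cooker's cost is recovered. annual_saving is a hypothetical figure.
    cost = 10000.0              # cooker cost (Rs.), from the abstract
    annual_saving = 6500.0      # assumed first-year fuel cost saved (Rs.)
    maintenance_rate = 0.05     # 5% of cost per year
    inflation = 0.05            # 5% inflation in fuel and maintenance prices
    discount = 0.10             # 10% annual interest

    balance = -cost
    year = 0
    while balance < 0 and year < 50:
        year += 1
        saving = annual_saving * (1 + inflation) ** (year - 1)
        upkeep = maintenance_rate * cost * (1 + inflation) ** (year - 1)
        balance += (saving - upkeep) / (1 + discount) ** year
    print(year)  # → 2 with these assumed numbers
    ```

    With a larger assumed fuel saving (e.g. replacing electricity) the loop terminates sooner, which mirrors the abstract's ordering of payback periods across fuels.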

  10. Preparation and provisional validation of a large size dried spike: Batch SAL-9931

    International Nuclear Information System (INIS)

    Jammet, G.; Zoigner, A.; Doubek, N.; Grabmueller, G.; Bagliano, G.

    1990-05-01

    To determine uranium and plutonium concentration using isotope dilution mass spectrometry, weighed aliquands of a synthetic mixture containing about 2 mg of Pu (with a ²³⁹Pu abundance of about 98%) and 40 mg of U (with a ²³⁵U enrichment of about 19%) have been prepared and verified by SAL to be used to spike samples of concentrated spent fuel solutions with a high burn-up and a low ²³⁵U enrichment. The advantages of such a Large Size Dried (LSD) Spike have been pointed out elsewhere and proof of the usefulness in the field reported. Certified Reference Materials Pu-NBL-126, natural U-NBS-960 and 93% enriched U-NBL-116 were used to prepare a stock solution containing 1.8 mg/ml of Pu and 37.3 mg/ml of 19.4% enriched U. Before shipment to the Reprocessing Plant, aliquands of the stock solution are dried to give Large Size Dried Spikes which resist shocks encountered during transportation, so that they can readily be recovered quantitatively at the plant. This paper describes the preparation and the validation of a Large Size Dried Spike which is intended to be used as a common spike by the plant operator, the national and the IAEA inspectorates. 6 refs, 7 tabs

  11. BACHSCORE. A tool for evaluating efficiently and reliably the quality of large sets of protein structures

    Science.gov (United States)

    Sarti, E.; Zamuner, S.; Cossio, P.; Laio, A.; Seno, F.; Trovato, A.

    2013-12-01

    In protein structure prediction it is of crucial importance, especially at the refinement stage, to efficiently score large sets of models by selecting the ones that are closest to the native state. We here present a new computational tool, BACHSCORE, that allows its users to rank different structural models of the same protein according to their quality, evaluated by using the BACH++ (Bayesian Analysis Conformation Hunt) scoring function. The original BACH statistical potential was already shown to reliably discriminate the protein native state within large sets of misfolded models of the same protein. BACH++ features a novel upgrade in the solvation potential of the scoring function, now computed by adapting the LCPO (Linear Combinations of Pairwise Overlaps) algorithm. This change further enhances the already good performance of the scoring function. BACHSCORE can be accessed directly through the web server: bachserver.pd.infn.it. Catalogue identifier: AEQD_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEQD_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 130159 No. of bytes in distributed program, including test data, etc.: 24 687 455 Distribution format: tar.gz Programming language: C++. Computer: Any computer capable of running an executable produced by a g++ compiler (4.6.3 version). Operating system: Linux, Unix OS-es. RAM: 1 073 741 824 bytes Classification: 3. Nature of problem: Evaluate the quality of a protein structural model, taking into account the possible “a priori” knowledge of a reference primary sequence that may be different from the amino-acid sequence of the model; the native protein structure should be recognized as the best model. Solution method: The contact potential scores the occurrence of any given type of residue pair in 5 possible

  12. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
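    As a much-simplified illustration of frequency-spectrum-based inference (the authors' method handles piecewise-exponential epochs and computes exact gradients via automatic differentiation, which this sketch does not), the constant-size coalescent gives an expected site frequency spectrum E[ξ_i] = θ/i, and θ can be recovered from an observed spectrum in closed form:

    ```python
    import numpy as np

    def expected_sfs(theta, n):
        """Expected unfolded site frequency spectrum under a constant-size
        coalescent: E[xi_i] = theta / i for i = 1..n-1."""
        i = np.arange(1, n)
        return theta / i

    def fit_theta(observed_sfs):
        """Least-squares fit of theta: since E[xi_i] = theta * (1/i),
        the closed-form estimate is sum(xi_i / i) / sum(1/i^2)."""
        i = np.arange(1, len(observed_sfs) + 1)
        w = 1.0 / i
        return float(np.dot(observed_sfs, w) / np.dot(w, w))

    sfs = expected_sfs(40.0, 20)   # noiseless spectrum for theta = 40
    print(fit_theta(sfs))          # recovers 40.0
    ```

    Richer demographies replace the 1/i shape with epoch-dependent expectations, which is where gradient-based optimization over the epoch parameters becomes necessary.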

  13. Folding and unfolding of large-size shell construction for application in Earth orbit

    Science.gov (United States)

    Kondyurin, Alexey; Pestrenina, Irena; Pestrenin, Valery; Rusakov, Sergey

    2016-07-01

    A future exploration of space requires a technology of large modules for biological, technological, logistic and other applications in Earth orbits [1-3]. This report describes the possibility of using large-sized shell structures deployable in space. The structure is delivered to orbit in the spaceship container, with the shell folded for transportation. The shell material is either rigid plastic or multilayer prepreg comprising rigid reinforcements (such as reinforcing fibers). The unfolding process (bringing a construction to the unfolded state by loading the internal pressure) needs to be considered in the presence of both stretching and bending deformations. An analysis of the deployment conditions (the minimum internal pressure bringing a construction from the folded state to the unfolded state) of large laminated CFRP shell structures is formulated in this report. Solution of this mechanics of deformable solids (MDS) problem of the shell structure is based on the following assumptions: the shell is made of components whose median surface is developable (has a planar development); in the relaxed state (not stressed and not deformed) of a separate structural element, its median surface coincides with its development (this assumption allows us to choose the relaxed state of the structure correctly); structural elements are joined (sewn together) by a seam that does not resist rotation around the tangent to the seam line. Ways of folding large shell structures whose median surface is developable are suggested. Unfolding of cylindrical, conical (full and truncated cones), and large-size composite shells (cylinder-cones, cones-cones) is considered. These results show that the unfolding pressure of such large-size structures (0.01-0.2 atm.) is comparable to the deploying pressure of pneumatic parts (0.001-0.1 atm.) [3]. It would be possible to extend this approach to investigate the unfolding process of large-sized shells with a ruled median surface or for non-developable surfaces. This research was

  14. Interlayer catalytic exfoliation realizing scalable production of large-size pristine few-layer graphene

    Science.gov (United States)

    Geng, Xiumei; Guo, Yufen; Li, Dongfang; Li, Weiwei; Zhu, Chao; Wei, Xiangfei; Chen, Mingliang; Gao, Song; Qiu, Shengqiang; Gong, Youpin; Wu, Liqiong; Long, Mingsheng; Sun, Mengtao; Pan, Gebo; Liu, Liwei

    2013-01-01

    Mass production of reduced graphene oxide and graphene nanoplatelets has recently been achieved. However, a great challenge still remains in realizing large-quantity and high-quality production of large-size thin few-layer graphene (FLG). Here, we develop a novel route to solving this issue by employing one-time-only interlayer catalytic exfoliation (ICE) of salt-intercalated graphite. Typical FLG with a large lateral size of tens of microns and a thickness less than 2 nm has been obtained by a mild, sustained ICE. The high-quality graphene layers preserve intact basal crystal planes owing to avoidance of the degradation reaction during both intercalation and ICE. Furthermore, we reveal that the high-quality FLG ensures a remarkable lithium-storage stability (>1,000 cycles) and a large reversible specific capacity (>600 mAh g-1). This simple and scalable technique for acquiring high-quality FLG offers considerable potential for future realistic applications.

  15. mmpdb: An Open-Source Matched Molecular Pair Platform for Large Multiproperty Data Sets.

    Science.gov (United States)

    Dalke, Andrew; Hert, Jérôme; Kramer, Christian

    2018-05-29

    Matched molecular pair analysis (MMPA) enables the automated and systematic compilation of medicinal chemistry rules from compound/property data sets. Here we present mmpdb, an open-source matched molecular pair (MMP) platform to create, compile, store, retrieve, and use MMP rules. mmpdb is suitable for the large data sets typically found in pharmaceutical and agrochemical companies and provides new algorithms for fragment canonicalization and stereochemistry handling. The platform is written in Python and based on the RDKit toolkit. It is freely available from https://github.com/rdkit/mmpdb .

  16. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix pertaining to water quality variables (p = 22) of a small data set comprising 55 samples (stations) from where water samples were collected. Because in ecology and environmental sciences the data sets are invariably small, owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparing the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis with Ward’s method, which does not require any stringent distributional assumptions.
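    The bootstrap procedure described above can be sketched with NumPy; the data matrix here is a random stand-in for the 55 × 22 water-quality matrix, and the function name is our own:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_pca_eigenvalues(data, n_boot=100, n_eigen=10):
        """Draw bootstrap samples of rows and collect the leading PCA
        eigenvalues (of the correlation matrix) for each resample."""
        n, p = data.shape
        out = np.empty((n_boot, min(n_eigen, p)))
        for b in range(n_boot):
            sample = data[rng.integers(0, n, size=n)]     # resample stations
            corr = np.corrcoef(sample, rowvar=False)      # p x p correlation
            eig = np.sort(np.linalg.eigvalsh(corr))[::-1] # descending order
            out[b] = eig[:out.shape[1]]
        return out

    # Hypothetical stand-in for the 55x22 water-quality matrix:
    data = rng.normal(size=(55, 22))
    eigs = bootstrap_pca_eigenvalues(data, n_boot=100, n_eigen=10)
    print(eigs.mean(axis=0))  # mean of each leading eigenvalue across resamples
    ```

    The spread of each column of `eigs` across the 100 resamples is what quantifies the sample-size sensitivity of the eigenstructure.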

  17. Ultra-large size austenitic stainless steel forgings for fast breeder reactor 'Monju'

    International Nuclear Information System (INIS)

    Tsukada, Hisashi; Suzuki, Komei; Sato, Ikuo; Miura, Ritsu.

    1988-01-01

    The large SUS 304 austenitic stainless steel forgings for the reactor vessel of the prototype FBR 'Monju' of 280 MWe output were successfully manufactured. The reactor vessel contains the reactor core and sodium coolant at 530 deg C; its inside diameter is about 7 m and its height about 18 m. It is composed of 12 large forgings, that is, very thick flanges and shells made by ring forging and an end plate made by disk forging and hot forming, using a special press machine. The manufacture of these large forgings utilized the results of basic tests on the material properties in a high temperature environment and on the effect that manufacturing factors exert on the material properties, as well as the results of the development of manufacturing techniques for super-large forgings. The problems were the manufacturing techniques for large ingots of the 250 t class of high purity, the hot working techniques for stainless steel of fine grain size, the forging techniques for super-large rings and disks, and the high-precision machining techniques for particularly large diameter, thin wall rings. The manufacture of these large stainless steel forgings is reported. (Kako, I.)

  18. CUDA based Level Set Method for 3D Reconstruction of Fishes from Large Acoustic Data

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Anton, François

    2009-01-01

    Acoustic images present views of underwater dynamics, even at great depths. With multi-beam echo sounders (SONARs), it is possible to capture series of 2D high resolution acoustic images. 3D reconstruction of the water column and subsequent estimation of fish abundance and fish species identification ... of suppressing threshold and show its convergence as the evolution proceeds. We also present a GPU based streaming computation of the method using NVIDIA's CUDA framework to handle large volume data sets. Our implementation is optimised for memory usage to handle large volumes.
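    The level set machinery underlying the method can be illustrated with a minimal first-order upwind evolution in NumPy (this is a generic sketch of level-set front propagation, not the authors' CUDA implementation):

    ```python
    import numpy as np

    def evolve(phi, speed, dt, steps, h=1.0):
        """First-order upwind (Godunov) update for phi_t + speed*|grad phi| = 0;
        a constant speed > 0 moves the zero level set outward."""
        for _ in range(steps):
            dxm = (phi - np.roll(phi, 1, axis=0)) / h   # backward differences
            dxp = (np.roll(phi, -1, axis=0) - phi) / h  # forward differences
            dym = (phi - np.roll(phi, 1, axis=1)) / h
            dyp = (np.roll(phi, -1, axis=1) - phi) / h
            grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                           + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
            phi = phi - dt * speed * grad
        return phi

    # Signed distance to a circle of radius 10; front should expand at unit speed.
    x, y = np.meshgrid(np.arange(-32, 32), np.arange(-32, 32), indexing="ij")
    phi0 = np.sqrt(x**2 + y**2) - 10.0
    phi = evolve(phi0, speed=1.0, dt=0.5, steps=10)   # 5 time units of growth
    # interior (phi < 0) area grows roughly from pi*10^2 to pi*15^2
    print((phi0 < 0).sum(), (phi < 0).sum())
    ```

    In the segmentation setting the constant speed is replaced by an image-driven speed function, and a GPU implementation streams tiles of the volume through this same stencil update.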

  19. Security Optimization for Distributed Applications Oriented on Very Large Data Sets

    Directory of Open Access Journals (Sweden)

    Mihai DOINEA

    2010-01-01

    Full Text Available The paper presents the main characteristics of applications which work with very large data sets and the issues related to their security. The first section addresses the optimization process and how it is approached when dealing with security. The second section describes the concept of very large data set management, while in the third section the related risks are identified and classified. Finally, a security optimization schema is presented together with a cost-efficiency analysis of its feasibility. Conclusions are drawn and future approaches are identified.

  20. Large Scale Behavior and Droplet Size Distributions in Crude Oil Jets and Plumes

    Science.gov (United States)

    Katz, Joseph; Murphy, David; Morra, David

    2013-11-01

    The 2010 Deepwater Horizon blowout introduced several million barrels of crude oil into the Gulf of Mexico. Injected initially as a turbulent jet containing crude oil and gas, the spill caused formation of a subsurface plume stretching for tens of miles. The behavior of such buoyant multiphase plumes depends on several factors, such as the oil droplet and bubble size distributions, current speed, and ambient stratification. While large droplets quickly rise to the surface, fine ones together with entrained seawater form intrusion layers. Many elements of the physics of droplet formation by an immiscible turbulent jet and the resulting size distribution have not been elucidated, but are known to be significantly influenced by the addition of dispersants, which vary the Weber number by orders of magnitude. We present experimental high speed visualizations of turbulent jets of sweet petroleum crude oil (MC 252) premixed with Corexit 9500A dispersant at various dispersant to oil ratios. Observations were conducted in a 0.9 m × 0.9 m × 2.5 m towing tank, where the large-scale behavior of the jet, both stationary and towed at various speeds to simulate cross-flow, has been recorded at high speed. Preliminary data on oil droplet size and spatial distributions were also measured using a videoscope and pulsed light sheet. Sponsored by Gulf of Mexico Research Initiative (GoMRI).
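    The Weber number the abstract refers to is We = ρu²d/σ; dispersants lower the interfacial tension σ and hence raise We by orders of magnitude. A tiny sketch with illustrative property values (not measurements from this study):

    ```python
    def weber(density, velocity, diameter, sigma):
        """Weber number: ratio of inertial to interfacial-tension forces."""
        return density * velocity**2 * diameter / sigma

    rho = 1000.0       # seawater density, kg/m^3 (illustrative)
    u, d = 2.0, 0.005  # jet exit velocity (m/s) and orifice diameter (m)
    print(weber(rho, u, d, sigma=0.02))   # oil-water without dispersant
    print(weber(rho, u, d, sigma=2e-5))   # dispersant cuts sigma ~1000x, raising We ~1000x
    ```

    Since breakup regimes scale with We, a thousandfold drop in σ shifts the jet into a far finer droplet-formation regime at the same flow conditions.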

  1. Secondary data analysis of large data sets in urology: successes and errors to avoid.

    Science.gov (United States)

    Schlomer, Bruce J; Copp, Hillary L

    2014-03-01

    Secondary data analysis is the use of data collected for research by someone other than the investigator. In the last several years there has been a dramatic increase in the number of these studies being published in urological journals and presented at urological meetings, especially those involving secondary data analysis of large administrative data sets. Along with this expansion, skepticism toward secondary data analysis studies has increased among many urologists. In this narrative review we discuss the types of large data sets that are commonly used for secondary data analysis in urology, along with the advantages and disadvantages of secondary data analysis. A literature search was performed to identify urological secondary data analysis studies published since 2008 using commonly used large data sets, and examples of high quality studies published in high impact journals are given. We outline an approach for performing a successful hypothesis or goal driven secondary data analysis study and highlight common errors to avoid. More than 350 secondary data analysis studies using large data sets have been published on urological topics since 2008, with likely many more studies presented at meetings but never published. Studies that were not hypothesis or goal driven have likely constituted some of these and have probably contributed to the increased skepticism toward this type of research. However, many high quality, hypothesis driven studies addressing research questions that would have been difficult to conduct with other methods have been performed in the last few years. Secondary data analysis is a powerful tool that can address questions which could not be adequately studied by another method. Knowledge of the limitations of secondary data analysis and of the data sets used is critical for a successful study. There are also important errors to avoid when planning and performing a secondary data analysis study.
Investigators and the urological community need to strive to use

  2. From nanoparticles to large aerosols: Ultrafast measurement methods for size and concentration

    International Nuclear Information System (INIS)

    Keck, Lothar; Spielvogel, Juergen; Grimm, Hans

    2009-01-01

    A major challenge in aerosol technology is the fast measurement of number size distributions with good accuracy and size resolution. The dedicated instruments are frequently based on particle charging and electric detection. Established fast systems, however, still feature a number of shortcomings. We have developed a new instrument that consists of a high flow Differential Mobility Analyser (high flow DMA) and a high sensitivity Faraday Cup Electrometer (FCE). The system enables variable flow rates of up to 150 lpm, and the scan time for a size distribution can be shortened considerably due to the short residence time of the particles in the DMA. Three different electrodes can be employed in order to cover a large size range. First test results demonstrate that the scan time can be reduced to less than 1 s for small particles, and that the results from the fast scans show no significant difference from the results of the established slow method. The fields of application for the new instrument comprise the precise monitoring of fast processes with nanoparticles, including monitoring of engine exhaust in automotive research.

  3. From nanoparticles to large aerosols: Ultrafast measurement methods for size and concentration

    Science.gov (United States)

    Keck, Lothar; Spielvogel, Jürgen; Grimm, Hans

    2009-05-01

    A major challenge in aerosol technology is the fast measurement of number size distributions with good accuracy and size resolution. The dedicated instruments are frequently based on particle charging and electric detection. Established fast systems, however, still feature a number of shortcomings. We have developed a new instrument that consists of a high flow Differential Mobility Analyser (high flow DMA) and a high sensitivity Faraday Cup Electrometer (FCE). The system enables variable flow rates of up to 150 lpm, and the scan time for a size distribution can be shortened considerably due to the short residence time of the particles in the DMA. Three different electrodes can be employed in order to cover a large size range. First test results demonstrate that the scan time can be reduced to less than 1 s for small particles, and that the results from the fast scans show no significant difference from the results of the established slow method. The fields of application for the new instrument comprise the precise monitoring of fast processes with nanoparticles, including monitoring of engine exhaust in automotive research.

  4. Analyzing large data sets from XGC1 magnetic fusion simulations using apache spark

    Energy Technology Data Exchange (ETDEWEB)

    Churchill, R. Michael [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)

    2016-11-21

    Apache Spark is explored as a tool for analyzing large data sets from the magnetic fusion simulation code XGC1. Implementation details of Apache Spark on the NERSC Edison supercomputer are discussed, including binary file reading and parameter setup. Here, an unsupervised machine learning algorithm, k-means clustering, is applied to XGC1 particle distribution function data, showing that highly turbulent spatial regions do not have common coherent structures, but rather broad, ring-like structures in velocity space.

  5. Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures

    Science.gov (United States)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.

    2016-12-01

    The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the-art multi- and manycore processors, including the second-generation Intel Xeon Phi (“Knights Landing”) processor based on the Intel Many Integrated Core (MIC) architecture, which includes several new features, such as an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.
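    The kernel that such accelerated implementations optimize is plain Lloyd's-algorithm k-means; a minimal NumPy sketch (not the authors' vectorized manycore code) is:

    ```python
    import numpy as np

    def kmeans(points, k, iters=50, seed=0):
        """Plain Lloyd's algorithm: alternate assignment and centroid update.
        The pairwise distance computation is the hot loop that manycore/SIMD
        implementations vectorize."""
        rng = np.random.default_rng(seed)
        centroids = points[rng.choice(len(points), k, replace=False)]
        for _ in range(iters):
            # squared distances, shape (n_points, k)
            d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
            labels = d2.argmin(axis=1)
            new = np.array([points[labels == j].mean(axis=0)
                            if (labels == j).any() else centroids[j]
                            for j in range(k)])
            if np.allclose(new, centroids):
                break
            centroids = new
        return centroids, labels

    rng = np.random.default_rng(1)
    pts = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(5, 0.3, (100, 2))])
    centroids, labels = kmeans(pts, k=2)
    print(np.sort(centroids[:, 0]).round(1))  # expect one centroid near 0, one near 5
    ```

    The (n_points × k) distance matrix is embarrassingly parallel over points, which is why the assignment step maps so well to wide SIMD lanes and high-bandwidth memory.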

  6. Impact basins on Ganymede and Callisto and implications for the large-projectile size distribution

    Science.gov (United States)

    Wagner, R.; Neukum, G.; Wolf, U.; Greeley, R.; Klemaszewski, J. E.

    2003-04-01

    It has been conjectured that the projectile family which impacted the Galilean satellites of Jupiter was depleted in large projectiles, a conclusion drawn from a "dearth" of large craters (> 60 km) (e.g. [1]). Geologic mapping, aided by spatial filtering of new Galileo as well as older Voyager data, shows, however, that large projectiles have left an imprint of palimpsests and multi-ring structures on both Ganymede and Callisto (e.g. [2]). Most of these impact structures are heavily degraded and hence difficult to recognize. In this paper, we (1) present maps showing the outlines of these basins, and (2) derive updated crater size-frequency diagrams of the two satellites. The crater diameter was reconstructed from a palimpsest diameter using a formula derived by [3]. The calculation of the crater diameter Dc from the outer boundary Do of a multi-ring structure is much less constrained and on the order of Dc = k · Do, with k ≈ 0.25-0.3 [4]. Despite the uncertainties in locating the "true" crater rims, the resulting shape of the distribution in the range from kilometer-sized craters to sizes of ≈ 500 km is lunar-like and strongly suggests a collisionally evolved projectile family, very likely of asteroidal origin. An alternative explanation for this shape could be that comets are collisionally evolved bodies in a similar way as asteroids, which as of yet is still uncertain and under discussion. Also, the crater size distributions on Ganymede and Callisto are shifted towards smaller crater sizes compared to the Moon, caused by the much lower impact velocity of impactors, which were preferentially in planetocentric orbits [5]. References: [1] Strom et al., JGR 86, 8659-8674, 1981. [2] J. E. Klemaszewski et al., Ann. Geophys. 16, suppl. III, 1998. [3] Iaquinta-Ridolfi & Schenk, LPSC XXVI (abstr.), 651-652, 1995. [4] Schenk & Moore, LPSC XXX, abstr. No. 1786 [CD-ROM], 1999. [5] Horedt & Neukum, JGR 89, 10,405-10,410, 1984.

  7. Parallel analysis tools and new visualization techniques for ultra-large climate data set

    Energy Technology Data Exchange (ETDEWEB)

    Middleton, Don [National Center for Atmospheric Research, Boulder, CO (United States); Haley, Mary [National Center for Atmospheric Research, Boulder, CO (United States)

    2014-12-10

    ParVis was a project funded under LAB 10-05: “Earth System Modeling: Advanced Scientific Visualization of Ultra-Large Climate Data Sets”. Argonne was the lead lab with partners at PNNL, SNL, NCAR and UC-Davis. This report covers progress from January 1st, 2013 through Dec 1st, 2014. Two previous reports covered the period from Summer, 2010, through September 2011 and October 2011 through December 2012, respectively. While the project was originally planned to end on April 30, 2013, personnel and priority changes allowed many of the institutions to continue work through FY14 using existing funds. A primary focus of ParVis was introducing parallelism to climate model analysis to greatly reduce the time-to-visualization for ultra-large climate data sets. Work in the first two years was conducted on two tracks with different time horizons: one track to provide immediate help to climate scientists already struggling to apply their analysis to existing large data sets and another focused on building a new data-parallel library and tool for climate analysis and visualization that will give the field a platform for performing analysis and visualization on ultra-large datasets for the foreseeable future. In the final 2 years of the project, we focused mostly on the new data-parallel library and associated tools for climate analysis and visualization.

  8. Verification measurements of the IRMM-1027 and the IAEA large-sized dried (LSD) spikes

    International Nuclear Information System (INIS)

    Jakopic, R.; Aregbe, Y.; Richter, S.

    2017-01-01

    In the frame of accountancy measurements of fissile materials, reliable determinations of the plutonium and uranium content in spent nuclear fuel are required to comply with international safeguards agreements. Large-sized dried (LSD) spikes of enriched 235U and 239Pu for isotope dilution mass spectrometry (IDMS) analysis are routinely applied in reprocessing plants for this purpose. A correct characterisation of these elements is a prerequisite for achieving high accuracy in IDMS analyses. This paper will present the results of external verification measurements of such LSD spikes performed by the European Commission and the International Atomic Energy Agency. (author)

  9. RESOURCE SAVING TECHNOLOGICAL PROCESS OF LARGE-SIZE DIE THERMAL TREATMENT

    Directory of Open Access Journals (Sweden)

    L. A. Glazkov

    2009-01-01

    Full Text Available The given paper presents the development of a technological process for hardening large-size parts made of die steel. The proposed process applies a water-air mixture instead of the conventional hardening medium, industrial oil. While developing this new technological process it has been necessary to solve the following problems: reduction of thermal treatment duration, reduction of power resource expense (natural gas and mineral oil), elimination of fire danger, and an increase of the ecological efficiency of the process.

  10. Estimation of the sizes of hot nuclear systems from particle-particle large angle kinematical correlations

    International Nuclear Information System (INIS)

    La Ville, J.L.; Bizard, G.; Durand, D.; Jin, G.M.; Rosato, E.

    1990-06-01

    Light fragment emission, when triggered by large transverse momentum protons, shows specific kinematical correlations due to recoil effects of the excited emitting source. Such effects have been observed in the azimuthal angular distributions of He particles produced in collisions induced by 94 MeV/u 16O ions on Al, Ni and Au targets. A model calculation assuming a two-stage mechanism (formation and sequential decay of a hot source) gives a good description of the whole data set. From this successful comparison, it is possible to estimate the size of the emitting system.

  11. Introduction to Large-sized Test Facility for validating Containment Integrity under Severe Accidents

    International Nuclear Information System (INIS)

    Na, Young Su; Hong, Seongwan; Hong, Seongho; Min, Beongtae

    2014-01-01

    An overall assessment of containment integrity can be conducted properly by examining the hydrogen behavior in the containment building. Under severe accidents, a large amount of hydrogen gas can be generated by metal oxidation and corium-concrete interaction. Hydrogen behavior in the containment building strongly depends on complicated thermal hydraulic conditions with mixed gases and steam. The performance of a PAR can be directly affected by the thermal hydraulic conditions, steam contents, gas mixture behavior and aerosol characteristics, as well as the operation of other engineered safety systems such as a spray. The models in computer codes for severe accident assessment can be validated based on the experimental results from a large-sized test facility. The Korea Atomic Energy Research Institute (KAERI) is now preparing a large-sized test facility to examine in detail the safety issues related to hydrogen, including the performance of safety devices such as a PAR in various severe accident situations. This paper introduces the KAERI test facility for validating the containment integrity under severe accidents. To validate the containment integrity, a large-sized test facility is necessary for simulating the complicated phenomena induced by the large amounts of steam and gases, especially hydrogen, released into the containment building under severe accidents. A pressure vessel 9.5 m in height and 3.4 m in diameter was designed at the KAERI test facility for validating the containment integrity; the design was based on the THAI test facility, whose experimental safety and reliable measurement systems have been certified over a long period. This large-sized pressure vessel, operated with steam and iodine as a corrosive agent, was made of stainless steel 316L because of its corrosion resistance over a long operating time, and the vessel was installed at KAERI in March 2014. 
In the future, the control systems for temperature and pressure in a vessel will be constructed, and the measurement system

  12. Investigation of Low-Cost Surface Processing Techniques for Large-Size Multicrystalline Silicon Solar Cells

    OpenAIRE

    Cheng, Yuang-Tung; Ho, Jyh-Jier; Lee, William J.; Tsai, Song-Yeu; Lu, Yung-An; Liou, Jia-Jhe; Chang, Shun-Hsyung; Wang, Kang L.

    2010-01-01

    The subject of the present work is to develop a simple and effective method of enhancing conversion efficiency in large-size solar cells using multicrystalline silicon (mc-Si) wafers. In this work, industrial-type mc-Si solar cells with an area of 125×125 mm2 were acid etched to simultaneously produce POCl3 emitters and silicon nitride deposition by plasma-enhanced chemical vapor deposition (PECVD). The study of surface morphology and reflectivity of different mc-Si etched surfaces has also been d...

  13. Development and introduction of stamping technique for large-size laterals of NPP pipelines

    International Nuclear Information System (INIS)

    Romashko, N.I.; Moshnin, E.N.; Timokhin, V.S.; Bryukhanov, Yu.V.; Lebedev, V.A.

    1984-01-01

    The results of the development and introduction of a stamping technique for large-size laterals of NPP high-pressure pipelines are presented. The main experimental data characterizing the technological possibilities of the process are given. The technological process and the design of the stamp assure production of laterals from ovalized bars in one heating of the bar and one stroke of the press crosshead. Introduction of the new technology decreased the labour input of lateral production, while the reliability and serviceability of the pipelines increased. Introduction of this technology gives a considerable benefit.

  14. A simple, compact, and rigid piezoelectric step motor with large step size

    Science.gov (United States)

    Wang, Qi; Lu, Qingyou

    2009-08-01

    We present a novel piezoelectric stepper motor featuring high compactness, rigidity, simplicity, and operability in any direction. Although tested at room temperature, it is believed to work at low temperatures, owing to its loose operation conditions and large step size. The motor is implemented with a piezoelectric scanner tube that is axially cut into almost two halves and clamp-holds a hollow shaft inside at both ends via the spring parts of the shaft. Two driving voltages that singly deform the two halves of the piezotube in one direction and recover simultaneously will move the shaft in the opposite direction, and vice versa.

  15. Temperature Uniformity of Wafer on a Large-Sized Susceptor for a Nitride Vertical MOCVD Reactor

    International Nuclear Information System (INIS)

    Li Zhi-Ming; Jiang Hai-Ying; Han Yan-Bin; Li Jin-Ping; Yin Jian-Qin; Zhang Jin-Cheng

    2012-01-01

    The effect of coil location on wafer temperature is analyzed in a vertical MOCVD reactor by induction heating. It is observed that the temperature distribution in the wafer with the coils under the graphite susceptor is more uniform than that with the coils around the outside wall of the reactor. For the case of coils under the susceptor, we find that the thickness of the susceptor, the distance from the coils to the susceptor bottom and the coil turns significantly affect the temperature uniformity of the wafer. An optimization process is executed for a 3-inch susceptor with this kind of structure, resulting in a large improvement in the temperature uniformity. A further optimization demonstrates that the new susceptor structure is also suitable for either multiple wafers or large-sized wafers approaching 6 and 8 inches.

  16. Breeding and Genetics Symposium: really big data: processing and analysis of very large data sets.

    Science.gov (United States)

    Cole, J B; Newman, S; Foertter, F; Aguilar, I; Coffey, M

    2012-03-01

    Modern animal breeding data sets are large and getting larger, due in part to recent availability of high-density SNP arrays and cheap sequencing technology. High-performance computing methods for efficient data warehousing and analysis are under development. Financial and security considerations are important when using shared clusters. Sound software engineering practices are needed, and it is better to use existing solutions when possible. Storage requirements for genotypes are modest, although full-sequence data will require greater storage capacity. Storage requirements for intermediate and results files for genetic evaluations are much greater, particularly when multiple runs must be stored for research and validation studies. The greatest gains in accuracy from genomic selection have been realized for traits of low heritability, and there is increasing interest in new health and management traits. The collection of sufficient phenotypes to produce accurate evaluations may take many years, and high-reliability proofs for older bulls are needed to estimate marker effects. Data mining algorithms applied to large data sets may help identify unexpected relationships in the data, and improved visualization tools will provide insights. Genomic selection using large data requires a lot of computing power, particularly when large fractions of the population are genotyped. Theoretical improvements have made possible the inversion of large numerator relationship matrices, permitted the solving of large systems of equations, and produced fast algorithms for variance component estimation. Recent work shows that single-step approaches combining BLUP with a genomic relationship (G) matrix have similar computational requirements to traditional BLUP, and the limiting factor is the construction and inversion of G for many genotypes. A naïve algorithm for creating G for 14,000 individuals required almost 24 h to run, but custom libraries and parallel computing reduced that to
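The abstract above centers on building and inverting the genomic relationship matrix G. As an illustration only (the paper does not specify which construction it benchmarks), here is a minimal NumPy sketch of one widely used construction, VanRaden's first method, assuming a genotype matrix of 0/1/2 allele counts; the function name and toy data are our own.

```python
import numpy as np

def vanraden_g(M):
    """Genomic relationship matrix, VanRaden method 1 (a common choice;
    assumed here, not necessarily the one benchmarked in the abstract).

    M: (n_individuals, n_markers) array of 0/1/2 allele counts.
    """
    p = M.mean(axis=0) / 2.0             # observed allele frequencies per marker
    Z = M - 2.0 * p                      # center genotypes by twice the frequency
    denom = 2.0 * np.sum(p * (1.0 - p))  # scales G to an additive-relationship footing
    return Z @ Z.T / denom

# Toy example: 4 individuals, 5 markers.
M = np.array([[0, 1, 2, 1, 0],
              [1, 1, 2, 0, 0],
              [2, 0, 0, 1, 1],
              [1, 2, 1, 1, 2]], dtype=float)
G = vanraden_g(M)
```

The naïve cost of the `Z @ Z.T` product is O(n² m), which is exactly where the custom libraries and parallelism mentioned above pay off as n grows to tens of thousands of genotyped animals.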

  17. Experimental investigation on the influence of instrument settings on pixel size and nonlinearity in SEM image formation

    DEFF Research Database (Denmark)

    Carli, Lorenzo; Genta, Gianfranco; Cantatore, Angela

    2010-01-01

    The work deals with an experimental investigation on the influence of three Scanning Electron Microscope (SEM) instrument settings, accelerating voltage, spot size and magnification, on the image formation process. Pixel size and nonlinearity were chosen as output parameters related to image...... quality and resolution. A silicon grating calibrated artifact was employed to investigate qualitatively and quantitatively, through a designed experiment approach, the parameters relevance. SEM magnification was found to account by far for the largest contribution on both parameters under consideration...

  18. Application of electron beam welding to large size pressure vessels made of thick low alloy steel

    International Nuclear Information System (INIS)

    Kuri, S.; Yamamoto, M.; Aoki, S.; Kimura, M.; Nayama, M.; Takano, G.

    1993-01-01

    The authors describe the results of studies for application of the electron beam welding to the large size pressure vessels made of thick low alloy steel (ASME A533 Gr.B cl.2 and A533 Gr.A cl.1). Two major problems for applying the EBW, the poor toughness of weld metal and the equipment to weld huge pressure vessels are focused on. For the first problem, the effects of Ni content of weld metal, welding conditions and post weld heat treatment are investigated. For the second problem, an applicability of the local vacuum EBW to a large size pressure vessel made of thick plate is qualified by the construction of a 120 mm thick, 2350 mm outside diameter cylindrical model. The model was electron beam welded using local vacuum chamber and the performance of the weld joint is investigated. Based on these results, the electron beam welding has been applied to the production of a steam generator for a PWR. (author). 3 refs., 10 figs., 4 tabs

  19. A new type of intelligent wireless sensing network for health monitoring of large-size structures

    Science.gov (United States)

    Lei, Ying; Liu, Ch.; Wu, D. T.; Tang, Y. L.; Wang, J. X.; Wu, L. J.; Jiang, X. D.

    2009-07-01

    In recent years, some innovative wireless sensing systems have been proposed. However, more exploration and research on wireless sensing systems are required before wireless systems can substitute for the traditional wire-based systems. In this paper, a new type of intelligent wireless sensing network is proposed for the health monitoring of large-size structures. Hardware design of the new wireless sensing units is first studied. The wireless sensing unit mainly consists of functional modules of: sensing interface, signal conditioning, signal digitization, computational core, wireless communication and battery management. Then, software architecture of the unit is introduced. The sensing network has a two-level cluster-tree architecture with Zigbee communication protocol. Important issues such as power saving and fault tolerance are considered in the designs of the new wireless sensing units and sensing network. Each cluster head in the network is characterized by its computational capabilities that can be used to implement the computational methodologies of structural health monitoring; making the wireless sensing units and sensing network have "intelligent" characteristics. Primary tests on the measurement data collected by the wireless system are performed. The distributed computational capacity of the intelligent sensing network is also demonstrated. It is shown that the new type of intelligent wireless sensing network provides an efficient tool for structural health monitoring of large-size structures.

  20. Detection of tiny amounts of fissile materials in large-sized containers with radioactive waste

    Directory of Open Access Journals (Sweden)

    Batyaev V.F.

    2018-01-01

    Full Text Available The paper is devoted to non-destructive control of tiny amounts of fissile materials in large-sized containers filled with radioactive waste (RAW). The aim of this work is to model an active neutron interrogation facility for detection of fissile materials inside NZK type containers with RAW and determine the minimal detectable mass of U-235 as a function of various parameters: matrix type, nonuniformity of container filling, and neutron generator parameters (flux, pulse frequency, pulse duration, measurement time). As a result the dependence of minimal detectable mass on the location of fissile materials inside the container is shown. Nonuniformity of the thermal neutron flux inside a container is the main reason for the spatial heterogeneity of minimal detectable mass inside a large-sized container. Our experiments with tiny amounts of uranium-235 (<1 g) confirm the detection of fissile materials in NZK containers by using the active neutron interrogation technique.
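The abstract relates minimal detectable mass to background and measurement parameters. A standard figure of merit for counting measurements of this kind is Currie's detection limit; the sketch below applies it with an assumed calibration factor (net signal counts per gram of U-235), which is illustrative and not taken from the paper.

```python
import math

def currie_detection_limit(background_counts):
    """Currie detection limit L_D (net counts above background) for a
    paired background measurement: L_D = 2.71 + 4.65 * sqrt(B)."""
    return 2.71 + 4.65 * math.sqrt(background_counts)

def minimal_detectable_mass(background_counts, counts_per_gram):
    """Convert the count detection limit into a mass via an assumed
    calibration factor (signal counts per gram of U-235 for the chosen
    measurement time and geometry)."""
    return currie_detection_limit(background_counts) / counts_per_gram

# Illustrative numbers: 10,000 background counts, 500 counts/g of U-235.
mdm = minimal_detectable_mass(10_000, 500.0)  # grams
```

With these assumed numbers the limit comes out just under 1 g, which is the mass scale the experiments above probe; in the paper the calibration factor itself varies with matrix type and source position, which is what makes the detectable mass spatially heterogeneous.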

  1. An automated system for the preparation of Large Size Dried (LSD) Spikes

    Energy Technology Data Exchange (ETDEWEB)

    Verbruggen, A.; Bauwens, J.; Jakobsson, U.; Eykens, R.; Wellum, R.; Aregbe, Y. [European Commission - Joint Research Centre, Institute for Reference Materials and Measurements (IRMM), Retieseweg 211, B2440 Geel (Belgium); Van De Steene, N. [Nucomat, Mercatorstraat 206, B9100 Sint Niklaas (Belgium)

    2008-07-01

    Large size dried (LSD) spikes have been produced to fulfill the existing requirement for reliable and traceable isotopic reference materials for nuclear safeguards. A system to produce certified nuclear isotopic reference material as a U/Pu mixture in the form of large size dried spikes, comparable to those produced using traditional methods has been installed in collaboration with Nucomat, a company with a recognized reputation in design and development of integrated automated systems. The major components of the system are a robot, two balances, a dispenser and a drying unit fitted into a glove box. The robot is software driven and designed to control all movements inside the glove-box, to identify unambiguously the penicillin vials with a bar-code reader, to dispense the LSD batch solution into the vials and to weigh the amount dispensed. The system functionality has been evaluated and the performance validated by comparing the results from a series of samples dispensed and weighed by the automated system with the results by manual substitution weighing. After applying the proper correction factors to the data from the automated system balance no significant difference was observed between the two. However, an additional component of uncertainty of 3×10⁻⁴ is introduced in the uncertainty budget for the certified weights provided by the automatic system. (authors)

  2. An automated system for the preparation of Large Size Dried (LSD) Spikes

    International Nuclear Information System (INIS)

    Verbruggen, A.; Bauwens, J.; Jakobsson, U.; Eykens, R.; Wellum, R.; Aregbe, Y.; Van De Steene, N.

    2008-01-01

    Large size dried (LSD) spikes have been produced to fulfill the existing requirement for reliable and traceable isotopic reference materials for nuclear safeguards. A system to produce certified nuclear isotopic reference material as a U/Pu mixture in the form of large size dried spikes, comparable to those produced using traditional methods has been installed in collaboration with Nucomat, a company with a recognized reputation in design and development of integrated automated systems. The major components of the system are a robot, two balances, a dispenser and a drying unit fitted into a glove box. The robot is software driven and designed to control all movements inside the glove-box, to identify unambiguously the penicillin vials with a bar-code reader, to dispense the LSD batch solution into the vials and to weigh the amount dispensed. The system functionality has been evaluated and the performance validated by comparing the results from a series of samples dispensed and weighed by the automated system with the results by manual substitution weighing. After applying the proper correction factors to the data from the automated system balance no significant difference was observed between the two. However, an additional component of uncertainty of 3×10⁻⁴ is introduced in the uncertainty budget for the certified weights provided by the automatic system. (authors)
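The extra 3×10⁻⁴ component enters the certified-weight uncertainty budget; independent relative standard uncertainties are conventionally combined in quadrature (root-sum-of-squares, as in the GUM). A minimal sketch, in which the 2×10⁻⁴ weighing component is an assumed illustrative value and only the 3×10⁻⁴ term comes from the abstract:

```python
import math

def combined_rel_uncertainty(components):
    """Combine independent relative standard uncertainties in
    quadrature (root-sum-of-squares), per the GUM convention."""
    return math.sqrt(sum(u * u for u in components))

# Illustrative budget: an assumed 2e-4 weighing component plus the
# 3e-4 component attributed to the automated dispensing system.
u_total = combined_rel_uncertainty([2e-4, 3e-4])
```

Because the combination is quadratic, the added 3×10⁻⁴ term dominates any smaller components rather than simply adding to them linearly.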

  3. Q0000-398 is a high-redshift quasar with a large angular size

    International Nuclear Information System (INIS)

    Gearhart, M.R.; Pacht, E.

    1977-01-01

    A study is described, using the three-element interferometer at the National Radio Astronomy Observatory, West Virginia, to investigate whether any quasars exist that might be radio sources. It was found that Q0000-398 appeared to be a quasar of high redshift and large angular size. The interferometer was operated with a 300-1200-1500 m baseline configuration at 2695 MHz. The radio map for Q0000-398 is shown, and has two weak components separated by 134 ± 40 arc s. If these components are associated with the optical object, this quasar has the largest known angular size for its redshift value. The results reported for Q0000-398 and other quasars having considerable angular extent demonstrate the importance of considering radio selection effects in the angular diameter-redshift relationship, and since any radio selection effects are removed when quasars are selected optically, more extensive mapping programs should be undertaken, looking particularly for large-scale structure around optically selected high-z quasars. (U.K.)

  4. A hybrid adaptive large neighborhood search heuristic for lot-sizing with setup times

    DEFF Research Database (Denmark)

    Muller, Laurent Flindt; Spoorendonk, Simon; Pisinger, David

    2012-01-01

    This paper presents a hybrid of a general heuristic framework and a general purpose mixed-integer programming (MIP) solver. The framework is based on local search and an adaptive procedure which chooses between a set of large neighborhoods to be searched. A mixed integer programming solver and its......, and the upper bounds found by the commercial MIP solver ILOG CPLEX using state-of-the-art MIP formulations. Furthermore, we improve the best known solutions on 60 out of 100 and improve the lower bound on all 100 instances from the literature...
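The framework above adaptively "chooses between a set of large neighborhoods to be searched." A common mechanism for that choice, sketched below, is roulette-wheel selection with score-based weight smoothing; this is a generic adaptive-large-neighborhood-search ingredient, not necessarily the authors' exact scheme, and all names are our own.

```python
import random

class AdaptiveSelector:
    """Roulette-wheel neighborhood selection with exponentially
    smoothed weights, as commonly used in adaptive large
    neighborhood search (ALNS)."""

    def __init__(self, n_neighborhoods, reaction=0.2):
        self.weights = [1.0] * n_neighborhoods
        self.reaction = reaction  # how fast weights track recent success

    def pick(self, rng=random):
        """Sample a neighborhood index with probability proportional to its weight."""
        total = sum(self.weights)
        r = rng.uniform(0.0, total)
        acc = 0.0
        for i, w in enumerate(self.weights):
            acc += w
            if r <= acc:
                return i
        return len(self.weights) - 1

    def reward(self, i, score):
        """Blend the old weight with a score, e.g. 3 for a new best
        solution, 2 for an improvement, 1 for an accepted move."""
        self.weights[i] = ((1 - self.reaction) * self.weights[i]
                           + self.reaction * score)

sel = AdaptiveSelector(3)
sel.reward(0, 3.0)  # neighborhood 0 just found a new best solution
```

In the hybrid described above, each selected neighborhood is then searched by handing the resulting restricted MIP to the solver, and the weights steer future iterations toward the neighborhoods that have been paying off.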

  5. Does Demand Fall When Customers Perceive That Prices Are Unfair? The Case of Premium Pricing for Large Sizes

    OpenAIRE

    Eric T. Anderson; Duncan I. Simester

    2008-01-01

    We analyze a large-scale field test conducted with a mail-order catalog firm to investigate how customers react to premium prices for larger sizes of women's apparel. We find that customers who demand large sizes react unfavorably to paying a higher price than customers for small sizes. Further investigation suggests that these consumers perceive that the price premium is unfair. Overall, premium pricing led to a 6% to 8% decrease in gross profits.

  6. Ulysses: accurate detection of low-frequency structural variations in large insert-size sequencing libraries.

    Science.gov (United States)

    Gillet-Markowska, Alexandre; Richard, Hugues; Fischer, Gilles; Lafontaine, Ingrid

    2015-03-15

    The detection of structural variations (SVs) in short-range Paired-End (PE) libraries remains challenging because SV breakpoints can involve large dispersed repeated sequences, or carry inherent complexity, hardly resolvable with classical PE sequencing data. In contrast, large insert-size sequencing libraries (Mate-Pair libraries) provide higher physical coverage of the genome and give access to repeat-containing regions. They can thus theoretically overcome previous limitations as they are becoming routinely accessible. Nevertheless, broad insert size distributions and high rates of chimeric sequences are usually associated with this type of library, which makes the accurate annotation of SVs challenging. Here, we present Ulysses, a tool that achieves drastically higher detection accuracy than existing tools, both on simulated and real mate-pair sequencing datasets from the 1000 Human Genome project. Ulysses achieves high specificity over the complete spectrum of variants by assessing, in a principled manner, the statistical significance of each possible variant (duplications, deletions, translocations, insertions and inversions) against an explicit model for the generation of experimental noise. This statistical model proves particularly useful for the detection of low frequency variants. SV detection performed on a large insert Mate-Pair library from a breast cancer sample revealed a high level of somatic duplications in the tumor and, to a lesser extent, in the blood sample as well. Altogether, these results show that Ulysses is a valuable tool for the characterization of somatic mosaicism in human tissues and in cancer genomes. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
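Ulysses scores each candidate SV against "an explicit model for the generation of experimental noise." A minimal caricature of that idea, assuming (our assumption, not the paper's published model) that chimeric mate-pairs scatter as Poisson noise, is to ask how unlikely the observed number of supporting discordant pairs is under noise alone:

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), via the complementary CDF
    summed directly (fine for small k)."""
    cdf = sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

def is_significant(n_support, lam_noise, alpha=1e-6):
    """Call a candidate SV significant if seeing n_support or more
    discordant pairs is this unlikely under the noise model alone."""
    return poisson_sf(n_support, lam_noise) < alpha

# Example: 12 supporting mate-pairs at a locus where chimeras alone
# would be expected to contribute about 0.5.
called = is_significant(12, 0.5)
```

The payoff of such an explicit model is exactly the one claimed above: a cluster of only a handful of pairs, as produced by a low-frequency somatic variant, can still be called when the expected noise at that locus is low enough.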

  7. Computed Tomographic Window Setting for Bronchial Measurement to Guide Double-Lumen Tube Size.

    Science.gov (United States)

    Seo, Jeong-Hwa; Bae, Jinyoung; Paik, Hyesun; Koo, Chang-Hoon; Bahk, Jae-Hyon

    2018-04-01

    The bronchial diameter measured on computed tomography (CT) can be used to guide double-lumen tube (DLT) sizes objectively. The bronchus is known to be measured most accurately in the so-called bronchial CT window. The authors investigated whether using the bronchial window results in the selection of more appropriately sized DLTs than using the other windows. CT image analysis and prospective randomized study. Tertiary hospital. Adults receiving left-sided DLTs. The authors simulated selection of DLT sizes based on the left bronchial diameters measured in the lung (width 1,500 Hounsfield units [HU] and level -700 HU), bronchial (1,000 HU and -450 HU), and mediastinal (400 HU and 25 HU) CT windows. Furthermore, patients were randomly assigned to undergo imaging with either the bronchial or mediastinal window to guide DLT sizes. Using the underwater seal technique, the authors assessed whether the DLT was appropriately sized, undersized, or oversized for the patient. On 130 CT images, the bronchial diameter (9.9 ± 1.2 mm v 10.5 ± 1.3 mm v 11.7 ± 1.3 mm) and the selected DLT size differed among the lung, bronchial, and mediastinal windows, respectively. In the prospective study, oversized tubes were chosen less frequently in the bronchial window than in the mediastinal window (6/110 v 23/111; risk ratio 0.38; 95% CI 0.19-0.79; p = 0.003). No tubes were undersized after measurements in these two windows. The bronchial measurement in the bronchial window guided more appropriately sized DLTs compared with the lung or mediastinal windows. Copyright © 2017 Elsevier Inc. All rights reserved.
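The three windows above are specified as width/level pairs; the HU interval mapped across the display grayscale is simply level ± width/2, which makes it easy to see why the same bronchus measures differently in each. A short sketch (function name is ours):

```python
def window_range(width_hu, level_hu):
    """HU interval rendered across the display grayscale for a CT
    window specified as (width, level): level +/- width/2."""
    half = width_hu / 2.0
    return (level_hu - half, level_hu + half)

# The three windows compared in the study:
lung        = window_range(1500, -700)   # -> (-1450.0, 50.0)
bronchial   = window_range(1000, -450)   # -> (-950.0, 50.0)
mediastinal = window_range(400, 25)      # -> (-175.0, 225.0)
```

All three windows saturate at different upper bounds, so the apparent air–wall edge of the bronchus shifts between them, consistent with the 9.9 mm vs 10.5 mm vs 11.7 mm diameters reported above.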

  8. Set of CAMAC modules on the base of large integrated circuits for an accelerator synchronization system

    International Nuclear Information System (INIS)

    Glejbman, Eh.M.; Pilyar, N.V.

    1986-01-01

    Parameters of functional modules in the CAMAC standard developed for an accelerator synchronization system are presented. They comprise BZN-8K and BZ-8K digital delay circuits, a timing circuit and a pulse selection circuit. Every module uses three KR580VI53-type programmable-timer large-scale integrated circuits, interface circuits between the system bus and the crate bus, data recording control circuits, two peripheral storage devices, initial mode setting circuits, input and output shapers, and circuits for setting and removing blocking in the channels.

  9. Envision: An interactive system for the management and visualization of large geophysical data sets

    Science.gov (United States)

    Searight, K. R.; Wojtowicz, D. P.; Walsh, J. E.; Pathi, S.; Bowman, K. P.; Wilhelmson, R. B.

    1995-01-01

    Envision is a software project at the University of Illinois and Texas A&M, funded by NASA's Applied Information Systems Research Project. It provides researchers in the geophysical sciences convenient ways to manage, browse, and visualize large observed or model data sets. Envision integrates data management, analysis, and visualization of geophysical data in an interactive environment. It employs commonly used standards in data formats, operating systems, networking, and graphics. It also attempts, wherever possible, to integrate with existing scientific visualization and analysis software. Envision has an easy-to-use graphical interface, distributed process components, and an extensible design. It is a public domain package, freely available to the scientific community.

  10. The higher infinite large cardinals in set theory from their beginnings

    CERN Document Server

    Kanamori, Akihiro

    2003-01-01

    The theory of large cardinals is currently a broad mainstream of modern set theory, the main area of investigation for the analysis of the relative consistency of mathematical propositions and possible new axioms for mathematics. The first of a projected multi-volume series, this book provides a comprehensive account of the theory of large cardinals from its beginnings and some of the direct outgrowths leading to the frontiers of contemporary research. A "genetic" approach is taken, presenting the subject in the context of its historical development. With hindsight the consequential avenues are pursued and the most elegant or accessible expositions given. With open questions and speculations provided throughout, the reader should not only come to appreciate the scope and coherence of the overall enterprise but also become prepared to pursue research in several specific areas by studying the relevant sections.

  11. Treatment of severe pulmonary hypertension in the setting of the large patent ductus arteriosus.

    Science.gov (United States)

    Niu, Mary C; Mallory, George B; Justino, Henri; Ruiz, Fadel E; Petit, Christopher J

    2013-05-01

    Treatment of the large patent ductus arteriosus (PDA) in the setting of pulmonary hypertension (PH) is challenging. Left patent, the large PDA can result in irreversible pulmonary vascular disease. Occlusion, however, may lead to right ventricular failure for certain patients with severe PH. Our center has adopted a staged management strategy using medical management, noninvasive imaging, and invasive cardiac catheterization to treat PH in the presence of a large PDA. This approach determines the safety of ductal closure but also leverages medical therapy to create an opportunity for safe PDA occlusion. We reviewed our experience with this approach. Patients with both severe PH and PDAs were studied. PH treatment history and hemodynamic data obtained during catheterizations were reviewed. Repeat catheterizations, echocardiograms, and clinical status at latest follow-up were also reviewed. Seven patients had both PH and large, unrestrictive PDAs. At baseline, all patients had near-systemic right ventricular pressures. Nine catheterizations were performed. Two patients underwent 2 catheterizations each due to poor initial response to balloon test occlusion. Six of 7 patients exhibited subsystemic pulmonary pressures during test occlusion and underwent successful PDA occlusion. One patient did not undergo PDA occlusion. In follow-up, 2 additional catheterizations were performed after successful PDA occlusion for subsequent hemodynamic assessment. At the latest follow-up, the 6 patients who underwent PDA occlusion are well, with continued improvement in PH. Five patients remain on PH treatment. A staged approach to PDA closure for patients with severe PH is an effective treatment paradigm. Aggressive treatment of PH creates a window of opportunity for PDA occlusion, echocardiography assists in identifying the timing for closure, and balloon test occlusion during cardiac catheterization is critical in determining safety of closure. By safely eliminating the large PDA

  12. Analyzing Damping Vibration Methods of Large-Size Space Vehicles in the Earth's Magnetic Field

    Directory of Open Access Journals (Sweden)

    G. A. Shcheglov

    2016-01-01

    Full Text Available It is known that most of today's space vehicles comprise large antennas bracket-attached to the vehicle body. Dimensions of reflector antennas may be 30–50 m, and the weight of such constructions can reach approximately 200 kg. Since the antenna dimensions are significantly larger than the size of the vehicle body and the points attaching the brackets to the space vehicle have a low stiffness, conventional dampers may be inefficient. The paper proposes to consider damping the antenna in terms of its interaction with the Earth's magnetic field. A simple dynamic model of a space vehicle equipped with a large-size structure is built: the space vehicle is a parallelepiped to which the antenna is attached through a beam. To solve the model problems, a simplified model of the Earth's magnetic field was used: uniform, with intensity lines parallel to each other and perpendicular to the plane of the antenna. The paper considers two layouts of coils with respect to the antenna, namely a vertical one, in which the axis of the magnetic dipole is perpendicular to the antenna plane, and a horizontal one, in which the axis of the magnetic dipole lies in the antenna plane. It also explores two ways of magnetically damping the oscillations: through a controlled current supplied from the power supply system of the space vehicle, and by the self-induction current in the coil. Thus, four tasks were formulated. For each task an oscillation equation was formulated, and then the ratio of oscillation amplitudes and the decay time were estimated. It was found that each task requires certain parameters, either of the antenna itself (its dimensions and moment of inertia) or of the coil and, respectively, of the current supplied from the space vehicle. For these parameters, ranges were found in each task that allow efficient damping of vibrations. The conclusion can be drawn based on the analysis of tasks that a specialized control system
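The analysis above estimates amplitude ratios and decay times from an oscillation equation. For a generic linear damped oscillator (a plausible reading of the paper's model, not its exact equations), the amplitude envelope decays as exp(-ζω₀t), giving a time constant 1/(ζω₀); the numbers below are illustrative only.

```python
import math

def decay_time(zeta, omega0):
    """Envelope time constant of x'' + 2*zeta*omega0*x' + omega0**2*x = 0
    (underdamped case): amplitude ~ exp(-zeta*omega0*t)."""
    return 1.0 / (zeta * omega0)

def amplitude_ratio(zeta, omega0, t):
    """Ratio of the oscillation envelope at time t to the initial amplitude."""
    return math.exp(-zeta * omega0 * t)

# Illustrative values for a flexible appendage (assumed, not from the
# paper): natural frequency f0 = 0.1 Hz, effective damping ratio 1%.
omega0 = 2 * math.pi * 0.1
tau = decay_time(0.01, omega0)             # envelope time constant, seconds
ratio = amplitude_ratio(0.01, omega0, tau) # amplitude falls to 1/e at t = tau
```

The design question posed in each of the four tasks then amounts to choosing coil and current parameters that push the effective ζ high enough to make this decay time acceptably short.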

  13. The role of consumer satisfaction, consideration set size, variety seeking and convenience orientation in explaining seafood consumption in Vietnam

    OpenAIRE

    Ninh, Thi Kim Anh

    2010-01-01

    The study examines the relationship between convenience food and seafood consumption in Vietnam through a replication and an extension of the studies of Rortveit and Olsen (2007; 2009). The main purpose of this study is to give an understanding of the role of consumers' satisfaction, consideration set size, variety seeking, and convenience in explaining seafood consumption behavior in Vietnam.

  14. Experimental river delta size set by multiple floods and backwater hydrodynamics.

    Science.gov (United States)

    Ganti, Vamsi; Chadwick, Austin J; Hassenruck-Gudipati, Hima J; Fuller, Brian M; Lamb, Michael P

    2016-05-01

    River deltas worldwide are currently under threat of drowning and destruction by sea-level rise, subsidence, and oceanic storms, highlighting the need to quantify their growth processes. Deltas are built through construction of sediment lobes, and emerging theories suggest that the size of delta lobes scales with backwater hydrodynamics, but these ideas are difficult to test on natural deltas that evolve slowly. We show results of the first laboratory delta built through successive deposition of lobes that maintain a constant size. We show that the characteristic size of delta lobes emerges because of a preferential avulsion node-the location where the river course periodically and abruptly shifts-that remains fixed spatially relative to the prograding shoreline. The preferential avulsion node in our experiments is a consequence of multiple river floods and Froude-subcritical flows that produce persistent nonuniform flows and a peak in net channel deposition within the backwater zone of the coastal river. In contrast, experimental deltas without multiple floods produce flows with uniform velocities and delta lobes that lack a characteristic size. Results have broad applications to sustainable management of deltas and for decoding their stratigraphic record on Earth and Mars.
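The abstract says lobe size scales with backwater hydrodynamics. For a low-sloping, Froude-subcritical coastal river, the backwater length is commonly estimated to first order as L ≈ h/S (flow depth over bed slope); the sketch below uses that standard scaling with illustrative lowland-river numbers of our choosing, not values from the paper.

```python
def backwater_length(depth_m, slope):
    """First-order backwater length scale L ~ h / S for a
    Froude-subcritical coastal river (depth in m, slope dimensionless)."""
    return depth_m / slope

# Illustrative Mississippi-like numbers (assumed): 15 m deep, slope 3e-5.
L = backwater_length(15.0, 3e-5)   # ~500 km
```

Because h/S grows as rivers get deeper and flatter near the coast, the preferential avulsion node, and hence the lobe size, sits hundreds of kilometres upstream on large lowland deltas but only metres upstream in a laboratory delta.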

  15. Flow induced vibration of the large-sized sodium valve for MONJU

    International Nuclear Information System (INIS)

    Sato, K.

    1977-01-01

    Measurements have been made on the hydraulic characteristics of the large-sized sodium valves in the hydraulic simulation test loop with water as fluid. The following three prototype sodium valves were tested; (1) 22-inch wedge gate type isolation valve, (2) 22-inch butterfly type isolation valve, and (3) 16-inch butterfly type control valve. In the test, accelerations of flow induced vibrations were measured as a function of flow velocity and disk position. The excitation mechanism of the vibrations is not fully interpreted in these tests due to the complexity of the phenomena, but the experimental results suggest that it closely depends on random pressure fluctuations near the valve disk and flow separation at the contracted cross section between the valve seat and the disk. The intensity of flow induced vibrations suddenly increases at a certain critical condition, which depends on the type of valve and is proportional to fluid velocity. (author)

  16. Flow induced vibration of the large-sized sodium valve for MONJU

    Energy Technology Data Exchange (ETDEWEB)

    Sato, K [Sodium Engineering Division, O-arai Engineering Centre, Power Reactor and Nuclear Fuel Development Corporation, Nariata-cho, O-arai Machi, Ibaraki-ken (Japan)

    1977-12-01

    Measurements have been made on the hydraulic characteristics of the large-sized sodium valves in the hydraulic simulation test loop with water as fluid. The following three prototype sodium valves were tested; (1) 22-inch wedge gate type isolation valve, (2) 22-inch butterfly type isolation valve, and (3) 16-inch butterfly type control valve. In the test, accelerations of flow induced vibrations were measured as a function of flow velocity and disk position. The excitation mechanism of the vibrations is not fully interpreted in these tests due to the complexity of the phenomena, but the experimental results suggest that it closely depends on random pressure fluctuations near the valve disk and flow separation at the contracted cross section between the valve seat and the disk. The intensity of flow induced vibrations suddenly increases at a certain critical condition, which depends on the type of valve and is proportional to fluid velocity. (author)

  17. A Novel Read Scheme for Large Size One-Resistor Resistive Random Access Memory Array.

    Science.gov (United States)

    Zackriya, Mohammed; Kittur, Harish M; Chin, Albert

    2017-02-10

    The major issue of RRAM is the uneven sneak path that limits the array size. For the first time, a record-large One-Resistor (1R) RRAM array of 128×128 is realized, and the array cells in the worst case still have a good Low-/High-Resistive State (LRS/HRS) current difference of 378 nA/16 nA, even without using a selector device. This array has an extremely low read current of 9.7 μA due to both the low-current RRAM device and circuit interaction, where a novel and simple scheme, using a half-selected cell as a reference point and a differential amplifier (DA), was implemented in the circuit design.

  18. Study on growth techniques and macro defects of large-size Nd:YAG laser crystal

    Science.gov (United States)

    Quan, Jiliang; Yang, Xin; Yang, Mingming; Ma, Decai; Huang, Jinqiang; Zhu, Yunzhong; Wang, Biao

    2018-02-01

    Large-size neodymium-doped yttrium aluminum garnet (Nd:YAG) single crystals were grown by the Czochralski method. The extinction ratio and wavefront distortion of the crystal were tested to determine the optical homogeneity. Moreover, under different growth conditions, the macro defects of inclusion, striations, and cracking in the as-grown Nd:YAG crystals were analyzed. Specifically, the inclusion defects were characterized using scanning electron microscopy and energy dispersive spectroscopy. The stresses of growth striations and cracking were studied via a parallel plane polariscope. These results demonstrate that improper growth parameters and temperature fields can enhance defects significantly. Thus, by adjusting the growth parameters and optimizing the thermal environment, high-optical-quality Nd:YAG crystals with a diameter of 80 mm and a total length of 400 mm have been obtained successfully.

  19. Designing Websites for Displaying Large Data Sets and Images on Multiple Platforms

    Science.gov (United States)

    Anderson, A.; Wolf, V. G.; Garron, J.; Kirschner, M.

    2012-12-01

    The desire to build websites to analyze and display ever-increasing amounts of scientific data and images pushes for website designs which utilize large displays and use the display area as efficiently as possible. Yet scientists and users of their data increasingly wish to access these websites in the field and on mobile devices. This results in the need to develop websites that can support a wide range of devices and screen sizes, and that optimally use whatever display area is available. Historically, designers have addressed this issue by building two websites: one for mobile devices and one for desktop environments, resulting in increased cost, duplicated work, and longer development times. Recent advances in web design technology and techniques allow for the development of a single website that dynamically adjusts to the type of device being used to browse it (smartphone, tablet, desktop), and provide the opportunity to truly optimize whatever display area is available. HTML5 and CSS3 give web designers media query statements which allow style sheets to be aware of the size of the display being used and to format web content differently based upon the queried response. Web elements can be rendered in a different size or position, or even removed from the display entirely, based upon the size of the display area. Using HTML5/CSS3 media queries in this manner is referred to as "Responsive Web Design" (RWD). RWD in combination with technologies such as LESS and Twitter Bootstrap allows the web designer to build websites which not only dynamically respond to the browser display size being used, but do so in very controlled and intelligent ways, ensuring that good layout and graphic design principles are followed. At the University of Alaska Fairbanks, the Alaska Satellite Facility SAR Data Center (ASF) recently redesigned their popular Vertex application and converted it from a
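
    The media-query mechanism described here can be sketched in a few lines of CSS. The selectors and the 768px breakpoint below are illustrative assumptions, not values from the ASF Vertex redesign:

    ```css
    /* Hypothetical responsive layout: a data panel with a sidebar. */
    .data-panel { width: 75%; float: left; }
    .sidebar    { width: 25%; float: right; }

    /* On displays narrower than 768px (tablets/phones), stack the
       main panel full-width and remove the secondary sidebar. */
    @media (max-width: 768px) {
      .data-panel { width: 100%; float: none; }
      .sidebar    { display: none; }
    }
    ```

    Frameworks such as Twitter Bootstrap build on exactly this mechanism, shipping predefined breakpoints and a responsive grid so that designers rarely need to write raw media queries themselves.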

  20. Making sense of large data sets without annotations: analyzing age-related correlations from lung CT scans

    Science.gov (United States)

    Dicente Cid, Yashin; Mamonov, Artem; Beers, Andrew; Thomas, Armin; Kovalev, Vassili; Kalpathy-Cramer, Jayashree; Müller, Henning

    2017-03-01

    The analysis of large data sets can help to gain knowledge about specific organs or specific diseases, just as big data analysis does in many non-medical areas. This article aims to gain information from 3D volumes, namely the visual content of lung CT scans of a large number of patients. For the described data set, only limited annotation was available: the patients were all part of an ongoing screening program, and besides age and gender no information on the patients or the findings was available for this work. This scenario occurs regularly, as image data sets are produced and become available in increasingly large quantities, while manual annotations are often missing and clinical data such as text reports are often harder to share. We extracted a set of visual features from 12,414 CT scans of 9,348 patients who had CT scans of the lung taken in the context of a national lung screening program in Belarus. Lung fields were segmented by two segmentation algorithms, and only cases where both algorithms were able to find the left and right lung and had a Dice coefficient above 0.95 were analyzed. This ensures that only segmentations of good quality were used to extract features of the lung. Patients ranged in age from 0 to 106 years. Data analysis shows that age can be predicted with fairly high accuracy for persons under 15 years. Relatively good results were also obtained between 30 and 65 years, where a steady trend is seen. For young adults and older people the results are not as good, as variability is very high in these groups. Several visualizations of the data show the evolution patterns of the lung texture, size and density with age. The experiments allow learning about the evolution of the lung, and the results show that even with limited metadata we can extract interesting information from large-scale visual data. These age-related changes (for example of the lung volume, the density histogram of the tissue) can also be
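
    The Dice-based quality filter described above (keeping only scans where both segmentation algorithms agree with a coefficient above 0.95) can be sketched as follows; this is a generic implementation on binary masks, not the authors' code:

    ```python
    def dice_coefficient(mask_a, mask_b):
        """Dice similarity of two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
        a = {i for i, v in enumerate(mask_a) if v}
        b = {i for i, v in enumerate(mask_b) if v}
        if not a and not b:
            return 1.0  # two empty masks agree trivially
        return 2 * len(a & b) / (len(a) + len(b))

    # Keep a scan only if the two segmentation algorithms agree closely.
    seg1 = [0, 1, 1, 1, 0, 1]
    seg2 = [0, 1, 1, 1, 0, 0]
    use_scan = dice_coefficient(seg1, seg2) > 0.95  # 6/7 ≈ 0.86, so rejected
    ```

    In practice the masks are 3D voxel arrays rather than flat lists, but the overlap computation is identical.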

  1. Larval assemblages of large and medium-sized pelagic species in the Straits of Florida

    Science.gov (United States)

    Richardson, David E.; Llopiz, Joel K.; Guigand, Cedric M.; Cowen, Robert K.

    2010-07-01

    Critical gaps in our understanding of the distributions, interactions, life histories and preferred habitats of large and medium-sized pelagic fishes severely constrain the implementation of ecosystem-based, spatially structured fisheries management approaches. In particular, spawning distributions and the environmental characteristics associated with the early life stages are poorly documented. In this study, we consider the diversity, assemblages, and associated habitat of the larvae of large and medium-sized pelagic species collected during 2 years of monthly surveys across the Straits of Florida. In total, 36 taxa and 14,295 individuals were collected, with the highest diversity occurring during the summer and in the western, frontal region of the Florida Current. Only a few species (e.g. Thunnus obesus, T. alalunga, Tetrapturus pfluegeri) considered for this study were absent. Small scombrids (e.g. T. atlanticus, Katsuwonus pelamis, Auxis spp.) and gempylids dominated the catch and were orders of magnitude more abundant than many of the rare species (e.g. Thunnus thynnus, Kajikia albida). Both constrained (CCA) and unconstrained (NMDS) multivariate analyses revealed a number of species groupings, including: (1) a summer Florida edge assemblage (e.g. Auxis spp., Euthynnus alletteratus, Istiophorus platypterus); (2) a summer offshore assemblage (e.g. Makaira nigricans, T. atlanticus, Ruvettus pretiosus, Lampris guttatus); (3) a ubiquitous assemblage (e.g. K. pelamis, Coryphaena hippurus, Xiphias gladius); and (4) a spring/winter assemblage that was widely dispersed in space (e.g. trachipterids). The primary environmental factors associated with these assemblages were sea-surface temperature (highest in summer-early fall), day length (highest in early summer), thermocline depth (shallowest on the Florida side) and fluorescence (highest on the Florida side). Overall, the results of this study provide insights into how a remarkable diversity of pelagic species

  2. Preparation and validation of a large size dried spike: Batch SAL-9924

    International Nuclear Information System (INIS)

    Bagliano, G.; Cappis, J.; Doubek, N.; Jammet, G.; Raab, W.; Zoigner, A.

    1989-12-01

    To determine uranium and plutonium concentration using isotope dilution mass spectrometry, weighed aliquands of a synthetic mixture containing 2 to 4 mg of Pu (with a ²³⁹Pu abundance of about 97%) and 40 to 200 mg of U (with a ²³⁵U enrichment of about 18%) can be advantageously used to spike a concentrated spent fuel solution with a high burn-up and a low ²³⁵U enrichment. This simplifies the conditioning of the sample by (1) reducing the preparation time (from more than one day for the conventional technique to 2-3 hours) and (2) reducing the burden on the operator, while making it easy for the inspector to witness the entire procedure (accurate dilution of the spent fuel sample before spiking is no longer necessary). Furthermore, this type of spike could be used as a common spike by the operator and the inspector. The source materials are available in sufficient quantity and are sufficiently cheaper than the commonly used ²³³U and ²⁴²Pu or ²⁴⁴Pu tracers that the cost of the overall Operator-Inspector procedures will be reduced. Certified Reference Materials Pu-NBL-126, natural U-NBS-960 and 93% enriched U-NBL-116 were used to prepare a stock solution containing 1.7 mg/ml of Pu and 68 mg/ml of 17.5% enriched U. Before shipment to the reprocessing plant, aliquands of the stock solution must be dried to give Large Size Dried Spikes which resist the shocks encountered during transportation, so that they can readily be recovered quantitatively at the plant. This paper describes the preparation and validation of the Large Size Dried Spike. Proof of usefulness in the field will be provided at a later date in parallel with analysis by the conventional technique. Refs and tabs

  3. Development of high sensitivity and high speed large size blank inspection system LBIS

    Science.gov (United States)

    Ohara, Shinobu; Yoshida, Akinori; Hirai, Mitsuo; Kato, Takenori; Moriizumi, Koichi; Kusunose, Haruhiko

    2017-07-01

    The production of high-resolution flat panel displays (FPDs) for mobile phones today requires the use of high-quality large-size photomasks (LSPMs). Organic light emitting diode (OLED) displays use several transistors per pixel for precise current control and, as such, the mask patterns for OLED displays are denser and finer than those of the previous display generation throughout the entire mask surface. It is therefore strongly demanded that mask patterns be produced with high fidelity and free of defects. To enable the production of a high-quality LSPM in a short lead time, manufacturers need a high-sensitivity, high-speed mask blank inspection system that meets the requirements of advanced LSPMs. Lasertec has developed a large-size blank inspection system called LBIS, which achieves high sensitivity based on a laser-scattering technique. LBIS employs a high-power laser as its inspection light source. LBIS's delivery optics, including a scanner and an F-Theta scan lens, focus the light from the source linearly on the surface of the blank. Specially designed collection optics gather the light scattered by particles and by defects generated during the manufacturing process, such as scratches, and guide it to photomultiplier tubes (PMTs) with high efficiency. Multiple PMTs are used in LBIS for stable detection of the scattered light, which may be distributed at various angles due to the irregular shapes of defects. LBIS captures 0.3 μm PSL spheres at a detection rate of over 99.5% with uniform sensitivity. Its inspection time is 20 minutes for a G8 blank and 35 minutes for G10. The differential interference contrast (DIC) microscope on the inspection head of LBIS captures high-contrast review images after inspection, and the images are classified automatically.

  4. Complexity analysis on public transport networks of 97 large- and medium-sized cities in China

    Science.gov (United States)

    Tian, Zhanwei; Zhang, Zhuo; Wang, Hongfei; Ma, Li

    2018-04-01

    The traffic situation in Chinese urban areas continues to deteriorate. Better planning and design of public transport systems requires deeper research on the structure of urban public transport networks (PTNs). We investigate the PTNs of 97 large- and medium-sized cities in China, construct three types of network models (bus stop network, bus transit network and bus line network), and analyze their structural characteristics. The bus stop network is found to be small-world and scale-free, while the bus transit network and bus line network are both small-world. The betweenness centrality of each city's PTN shows a similar distribution pattern, although the networks vary in size. Classifying cities by the characteristics of their PTNs or by economic development level yields similar groupings, indicating a strong correlation between the development of a city's economy and its transport network: the PTN expands in a characteristic pattern as the economy develops.
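
    The small-world measurements such a study relies on (high clustering, short average path length) can be sketched in plain Python on an adjacency-set graph; the toy ring network below is illustrative, not the Chinese PTN data:

    ```python
    from collections import deque
    from itertools import combinations

    def avg_clustering(adj):
        """Mean local clustering coefficient of an undirected graph."""
        total = 0.0
        for v, nbrs in adj.items():
            k = len(nbrs)
            if k < 2:
                continue  # degree-0/1 nodes contribute 0
            links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
            total += 2 * links / (k * (k - 1))
        return total / len(adj)

    def avg_path_length(adj):
        """Mean shortest-path length over connected node pairs, via BFS."""
        total, pairs = 0, 0
        for src in adj:
            dist = {src: 0}
            queue = deque([src])
            while queue:
                u = queue.popleft()
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        queue.append(w)
            total += sum(dist.values())
            pairs += len(dist) - 1
        return total / pairs

    # Toy "bus stop network": a ring of 6 stops plus one shortcut edge.
    ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
    ring[0].add(3); ring[3].add(0)
    ```

    Real analyses compare these two statistics against random-graph baselines to decide whether a network qualifies as small-world.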

  5. Resonant atom-field interaction in large-size coupled-cavity arrays

    International Nuclear Information System (INIS)

    Ciccarello, Francesco

    2011-01-01

    We consider an array of coupled cavities with staggered intercavity couplings, where each cavity mode interacts with an atom. In contrast to large-size arrays with uniform hopping rates, where the atomic dynamics is known to be frozen in the strong-hopping regime, we show that resonant atom-field dynamics with significant energy exchange can occur in the case of staggered hopping rates even in the thermodynamic limit. This effect arises from the joint emergence of an energy gap in the free photonic dispersion relation and a discrete frequency at the gap's center. The latter corresponds to a bound normal mode stemming solely from the finiteness of the array length. Depending on which cavity is excited, either the atomic dynamics is frozen or a Jaynes-Cummings-like energy exchange is triggered between the bound photonic mode and its atomic analog. As these phenomena are effective with any number of cavities, they are amenable to experimental observation even in small-size arrays.

  6. Beauty, body size and wages: Evidence from a unique data set.

    Science.gov (United States)

    Oreffice, Sonia; Quintana-Domeque, Climent

    2016-09-01

    We analyze how attractiveness rated at the start of the interview in the German General Social Survey is related to weight, height, and body mass index (BMI), separately by gender and accounting for interviewers' characteristics or fixed effects. We show that height, weight, and BMI all strongly contribute to male and female attractiveness when attractiveness is rated by opposite-sex interviewers, and that anthropometric characteristics are irrelevant to male interviewers when assessing male attractiveness. We also estimate whether, controlling for beauty, body size measures are related to hourly wages. We find that anthropometric attributes play a significant role in wage regressions in addition to attractiveness, showing that body size cannot be dismissed as a simple component of beauty. Our findings are robust to controlling for health status and accounting for selection into working. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. The limits of weak selection and large population size in evolutionary game theory.

    Science.gov (United States)

    Sample, Christine; Allen, Benjamin

    2017-11-01

    Evolutionary game theory is a mathematical approach to studying how social behaviors evolve. In many recent works, evolutionary competition between strategies is modeled as a stochastic process in a finite population. In this context, two limits are both mathematically convenient and biologically relevant: weak selection and large population size. These limits can be combined in different ways, leading to potentially different results. We consider two orderings: the wN limit, in which weak selection is applied before the large population limit, and the Nw limit, in which the order is reversed. Formal mathematical definitions of the wN and Nw limits are provided. Applying these definitions to the Moran process of evolutionary game theory, we obtain asymptotic expressions for fixation probability and conditions for success in these limits. We find that the asymptotic expressions for fixation probability, and the conditions for a strategy to be favored over a neutral mutation, are different in the wN and Nw limits. However, the ordering of limits does not affect the conditions for one strategy to be favored over another.
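
    For intuition on why the ordering of limits matters, the textbook Moran-process fixation probability for a mutant of constant relative fitness r in a population of size n can be computed directly (a standard formula used for illustration here, not the paper's game-theoretic derivation):

    ```python
    def fixation_probability(r, n):
        """Fixation probability of a single mutant with constant relative
        fitness r in a Moran process with population size n."""
        if r == 1.0:
            return 1.0 / n  # neutral mutant
        return (1 - 1 / r) / (1 - r ** (-n))

    # Weak selection enters as r = 1 + w*s with small intensity w.
    # Taking w -> 0 first drives the result toward the neutral value 1/n;
    # taking n -> infinity first (w fixed, s > 0) gives 1 - 1/r instead.
    rho_weak_first  = fixation_probability(1 + 1e-8, 100)   # close to 1/100
    rho_large_first = fixation_probability(1.05, 10_000)    # close to 1 - 1/1.05
    ```

    The two numerical values differ by more than a factor of four, illustrating how the two orderings of the same pair of limits can disagree.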

  8. Study of large size fiber reinforced cement containers for solid wastes from dismantling

    International Nuclear Information System (INIS)

    Jaouen, C.

    1990-01-01

    The production of large-sized metallic waste by dismantling operations, and the evolution of the specifications of the waste to be stored in the different European countries will create a need for large standard containers for the transport and final disposal of the corresponding waste. The research conducted during the 1984-1988 programme, supported by the Commission of European Communities, and based on a comparative study of high-grade concrete materials, reinforced with organic or metallic fibres, led to the development of a high performance container meeting international transport recommendations as well as French requirements for shallow-ground disposal. The material selected, consisting of high-performance mortar with metal fibre reinforcement, was the subject of an intensive programme of characterization tests conducted in close cooperation with LAFARGE Company, demonstrating the achievement of mechanical and physical properties comfortably above the regulatory requirements. The construction of an industrial prototype and the subsequent economic analysis served to guarantee the industrial feasibility and cost of this system, in which attempts were made to optimize the finished package product, including its closure system

  9. Size matters: the ethical, legal, and social issues surrounding large-scale genetic biobank initiatives

    Directory of Open Access Journals (Sweden)

    Klaus Lindgaard Hoeyer

    2012-04-01

    During the past ten years the complex ethical, legal and social issues (ELSI) typically surrounding large-scale genetic biobank research initiatives have been intensely debated in academic circles. In many ways genetic epidemiology has undergone a set of changes resembling what in physics has been called a transition into Big Science. This article outlines consequences of this transition and suggests that the change in scale implies challenges to the roles of scientists and public alike. An overview of key issues is presented, and it is argued that biobanks represent not just scientific endeavors with purely epistemic objectives, but also political projects with social implications. As such, they demand clever maneuvering among social interests to succeed.

  10. Generating a taxonomy of spatially cued attention for visual discrimination: Effects of judgment precision and set size on attention

    Science.gov (United States)

    Hetley, Richard; Dosher, Barbara Anne; Lu, Zhong-Lin

    2014-01-01

    Attention precues improve the performance of perceptual tasks in many but not all circumstances. These spatial attention effects may depend upon display set size or workload, and have been variously attributed to external noise filtering, stimulus enhancement, contrast gain, or response gain, or to uncertainty or other decision effects. In this study, we document systematically different effects of spatial attention in low- and high-precision judgments, with and without external noise, and in different set sizes in order to contribute to the development of a taxonomy of spatial attention. An elaborated perceptual template model (ePTM) provides an integrated account of a complex set of effects of spatial attention with just two attention factors: a set-size dependent exclusion or filtering of external noise and a narrowing of the perceptual template to focus on the signal stimulus. These results are related to the previous literature by classifying the judgment precision and presence of external noise masks in those experiments, suggesting a taxonomy of spatially cued attention in discrimination accuracy. PMID:24939234


  12. AN EFFICIENT DATA MINING METHOD TO FIND FREQUENT ITEM SETS IN LARGE DATABASE USING TR- FCTM

    Directory of Open Access Journals (Sweden)

    Saravanan Suba

    2016-01-01

    Mining association rules in large databases is one of the most popular data mining techniques for business decision makers. Discovering frequent item sets is the core process in association rule mining. Numerous algorithms are available in the literature to find frequent patterns. Apriori and FP-tree are the most common methods for finding frequent items. Apriori finds significant frequent items using candidate generation with a larger number of database scans. FP-tree uses two database scans to find significant frequent items without using candidate generation. The proposed TR-FCTM (Transaction Reduction-Frequency Count Table Method) discovers significant frequent items by generating the full set of candidates once to form a frequency count table with one database scan. Experimental results show that TR-FCTM outperforms both Apriori and FP-tree.
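
    The task all these algorithms share (counting itemset support against a minimum threshold) can be sketched with a single-scan frequency count table; this is a deliberately simplified illustration of the idea, not the published TR-FCTM algorithm:

    ```python
    from collections import Counter
    from itertools import combinations

    def frequent_itemsets(transactions, min_support, max_size=2):
        """Build a frequency count table for all itemsets up to max_size in
        one scan over the data, then keep those meeting min_support."""
        table = Counter()
        for t in transactions:
            items = sorted(set(t))          # canonical order for itemset keys
            for k in range(1, max_size + 1):
                for candidate in combinations(items, k):
                    table[candidate] += 1
        return {s: c for s, c in table.items() if c >= min_support}

    baskets = [["milk", "bread"], ["milk", "eggs"], ["milk", "bread", "eggs"]]
    freq = frequent_itemsets(baskets, min_support=2)
    ```

    Unlike Apriori, which prunes candidates level by level over repeated scans, this sketch enumerates all candidates in a single pass; it trades memory for scan count, which is only viable when max_size is small.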

  13. Leaf transpiration plays a role in phosphorus acquisition among a large set of chickpea genotypes.

    Science.gov (United States)

    Pang, Jiayin; Zhao, Hongxia; Bansal, Ruchi; Bohuon, Emilien; Lambers, Hans; Ryan, Megan H; Siddique, Kadambot H M

    2018-01-09

    Low availability of inorganic phosphorus (P) is considered a major constraint for crop productivity worldwide. A unique set of 266 chickpea (Cicer arietinum L.) genotypes, originating from 29 countries and with diverse genetic background, were used to study P-use efficiency. Plants were grown in pots containing sterilized river sand supplied with P at a rate of 10 μg P g⁻¹ soil as FePO₄, a poorly soluble form of P. The results showed large genotypic variation in plant growth, shoot P content, physiological P-use efficiency, and P-utilization efficiency in response to low P supply. Further investigation of a subset of 100 chickpea genotypes with contrasting growth performance showed significant differences in photosynthetic rate and photosynthetic P-use efficiency. A positive correlation was found between leaf P concentration and transpiration rate of the young fully expanded leaves. For the first time, our study has suggested a role of leaf transpiration in P acquisition, consistent with transpiration-driven mass flow in chickpea grown in low-P sandy soils. The identification of 6 genotypes with high plant growth, P-acquisition, and P-utilization efficiency suggests that the chickpea reference set can be used in breeding programmes to improve both P-acquisition and P-utilization efficiency under low-P conditions. © 2018 John Wiley & Sons Ltd.

  14. Resolution 8.069/12. It approves the regulations for the installation of large structures intended for wind power generation

    International Nuclear Information System (INIS)

    2012-01-01

    This resolution approves the regulations for the installation of large structures intended for wind power generation. The objective of this rule is to regulate the urban conditions of such facilities and to guarantee environmental protection, safety and the health of inhabitants.

  15. Technology for Obtaining Large Size Complex Oxide Crystals for Experiments on Muon-Electron Conversion Registration in High Energy Physics

    Directory of Open Access Journals (Sweden)

    Gerasymov, Ya.

    2014-11-01

    Technological approaches are proposed for growing high-quality, large-size scintillation crystals based on rare-earth silicates. A method of charging iridium crucibles using a eutectic phase instead of an oxyorthosilicate was developed.

  16. Effects of display set size and its variability on the event-related potentials during a visual search task

    OpenAIRE

    Miyatani, Makoto; Sakata, Sumiko

    1999-01-01

    This study investigated the effects of display set size and its variability on event-related potentials (ERPs) during a visual search task. In Experiment 1, subjects were required to respond if a visual display, which consisted of two, four, or six letters, contained one of the two members of a memory set. In Experiment 2, subjects detected a change in the shape of a fixation stimulus, which was surrounded by the same letters as in Experiment 1. In the search task (Experiment 1), the incr...

  17. Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors

    KAUST Repository

    Sang, Huiyan

    2011-12-01

    This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced-rank approximation. We pay special attention to the case in which the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
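
    The two-part approximation can be illustrated on a toy covariance matrix: a reduced-rank part from the leading eigenpairs plus a sparse (here simply diagonal) correction of the residual. This is a schematic sketch only; the paper uses a predictive-process low-rank term and a block-diagonal residual rather than the eigendecomposition and diagonal used below:

    ```python
    import numpy as np

    def two_part_approximation(cov, rank):
        """Reduced-rank part from the leading eigenpairs plus a sparse
        (diagonal) correction of the small-scale residual."""
        vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
        v, lam = vecs[:, -rank:], vals[-rank:]  # leading eigenpairs
        low_rank = (v * lam) @ v.T              # captures large-scale dependence
        residual = cov - low_rank               # small-scale error of low-rank part
        return low_rank + np.diag(np.diag(residual)), low_rank

    # Exponential covariance on a 1-D grid of 50 locations.
    x = np.linspace(0.0, 1.0, 50)
    cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)
    approx, low_rank = two_part_approximation(cov, rank=5)
    err_two_part = np.linalg.norm(cov - approx)
    err_low_rank = np.linalg.norm(cov - low_rank)
    # The correction can only lower the Frobenius error of the approximation.
    ```

    The payoff is computational: the low-rank factor and the sparse correction can both be stored and inverted far more cheaply than the full dense covariance.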

  18. A Core Set Based Large Vector-Angular Region and Margin Approach for Novelty Detection

    Directory of Open Access Journals (Sweden)

    Jiusheng Chen

    2016-01-01

    A large vector-angular region and margin (LARM) approach is presented for novelty detection based on imbalanced data. The key idea is to construct the largest vector-angular region in the feature space to separate normal training patterns while maximizing the vector-angular margin between the surface of this optimal vector-angular region and the abnormal training patterns. In order to improve the generalization performance of LARM, the vector-angular distribution is optimized by maximizing the vector-angular mean and minimizing the vector-angular variance, which separates the normal and abnormal examples well. However, the inherent quadratic programming (QP) solver takes O(n³) training time and at least O(n²) space, which may be computationally prohibitive for large-scale problems. Using a (1+ε)- and (1−ε)-approximation algorithm, a core set based LARM algorithm is proposed for fast training of the LARM problem. Experimental results on imbalanced datasets validate the efficiency of the proposed approach for novelty detection.
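
    The core-set idea invoked above can be illustrated with the classic Bădoiu-Clarkson (1+ε)-approximation for the minimum enclosing ball, the prototypical core-set algorithm; this is a generic sketch of that idea, not the paper's LARM training procedure:

    ```python
    def meb_center(points, eps):
        """Badoiu-Clarkson core-set iteration: after O(1/eps^2) updates the
        farthest-point distance is within (1 + eps) of the optimal radius."""
        c = list(points[0])
        for i in range(1, int(1 / eps ** 2) + 2):
            # Pull the center toward the current farthest point, step 1/(i+1).
            far = max(points, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, c)))
            c = [a + (b - a) / (i + 1) for a, b in zip(c, far)]
        return c

    pts = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0), (1.0, -1.0)]
    center = meb_center(pts, eps=0.1)
    radius = max(sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5 for p in pts)
    # Optimal ball: center (1, 0), radius 1; the iterate lands within 10%.
    ```

    The appeal, as in the abstract, is that each iteration touches only one extreme point, so the working set ("core set") stays tiny regardless of n.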

  19. Evaluation of digital soil mapping approaches with large sets of environmental covariates

    Science.gov (United States)

    Nussbaum, Madlene; Spiess, Kay; Baltensweiler, Andri; Grob, Urs; Keller, Armin; Greiner, Lucie; Schaepman, Michael E.; Papritz, Andreas

    2018-01-01

    The spatial assessment of soil functions requires maps of basic soil properties. Unfortunately, these are either missing for many regions or are not available at the desired spatial resolution or down to the required soil depth. The field-based generation of large soil datasets and conventional soil maps remains costly. Meanwhile, legacy soil data and comprehensive sets of spatial environmental data are available for many regions. Digital soil mapping (DSM) approaches relating soil data (responses) to environmental data (covariates) face the challenge of building statistical models from large sets of covariates originating, for example, from airborne imaging spectroscopy or multi-scale terrain analysis. We evaluated six approaches for DSM in three study regions in Switzerland (Berne, Greifensee, ZH forest) by mapping the effective soil depth available to plants (SD), pH, soil organic matter (SOM), effective cation exchange capacity (ECEC), clay, silt, gravel content and fine fraction bulk density for four soil depths (totalling 48 responses). Models were built from 300-500 environmental covariates by selecting linear models through (1) grouped lasso and (2) an ad hoc stepwise procedure for robust external-drift kriging (georob). For (3) geoadditive models we selected penalized smoothing spline terms by component-wise gradient boosting (geoGAM). We further used two tree-based methods: (4) boosted regression trees (BRTs) and (5) random forest (RF). Lastly, we computed (6) weighted model averages (MAs) from the predictions obtained from methods 1-5. Lasso, georob and geoGAM successfully selected strongly reduced sets of covariates (subsets of 3-6% of all covariates). Differences in predictive performance, tested on independent validation data, were mostly small and did not reveal a single best method for the 48 responses. Nevertheless, RF was often the best among methods 1-5 (28 of 48 responses), but was outcompeted by MA for 14 of these 28 responses. RF tended to over
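
    Weighted model averaging (method 6) can be sketched as follows; the inverse-validation-error weighting used here is one common choice and an assumption for illustration, not necessarily the weighting scheme used in the study:

    ```python
    def model_average(preds_by_model, val_errors):
        """Combine per-model predictions with weights proportional to the
        inverse of each model's validation mean-squared error."""
        weights = {m: 1.0 / e for m, e in val_errors.items()}
        total = sum(weights.values())
        n = len(next(iter(preds_by_model.values())))
        return [
            sum(weights[m] * preds_by_model[m][i] for m in preds_by_model) / total
            for i in range(n)
        ]

    # Two hypothetical soil-property models predicting pH at three sites.
    preds = {"rf": [6.0, 5.5, 7.0], "lasso": [6.4, 5.9, 7.4]}
    errors = {"rf": 0.10, "lasso": 0.30}  # validation MSEs
    blended = model_average(preds, errors)  # leans toward the better model
    ```

    With a 0.10 vs 0.30 validation error the blend weights the random forest 3:1 over the lasso, which is how a model average can beat its best single member when the members err in different places.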

  20. Body size evolution in an old insect order: No evidence for Cope's Rule in spite of fitness benefits of large size.

    Science.gov (United States)

    Waller, John T; Svensson, Erik I

    2017-09-01

    We integrate field data and phylogenetic comparative analyses to investigate causes of body size evolution and stasis in an old insect order: odonates ("dragonflies and damselflies"). Fossil evidence for "Cope's Rule" in odonates is weak or nonexistent since the last major extinction event 65 million years ago, yet selection studies show consistent positive selection for increased body size among adults. In particular, we find that large males in natural populations of the banded demoiselle (Calopteryx splendens) over several generations have consistent fitness benefits both in terms of survival and mating success. Additionally, there was no evidence for stabilizing or conflicting selection between fitness components within the adult life-stage. This lack of stabilizing selection during the adult life-stage was independently supported by a literature survey on different male and female fitness components from several odonate species. We did detect several significant body size shifts among extant taxa using comparative methods and a large new molecular phylogeny for odonates. We suggest that the lack of Cope's rule in odonates results from conflicting selection between fitness advantages of large adult size and costs of long larval development. We also discuss competing explanations for body size stasis in this insect group. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  1. Large increase in nest size linked to climate change: an indicator of life history, senescence and condition.

    Science.gov (United States)

    Møller, Anders Pape; Nielsen, Jan Tøttrup

    2015-11-01

    Many animals build extravagant nests that exceed the size required for successful reproduction. Large nests may signal the parenting ability of nest builders, suggesting that nests may have a signaling function. In particular, many raptors build very large nests for their body size. We studied nest size in the goshawk Accipiter gentilis, which is a top predator throughout most of the Nearctic. Both males and females build nests, and males provision their females and offspring with food. Nest volume in the goshawk is almost three-fold larger than predicted from body size. Nest size in the goshawk is highly variable and may reach more than 600 kg for a bird that weighs ca. 1 kg. Overall, 8.5% of nests fell down, with smaller nests falling more often than large nests. There was a hump-shaped relationship between nest volume and female age, with a decline in nest volume late in life, as expected under senescence. Clutch size increased with nest volume. Nest volume increased during 1977-2014 in an accelerating fashion, linked to increasing spring temperature during April, when goshawks build nests and start reproduction. These findings are consistent with nest size being a reliable signal of parental ability, with large nest size signaling superior parenting ability and senescence, and also indicating climate warming.

  2. Cybele: a large size ion source of module construction for Tore-Supra injector

    International Nuclear Information System (INIS)

    Simonin, A.; Garibaldi, P.

    2005-01-01

    A 70 keV, 40 A hydrogen beam injector has been developed at Cadarache for plasma diagnostic purposes (MSE and charge exchange diagnostics) on the Tore-Supra tokamak. This injector operates daily with a large size ion source (called Pagoda) which does not completely fulfill the requirements of the present experiment. As a consequence, the development of a new ion source (called Cybele) is underway, whose objective is to achieve a high proton fraction (>80%) and a current density of 160 mA/cm² within 5% uniformity over the whole extraction surface for long shot operation (from 1 to 100 s). Moreover, the main particularity of Cybele is its modular construction: it is composed of five source modules vertically juxtaposed, with a special orientation which fits the curved extraction surface of the injector; this curvature ensures geometrical focusing of the neutral beam 7 m downstream in the Tore-Supra chamber. Cybele will be tested first in positive ion production for the Tore-Supra injector, and afterwards in negative ion production mode; its modular concept could be advantageous for ensuring plasma uniformity over the large extraction surface (about 1 m²) of the ITER neutral beam injector. A module prototype (called the Drift Source) has already been developed and optimized in the laboratory for both positive and negative ion production, where it has met the ITER ion source requirements in terms of D⁻ current density (200 A/m²), source pressure (0.3 Pa), uniformity and arc efficiency (0.015 A D⁻/kW). (authors)

  3. Large explosive basaltic eruptions at Katla volcano, Iceland: Fragmentation, grain size and eruption dynamics

    Science.gov (United States)

    Schmith, Johanne; Höskuldsson, Ármann; Holm, Paul Martin; Larsen, Guðrún

    2018-04-01

    Katla volcano in Iceland produces hazardous large explosive basaltic eruptions on a regular basis, but very little quantitative data for future hazard assessments exist. Here details on fragmentation mechanism and eruption dynamics are derived from a study of deposit stratigraphy with detailed granulometry and grain morphology analysis, granulometric modeling, componentry and the new quantitative regularity index model of fragmentation mechanism. We show that magma/water interaction is important in the ash generation process, but to a variable extent. By investigating the large explosive basaltic eruptions from 1755 and 1625, we document that eruptions of similar size and magma geochemistry can have very different fragmentation dynamics. Our models show that fragmentation in the 1755 eruption was a combination of magmatic degassing and magma/water-interaction with the most magma/water-interaction at the beginning of the eruption. The fragmentation of the 1625 eruption was initially also a combination of both magmatic and phreatomagmatic processes, but magma/water-interaction diminished progressively during the later stages of the eruption. However, intense magma/water interaction was reintroduced during the final stages of the eruption dominating the fine fragmentation at the end. This detailed study of fragmentation changes documents that subglacial eruptions have highly variable interaction with the melt water showing that the amount and access to melt water changes significantly during eruptions. While it is often difficult to reconstruct the progression of eruptions that have no quantitative observational record, this study shows that integrating field observations and granulometry with the new regularity index can form a coherent model of eruption evolution.

  4. Strength and fatigue testing of large size wind turbines rotors. Vol. II: Full size natural vibration and static strength test, a reference case

    Energy Technology Data Exchange (ETDEWEB)

    Arias, F.; Soria, E.

    1996-12-01

    This report presents the methods and procedures selected to define a strength test for large-size wind turbines, in particular their application to a 500 kW blade and the results obtained in the test carried out in July 1995 at Asinel's test plant (Madrid). Henceforth, this project is referred to in abbreviated form by the acronym SFAT. (Author)

  5. Strength and fatigue testing of large size wind turbines rotors. Volume II. Full size natural vibration and static strength test, a reference case

    International Nuclear Information System (INIS)

    Arias, F.; Soria, E.

    1996-01-01

    This report presents the methods and procedures selected to define a strength test for large-size wind turbines, in particular their application to a 500 kW blade and the results obtained in the test carried out in July 1995 at Asinel's test plant (Madrid). Henceforth, this project is referred to in abbreviated form by the acronym SFAT. (Author)

  6. Prototyping a large field size IORT applicator for a mobile linear accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Janssen, Rogier W J; Dries, Wim J F [Catharina-Hospital Eindhoven, PO Box 1350, 5602 ZA, Eindhoven (Netherlands); Faddegon, Bruce A [University of California San Francisco Comprehensive Cancer Center, 1600 Divisadero Street, San Francisco, CA 94115-1708 (United States)], E-mail: rogier.janssen@mac.com

    2008-04-21

    The treatment of large tumors such as sarcomas with intra-operative radiotherapy using a Mobetron® is often complicated because of the limited field size of the primary collimator and the available applicators (max Ø 100 mm). To circumvent this limitation a prototype rectangular applicator of 80 × 150 mm² was designed and built featuring an additional scattering foil located at the top of the applicator. Because of its proven accuracy in modeling linear accelerator components the design was based on the EGSnrc Monte Carlo simulation code BEAMnrc. First, the Mobetron® treatment head was simulated both without an applicator and with a standard 100 mm applicator. Next, this model was used to design an applicator foil consisting of a rectangular Al base plate covering the whole beam and a pyramid of four stacked cylindrical slabs of different diameters centered on top of it. This foil was mounted on top of a plain rectangular Al tube. A prototype was built and tested with diode dosimetry in a water tank. Here, the prototype showed clinically acceptable 80 × 150 mm² dose distributions for 4 MeV, 6 MeV and 9 MeV, obviating the use of complicated multiple irradiations with abutting field techniques. In addition, the measurements agreed well with the MC simulations, typically within 2%/1 mm.

  7. Prototyping a large field size IORT applicator for a mobile linear accelerator

    International Nuclear Information System (INIS)

    Janssen, Rogier W J; Dries, Wim J F; Faddegon, Bruce A

    2008-01-01

    The treatment of large tumors such as sarcomas with intra-operative radiotherapy using a Mobetron® is often complicated because of the limited field size of the primary collimator and the available applicators (max Ø 100 mm). To circumvent this limitation a prototype rectangular applicator of 80 × 150 mm² was designed and built featuring an additional scattering foil located at the top of the applicator. Because of its proven accuracy in modeling linear accelerator components the design was based on the EGSnrc Monte Carlo simulation code BEAMnrc. First, the Mobetron® treatment head was simulated both without an applicator and with a standard 100 mm applicator. Next, this model was used to design an applicator foil consisting of a rectangular Al base plate covering the whole beam and a pyramid of four stacked cylindrical slabs of different diameters centered on top of it. This foil was mounted on top of a plain rectangular Al tube. A prototype was built and tested with diode dosimetry in a water tank. Here, the prototype showed clinically acceptable 80 × 150 mm² dose distributions for 4 MeV, 6 MeV and 9 MeV, obviating the use of complicated multiple irradiations with abutting field techniques. In addition, the measurements agreed well with the MC simulations, typically within 2%/1 mm.

  8. Preferential enrichment of large-sized very low density lipoprotein populations with transferred cholesteryl esters

    International Nuclear Information System (INIS)

    Eisenberg, S.

    1985-01-01

    The effect of lipid transfer proteins on the exchange and transfer of cholesteryl esters from rat plasma HDL2 to human very low density (VLDL) and low density (LDL) lipoprotein populations was studied. The use of a combination of radiochemical and chemical methods allowed separate assessment of [³H]cholesteryl ester exchange and of cholesteryl ester transfer. VLDL-I was the preferred acceptor for transferred cholesteryl esters, followed by VLDL-II and VLDL-III. LDL did not acquire cholesteryl esters. The contribution of exchange of [³H]cholesteryl esters to total transfer was highest for LDL and decreased in reverse order along the VLDL density range. Inactivation of lecithin:cholesterol acyltransferase (LCAT) and heating the HDL2 for 60 min at 56 °C accelerated transfer and exchange of [³H]cholesteryl esters. Addition of lipid transfer proteins increased cholesterol esterification in all systems. The data demonstrate that large-sized, triglyceride-rich VLDL particles are preferred acceptors for transferred cholesteryl esters. It is suggested that enrichment of very low density lipoproteins with cholesteryl esters reflects the triglyceride content of the particles

  9. Investigation of Low-Cost Surface Processing Techniques for Large-Size Multicrystalline Silicon Solar Cells

    Directory of Open Access Journals (Sweden)

    Yuang-Tung Cheng

    2010-01-01

    The subject of the present work is to develop a simple and effective method of enhancing conversion efficiency in large-size solar cells using multicrystalline silicon (mc-Si) wafers. In this work, industrial-type mc-Si solar cells with an area of 125 × 125 mm² were acid etched, together with POCl3 emitter formation and silicon nitride deposition by plasma-enhanced chemical vapor deposition (PECVD). The surface morphology and reflectivity of the different mc-Si etched surfaces are also discussed in this research. Using our optimal acid etching solution ratio, we are able to fabricate mc-Si solar cells with 16.34% conversion efficiency using a double-layer silicon nitride (Si3N4) coating. From our experiment, we find that depositing a double-layer silicon nitride coating on mc-Si solar cells gives the best performance parameters: open-circuit voltage (Voc) is 616 mV, short-circuit current density (Jsc) is 34.1 mA/cm², and the minority carrier diffusion length is 474.16 μm. The isotropic texturing and silicon nitride coating approach contributes to lowering cost and achieving high efficiency in mass production.

  10. Preparation and validation of a large size dried spike: Batch SAL-9951

    International Nuclear Information System (INIS)

    Doubek, N.; Jammet, G.; Zoigner, A.

    1991-02-01

    To determine uranium and plutonium concentration using isotope dilution mass spectrometry, weighed aliquands of a synthetic mixture containing about 2 mg of Pu (with a ²³⁹Pu abundance of about 98%) and 37 mg of U (with a ²³⁵U enrichment of about 19%) have been prepared by the IAEA-SAL and verified by three analytical laboratories: NMCC-SAL, OEFZS, IAEA-SAL; they will be used to spike samples of concentrated spent fuel solutions with a high burnup and a low ²³⁵U enrichment. Certified Reference Materials Pu-NBL-126, natural U-NBL-112A and 93% enriched U-NBL-116 were used to prepare a stock solution containing about 3.2 mg/ml of Pu and 64.3 mg/ml of 18.7% enriched U. Before shipment to the Reprocessing Plant, aliquands of the stock solution are dried to give Large Size Dried (LSD) Spikes which resist shocks encountered during transportation, so that they can readily be recovered quantitatively at the plant. This paper describes the preparation and validation of a fifth batch of LSD-Spike which is intended to be used as a common spike by the plant operator, the national and the IAEA inspectorates. 7 refs, 6 tabs

  11. Hydrophobic polymers modification of mesoporous silica with large pore size for drug release

    Energy Technology Data Exchange (ETDEWEB)

    Zhu Shenmin, E-mail: smzhu@sjtu.edu.c [Shanghai Jiao Tong University, State Key Lab of Metal Matrix Composites (China); Zhang Di; Yang Na [Fudan University, Ministry of Education, Key Lab of Molecular Engineering of Polymers (China)

    2009-04-15

    Mesostructured cellular foam (MCF) materials were modified with hydrophobic polyisoprene (PI) through free radical polymerization in the pore network, and the resulting materials (MCF-PI) were investigated as matrices for drug storage. The successful synthesis of PI inside MCF was characterized by Fourier transform infrared spectroscopy (FT-IR), proton nuclear magnetic resonance (¹H NMR), X-ray diffraction (XRD) and nitrogen adsorption/desorption measurements. Interestingly, the resulting system retained a relatively large pore size (19.5 nm) and pore volume (1.02 cm³ g⁻¹), which is beneficial for drug storage. Ibuprofen (IBU) and vancomycin were selected as model drugs and loaded onto unmodified MCF and modified MCF (MCF-PI). The adsorption capacities of these model drugs on MCF-PI were observed to increase compared to those on pure MCF, due to the trap effect induced by polyisoprene chains inside the pores. The MCF-PI delivery system was found to be more favorable for the adsorption of IBU (31 wt%, IBU/silica), possibly owing to the hydrophobic interaction between IBU and the PI formed on the internal surface of the MCF matrix. The release of drug through the porous network was investigated by measuring the uptake and release of IBU.

  12. Measurement of the thickness of the sprayed nickel coatings on large-sized cast iron products

    Directory of Open Access Journals (Sweden)

    В. А. Сясько

    2016-11-01

    Modern industries increasingly use automatic spraying of heat-resistant nickel coatings with a thickness of T = 1-3 mm on large-size parts made of cast iron with nodular graphite. The coating application process is characterized by time-dependent behavior of the coating's relative magnetic permeability, μc, which is a function of relaxation time (up to 24 hours) and varies from point to point on the surface. Aspects of the eddy-current phase method for measuring T are considered. The structure of four-winding eddy-current transformer transducers is described, and results of the calculation and optimization of their parameters are presented. The influence of controlled and interfering parameters is considered. Based on the above results, a two-channel combined transducer was developed, providing a measurement error of ΔT ≤ ±(0.03T + 0.02) mm in the shop environment, both during coating application and in final product inspection. Results of tests on reference specimens and of application in production processes are presented.

  13. Key Supplier Relationship Management: The Case of Croatian Medium-Sized and Large Manufacturing Companies

    Directory of Open Access Journals (Sweden)

    Dario Miočević

    2011-06-01

    Key supplier relationship management represents a vital organizational process. Companies should pay attention not only to managing customer relationships but also to managing relationships with suppliers in order to perform well. They should identify the extent to which a certain supplier adds value through the procurement process. In this line of reasoning, both theory and practice make a distinction between strategic (key) and non-strategic (transactional) suppliers. By employing segmentation of the supply market, companies balance their supplier portfolio and are capable of identifying the key suppliers. They can also develop specific programs and initiatives aimed at preserving these relationships. In the empirical part of the paper, a survey was conducted on a sample of 123 medium-sized and large Croatian manufacturing companies. A structural model involving the relationship between key supplier relationship management and value-oriented purchasing was tested. The results indicate that there is a direct, positive and statistically significant relationship between these two constructs. Likewise, the results show that the theoretical conceptualization and operationalization of the key supplier relationship management construct is both valid and justified. Finally, the theoretical and practical implications and limitations of this study are offered.

  14. Trees of unusual size: biased inference of early bursts from large molecular phylogenies.

    Directory of Open Access Journals (Sweden)

    Matthew W Pennell

    An early burst of speciation followed by a subsequent slowdown in the rate of diversification is commonly inferred from molecular phylogenies. This pattern is consistent with verbal theories of ecological opportunity and adaptive radiation. One often-overlooked source of bias in these studies is that of sampling at the level of whole clades, as researchers tend to choose large, speciose clades to study. In this paper, we investigate the performance of common methods across the distribution of clade sizes that can be generated by a constant-rate birth-death process. Clades which are larger than expected for a given constant-rate branching process tend to show a pattern of an early burst even when both speciation and extinction rates are constant through time. All methods evaluated were susceptible to detecting this false signature when extinction was low. Under moderate extinction, both the [Formula: see text]-statistic and diversity-dependent models did not detect such a slowdown, but only because the signature of a slowdown was masked by subsequent extinction. Some models which estimate time-varying speciation rates are able to detect early bursts under higher extinction rates, but are extremely prone to sampling bias. We suggest that examining clades in isolation may result in spurious inferences that rates of diversification have changed through time.
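The clade-size sampling issue described in this abstract can be illustrated with a small simulation (a sketch under assumed, illustrative parameters, not the authors' code): under a constant-rate pure-birth (Yule) process, clade size after a fixed time is geometrically distributed, so its heavy right tail routinely produces clades several times "larger than expected".

```python
import random

def yule_clade_size(rate, duration, rng):
    """Number of extant lineages after `duration` under a pure-birth
    (Yule) process: each of the n current lineages splits at rate `rate`."""
    n, t = 1, 0.0
    while True:
        t += rng.expovariate(n * rate)   # waiting time to the next split
        if t > duration:
            return n
        n += 1

rng = random.Random(42)
RATE, DUR = 1.0, 3.0                     # expected clade size: e**(RATE*DUR) ~ 20
sizes = sorted(yule_clade_size(RATE, DUR, rng) for _ in range(2000))

mean = sum(sizes) / len(sizes)
p90 = sizes[int(0.9 * len(sizes))]
# The geometric distribution's long tail: clades researchers tend to pick
# for diversification studies sit far above the expectation.
print(f"mean clade size {mean:.1f}, 90th percentile {p90}, max {sizes[-1]}")
```

Picking study clades from that upper tail is exactly the conditioning step that, per the abstract, can generate spurious early-burst signals.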

  15. Fabrication of large size alginate beads for three-dimensional cell-cluster culture

    Science.gov (United States)

    Zhang, Zhengtao; Ruan, Meilin; Liu, Hongni; Cao, Yiping; He, Rongxiang

    2017-08-01

    We fabricated large-size alginate beads using a simple microfluidic device under a co-axial injection regime. The device was made by PDMS casting with a mold formed from small-diameter metal and polytetrafluoroethylene tubes. Droplets of 2% sodium alginate were generated in soybean oil through the device and then cross-linked in a 2% CaCl2 solution mixed with Tween 80 at a concentration of 0.4 to 40% (w/v). Our results showed that the morphology of the produced alginate beads strongly depends on the Tween 80 concentration. With increasing Tween 80 concentration, the shape of the alginate beads varied from semi-spherical to tailed-spherical, due to the decrease of interfacial tension between the oil and the cross-linking solution. To assess the biocompatibility of the approach, MCF-7 cells were cultured with the alginate beads, showing the formation of cancer cell clusters which might be useful for future studies.

  16. Preparation and provisional validation of a large size dried spike: Batch SAL-9934

    International Nuclear Information System (INIS)

    Jammet, G.; Zoigner, A.; Doubek, N.; Aigner, H.; Deron, S.; Bagliano, G.

    1990-05-01

    To determine uranium and plutonium concentration using isotope dilution mass spectrometry, weighed aliquands of a synthetic mixture containing about 2 mg of Pu (with a ²³⁹Pu abundance of about 98%) and 40 mg of U (with a ²³⁵U enrichment of about 19%) have been prepared and verified by SAL to be used to spike samples of concentrated spent fuel solutions with a high burn-up and a low ²³⁵U enrichment. Certified Reference Materials Pu-NBL-126, natural U-NBS-960 and 93% enriched U-NBL-116 were used to prepare a stock solution containing 3.2 mg/ml of Pu and 64.3 mg/ml of 18.8% enriched U. Before shipment to the Reprocessing Plant, aliquands of the stock solution are dried to give Large Size Dried (LSD) Spikes which resist shocks encountered during transportation, so that they can readily be recovered quantitatively at the plant. This paper describes the preparation and validation of a third batch of LSD-Spike which is intended to be used as a common spike by the plant operator, the national and the IAEA inspectorates. 6 refs, 6 tabs

  17. Food hygiene training in small to medium-sized care settings.

    Science.gov (United States)

    Seaman, Phillip; Eves, Anita

    2008-10-01

    Adoption of safe food handling practices is essential to effectively manage food safety. This study explores the impact of basic or foundation level food hygiene training on the attitudes and intentions of food handlers in care settings, using questionnaires based on the Theory of Planned Behaviour. Interviews were also conducted with food handlers and their managers to ascertain beliefs about the efficacy of, perceived barriers to, and relevance of food hygiene training. Most food handlers had undertaken formal food hygiene training; however, many who had not yet received training were preparing food, including high risk foods. Appropriate pre-training support and on-going supervision appeared to be lacking, thus limiting the effectiveness of training. Findings showed Subjective Norm to be the most significant influence on food handlers' intention to perform safe food handling practices, irrespective of training status, emphasising the role of important others in determining desirable behaviours.

  18. Analyzing large data sets acquired through telemetry from rats exposed to organophosphorous compounds: an EEG study.

    Science.gov (United States)

    de Araujo Furtado, Marcio; Zheng, Andy; Sedigh-Sarvestani, Madineh; Lumley, Lucille; Lichtenstein, Spencer; Yourick, Debra

    2009-10-30

    The organophosphorous compound soman is an acetylcholinesterase inhibitor that causes damage to the brain. Exposure to soman causes neuropathology as a result of prolonged and recurrent seizures. In the present study, long-term recordings of cortical EEG were used to develop an unbiased means to quantify measures of seizure activity in a large data set while excluding other signal types. Rats were implanted with telemetry transmitters and exposed to soman followed by treatment with therapeutics similar to those administered in the field after nerve agent exposure. EEG, activity and temperature were recorded continuously for a minimum of 2 days pre-exposure and 15 days post-exposure. A set of automatic MATLAB algorithms have been developed to remove artifacts and measure the characteristics of long-term EEG recordings. The algorithms use short-time Fourier transforms to compute the power spectrum of the signal for 2-s intervals. The spectrum is then divided into the delta, theta, alpha, and beta frequency bands. A linear fit to the power spectrum is used to distinguish normal EEG activity from artifacts and high amplitude spike wave activity. Changes in time spent in seizure over a prolonged period are a powerful indicator of the effects of novel therapeutics against seizures. A graphical user interface has been created that simultaneously plots the raw EEG in the time domain, the power spectrum, and the wavelet transform. Motor activity and temperature are associated with EEG changes. The accuracy of this algorithm is also verified against visual inspection of video recordings up to 3 days after exposure.
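The processing chain described here (2-s windows, short-time Fourier transform, band powers, linear spectral fit) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' MATLAB code; the sampling rate, band edges and test signals are assumed.

```python
import numpy as np

FS = 256                       # sampling rate in Hz (assumed, not from the paper)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def window_features(segment, fs=FS):
    """Features of one 2-s EEG window: per-band spectral power and the
    slope of a linear fit to the log-log power spectrum (1-30 Hz)."""
    seg = segment - segment.mean()
    spec = np.abs(np.fft.rfft(seg)) ** 2            # power spectrum
    freqs = np.fft.rfftfreq(len(seg), d=1.0 / fs)
    band_power = {name: spec[(freqs >= lo) & (freqs < hi)].sum()
                  for name, (lo, hi) in BANDS.items()}
    sel = (freqs >= 1) & (freqs <= 30)
    slope, _ = np.polyfit(np.log(freqs[sel]), np.log(spec[sel] + 1e-12), 1)
    return band_power, slope

# Illustrative signals: an alpha-dominated "normal" window vs broadband noise.
t = np.arange(2 * FS) / FS
rng = np.random.default_rng(0)
normal = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)

bp_n, slope_n = window_features(normal)
bp_x, slope_x = window_features(noise)
print("alpha dominates delta in the 10 Hz window:", bp_n["alpha"] > bp_n["delta"])
```

In a pipeline like the one described, thresholds on the band powers and on the fitted slope would separate normal EEG from artifacts and high-amplitude spike-wave activity, and flagged windows would be accumulated into time-in-seizure per day.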

  19. High-Throughput Tabular Data Processor - Platform independent graphical tool for processing large data sets.

    Science.gov (United States)

    Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz

    2018-01-01

    High-throughput technologies generate considerable amounts of data, which often require bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data that is produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expressions, or command line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease predisposing variants in next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merge, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility, in terms of input file handling, provides long term potential functionality in high-throughput analysis pipelines, as the program is not limited by the currently existing applications and data formats. HTDP is available as Open Source software (https://github.com/pmadanecki/htdp).
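HTDP itself is a Java GUI, but the core operation it automates — filtering character-delimited column data against an itemized criteria file — is easy to sketch. The table, column names and thresholds below are hypothetical, not taken from HTDP.

```python
import csv
import io

# Hypothetical variant table (tab-delimited, BED/VCF-like) and a criteria
# list of gene symbols, standing in for HTDP's external condition files.
variants = io.StringIO(
    "chrom\tpos\tgene\tdepth\n"
    "chr1\t101\tNF1\t35\n"
    "chr2\t202\tBRCA2\t12\n"
    "chr17\t303\tNF1\t58\n"
)
criteria = {"NF1"}             # genes of interest from an external file

reader = csv.DictReader(variants, delimiter="\t")
# Keep rows matching the gene list AND passing a read-depth filter.
kept = [row for row in reader
        if row["gene"] in criteria and int(row["depth"]) >= 20]

for row in kept:
    print(row["chrom"], row["pos"], row["gene"])
```

The same pattern (stream rows, test each against externally supplied conditions) scales to the merge/reduce/filter tasks the abstract mentions.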

  20. A Full-size High Temperature Superconducting Coil Employed in a Wind Turbine Generator Set-up

    DEFF Research Database (Denmark)

    Song, Xiaowei (Andy); Mijatovic, Nenad; Kellers, Jürgen

    2016-01-01

    A full-size stationary experimental set-up, which is a pole pair segment of a 2 MW high temperature superconducting (HTS) wind turbine generator, has been built and tested under the HTS-GEN project in Denmark. The performance of the HTS coil is crucial to the set-up and, further, to the development of the generator. The coil is tested in LN2 first, and then tested in the set-up so that the magnetic environment in a real generator is reflected. The experimental results are reported, followed by a finite element simulation and a discussion on the deviation of the results. The tested and estimated Ic in LN2 are 148 A and 143 A...

  1. Considerations for Observational Research Using Large Data Sets in Radiation Oncology

    Energy Technology Data Exchange (ETDEWEB)

    Jagsi, Reshma, E-mail: rjagsi@med.umich.edu [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Bekelman, Justin E. [Departments of Radiation Oncology and Medical Ethics and Health Policy, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania (United States); Chen, Aileen [Department of Radiation Oncology, Harvard Medical School, Boston, Massachusetts (United States); Chen, Ronald C. [Department of Radiation Oncology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina (United States); Hoffman, Karen [Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Tina Shih, Ya-Chen [Department of Medicine, Section of Hospital Medicine, The University of Chicago, Chicago, Illinois (United States); Smith, Benjamin D. [Department of Radiation Oncology, Division of Radiation Oncology, and Department of Health Services Research, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Yu, James B. [Yale School of Medicine, New Haven, Connecticut (United States)

    2014-09-01

    The radiation oncology community has witnessed growing interest in observational research conducted using large-scale data sources such as registries and claims-based data sets. With the growing emphasis on observational analyses in health care, the radiation oncology community must possess a sophisticated understanding of the methodological considerations of such studies in order to evaluate evidence appropriately to guide practice and policy. Because observational research has unique features that distinguish it from clinical trials and other forms of traditional radiation oncology research, the International Journal of Radiation Oncology, Biology, Physics assembled a panel of experts in health services research to provide a concise and well-referenced review, intended to be informative for the lay reader, as well as for scholars who wish to embark on such research without prior experience. This review begins by discussing the types of research questions relevant to radiation oncology that large-scale databases may help illuminate. It then describes major potential data sources for such endeavors, including information regarding access and insights regarding the strengths and limitations of each. Finally, it provides guidance regarding the analytical challenges that observational studies must confront, along with discussion of the techniques that have been developed to help minimize the impact of certain common analytical issues in observational analysis. Features characterizing a well-designed observational study include clearly defined research questions, careful selection of an appropriate data source, consultation with investigators with relevant methodological expertise, inclusion of sensitivity analyses, caution not to overinterpret small but significant differences, and recognition of limitations when trying to evaluate causality. 
This review concludes that carefully designed and executed studies using observational data that possess these qualities hold

  2. Considerations for Observational Research Using Large Data Sets in Radiation Oncology

    International Nuclear Information System (INIS)

    Jagsi, Reshma; Bekelman, Justin E.; Chen, Aileen; Chen, Ronald C.; Hoffman, Karen; Tina Shih, Ya-Chen; Smith, Benjamin D.; Yu, James B.

    2014-01-01

    The radiation oncology community has witnessed growing interest in observational research conducted using large-scale data sources such as registries and claims-based data sets. With the growing emphasis on observational analyses in health care, the radiation oncology community must possess a sophisticated understanding of the methodological considerations of such studies in order to evaluate evidence appropriately to guide practice and policy. Because observational research has unique features that distinguish it from clinical trials and other forms of traditional radiation oncology research, the International Journal of Radiation Oncology, Biology, Physics assembled a panel of experts in health services research to provide a concise and well-referenced review, intended to be informative for the lay reader, as well as for scholars who wish to embark on such research without prior experience. This review begins by discussing the types of research questions relevant to radiation oncology that large-scale databases may help illuminate. It then describes major potential data sources for such endeavors, including information regarding access and insights regarding the strengths and limitations of each. Finally, it provides guidance regarding the analytical challenges that observational studies must confront, along with discussion of the techniques that have been developed to help minimize the impact of certain common analytical issues in observational analysis. Features characterizing a well-designed observational study include clearly defined research questions, careful selection of an appropriate data source, consultation with investigators with relevant methodological expertise, inclusion of sensitivity analyses, caution not to overinterpret small but significant differences, and recognition of limitations when trying to evaluate causality. This review concludes that carefully designed and executed studies using observational data that possess these qualities hold

  3. Registering coherent change detection products associated with large image sets and long capture intervals

    Science.gov (United States)

    Perkins, David Nikolaus; Gonzales, Antonio I

    2014-04-08

    A set of co-registered coherent change detection (CCD) products is produced from a set of temporally separated synthetic aperture radar (SAR) images of a target scene. A plurality of transformations are determined, which transformations are respectively for transforming a plurality of the SAR images to a predetermined image coordinate system. The transformations are used to create, from a set of CCD products produced from the set of SAR images, a corresponding set of co-registered CCD products.

  4. Large-size deployable construction heated by solar irradiation in free space

    Science.gov (United States)

    Pestrenina, Irena; Kondyurin, Alexey; Pestrenin, Valery; Kashin, Nickolay; Naymushin, Alexey

    Large-size deployable construction in free space with subsequent direct curing was invented more than fifteen years ago (Briskman et al., 1997; Kondyurin, 1998). It raised a number of scientific problems, one of which is the possibility of using solar energy to initiate the curing reaction. This paper investigates the curing process under solar irradiation during a space flight in Earth orbits. Rotation of the construction is considered; this motion can provide the optimal temperature distribution in the construction required for the polymerization reaction. The construction studied is a cylinder of 80 m length with two hemispherical ends of 10 m radius. Its wall, a 10 mm carbon fiber/epoxy matrix composite, is heated by the solar flux and radiates heat from the external surface according to the Stefan-Boltzmann law. The stage of the polymerization reaction is calculated as a function of temperature and time, based on laboratory experiments with certified composite materials for space exploitation. The curing kinetics of the composite is calculated for low Earth orbits of different inclination (300 km altitude) and for geostationary Earth orbit (40000 km altitude). The results show that: • the curing process depends strongly on the Earth orbit and the rotation of the construction; • an optimal flight orbit and rotation can be found that provide a thermal regime sufficient for complete curing of the considered construction. The study is supported by RFBR grant No. 12-08-00970-a. 1. Briskman V., A. Kondyurin, K. Kostarev, V. Leontyev, M. Levkovich, A. Mashinsky, G. Nechitailo, T. Yudina, Polymerization in microgravity as a new process in space technology, Paper No IAA-97-IAA.12.1.07, 48th International Astronautical Congress, October 6-10, 1997, Turin, Italy. 2. Kondyurin A.V., Building the shells of large space stations by the polymerisation of epoxy composites in open space, Int. Polymer Sci. and Technol., v.25, N4
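The radiative balance underlying the thermal calculation can be illustrated in miniature: a sun-facing surface that absorbs the solar flux and re-radiates by the Stefan-Boltzmann law settles at an equilibrium temperature. The absorptivity and emissivity values below are assumed for illustration, not taken from the paper.

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
Q_SUN = 1361.0          # solar constant near Earth, W m^-2

def equilibrium_temperature(absorptivity, emissivity, radiating_area_ratio=1.0):
    """Steady-state temperature of a sun-facing surface that absorbs
    solar flux on one face and re-radiates from `radiating_area_ratio`
    times that face area (1.0 = one-sided radiation)."""
    return (absorptivity * Q_SUN
            / (emissivity * SIGMA * radiating_area_ratio)) ** 0.25

# Illustrative optical properties for a dark composite skin (assumed)
T = equilibrium_temperature(absorptivity=0.9, emissivity=0.85)
print(f"{T - 273.15:.1f} deg C")
```

In practice the temperature varies around the orbit (eclipse, incidence angle, rotation), which is why the paper integrates the cure kinetics over temperature histories rather than using a single equilibrium value.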

  5. Response to a Large Polio Outbreak in a Setting of Conflict - Middle East, 2013-2015.

    Science.gov (United States)

    Mbaeyi, Chukwuma; Ryan, Michael J; Smith, Philip; Mahamud, Abdirahman; Farag, Noha; Haithami, Salah; Sharaf, Magdi; Jorba, Jaume C; Ehrhardt, Derek

    2017-03-03

    As the world advances toward the eradication of polio, outbreaks of wild poliovirus (WPV) in polio-free regions pose a substantial risk to the timeline for global eradication. Countries and regions experiencing active conflict, chronic insecurity, and large-scale displacement of persons are particularly vulnerable to outbreaks because of the disruption of health care and immunization services (1). A polio outbreak occurred in the Middle East, beginning in Syria in 2013 with subsequent spread to Iraq (2). The outbreak occurred 2 years after the onset of the Syrian civil war, resulted in 38 cases, and was the first time WPV was detected in Syria in approximately a decade (3,4). The national governments of eight countries designated the outbreak a public health emergency and collaborated with partners in the Global Polio Eradication Initiative (GPEI) to develop a multiphase outbreak response plan focused on improving the quality of acute flaccid paralysis (AFP) surveillance* and administering polio vaccines to >27 million children during multiple rounds of supplementary immunization activities (SIAs). † Successful implementation of the response plan led to containment and interruption of the outbreak within 6 months of its identification. The concerted approach adopted in response to this outbreak could serve as a model for responding to polio outbreaks in settings of conflict and political instability.

  6. ObspyDMT: a Python toolbox for retrieving and processing large seismological data sets

    Directory of Open Access Journals (Sweden)

    K. Hosseini

    2017-10-01

    Full Text Available. We present obspyDMT, a free, open-source software toolbox for the query, retrieval, processing and management of seismological data sets, including very large, heterogeneous and/or dynamically growing ones. ObspyDMT simplifies and speeds up user interaction with data centers, in more versatile ways than existing tools. The user is shielded from the complexities of interacting with different data centers and data exchange protocols and is provided with powerful diagnostic and plotting tools to check the retrieved data and metadata. While primarily a productivity tool for research seismologists and observatories, easy-to-use syntax and plotting functionality also make obspyDMT an effective teaching aid. Written in the Python programming language, it can be used as a stand-alone command-line tool (requiring no knowledge of Python) or can be integrated as a module with other Python codes. It facilitates data archiving, preprocessing, instrument correction and quality control – routine but nontrivial tasks that can consume much user time. We describe obspyDMT's functionality, design and technical implementation, accompanied by an overview of its use cases. As an example of a typical problem encountered in seismogram preprocessing, we show how to check for inconsistencies in response files of two example stations. We also demonstrate the fully automated request, remote computation and retrieval of synthetic seismograms from the Synthetics Engine (Syngine) web service of the Data Management Center (DMC) at the Incorporated Research Institutions for Seismology (IRIS).

  7. Development of estrogen receptor beta binding prediction model using large sets of chemicals.

    Science.gov (United States)

    Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao

    2017-11-03

    We developed an ERβ binding prediction model to facilitate identification of chemicals that specifically bind ERβ or ERα, complementing our previously developed ERα binding model. Decision Forest was used to train the ERβ binding prediction model on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross-validation. Prediction confidence was analyzed using predictions from the cross-validations. Informative chemical features for ERβ binding were identified by analyzing the frequency of the chemical descriptors used in the models across the 5-fold cross-validations. 1000 permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross-validations was 93.14%, with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the prediction. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrate that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model is expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
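The validation protocol described above (repeated k-fold cross-validation) can be sketched as follows. The FDA's Decision Forest implementation is not assumed here; a random forest and synthetic data stand in for it, and the iteration count is reduced from the paper's 1000 to keep the sketch fast. All names and numbers are illustrative, not from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for the EADB binder/non-binder data
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Repeated 5-fold cross-validation: 10 iterations instead of 1000,
# RandomForestClassifier instead of Decision Forest.
accs = []
for it in range(10):
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=it)
    clf = RandomForestClassifier(n_estimators=100, random_state=it)
    accs.extend(cross_val_score(clf, X, y, cv=cv, scoring="accuracy"))

print(f"mean accuracy {np.mean(accs):.3f} +/- {np.std(accs):.3f}")
```

Reporting the mean and standard deviation over all folds and iterations, as the abstract does (93.14% ± 0.64%), summarizes both model skill and the variability induced by the random partitioning.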

  8. ObspyDMT: a Python toolbox for retrieving and processing large seismological data sets

    Science.gov (United States)

    Hosseini, Kasra; Sigloch, Karin

    2017-10-01

    We present obspyDMT, a free, open-source software toolbox for the query, retrieval, processing and management of seismological data sets, including very large, heterogeneous and/or dynamically growing ones. ObspyDMT simplifies and speeds up user interaction with data centers, in more versatile ways than existing tools. The user is shielded from the complexities of interacting with different data centers and data exchange protocols and is provided with powerful diagnostic and plotting tools to check the retrieved data and metadata. While primarily a productivity tool for research seismologists and observatories, easy-to-use syntax and plotting functionality also make obspyDMT an effective teaching aid. Written in the Python programming language, it can be used as a stand-alone command-line tool (requiring no knowledge of Python) or can be integrated as a module with other Python codes. It facilitates data archiving, preprocessing, instrument correction and quality control - routine but nontrivial tasks that can consume much user time. We describe obspyDMT's functionality, design and technical implementation, accompanied by an overview of its use cases. As an example of a typical problem encountered in seismogram preprocessing, we show how to check for inconsistencies in response files of two example stations. We also demonstrate the fully automated request, remote computation and retrieval of synthetic seismograms from the Synthetics Engine (Syngine) web service of the Data Management Center (DMC) at the Incorporated Research Institutions for Seismology (IRIS).

  9. The research of the quantitative prediction of the deposits concentrated regions of the large and super-large sized mineral deposits in China

    International Nuclear Information System (INIS)

    Zhao Zhenyu; Wang Shicheng

    2003-01-01

    Using the general theory and methods of synthetic-information mineral resources prognosis, a locative and quantitative prediction of large and super-large sized solid mineral deposits at 1:5,000,000 scale was developed for China. The deposit concentrated regions are the model units and the anomaly concentrated regions are the prediction units. The synthetic-information mineral prognosis is developed on a GIS platform. The technical route and working method for locating large and super-large sized mineral resources are described, together with the basic principles for compiling the attribute tables of the predictor variables and the response variables. In the prediction of resource quantity, the locative and quantitative predictions are processed separately by quantification theory III and by corresponding characteristic analysis, and the two methods are compared. The approach is particularly important and helpful for resource prediction in the ten western provinces of China. (authors)

  10. Design of large size segmented GEM foils and Drift PCB for CBM MUCH

    International Nuclear Information System (INIS)

    Saini, J.; Dubey, A.K.; Chattopadhyay, S.

    2016-01-01

    Triple GEM (Gas Electron Multiplier) sector-shaped detectors will be used for muon tracking in the Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR) at Darmstadt, Germany. The detector modules in the Muon Chambers (MUCH) are of the order of 1 m in size, with an active area of about 75 cm. A progressive pad geometry is chosen for the readout of these detectors. In the construction of these chambers, three GEM foils are stacked on top of each other in a 3/2/2/2 gap configuration. Each GEM foil is a double-sided copper-clad, 50 μm thick Kapton foil with millions of holes in it. Foils of large surface area are prone to damage from discharges owing to the high capacitance of the foil; hence, their top surfaces are divided into segments of about 100 cm2. Further segmentation may be necessary where there are high-rate requirements, as in the case of CBM. For the GEM foils of the CBM MUCH, a 24-segment layout has been adopted. A short circuit in any of the GEM holes would make the entire foil unusable. To reduce such occurrences, segment-to-segment isolation using an opto-coupler in series with each GEM-foil segment has been introduced. Hence, a novel design of the GEM chamber drift PCB and foils has been made. In this scheme, each segment is powered and controlled individually. At the same time, the design takes into account the space constraints, not only in the x-y plane but also in z, due to the compact assembly of the MUCH detector layers.

  11. Growth of large-size-two-dimensional crystalline pentacene grains for high performance organic thin film transistors

    Directory of Open Access Journals (Sweden)

    Chuan Du

    2012-06-01

    Full Text Available. A new approach is presented for the growth of pentacene crystalline thin films with large grain size. Modification of the dielectric surface using a monolayer of a small molecule results in the formation of pentacene thin films with well-ordered, large crystalline domain structures. This suggests that pentacene molecules may have a significantly large diffusion constant on the modified surface. An average hole mobility of about 1.52 cm2/Vs is achieved for pentacene-based organic thin film transistors (OTFTs) with good reproducibility.

  12. Optimization of the size and shape of the set-in nozzle for a PWR reactor pressure vessel

    Energy Technology Data Exchange (ETDEWEB)

    Murtaza, Usman Tariq, E-mail: maniiut@yahoo.com; Javed Hyder, M., E-mail: hyder@pieas.edu.pk

    2015-04-01

    Highlights: • The size and shape of the set-in nozzle of the RPV have been optimized. • The optimized nozzle ensures a mass reduction of about 198 kg per nozzle. • The mass of the RPV should be minimized for better fracture toughness. - Abstract: The objective of this research work is to optimize the size and shape of the set-in nozzle for a typical reactor pressure vessel (RPV) of a 300 MW pressurized water reactor. The analysis was performed by optimizing the four design variables which control the size and shape of the nozzle: the inner radius of the nozzle, the thickness of the nozzle, the taper angle at the nozzle-cylinder intersection, and the point where the taper of the nozzle starts. It is concluded that the optimum design of the nozzle is the one that minimizes the two conflicting state variables, i.e., the stress intensity (Tresca yield criterion) and the mass of the RPV.
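A four-variable constrained optimization of the kind described can be sketched with scipy. In the paper the stress and mass come from finite-element analysis; the closed-form surrogates, starting point, bounds, and weighting below are purely illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth surrogates standing in for the FE model
# (illustrative only, not the paper's actual response functions).
def mass(x):       # x = (inner radius r, thickness t, taper angle a, taper start s)
    r, t, a, s = x
    return t * (2 * np.pi * r) * (1.0 + 0.02 * s) * (1.0 + 0.01 * a)

def stress(x):
    r, t, a, s = x
    return r / t * (1.0 + 0.5 / (1.0 + a)) * (1.0 + 0.1 / (1.0 + s))

def objective(x, w=0.5):
    # Weighted sum of the two conflicting criteria, roughly normalized
    return w * mass(x) / 1e4 + (1 - w) * stress(x) / 50.0

x0 = np.array([400.0, 90.0, 20.0, 10.0])            # mm, mm, deg, mm (assumed)
bounds = [(300, 500), (60, 120), (5, 45), (0, 50)]  # assumed design window
res = minimize(objective, x0, bounds=bounds, method="L-BFGS-B")
print(res.x, objective(res.x))
```

Scalarizing the two objectives into a weighted sum is the simplest way to handle conflicting criteria; sweeping the weight w traces out a set of trade-off (Pareto) designs.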

  13. Gastro-oesophageal reflux in large-sized, deep-chested versus small-sized, barrel-chested dogs undergoing spinal surgery in sternal recumbency.

    Science.gov (United States)

    Anagnostou, Tilemahos L; Kazakos, George M; Savvas, Ioannis; Kostakis, Charalampos; Papadopoulou, Paraskevi

    2017-01-01

    The aim of this study was to investigate whether an increased frequency of gastro-oesophageal reflux (GOR) is more common in large-sized, deep-chested dogs undergoing spinal surgery in sternal recumbency than in small-sized, barrel-chested dogs. Prospective, cohort study. Nineteen small-sized, barrel-chested dogs (group B) and 26 large-sized, deep-chested dogs (group D). All animals were premedicated with intramuscular acepromazine (0.05 mg kg⁻¹) and pethidine (3 mg kg⁻¹). Anaesthesia was induced with intravenous sodium thiopental and maintained with halothane in oxygen. Lower oesophageal pH was monitored continuously after induction of anaesthesia. Gastro-oesophageal reflux was considered to have occurred whenever pH values > 7.5 or < 4 were recorded. If GOR was detected during anaesthesia, measures were taken to avoid aspiration of gastric contents into the lungs and to prevent the development of oesophagitis/oesophageal stricture. The frequency of GOR during anaesthesia was significantly higher in group D (6/26 dogs; 23.07%) than in group B (0/19 dogs; 0%) (p = 0.032). Signs indicative of aspiration pneumonia, oesophagitis or oesophageal stricture were not reported in any of the GOR cases. In large-sized, deep-chested dogs undergoing spinal surgery in sternal recumbency, it would seem prudent to consider measures aimed at preventing GOR and its potentially devastating consequences (oesophagitis/oesophageal stricture, aspiration pneumonia). Copyright © 2016 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and Analgesia. Published by Elsevier Ltd. All rights reserved.

  14. Validation and evaluation of common large-area display set (CLADS) performance specification

    Science.gov (United States)

    Hermann, David J.; Gorenflo, Ronald L.

    1998-09-01

    Battelle is under contract with Warner Robins Air Logistics Center to design a Common Large Area Display Set (CLADS) for use in multiple Command, Control, Communications, Computers, and Intelligence (C4I) applications that currently use 19-inch Cathode Ray Tubes (CRTs). Battelle engineers have built and fully tested pre-production prototypes of the CLADS design for AWACS and are completing pre-production prototype displays for three other platforms simultaneously. With the CLADS design, any display technology that can be packaged to meet the form, fit, and function requirements defined by the Common Large Area Display Head Assembly (CLADHA) performance specification is a candidate for CLADS applications. This technology-independent feature reduced the risk of CLADS development, permits life-long technology-insertion upgrades without unnecessary redesign, and addresses many of the obsolescence problems associated with COTS technology-based acquisition. Performance and environmental testing were performed on the AWACS CLADS and continue on other platforms as part of the performance-specification validation process. A simulator assessment and flight assessment were successfully completed for the AWACS CLADS, and lessons learned from these assessments are being incorporated into the performance specifications. Draft CLADS specifications were released to potential display integrators and manufacturers for review in 1997, and the final version of the performance specifications is scheduled to be released to display integrators and manufacturers in May 1998. Initial USAF applications include replacements for the E-3 AWACS color monitor assembly, E-8 Joint STARS graphics display unit, and ABCCC airborne color display. Initial U.S. Navy applications include the E-2C ACIS display. For these applications, reliability and maintainability are key objectives. The common design will reduce the cost of operation and maintenance by an estimated 3.3M per year on E-3 AWACS

  15. Size and shape characteristics of drumlins, derived from a large sample, and associated scaling laws

    Science.gov (United States)

    Clark, Chris D.; Hughes, Anna L. C.; Greenwood, Sarah L.; Spagnolo, Matteo; Ng, Felix S. L.

    2009-04-01

    Ice sheets flowing across a sedimentary bed usually produce a landscape of blister-like landforms streamlined in the direction of the ice flow, with each bump of the order of 10² to 10³ m in length and 10¹ m in relief. Such landforms, known as drumlins, have mystified investigators for over a hundred years. A satisfactory explanation for their formation, and thus an appreciation of their glaciological significance, has remained elusive. A recent advance has been in numerical modelling of the land-forming process. In anticipation of future modelling endeavours, this paper is motivated by the requirement for robust data on drumlin size and shape for model testing. From a systematic programme of drumlin mapping from digital elevation models and satellite images of Britain and Ireland, we used a geographic information system to compile a range of statistics on length L, width W, and elongation ratio E (where E = L/W) for a large sample. Mean L is found to be 629 m (n = 58,983), mean W is 209 m, and mean E is 2.9 (n = 37,043). Most drumlins are between 250 and 1000 metres in length, between 120 and 300 metres in width, and between 1.7 and 4.1 times as long as they are wide. Analysis of such data and plots of drumlin width against length reveals some new insights. All frequency distributions are unimodal, from which we infer that the geomorphological label of 'drumlin' is fair in that this is a true single population of landforms, rather than an amalgam of different landform types. Drumlin size shows a clear minimum bound of around 100 m (horizontal). Maybe drumlins are generated at many scales and this is the minimum, or this value may be an indication of the fundamental scale of bump generation ('proto-drumlins') prior to them growing and elongating. A relationship between drumlin width and length is found (with r² = 0.48), approximately W = 7L^(1/2) when measured in metres. A surprising and sharply-defined line bounds the data cloud plotted in E-W
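The reported width-length scaling can be sanity-checked against the sample means quoted in the abstract (only those figures are used here; since the fit has r² = 0.48, some gap between the predicted and observed mean width is expected).

```python
import math

mean_L = 629.0   # mean drumlin length, m (from the abstract)
mean_W = 209.0   # mean drumlin width, m (from the abstract)

def predicted_width(L):
    """Empirical scaling reported in the abstract: W ≈ 7 * L**0.5 (metres)."""
    return 7.0 * math.sqrt(L)

W_hat = predicted_width(mean_L)
print(f"predicted {W_hat:.0f} m vs observed mean {mean_W:.0f} m")
```

The square-root form implies that elongation E = L/W grows roughly as L^(1/2): longer drumlins are disproportionately narrow, consistent with the elongation statistics reported.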

  16. Determining the Variability of Lesion Size Measurements from CT Patient Data Sets Acquired under “No Change” Conditions

    Directory of Open Access Journals (Sweden)

    Michael F. McNitt-Gray

    2015-02-01

    Full Text Available. PURPOSE: To determine the variability of lesion size measurements in computed tomography data sets of patients imaged under a “no change” (“coffee break”) condition and to determine the impact of two reading paradigms on measurement variability. METHOD AND MATERIALS: Using data sets from 32 non-small cell lung cancer patients scanned twice within 15 minutes (“no change”), measurements were performed by five radiologists in two phases: (1) independent reading of each computed tomography data set (timepoint); (2) a locked, sequential reading of data sets. Readers performed measurements using several sizing methods, including one-dimensional (1D) longest in-slice dimension and 3D semi-automated segmented volume. Change in size was estimated by comparing measurements performed at both timepoints for the same lesion, for each reader and each measurement method. For each reading paradigm, results were pooled across lesions, across readers, and across both readers and lesions, for each measurement method. RESULTS: The mean percent difference (±SD) when pooled across both readers and lesions for 1D and 3D measurements extracted from contours was 2.8 ± 22.2% and 23.4 ± 105.0%, respectively, for the independent reads. For the locked, sequential reads, the mean percent differences (±SD) were reduced to 2.52 ± 14.2% and 7.4 ± 44.2% for the 1D and 3D measurements, respectively. CONCLUSION: Even under a “no change” condition between scans, there is variation in lesion size measurements due to repeat scans and variations in reader, lesion, and measurement method. This variation is reduced when using a locked, sequential reading paradigm compared to an independent reading paradigm.
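The per-lesion percent-difference metric pooled in the results can be sketched as follows; the paired measurements below are hypothetical, and the change is taken relative to the first scan (an assumption, since the abstract does not state the denominator).

```python
import numpy as np

def percent_change(first, second):
    """Per-lesion percent size change between two 'no change' scans,
    relative to the first measurement."""
    first, second = np.asarray(first, float), np.asarray(second, float)
    return (second - first) / first * 100.0

# Hypothetical repeat 1D measurements (mm) of five lesions by one reader
scan1 = [12.0, 25.0, 8.0, 30.0, 15.0]
scan2 = [12.5, 24.0, 8.5, 31.0, 14.0]

d = percent_change(scan1, scan2)
print(f"mean {d.mean():+.1f}%, SD {d.std(ddof=1):.1f}%")
```

A mean near zero with a nonzero SD is exactly the pattern the study reports: no systematic change, but appreciable measurement noise that any response criterion must exceed to be meaningful.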

  17. On sets of vectors of a finite vector space in which every subset of basis size is a basis II

    OpenAIRE

    Ball, Simeon; De Beule, Jan

    2012-01-01

    This article contains a proof of the MDS conjecture for k ≤ 2p − 2. That is, if S is a set of vectors of a k-dimensional vector space over a field with q elements, in which every subset of S of size k is a basis, where q = p^h, p is prime, q is not prime, and k ≤ 2p − 2, then |S| ≤ q + 1. It also contains a short proof of the same fact for k ≤ p, for all q.
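In standard notation, the result can be restated as follows (the garbled "a parts per thousand currency sign" glyphs in the abstract are mis-encoded ≤ signs):

```latex
% Restatement of the abstract's theorem in standard notation
Let $q = p^{h}$ with $p$ prime and $q$ not prime, and let $k \leq 2p - 2$.
If $S$ is a set of vectors of a $k$-dimensional vector space over
$\mathbb{F}_q$ such that every subset of $S$ of size $k$ is a basis, then
\[
  |S| \leq q + 1 .
\]
```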

  18. Safety requirements and options for a large size fast neutron reactor

    International Nuclear Information System (INIS)

    Cogne, F.; Megy, J.; Robert, E.; Benmergui, A.; Villeneuve, J.

    1977-01-01

    Starting from the experience gained in the safety evaluation of the PHENIX reactor, and from results already obtained in safety studies on fast neutron reactors, the French regulatory bodies have defined since 1973 the requirements and recommendations on matters of safety for the first large-size ''prototype'' fast neutron power plant of 1200 MWe. These requirements and recommendations, while not compulsory owing to the evolving nature of this type of reactor, will be used as a basis for the technical regulation to be established in France in this field. They define in particular the care to be taken in the following areas, which are essential for safety: the protection systems, the primary coolant system, the prevention of accidents at the core level, the measures to be taken with regard to whole-core accidents and to the containment, the protection against sodium fires, and the design against external hazards. In applying these recommendations, the CREYS-MALVILLE plant designers have tried to achieve redundancy in the safety-related systems and have justified the safety of the design with regard to the various phenomena involved. In particular, the extensive research at the level of the fuel and of the core instrumentation makes it possible to achieve the best defence against the development of core accidents. The overall examination of the measures taken, from the standpoint of prevention and surveillance as well as of means of action, led the French regulatory bodies to propose granting the construction permit of the CREYS-MALVILLE plant, provided that additional examinations of some technological aspects not fully clarified at the time of authorization be made by the regulatory bodies during construction of the plant. The conservatism of the corresponding requirements should be demonstrated prior to the commissioning of the power plant. To pursue a programme on reactors of this type, or even more

  19. Effect of crowd size on patient volume at a large, multipurpose, indoor stadium.

    Science.gov (United States)

    De Lorenzo, R A; Gray, B C; Bennett, P C; Lamparella, V J

    1989-01-01

    A prediction of patient volume expected at "mass gatherings" is desirable in order to provide optimal on-site emergency medical care. While several methods of predicting patient loads have been suggested, a reliable technique has not been established. This study examines the frequency of medical emergencies at the Syracuse University Carrier Dome, a 50,500-seat indoor stadium. Patient volume and level of care at collegiate basketball and football games as well as rock concerts, over a 7-year period were examined and tabulated. This information was analyzed using simple regression and nonparametric statistical methods to determine level of correlation between crowd size and patient volume. These analyses demonstrated no statistically significant increase in patient volume for increasing crowd size for basketball and football events. There was a small but statistically significant increase in patient volume for increasing crowd size for concerts. A comparison of similar crowd size for each of the three events showed that patient frequency is greatest for concerts and smallest for basketball. The study suggests that crowd size alone has only a minor influence on patient volume at any given event. Structuring medical services based solely on expected crowd size and not considering other influences such as event type and duration may give poor results.
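The simple-regression analysis the study applies can be illustrated with synthetic data; the Carrier Dome attendance and patient figures are not reproduced in the abstract, so all numbers below are hypothetical.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)

# Hypothetical event data: attendance vs. patients seen per event
attendance = rng.uniform(20_000, 50_000, size=40)
patients = rng.poisson(5 + 0.0001 * attendance)   # weak true effect, as found

fit = linregress(attendance, patients)
print(f"slope per 10,000 attendees: {fit.slope * 10_000:.2f} "
      f"(p = {fit.pvalue:.3f})")
```

A slope near zero with a non-significant p-value mirrors the study's finding for basketball and football: once event type is accounted for, crowd size alone explains little of the patient volume.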

  20. When bigger is not better: selection against large size, high condition and fast growth in juvenile lemon sharks.

    Science.gov (United States)

    Dibattista, J D; Feldheim, K A; Gruber, S H; Hendry, A P

    2007-01-01

    Selection acting on large marine vertebrates may be qualitatively different from that acting on terrestrial or freshwater organisms, but logistical constraints have thus far precluded selection estimates for the former. We overcame these constraints by exhaustively sampling and repeatedly recapturing individuals in six cohorts of juvenile lemon sharks (450 age-0 and 255 age-1 fish) at an enclosed nursery site (Bimini, Bahamas). Data on individual size, condition factor, growth rate and inter-annual survival were used to test the 'bigger is better', 'fatter is better' and 'faster is better' hypotheses of life-history theory. For age-0 sharks, selection on all measured traits was weak, and generally acted against large size and high condition. For age-1 sharks, selection was much stronger, and consistently acted against large size and fast growth. These results suggest that selective pressures at Bimini may be constraining the evolution of large size and fast growth, an observation that fits well with the observed small size and low growth rate of juveniles at this site. Our results support those of some other recent studies in suggesting that bigger/fatter/faster is not always better, and may often be worse.

  1. PC-based support programs coupled with the sets code for large fault tree analysis

    International Nuclear Information System (INIS)

    Hioki, K.; Nakai, R.

    1989-01-01

    Power Reactor and Nuclear Fuel Development Corporation (PNC) has developed four PC programs: IEIQ (Initiating Event Identification and Quantification), MODESTY (Modular Event Description for a Variety of Systems), FAUST (Fault Summary Tables Generation Program) and ETAAS (Event Tree Analysis Assistant System). These programs prepare the input data for the SETS (Set Equation Transformation System) code and construct and quantify event trees (E/Ts) using the output of the SETS code. The capabilities of these programs are described and some examples of the results are presented in this paper. With these PC programs and the SETS code, PSA can now be performed with more consistency and less manpower.
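The core fault-tree task that SETS performs — reducing gate logic to minimal cut sets — can be sketched in miniature. The toy tree and basic-event names below are hypothetical, not from a PNC model.

```python
from itertools import product

# A fault tree as nested gates: ("OR"|"AND", [children]); leaves are
# basic-event names. Hypothetical example: top event needs both trains
# to fail, and a shared valve fault defeats both.
TOP = ("AND", [
    ("OR", ["pump_A_fails", "valve_A_stuck"]),
    ("OR", ["pump_B_fails", "valve_A_stuck"]),
])

def cut_sets(node):
    """Expand a gate into its (not necessarily minimal) cut sets."""
    if isinstance(node, str):
        return [frozenset([node])]
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":
        return [cs for sets in child_sets for cs in sets]
    # AND: union one cut set from every child
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimize_sets(sets):
    """Drop any cut set that is a superset of another (absorption)."""
    unique = set(sets)
    return sorted((s for s in unique if not any(t < s for t in unique)),
                  key=lambda s: (len(s), sorted(s)))

mcs = minimize_sets(cut_sets(TOP))
print([sorted(s) for s in mcs])
```

For this toy tree the minimal cut sets are the shared valve fault alone and the pair of independent pump failures, showing how absorption exposes the single-point vulnerability.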

  2. Large parallel volumes of finite and compact sets in d-dimensional Euclidean space

    DEFF Research Database (Denmark)

    Kampf, Jürgen; Kiderlen, Markus

    The r-parallel volume V(C_r) of a compact subset C of d-dimensional Euclidean space is the volume of the set C_r of all points at Euclidean distance at most r > 0 from C. According to Steiner’s formula, V(C_r) is a polynomial in r when C is convex. For finite sets C satisfying a certain geometric

  3. Prey size and availability limits maximum size of rainbow trout in a large tailwater: insights from a drift-foraging bioenergetics model

    Science.gov (United States)

    Dodrill, Michael J.; Yackulic, Charles B.; Kennedy, Theodore A.; Haye, John W

    2016-01-01

    The cold and clear water conditions present below many large dams create ideal conditions for the development of economically important salmonid fisheries. Many of these tailwater fisheries have experienced declines in the abundance and condition of large trout species, yet the causes of these declines remain uncertain. Here, we develop, assess, and apply a drift-foraging bioenergetics model to identify the factors limiting rainbow trout (Oncorhynchus mykiss) growth in a large tailwater. We explored the relative importance of temperature, prey quantity, and prey size by constructing scenarios where these variables, both singly and in combination, were altered. Predicted growth matched empirical mass-at-age estimates, particularly for younger ages, demonstrating that the model accurately describes how current temperature and prey conditions interact to determine rainbow trout growth. Modeling scenarios that artificially inflated prey size and abundance demonstrate that rainbow trout growth is limited by the scarcity of large prey items and overall prey availability. For example, shifting 10% of the prey biomass to the 13 mm (large) length class, without increasing overall prey biomass, increased lifetime maximum mass of rainbow trout by 88%. Additionally, warmer temperatures resulted in lower predicted growth at current and lower levels of prey availability; however, growth was similar across all temperatures at higher levels of prey availability. Climate change will likely alter flow and temperature regimes in large rivers with corresponding changes to invertebrate prey resources used by fish. Broader application of drift-foraging bioenergetics models to build a mechanistic understanding of how changes to habitat conditions and prey resources affect growth of salmonids will benefit management of tailwater fisheries.

  4. A comparison of workplace safety perceptions among financial decision-makers of medium- vs. large-size companies.

    Science.gov (United States)

    Huang, Yueng-Hsiang; Leamon, Tom B; Courtney, Theodore K; Chen, Peter Y; DeArmond, Sarah

    2011-01-01

    This study, through a random national survey in the U.S., explored how corporate financial decision-makers perceive important workplace safety issues as a function of the size of the company for which they worked (medium- vs. large-size companies). Telephone surveys were conducted with 404 U.S. corporate financial decision-makers: 203 from medium-size companies and 201 from large companies. Results showed that the patterns of responding for participants from medium- and large-size companies were somewhat similar. The top-rated safety priorities in resource allocation reported by participants from both groups were overexertion, repetitive motion, and bodily reaction. They believed that there were direct and indirect costs associated with workplace injuries and for every dollar spent improving workplace safety, more than four dollars would be returned. They perceived the top benefits of an effective safety program to be predominately financial in nature - increased productivity and reduced costs - and the safety modification participants mentioned most often was to have more/better safety-focused training. However, more participants from large- than medium-size companies reported that "falling on the same level" was the major cause of workers' compensation loss, which is in line with industry loss data. Participants from large companies were more likely to see their safety programs as better than those of other companies in their industries, and those of medium-size companies were more likely to mention that there were no improvements needed for their companies. Copyright © 2009 Elsevier Ltd. All rights reserved.

  5. Hierarchical Cantor set in the large scale structure with torus geometry

    Energy Technology Data Exchange (ETDEWEB)

    Murdzek, R. [Physics Department, 'Al. I. Cuza' University, Blvd. Carol I, Nr. 11, Iassy 700506 (Romania)], E-mail: rmurdzek@yahoo.com

    2008-12-15

    The formation of large-scale structures is considered within a model with a string on a toroidal space-time. Firstly, the space-time geometry is presented. In this geometry, the Universe is represented by a string describing a torus surface. Thereafter, the large-scale structure of the Universe is derived from the string oscillations. The results are in agreement with the cellular structure of the large-scale distribution and with the theory of a Cantorian space-time.

  6. Estimation of body-size traits by photogrammetry in large mammals to inform conservation.

    Science.gov (United States)

    Berger, Joel

    2012-10-01

    Photography, including remote imagery and camera traps, has contributed substantially to conservation. However, the potential to use photography to understand demography and inform policy is limited. To have practical value, remote assessments must be reasonably accurate and widely deployable. Prior efforts to develop noninvasive methods of estimating trait size have been motivated by a desire to answer evolutionary questions, measure physiological growth, or, in the case of illegal trade, assess economics of horn sizes; but rarely have such methods been directed at conservation. Here I demonstrate a simple, noninvasive photographic technique and address how knowledge of values of individual-specific metrics bears on conservation policy. I used 10 years of data on juvenile moose (Alces alces) to examine whether body size and probability of survival are positively correlated in cold climates. I investigated whether the presence of mothers improved juvenile survival. The posited latter relation is relevant to policy because harvest of adult females has been permitted in some Canadian and American jurisdictions under the assumption that probability of survival of young is independent of maternal presence. The accuracy of estimates of head sizes made from photographs exceeded 98%. The estimates revealed that overwinter juvenile survival had no relation to the juvenile's estimated mass (p < 0.64) and was more strongly associated with maternal presence (p < 0.02) than winter snow depth (p < 0.18). These findings highlight the effects on survival of a social dynamic (the mother-young association) rather than body size and suggest a change in harvest policy will increase survival. Furthermore, photographic imaging of growth of individual juvenile muskoxen (Ovibos moschatus) over 3 Arctic winters revealed annual variability in size, which supports the idea that noninvasive monitoring may allow one to detect how some environmental conditions ultimately affect body growth.

  7. Large Time Asymptotics for a Continuous Coagulation-Fragmentation Model with Degenerate Size-Dependent Diffusion

    KAUST Repository

    Desvillettes, Laurent; Fellner, Klemens

    2010-01-01

    We study a continuous coagulation-fragmentation model with constant kernels for reacting polymers (see [M. Aizenman and T. Bak, Comm. Math. Phys., 65 (1979), pp. 203-230]). The polymers are set to diffuse within a smooth bounded one

  8. Setting Priorities For Large Research Facility Projects Supported By the National Science Foundation

    National Research Council Canada - National Science Library

    2005-01-01

    ...) level has stalled in the face of a backlog of approved but unfunded projects. Second, the rationale and criteria used to select projects and set priorities among projects for MREFC funding have not been clearly and publicly articulated...

  9. A Fast Logdet Divergence Based Metric Learning Algorithm for Large Data Sets Classification

    Directory of Open Access Journals (Sweden)

    Jiangyuan Mei

    2014-01-01

    the basis of classifiers, for example, the k-nearest neighbors classifier. Experiments on benchmark data sets demonstrate that the proposed algorithm compares favorably with the state-of-the-art methods.
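    The record builds a fast metric-learning algorithm on the LogDet matrix divergence; as context, here is a minimal sketch of the divergence itself, D_ld(A, B) = tr(AB⁻¹) − log det(AB⁻¹) − n, for positive-definite Mahalanobis matrices. The function name and example matrices are illustrative, not the paper's implementation.

```python
import numpy as np

def logdet_divergence(a, b):
    """LogDet (Burg) divergence between positive-definite matrices a and b."""
    n = a.shape[0]
    m = a @ np.linalg.inv(b)
    sign, logdet = np.linalg.slogdet(m)
    if sign <= 0:
        raise ValueError("inputs must be positive definite")
    # tr(A B^-1) - log det(A B^-1) - n; zero exactly when a == b
    return float(np.trace(m) - logdet - n)

identity = np.eye(3)
scaled = 2.0 * np.eye(3)
d_same = logdet_divergence(identity, identity)  # 0.0
d_diff = logdet_divergence(scaled, identity)    # 3 - 3*ln(2), about 0.92
```

    In metric learning this divergence is minimized subject to pairwise similarity constraints, keeping the learned Mahalanobis matrix close to a prior while remaining positive definite.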

  10. Solving large sets of coupled equations iteratively by vector processing on the CYBER 205 computer

    International Nuclear Information System (INIS)

    Tolsma, L.D.

    1985-01-01

    The set of coupled linear second-order differential equations which has to be solved for the quantum-mechanical description of inelastic scattering of atomic and nuclear particles can be rewritten as an equivalent set of coupled integral equations. When certain types of functions are used as piecewise analytic reference solutions, the integrals that arise in this set can be evaluated analytically. The set of integral equations can be solved iteratively. For the results mentioned, an inward-outward iteration scheme has been applied. A concept of vectorization of coupled-channel Fortran programs, based on this integral method, is presented for use on the Cyber 205 computer. It turns out that, for two heavy-ion nuclear scattering test cases, this vector algorithm gives an overall speed-up of about a factor of 2 to 3 compared to a highly optimized scalar algorithm for a one-vector-pipeline computer
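    The inward-outward iteration scheme above is specific to coupled-channel scattering; as a generic stand-in for solving a large coupled linear set iteratively, the sketch below uses plain Jacobi iteration on a diagonally dominant system (an illustrative assumption, not Tolsma's algorithm).

```python
import numpy as np

def jacobi(a, b, tol=1e-10, max_iter=500):
    """Iteratively solve A x = b; assumes a diagonally dominant A."""
    d = np.diag(a)
    r = a - np.diagflat(d)  # off-diagonal part of A
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        # Each sweep updates every unknown from the previous iterate.
        x_new = (b - r @ x) / d
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# 4x + y = 9, 2x + 5y = 9 has the exact solution x = 2, y = 1.
a = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([9.0, 9.0])
x = jacobi(a, b)
```

    Like the integral-equation scheme in the record, each sweep reuses the previous iterate for the whole coupled set, which is what makes such methods amenable to vector processing.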

  11. Annotating gene sets by mining large literature collections with protein networks.

    Science.gov (United States)

    Wang, Sheng; Ma, Jianzhu; Yu, Michael Ku; Zheng, Fan; Huang, Edward W; Han, Jiawei; Peng, Jian; Ideker, Trey

    2018-01-01

    Analysis of patient genomes and transcriptomes routinely recognizes new gene sets associated with human disease. Here we present an integrative natural language processing system which infers common functions for a gene set through automatic mining of the scientific literature with biological networks. This system links genes with associated literature phrases and combines these links with protein interactions in a single heterogeneous network. Multiscale functional annotations are inferred based on network distances between phrases and genes and then visualized as an ontology of biological concepts. To evaluate this system, we predict functions for gene sets representing known pathways and find that our approach achieves substantial improvement over the conventional text-mining baseline method. Moreover, our system discovers novel annotations for gene sets or pathways without previously known functions. Two case studies demonstrate how the system is used in discovery of new cancer-related pathways with ontological annotations.

  12. Large Scale Metric Learning for Distance-Based Image Classification on Open Ended Data Sets

    NARCIS (Netherlands)

    Mensink, T.; Verbeek, J.; Perronnin, F.; Csurka, G.; Farinella, G.M.; Battiato, S.; Cipolla, R.

    2013-01-01

    Many real-life large-scale datasets are open-ended and dynamic: new images are continuously added to existing classes, new classes appear over time, and the semantics of existing classes might evolve too. Therefore, we study large-scale image classification methods that can incorporate new classes

  13. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Science.gov (United States)

    2010-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... WATER REGULATIONS Control of Lead and Copper § 141.81 Applicability of corrosion control treatment steps...). (ii) A report explaining the test methods used by the water system to evaluate the corrosion control...

  14. Does company size matter? Validation of an integrative model of safety behavior across small and large construction companies.

    Science.gov (United States)

    Guo, Brian H W; Yiu, Tak Wing; González, Vicente A

    2018-02-01

    Previous safety climate studies primarily focused on either large construction companies or the construction industry as a whole, while little is known about whether company size has significant effects on workers' understanding of safety climate measures and relationships between safety climate factors and safety behavior. Thus, this study aims to: (a) test the measurement equivalence (ME) of a safety climate measure across workers from small and large companies; (b) investigate if company size alters the causal structure of the integrative model developed by Guo, Yiu, and González (2016). Data were collected from 253 construction workers in New Zealand using a safety climate measure. This study used multi-group confirmatory factor analyses (MCFA) to test the measurement equivalence of the safety climate measure and structure invariance of the integrative model. Results indicate that workers from small and large companies understood the safety climate measure in a similar manner. In addition, it was suggested that company size does not change the causal structure and mediational processes of the integrative model. Both measurement equivalence of the safety climate measure and structural invariance of the integrative model were supported by this study. Practical applications: Findings of this study provided strong support for a meaningful use of the safety climate measure across construction companies in different sizes. Safety behavior promotion strategies designed based on the integrative model may be well suited for both large and small companies. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.

  15. Is ‘fuzzy theory’ an appropriate tool for large size problems?

    CERN Document Server

    Biswas, Ranjit

    2016-01-01

    The work in this book is based on philosophical as well as logical views on the subject of decoding the ‘progress’ of the decision-making process in the cognition system of a decision maker (be it a human, an animal, a bird, or any living thing which has a brain) while evaluating the membership value µ(x) in a fuzzy set, an intuitionistic fuzzy set, any such soft computing set model, or a crisp set. A new theory is introduced, called the “Theory of CIFS”. The following two hypotheses are hidden facts in fuzzy computing or in any soft computing process: Fact-1: A decision maker (intelligent agent) can never use or apply ‘fuzzy theory’ or any soft-computing set theory without an intuitionistic fuzzy system. Fact-2: Fact-1 does not necessarily require that a fuzzy decision maker (or a crisp ordinary decision maker, or a decision maker with any other soft theory models, or a decision maker like an animal/bird which has a brain, etc.) must be aware or knowledgeable about IFS Theory! The “Theor...

  16. PERSPECTIVE TECHNOLOGIES OF THERMAL HARDENING OF LARGE-SIZE ARTICLES OF TWO-PHASE TITANIUM ALLOYS

    Directory of Open Access Journals (Sweden)

    V. N. Fedulov

    2005-01-01

    The article is dedicated to the development and industrial adoption of fundamentally new methods of thermal strengthening of large articles made of hardenable titanium alloys.

  17. Job Stress in the United Kingdom: Are Small and Medium-Sized Enterprises and Large Enterprises Different?

    Science.gov (United States)

    Lai, Yanqing; Saridakis, George; Blackburn, Robert

    2015-08-01

    This paper examines the relationships between firm size and employees' experience of work stress. We used a matched employer-employee dataset (Workplace Employment Relations Survey 2011) that comprises 7182 employees from 1210 private organizations in the United Kingdom. Initially, we find that employees in small and medium-sized enterprises experience a lower level of overall job stress than those in large enterprises, although the effect disappears when we control for individual and organizational characteristics in the model. We also find that quantitative work overload, job insecurity and poor promotion opportunities, good work relationships, and poor communication are strongly associated with job stress in small and medium-sized enterprises, whereas qualitative work overload, poor job autonomy and employee engagement are more strongly related to job stress in larger enterprises. Hence, our estimates show that the association and magnitude of estimated effects differ significantly by enterprise size. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Flexible Multi-Bit Feedback Design for HARQ Operation of Large-Size Data Packets in 5G

    DEFF Research Database (Denmark)

    Khosravirad, Saeed; Mudolo, Luke; Pedersen, Klaus I.

    2017-01-01

    A reliable feedback channel is vital to report decoding acknowledgments in retransmission mechanisms such as the hybrid automatic repeat request (HARQ). While the feedback bits are known to be costly for the wireless link, a feedback message more informative than the conventional single-bit feedback can increase resource utilization efficiency. Considering the practical limitations for increasing feedback message size, this paper proposes a framework for the design of flexible-content multi-bit feedback. The proposed design is capable of efficiently indicating the faulty segments of a failed large-size data packet, thanks to which the transmitter node can reduce the retransmission size to only include the initially failed segments of the packet. We study the effect of feedback size on retransmission efficiency through extensive link-level simulations over realistic channel models. Numerical…

  19. Large scale mapping of groundwater resources using a highly integrated set of tools

    DEFF Research Database (Denmark)

    Søndergaard, Verner; Auken, Esben; Christiansen, Anders Vest

    large areas with information from an optimum number of new investigation boreholes, existing boreholes, logs, and water samples to get an integrated and detailed description of the groundwater resources and their vulnerability. Development of more time-efficient, airborne geophysical data acquisition platforms (e.g. SkyTEM) has made large-scale mapping attractive and affordable in the planning and administration of groundwater resources. The handling and optimized use of huge amounts of geophysical data covering large areas has also required a comprehensive database, where data can easily be stored…

  20. The maximum sizes of large scale structures in alternative theories of gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Sourav [IUCAA, Pune University Campus, Post Bag 4, Ganeshkhind, Pune, 411 007 India (India); Dialektopoulos, Konstantinos F. [Dipartimento di Fisica, Università di Napoli 'Federico II', Complesso Universitario di Monte S. Angelo, Edificio G, Via Cinthia, Napoli, I-80126 Italy (Italy); Romano, Antonio Enea [Instituto de Física, Universidad de Antioquia, Calle 70 No. 52–21, Medellín (Colombia); Skordis, Constantinos [Department of Physics, University of Cyprus, 1 Panepistimiou Street, Nicosia, 2109 Cyprus (Cyprus); Tomaras, Theodore N., E-mail: sbhatta@iitrpr.ac.in, E-mail: kdialekt@gmail.com, E-mail: aer@phys.ntu.edu.tw, E-mail: skordis@ucy.ac.cy, E-mail: tomaras@physics.uoc.gr [Institute of Theoretical and Computational Physics and Department of Physics, University of Crete, 70013 Heraklion (Greece)]

    2017-07-01

    The maximum size of a cosmic structure is given by the maximum turnaround radius—the scale where the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulae for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulae agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the ΛCDM value, by a factor 1 + 1/(3ω), where ω ≫ 1 is the Brans-Dicke parameter, implying consistency of the theory with current data.
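    The quoted enhancement factor is easy to check numerically: the sketch below evaluates 1 + 1/(3ω) for illustrative Brans-Dicke parameters (the specific ω values are assumptions chosen to show the large-ω limit).

```python
def turnaround_enhancement(omega):
    """Brans-Dicke maximum turnaround radius relative to the LCDM value."""
    return 1.0 + 1.0 / (3.0 * omega)

# The correction shrinks as omega grows, which is why the abstract concludes
# the theory stays consistent with current data for large omega.
r_low = turnaround_enhancement(4.0e4)   # about 1.0000083
r_high = turnaround_enhancement(4.0e6)  # about 1.000000083
```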

  1. Occupational lifting during pregnancy and child's birth size in a large cohort study

    DEFF Research Database (Denmark)

    Juhl, Mette; Larsen, Pernille Stemann; Andersen, Per Kragh

    2014-01-01

    OBJECTIVES: It has been suggested that the handling of heavy loads during pregnancy is associated with impaired fetal growth. We examined the association between quantity and frequency of maternal occupational lifting and the child's size at birth, measured by weight, length, ponderal index, small-for-gestational-age (SGA), abdominal circumference, head circumference, and placental weight. METHODS: We analyzed birth size from the Danish Medical Birth Registry of 66 693 live-born children in the Danish National Birth Cohort according to the mother's self-reported information on occupational lifting from telephone interviews. … women with occupational lifting versus women with no lifting, but the differences were very small, and there was a statistically significant trend only for placental weight, showing lighter weight with increasing number of kilos lifted per day. In jobs likely to include person-lifting, we found increased

  2. A test of the mean density approximation for Lennard-Jones mixtures with large size ratios

    International Nuclear Information System (INIS)

    Ely, J.F.

    1986-01-01

    The mean density approximation for mixture radial distribution functions plays a central role in modern corresponding-states theories. This approximation is reasonably accurate for systems that do not differ widely in size and energy ratios and which are nearly equimolar. As the size ratio increases, however, or if one approaches an infinite dilution of one of the components, the approximation becomes progressively worse, especially for the small-molecule pair. In an attempt to better understand and improve this approximation, isothermal molecular dynamics simulations have been performed on a series of Lennard-Jones mixtures. Thermodynamic properties, including the mixture radial distribution functions, have been obtained at seven compositions ranging from 5 to 95 mol%. In all cases the size ratio was fixed at two, and three energy ratios were investigated, ε₂₂/ε₁₁ = 0.5, 1.0, and 1.5. The results of the simulations are compared with the mean density approximation and a modification to integrals evaluated with the mean density approximation is proposed
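    The simulations fix the Lennard-Jones size ratio at two with several energy ratios; as context, this sketch applies the standard Lorentz-Berthelot combining rules for the unlike-pair parameters. The abstract does not state which combining rules the study used, so this is purely illustrative, in reduced units.

```python
def lorentz_berthelot(sigma_1, sigma_2, eps_1, eps_2):
    """Unlike-pair LJ parameters: arithmetic mean of the diameters,
    geometric mean of the well depths."""
    sigma_12 = 0.5 * (sigma_1 + sigma_2)
    eps_12 = (eps_1 * eps_2) ** 0.5
    return sigma_12, eps_12

# Size ratio of two and energy ratio eps22/eps11 = 0.5, as in one of the
# mixtures described above.
sigma_12, eps_12 = lorentz_berthelot(1.0, 2.0, 1.0, 0.5)
```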

  3. Sediment size of surface floodplain sediments along a large lowland river

    Science.gov (United States)

    Swanson, K. M.; Day, G.; Dietrich, W. E.

    2007-12-01

    Data on the size distribution of surface sediment across a floodplain should place important constraints on modeling of floodplain deposition. Diffusive or advective models would predict that, generally, grain size should decrease away from channel banks. Variations in grain size downstream along floodplains may depend on downstream fining of river bed material, the exchange rate with river banks, and net deposition onto the floodplain. Here we report detailed grain size analyses taken from 17 floodplain transects along a 450 km (along-channel distance) reach of the middle Fly River, Papua New Guinea. Field studies have documented a systematic change in floodplain characteristics downstream, from forested, more topographically elevated terrain bounded by an actively shifting mainstem channel to downstream swamp-grass, low-elevation topography along which the river meanders are currently stagnant. Frequency and duration of flooding increase downstream. Flooding occurs both by overbank flows and by injections of floodwaters up tributary and tie channels connected to the mainstem. Previous studies show that about 40% of the total discharge of water passes across the floodplain and, correspondingly, about 40% of the total load is deposited on the plain, decreasing exponentially away from the channel bank. We find that floodplain sediment is most sandy at the channel bank. Grain size rapidly declines away from the bank, but surprisingly two trends were also observed. A relatively short distance from the bank the surface material is finest, but with further distance from the bank (out to greater than 1 km from the 250 m wide channel) clay content decreases and silt content increases. The changes are small but repeated at most of the transects. The second trend is that bank material fines downstream, corresponding to the downstream-fining bed material, but once away from the bank there is a weak tendency, at a given distance from the bank, for the floodplain surface deposits to

  4. Setting Learning Analytics in Context: Overcoming the Barriers to Large-Scale Adoption

    Science.gov (United States)

    Ferguson, Rebecca; Macfadyen, Leah P.; Clow, Doug; Tynan, Belinda; Alexander, Shirley; Dawson, Shane

    2014-01-01

    A core goal for most learning analytic projects is to move from small-scale research towards broader institutional implementation, but this introduces a new set of challenges because institutions are stable systems, resistant to change. To avoid failure and maximize success, implementation of learning analytics at scale requires explicit and…

  5. Fatigue-crack propagation in gamma-based titanium aluminide alloys at large and small crack sizes

    International Nuclear Information System (INIS)

    Kruzic, J.J.; Campbell, J.P.; Ritchie, R.O.

    1999-01-01

    Most evaluations of the fracture and fatigue-crack propagation properties of γ + α₂ titanium aluminide alloys to date have been performed using standard large-crack samples, e.g., compact-tension specimens containing crack sizes which are on the order of tens of millimeters, i.e., large compared to microstructural dimensions. However, these alloys have been targeted for applications, such as blades in gas-turbine engines, where relevant crack sizes are much smaller. Accordingly, results are presented for large (> 5 mm) and small (c ≅ 25-300 μm) cracks in a γ-TiAl based alloy, of composition Ti-47Al-2Nb-2Cr-0.2B (at.%), specifically for duplex (average grain size ≅ 17 μm) and refined lamellar (average colony size ≅ 150 μm) microstructures. It is found that, whereas the lamellar microstructure displays far superior fracture toughness and fatigue-crack growth resistance in the presence of large cracks, in small-crack testing the duplex microstructure exhibits a better combination of properties. The reasons for such contrasting behavior are examined in terms of the intrinsic and extrinsic (i.e., crack bridging) contributions to cyclic crack advance

  6. Highly crystallized nanometer-sized zeolite A with large Cs adsorption capability for the decontamination of water.

    Science.gov (United States)

    Torad, Nagy L; Naito, Masanobu; Tatami, Junichi; Endo, Akira; Leo, Sin-Yen; Ishihara, Shinsuke; Wu, Kevin C-W; Wakihara, Toru; Yamauchi, Yusuke

    2014-03-01

    Nanometer-sized zeolite A with a large cesium (Cs) uptake capability is prepared through a simple post-milling recrystallization method. This method is suitable for producing nanometer-sized zeolite in large scale, as additional organic compounds are not needed to control zeolite nucleation and crystal growth. Herein, we perform a quartz crystal microbalance (QCM) study to evaluate the uptake ability of Cs ions by zeolite, to the best of our knowledge, for the first time. In comparison to micrometer-sized zeolite A, nanometer-sized zeolite A can rapidly accommodate a larger amount of Cs ions into the zeolite crystal structure, owing to its high external surface area. Nanometer-sized zeolite is a promising candidate for the removal of radioactive Cs ions from polluted water. Our QCM study on Cs adsorption uptake behavior provides the information of adsorption kinetics (e.g., adsorption amounts and rates). This technique is applicable to other zeolites, which will be highly valuable for further consideration of radioactive Cs removal in the future. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Neonatal L-glutamine modulates anxiety-like behavior, cortical spreading depression, and microglial immunoreactivity: analysis in developing rats suckled on normal size- and large size litters.

    Science.gov (United States)

    de Lima, Denise Sandrelly Cavalcanti; Francisco, Elian da Silva; Lima, Cássia Borges; Guedes, Rubem Carlos Araújo

    2017-02-01

    In mammals, L-glutamine (Gln) can alter the glutamate-Gln cycle and consequently brain excitability. Here, we investigated in developing rats the effect of treatment with different doses of Gln on anxiety-like behavior, cortical spreading depression (CSD), and microglial activation expressed as Iba1 immunoreactivity. Wistar rats were suckled in litters of 9 and 15 pups (groups L9 and L15; respectively, normal-size and large-size litters). From postnatal days (P) 7-27, the animals received Gln per gavage (250, 500 or 750 mg/kg/day), or vehicle (water), or no treatment (naive). At P28 and P30, we tested the animals, respectively, in the elevated plus maze and open field. At P30-35, we measured CSD parameters (velocity of propagation, amplitude, and duration). Fixative-perfused brains were processed for microglial immunolabeling with anti-Iba1 antibodies to analyze cortical microglia. Rats treated with Gln presented an anxiolytic behavior and accelerated CSD propagation when compared to the water and naive control groups. Furthermore, CSD velocity was higher (p < 0.05) in both litter sizes, and so was microglial activation in the L15 groups. Besides confirming previous electrophysiological findings (CSD acceleration after Gln), our data demonstrate for the first time a behavioral effect and microglial activation that are associated with early Gln treatment in developing animals, and that are possibly mediated by changes in brain excitability.

  8. Participation and Collaborative Learning in Large Class Sizes: Wiki, Can You Help Me?

    Science.gov (United States)

    de Arriba, Raúl

    2017-01-01

    Collaborative learning has a long tradition within higher education. However, its application in classes with a large number of students is complicated, since it is a teaching method that requires a high level of participation from the students and careful monitoring of the process by the educator. This article presents an experience in…

  9. Organizing Corporate Social Responsibility in Small and Large Firms: Size Matters

    NARCIS (Netherlands)

    Baumann-Pauly, D.; Wickert, C.M.J.; Spence, L.; Scherer, A.G.

    2013-01-01

    Based on the findings of a qualitative empirical study of corporate social responsibility (CSR) in Swiss MNCs and SMEs, we suggest that smaller firms are not necessarily less advanced in organizing CSR than large firms. Results according to theoretically derived assessment frameworks illustrate the

  10. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Science.gov (United States)

    Erbe, Malena; Gredler, Birgit; Seefried, Franz Reinhold; Bapst, Beat; Simianer, Henner

    2013-01-01

    Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.
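    The deterministic equation of Daetwyler et al. (2010) referenced above has a compact closed form, r = sqrt(N h² / (N h² + Mₑ)). The abstract's weighting factor w and its accuracy cap appear only as "[Formula: see text]", so the scaled variant below is an assumption about the general shape, not the authors' fitted equation.

```python
import math

def daetwyler_accuracy(n_train, h2, m_e):
    """Expected accuracy of genomic prediction given training set size
    n_train, heritability h2, and Me independent chromosome segments."""
    return math.sqrt(n_train * h2 / (n_train * h2 + m_e))

def weighted_accuracy(n_train, h2, m_e, w):
    """Illustrative modification: scale by a factor w (0 < w <= 1) that
    the abstract ties to marker density and the maximum achievable
    accuracy."""
    return w * daetwyler_accuracy(n_train, h2, m_e)

# Accuracy grows with training set size and shrinks with Me.
r_small = daetwyler_accuracy(1_000, 0.5, 1_000)
r_large = daetwyler_accuracy(5_698, 0.5, 1_000)
```

    The monotone dependence on training set size is what makes the k-fold cross-validation scenarios in the abstract informative: each k gives a different effective training set size.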

  12. Mapping trends of large and medium size carnivores of conservation interest in Romania

    Directory of Open Access Journals (Sweden)

    Constantin Cazacu

    2014-07-01

Full Text Available We analysed yearly estimates of population size data during 2001-2012 for five carnivore species of conservation interest (Ursus arctos, Canis lupus, Lynx lynx, Felis silvestris and Canis aureus). Population size estimations were made by the game management authorities and integrated by the competent authorities of the Ministry of Environment and Climate Change. Trends in the data were detected using the non-parametric Mann-Kendall test. This test was chosen considering the short length of the data series and its usefulness for non-normally distributed data. The trend was tested at three spatial scales: game management units (n=1565), biogeographical region (n=5) and national. Trends depicted for each game management unit were plotted using ArcGIS, resulting in species trend distribution maps. For the studied period, increasing population trends were observed for Ursus arctos, Canis lupus, Canis aureus and Lynx lynx, while for Felis silvestris no trend was recorded. Such an analysis is especially useful for conservation purposes, game management and reporting obligations under Article 17 of the EC Habitats Directive, using population trend as a proxy for population dynamics. We conclude that the status of the five carnivore species was favourable during the study period.
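The Mann-Kendall test used above is simple enough to sketch in full. A minimal pure-Python version, without the tie correction and with an assumed two-sided 5% significance level, might look like this; the count series is invented for illustration.

```python
import math

def mann_kendall(x):
    """Non-parametric Mann-Kendall trend test (no tie correction).
    Returns (trend, S, Z): trend is 'increasing', 'decreasing' or
    'no trend' at the two-sided 5% level."""
    n = len(x)
    # S = sum of sign(x_j - x_i) over all pairs i < j
    s = sum((xj > xi) - (xj < xi) for i, xi in enumerate(x) for xj in x[i + 1:])
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    if abs(z) > 1.96:  # two-sided 5% critical value of N(0,1)
        return ('increasing' if z > 0 else 'decreasing'), s, z
    return 'no trend', s, z

# A short, mostly growing yearly count series (2001-2012, made up):
counts = [40, 42, 41, 45, 47, 50, 49, 53, 56, 58, 60, 63]
print(mann_kendall(counts)[0])  # -> increasing
```

Because the test ranks pairs rather than fitting a line, it behaves well on exactly the kind of short, non-normal series the abstract mentions.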

  13. Networks and landscapes: a framework for setting goals and evaluating performance at the large landscape scale

    Science.gov (United States)

    R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove

    2016-01-01

The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...

  14. Design of Availability-Dependent Distributed Services in Large-Scale Uncooperative Settings

    Science.gov (United States)

    Morales, Ramses Victor

    2009-01-01

    Thesis Statement: "Availability-dependent global predicates can be efficiently and scalably realized for a class of distributed services, in spite of specific selfish and colluding behaviors, using local and decentralized protocols". Several types of large-scale distributed systems spanning the Internet have to deal with availability variations…

  15. Towards better segmentation of large floating point 3D astronomical data sets : first results

    NARCIS (Netherlands)

    Moschini, Ugo; Teeninga, Paul; Wilkinson, Michael; Giese, Nadine; Punzo, Davide; van der Hulst, Jan M.; Trager, Scott

    2014-01-01

    In any image segmentation task, noise must be separated from the actual information and the relevant pixels grouped into objects of interest, on which measures can later be applied. This should be done efficiently on large astronomical surveys with floating point datasets with resolution of the

  16. Querying and reasoning over large scale building data sets : an outline of a performance benchmark

    NARCIS (Netherlands)

    Pauwels, P.; Mendes de Farias, T.; Zhang, C.; Roxin, A.; Beetz, J.; De Roo, J.; Nicolle, C.

    2016-01-01

    The architectural design and construction domains work on a daily basis with massive amounts of data. Properly managing, exchanging and exploiting these data is an ever ongoing challenge in this domain. This has resulted in large semantic RDF graphs that are to be combined with a significant number

  17. THE PHYSICS OF PROTOPLANETESIMAL DUST AGGLOMERATES. VI. EROSION OF LARGE AGGREGATES AS A SOURCE OF MICROMETER-SIZED PARTICLES

    International Nuclear Information System (INIS)

    Schraepler, Rainer; Blum, Juergen

    2011-01-01

Observed protoplanetary disks consist of a large amount of micrometer-sized particles. Dullemond and Dominik pointed out for the first time the difficulty of explaining the strong mid-infrared excess of classical T Tauri stars without any dust-retention mechanisms. Because high relative velocities between micrometer-sized and macroscopic particles exist in protoplanetary disks, we present experimental results on the erosion of macroscopic agglomerates consisting of micrometer-sized spherical particles via the impact of micrometer-sized particles. We find that after an initial phase, in which an impacting particle erodes up to 10 particles of an agglomerate, the impacting particles compress the agglomerate's surface, which partly passivates the agglomerates against erosion. Due to this effect, the erosion halts for impact velocities up to ∼30 m s⁻¹ within our error bars. For higher velocities, the erosion is reduced by an order of magnitude. This outcome is explained and confirmed by a numerical model. In a next step, we build an analytical disk model and implement the experimentally found erosive effect. The model shows that erosion is a strong source of micrometer-sized particles in a protoplanetary disk. Finally, we use the stationary solution of this model to explain the amount of micrometer-sized particles in the observational infrared data of Furlan et al.

  18. Distributed Large Independent Sets in One Round On Bounded-independence Graphs

    OpenAIRE

    Halldorsson , Magnus M.; Konrad , Christian

    2015-01-01

    International audience; We present a randomized one-round, single-bit messages, distributed algorithm for the maximum independent set problem in polynomially bounded-independence graphs with poly-logarithmic approximation factor. Bounded-independence graphs capture various models of wireless networks such as the unit disc graphs model and the quasi unit disc graphs model. For instance, on unit disc graphs, our achieved approximation ratio is O((log(n)/log(log(n)))^2).A starting point of our w...
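A generic one-round rule of this family (not the specific Halldorsson-Konrad algorithm, whose description is truncated above) can be sketched as follows: each node draws a random priority, exchanges it with its neighbours in a single synchronous round, and joins the set iff it beats every neighbour. The graph below is a toy example.

```python
import random

def one_round_independent_set(adj, seed=0):
    """One synchronous round: every node draws a random priority and
    joins the independent set iff its priority is strictly smaller than
    that of all its neighbours.  A generic Luby-style rule, shown here
    only to illustrate the one-round setting."""
    rng = random.Random(seed)
    prio = {v: rng.random() for v in adj}
    return {v for v in adj if all(prio[v] < prio[u] for u in adj[v])}

# A 6-cycle: 0-1-2-3-4-5-0 (adjacency lists)
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
indep = one_round_independent_set(cycle)

# No two chosen vertices are adjacent, and the set is never empty
# (the globally smallest priority always wins against its neighbours):
assert all(u not in cycle[v] for v in indep for u in indep)
print(sorted(indep))
```

The approximation-ratio analysis on bounded-independence graphs is where the cited paper does its real work; this sketch only demonstrates that a single round of single-value messages suffices to output a valid independent set.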

  19. Development of large size Micromegas detectors for the upgrade of the ATLAS experiments

    CERN Document Server

    Bianco, Michele

    2014-01-01

The luminosity upgrade of the Large Hadron Collider at CERN foresees a luminosity increase by a factor of 3 compared to the LHC design value. To cope with the corresponding rate increase, the Muon System of the ATLAS experiment at CERN needs to be upgraded. In the first station of the high-rapidity region, Micromegas detectors have been chosen as the main tracking chambers but will, at the same time, also contribute to the trigger. We describe the R&D efforts that led to the construction of the first (1 × 2.4 m²) large Micromegas detectors at CERN and outline the next steps towards the construction of the 1200 m² of Micromegas detectors for the ATLAS upgrade. The technical solutions adopted in the construction of the chamber as well as results on the detector performance with cosmic rays are shown.

  20. Post-mastectomy radiation in large node-negative breast tumors: Does size really matter?

    International Nuclear Information System (INIS)

    Floyd, Scott R.; Taghian, Alphonse G.

    2009-01-01

    Treatment decisions regarding local control can be particularly challenging for T3N0 breast tumors because of difficulty in estimating rates of local failure after mastectomy. Reports in the literature detailing the rates of local failure vary widely, likely owing to the uncommon incidence of this clinical situation. The literature regarding this clinical scenario is reviewed, including recent reports that specifically address the issue of local failure rates after mastectomy in the absence of radiation for large node-negative breast tumors.

  1. Large-size high-performance transparent amorphous silicon sensors for laser beam position detection

    International Nuclear Information System (INIS)

    Calderon, A.; Martinez-Rivero, C.; Matorras, F.; Rodrigo, T.; Sobron, M.; Vila, I.; Virto, A.L.; Alberdi, J.; Arce, P.; Barcala, J.M.; Calvo, E.; Ferrando, A.; Josa, M.I.; Luque, J.M.; Molinero, A.; Navarrete, J.; Oller, J.C.; Yuste, C.; Koehler, C.; Lutz, B.; Schubert, M.B.; Werner, J.H.

    2006-01-01

We present the measured performance of a new generation of semitransparent amorphous silicon position detectors. They have a large sensitive area (30×30 mm²) and show good properties such as a high response (about 20 mA/W), an intrinsic position resolution better than 3 μm, a spatial-point reconstruction precision better than 10 μm, deflection angles smaller than 10 μrad and a transmission power in the visible and NIR higher than 70%.

  2. Diversity of medium and large sized mammals in a Cerrado fragment of central Brazil

    Directory of Open Access Journals (Sweden)

    F.S. Campos

    2013-11-01

    Full Text Available Studies related to community ecology of medium and large mammals represent a priority in developing strategies for conservation of their habitats. Due to the significant ecological importance of these species, a concern in relation to anthropogenic pressures arises since their populations are vulnerable to hunting and fragmentation. In this study, we aimed to analyze the diversity of medium and large mammals in a representative area of the Cerrado biome, located in the National Forest of Silvânia, central Brazil, providing insights for future studies on the biodiversity and conservation of Cerrado mammals. Sampling was carried out by linear transects, search for traces, footprint traps and camera traps. We recorded 23 species, among which three are listed in threat categories (e.g., Myrmecophaga tridactyla, Chrysocyon brachyurus and Leopardus tigrinus. We registered 160 records in the study area, where the most frequently recorded species were Didelphis albiventris (30 records and Cerdocyon thous (28 records. Our results indicated that a small protected area of Cerrado can include a large and important percentage of the diversity of mammals in this biome, providing information about richness, abundance, spatial distribution and insights for future studies on the biodiversity and conservation of these biological communities.

  3. Azimuthal coil size and field quality in the main CERN Large Hadron Collider dipoles

    Directory of Open Access Journals (Sweden)

    P. Ferracin

    2002-06-01

    Full Text Available Field quality in superconducting magnets strongly depends on the geometry of the coil. Fiberglass spacers (shims placed between the coil and the collars have been used to optimize magnetic and mechanical performances of superconducting magnets in large accelerators. A change in the shim thickness affects both the geometry of the coil and its state of compression (prestress under operational conditions. In this paper we develop a coupled magnetomechanical model of the main Large Hadron Collider dipole. This model allows us to evaluate the prestress dependence on the shim thickness and the map of deformations of the coil and the collars. Results of the model are compared to experimental measurements carried out in a dedicated experiment, where a magnet model has been reassembled 5 times with different shims. A good agreement is found between simulations and experimental data both on the mechanical behavior and on the field quality. We show that this approach allows us to improve this agreement with respect to models previously used in the literature. We finally evaluate the range of tunability that will be provided by shims during the production of the Large Hadron Collider main dipoles.

  4. Implementation of Lifestyle Modification Program Focusing on Physical Activity and Dietary Habits in a Large Group, Community-Based Setting

    Science.gov (United States)

    Stoutenberg, Mark; Falcon, Ashley; Arheart, Kris; Stasi, Selina; Portacio, Francia; Stepanenko, Bryan; Lan, Mary L.; Castruccio-Prince, Catarina; Nackenson, Joshua

    2017-01-01

    Background: Lifestyle modification programs improve several health-related behaviors, including physical activity (PA) and nutrition. However, few of these programs have been expanded to impact a large number of individuals in one setting at one time. Therefore, the purpose of this study was to determine whether a PA- and nutrition-based lifestyle…

  5. Caught you: threats to confidentiality due to the public release of large-scale genetic data sets.

    Science.gov (United States)

    Wjst, Matthias

    2010-12-29

    Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with less than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged to the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case as a real-life example is used to illustrate that approach. Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public.
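The "Netflix"-type linkage attack described above reduces to a join on the SNPs shared between the two data sets. A toy sketch, with entirely invented genotype codes and sample names:

```python
# Hypothetical toy data: an "anonymized" research set keyed by sample id,
# and a small identified panel (here 5 SNPs, coded 0/1/2 minor-allele
# counts) obtained from another source, e.g. a clinical examination.
anonymous = {
    "S1": (0, 1, 2, 0, 1),
    "S2": (1, 1, 0, 2, 2),
    "S3": (2, 0, 1, 1, 0),
}
identified = {"Alice": (1, 1, 0, 2, 2)}  # genotypes at the same 5 SNPs

def reidentify(anonymous, identified):
    """'Netflix'-type linkage: an exact genotype match across the shared
    SNPs links a named individual to an anonymized sample id."""
    return {name: [sid for sid, geno in anonymous.items() if geno == panel]
            for name, panel in identified.items()}

print(reidentify(anonymous, identified))  # {'Alice': ['S2']}
```

With realistically many SNPs the match is essentially unique, which is why the abstract calls genetic data intrinsically self-identifying; real attacks must additionally tolerate genotyping error, which this sketch ignores.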

  6. Caught you: threats to confidentiality due to the public release of large-scale genetic data sets

    Directory of Open Access Journals (Sweden)

    Wjst Matthias

    2010-12-01

    Full Text Available Abstract Background Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. Discussion The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with less than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged to the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case as a real-life example is used to illustrate that approach. Summary Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public.

  7. Generating mock data sets for large-scale Lyman-α forest correlation measurements

    International Nuclear Information System (INIS)

    Font-Ribera, Andreu; McDonald, Patrick; Miralda-Escudé, Jordi

    2012-01-01

    Massive spectroscopic surveys of high-redshift quasars yield large numbers of correlated Lyα absorption spectra that can be used to measure large-scale structure. Simulations of these surveys are required to accurately interpret the measurements of correlations and correct for systematic errors. An efficient method to generate mock realizations of Lyα forest surveys is presented which generates a field over the lines of sight to the survey sources only, instead of having to generate it over the entire three-dimensional volume of the survey. The method can be calibrated to reproduce the power spectrum and one-point distribution function of the transmitted flux fraction, as well as the redshift evolution of these quantities, and is easily used for modeling any survey systematic effects. We present an example of how these mock surveys are applied to predict the measurement errors in a survey with similar parameters as the BOSS quasar survey in SDSS-III
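The core trick above, generating a correlated field only at the sightline pixels instead of over the full survey volume, can be sketched with a Cholesky factorization of the pixel-pixel covariance. The exponential covariance model and the lognormal-style flux transform below are illustrative assumptions, not the calibrated forms used for the BOSS predictions.

```python
import math
import random

def cholesky(a):
    """Lower-triangular Cholesky factor of a small SPD matrix (pure Python)."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            l[i][j] = math.sqrt(a[i][i] - s) if i == j else (a[i][j] - s) / l[j][j]
    return l

def mock_sightline_flux(positions, corr_len=1.0, seed=0):
    """Draw a correlated Gaussian field at the listed pixel positions only,
    then map it to a transmitted flux fraction F in (0,1) via an assumed
    lognormal-style transform F = exp(-tau), tau = 0.3*exp(delta)."""
    rng = random.Random(seed)
    cov = [[math.exp(-abs(x - y) / corr_len) for y in positions] for x in positions]
    l = cholesky(cov)
    z = [rng.gauss(0.0, 1.0) for _ in positions]
    delta = [sum(l[i][k] * z[k] for k in range(i + 1)) for i in range(len(positions))]
    return [math.exp(-0.3 * math.exp(d)) for d in delta]

flux = mock_sightline_flux([0.0, 0.5, 1.0, 5.0])
print(flux)  # four values, each strictly between 0 and 1
```

The cost scales with the number of sightline pixels rather than with the survey volume, which is exactly the efficiency argument the abstract makes; a production implementation would work in Fourier space and calibrate the transform to the observed flux statistics.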

  8. Generating mock data sets for large-scale Lyman-α forest correlation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Font-Ribera, Andreu [Institut de Ciències de l' Espai (CSIC-IEEC), Campus UAB, Fac. Ciències, torre C5 parell 2, Bellaterra, Catalonia (Spain); McDonald, Patrick [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Miralda-Escudé, Jordi, E-mail: font@ieec.uab.es, E-mail: pvmcdonald@lbl.gov, E-mail: miralda@icc.ub.edu [Institució Catalana de Recerca i Estudis Avançats, Barcelona, Catalonia (Spain)

    2012-01-01

    Massive spectroscopic surveys of high-redshift quasars yield large numbers of correlated Lyα absorption spectra that can be used to measure large-scale structure. Simulations of these surveys are required to accurately interpret the measurements of correlations and correct for systematic errors. An efficient method to generate mock realizations of Lyα forest surveys is presented which generates a field over the lines of sight to the survey sources only, instead of having to generate it over the entire three-dimensional volume of the survey. The method can be calibrated to reproduce the power spectrum and one-point distribution function of the transmitted flux fraction, as well as the redshift evolution of these quantities, and is easily used for modeling any survey systematic effects. We present an example of how these mock surveys are applied to predict the measurement errors in a survey with similar parameters as the BOSS quasar survey in SDSS-III.

  9. Inflation and the Great Moderation: Evidence from a Large Panel Data Set

    OpenAIRE

    Georgios Karras

    2013-01-01

    This paper investigates the relationship between the Great Moderation and two measures of inflation performance: trend inflation and inflation volatility. Using annual data from 1970 to 2011 for a large panel of 180 developed and developing economies, the results show that, as expected, both measures are positively correlated with output volatility. When the two measures are jointly considered, however, and there is sufficient information to identify their effects separately, our empirical...

  10. Early outcome in renal transplantation from large donors to small and size-matched recipients - a porcine experimental model

    DEFF Research Database (Denmark)

    Ravlo, Kristian; Chhoden, Tashi; Søndergaard, Peter

    2012-01-01

Kidney transplantation from a large donor to a small recipient, as in pediatric transplantation, is associated with an increased risk of thrombosis and DGF. We established a porcine model for renal transplantation from an adult donor to a small or size-matched recipient with a high risk of DGF and studied GFR, RPP using MRI, and markers of kidney injury within 10 h after transplantation. After induction of BD, kidneys were removed from ∼63-kg donors and kept in cold storage for ∼22 h until transplanted into small (∼15 kg, n = 8) or size-matched (n = 8) recipients. A reduction in GFR was observed in small recipients within 60 min after reperfusion. Interestingly, this was associated with a significant reduction in medullary RPP, while there was no significant change in the size-matched recipients. No difference was observed in urinary NGAL excretion between the groups. A significant higher level…

  11. A summarization approach for Affymetrix GeneChip data using a reference training set from a large, biologically diverse database

    Directory of Open Access Journals (Sweden)

    Tripputi Mark

    2006-10-01

    Full Text Available Abstract Background Many of the most popular pre-processing methods for Affymetrix expression arrays, such as RMA, gcRMA, and PLIER, simultaneously analyze data across a set of predetermined arrays to improve precision of the final measures of expression. One problem associated with these algorithms is that expression measurements for a particular sample are highly dependent on the set of samples used for normalization and results obtained by normalization with a different set may not be comparable. A related problem is that an organization producing and/or storing large amounts of data in a sequential fashion will need to either re-run the pre-processing algorithm every time an array is added or store them in batches that are pre-processed together. Furthermore, pre-processing of large numbers of arrays requires loading all the feature-level data into memory which is a difficult task even with modern computers. We utilize a scheme that produces all the information necessary for pre-processing using a very large training set that can be used for summarization of samples outside of the training set. All subsequent pre-processing tasks can be done on an individual array basis. We demonstrate the utility of this approach by defining a new version of the Robust Multi-chip Averaging (RMA algorithm which we refer to as refRMA. Results We assess performance based on multiple sets of samples processed over HG U133A Affymetrix GeneChip® arrays. We show that the refRMA workflow, when used in conjunction with a large, biologically diverse training set, results in the same general characteristics as that of RMA in its classic form when comparing overall data structure, sample-to-sample correlation, and variation. Further, we demonstrate that the refRMA workflow and reference set can be robustly applied to naïve organ types and to benchmark data where its performance indicates respectable results. 
Conclusion Our results indicate that a biologically diverse
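The single-array idea behind refRMA can be illustrated with just its quantile-normalization step: reference quantiles are computed once from the large training set, after which any new array is normalized on its own, with no need to re-run the batch. This is only a sketch of that one step (rank ties are ignored), with toy intensity values rather than real probe data.

```python
def reference_quantiles(training_arrays):
    """Mean of the sorted intensities across the training set --
    computed once and then frozen, as in the refRMA workflow."""
    sorted_cols = [sorted(a) for a in training_arrays]
    n = len(sorted_cols[0])
    return [sum(col[i] for col in sorted_cols) / len(sorted_cols) for i in range(n)]

def normalize_single(array, ref):
    """Quantile-normalize ONE new array against the frozen reference:
    each value is replaced by the reference quantile of its rank, so the
    result never depends on which other arrays happen to be present."""
    order = sorted(range(len(array)), key=array.__getitem__)
    out = [0.0] * len(array)
    for rank, idx in enumerate(order):
        out[idx] = ref[rank]
    return out

ref = reference_quantiles([[2.0, 4.0, 6.0], [1.0, 3.0, 8.0]])
print(ref)                                  # [1.5, 3.5, 7.0]
print(normalize_single([10.0, 5.0, 7.0], ref))  # [7.0, 1.5, 3.5]
```

Because each array maps onto the same frozen reference distribution, arrays processed at different times remain directly comparable, which is the dependence problem the abstract raises about classic RMA.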

  12. Salt-assisted direct exfoliation of graphite into high-quality, large-size, few-layer graphene sheets.

    Science.gov (United States)

    Niu, Liyong; Li, Mingjian; Tao, Xiaoming; Xie, Zhuang; Zhou, Xuechang; Raju, Arun P A; Young, Robert J; Zheng, Zijian

    2013-08-21

We report a facile and low-cost method to directly exfoliate graphite powders into large-size, high-quality, and solution-dispersible few-layer graphene sheets. In this method, aqueous mixtures of graphite and inorganic salts such as NaCl and CuCl2 are stirred, and subsequently dried by evaporation. Finally, the mixture powders are dispersed into an orthogonal organic solvent solution of the salt by low-power and short-time ultrasonication, which exfoliates graphite into few-layer graphene sheets. We find that the as-made graphene sheets contain little oxygen, and 86% of them are 1-5 layers with lateral sizes as large as 210 μm². Importantly, the as-made graphene can be readily dispersed into aqueous solution in the presence of surfactant and thus is compatible with various solution-processing techniques towards graphene-based thin film devices.

  13. Optimal integrated sizing and planning of hubs with midsize/large CHP units considering reliability of supply

    International Nuclear Information System (INIS)

    Moradi, Saeed; Ghaffarpour, Reza; Ranjbar, Ali Mohammad; Mozaffari, Babak

    2017-01-01

    Highlights: • New hub planning formulation is proposed to exploit assets of midsize/large CHPs. • Linearization approaches are proposed for two-variable nonlinear CHP fuel function. • Efficient operation of addressed CHPs & hub devices at contingencies are considered. • Reliability-embedded integrated planning & sizing is formulated as one single MILP. • Noticeable results for costs & reliability-embedded planning due to mid/large CHPs. - Abstract: Use of multi-carrier energy systems and the energy hub concept has recently been a widespread trend worldwide. However, most of the related researches specialize in CHP systems with constant electricity/heat ratios and linear operating characteristics. In this paper, integrated energy hub planning and sizing is developed for the energy systems with mid-scale and large-scale CHP units, by taking their wide operating range into consideration. The proposed formulation is aimed at taking the best use of the beneficial degrees of freedom associated with these units for decreasing total costs and increasing reliability. High-accuracy piecewise linearization techniques with approximation errors of about 1% are introduced for the nonlinear two-dimensional CHP input-output function, making it possible to successfully integrate the CHP sizing. Efficient operation of CHP and the hub at contingencies is extracted via a new formulation, which is developed to be incorporated to the planning and sizing problem. Optimal operation, planning, sizing and contingency operation of hub components are integrated and formulated as a single comprehensive MILP problem. Results on a case study with midsize CHPs reveal a 33% reduction in total costs, and it is demonstrated that the proposed formulation ceases the need for additional components/capacities for increasing reliability of supply.
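The linearization idea can be sketched in one dimension (the paper treats a two-variable CHP input-output function, which needs a triangulated extension of the same scheme). The quadratic fuel curve and the breakpoint count below are assumptions, chosen so that the relative error lands near the ~1% level the abstract reports.

```python
def piecewise_linearize(f, lo, hi, n_seg):
    """Equally spaced breakpoints for a 1-D piecewise-linear
    approximation of f on [lo, hi]; returns the approximating function."""
    xs = [lo + (hi - lo) * i / n_seg for i in range(n_seg + 1)]
    ys = [f(x) for x in xs]
    def approx(x):
        for i in range(n_seg):
            if xs[i] <= x <= xs[i + 1]:
                t = (x - xs[i]) / (xs[i + 1] - xs[i])
                return ys[i] + t * (ys[i + 1] - ys[i])
        raise ValueError("x outside linearized range")
    return approx

# Assumed quadratic fuel curve for a CHP unit (electric MW -> fuel input):
fuel = lambda p: 0.002 * p * p + 0.8 * p + 30.0
approx = piecewise_linearize(fuel, 50.0, 300.0, 8)

# The approximation is exact at breakpoints and the relative error in
# between stays around the ~1% level quoted for such schemes:
worst = max(abs(approx(p) - fuel(p)) / fuel(p) for p in range(50, 301))
print(f"max relative error: {worst:.4%}")
```

In the MILP itself, each segment would be encoded with SOS2 or binary variables; this sketch only quantifies the approximation error side of that trade-off.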

  14. Large Size High Performance Transparent Amorphous Silicon Sensors for Laser Beam Position Detection and Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Calderon, A.; Martinez Rivero, C.; Matorras, F.; Rodrigo, T.; Sobron, M.; Vila, I.; Virto; Alberdi, J.; Arce, P.; Barcala, J. M.; Calvo, E.; Ferrando, A.; Josa, M. I.; Luque, J. M.; Molinero, A.; Navarrete, J.; Oller, J. C.; Kohler, C.; Lutz, B.; Schubert, M. B.

    2006-09-04

We present the measured performance of a new generation of semitransparent amorphous silicon position detectors. They have a large sensitive area (30 x 30 mm²) and show good properties such as a high response (about 20 mA/W), an intrinsic position resolution better than 3 μm, a spatial point reconstruction precision better than 10 μm, deflection angles smaller than 10 μrad and a transmission power in the visible and NIR higher than 70%. In addition, multipoint alignment monitoring, using up to five sensors lined along a light path of about 5 meters, can be achieved with a resolution better than 20 μm. (Author)

  15. Large-size high-performance transparent amorphous silicon sensors for laser beam position detection

    Energy Technology Data Exchange (ETDEWEB)

    Calderon, A. [Instituto de Fisica de Cantabria. CSIC-University of Cantabria, Santander (Spain); Martinez-Rivero, C. [Instituto de Fisica de Cantabria. CSIC-University of Cantabria, Santander (Spain); Matorras, F. [Instituto de Fisica de Cantabria. CSIC-University of Cantabria, Santander (Spain); Rodrigo, T. [Instituto de Fisica de Cantabria. CSIC-University of Cantabria, Santander (Spain); Sobron, M. [Instituto de Fisica de Cantabria. CSIC-University of Cantabria, Santander (Spain); Vila, I. [Instituto de Fisica de Cantabria. CSIC-University of Cantabria, Santander (Spain); Virto, A.L. [Instituto de Fisica de Cantabria. CSIC-University of Cantabria, Santander (Spain); Alberdi, J. [CIEMAT, Madrid (Spain); Arce, P. [CIEMAT, Madrid (Spain); Barcala, J.M. [CIEMAT, Madrid (Spain); Calvo, E. [CIEMAT, Madrid (Spain); Ferrando, A. [CIEMAT, Madrid (Spain)]. E-mail: antonio.ferrando@ciemat.es; Josa, M.I. [CIEMAT, Madrid (Spain); Luque, J.M. [CIEMAT, Madrid (Spain); Molinero, A. [CIEMAT, Madrid (Spain); Navarrete, J. [CIEMAT, Madrid (Spain); Oller, J.C. [CIEMAT, Madrid (Spain); Yuste, C. [CIEMAT, Madrid (Spain); Koehler, C. [Steinbeis-Transferzentrum fuer Angewandte Photovoltaik und Duennschichttechnik, Stuttgart (Germany); Lutz, B. [Steinbeis-Transferzentrum fuer Angewandte Photovoltaik und Duennschichttechnik, Stuttgart (Germany); Schubert, M.B. [Steinbeis-Transferzentrum fuer Angewandte Photovoltaik und Duennschichttechnik, Stuttgart (Germany); Werner, J.H. [Steinbeis-Transferzentrum fuer Angewandte Photovoltaik und Duennschichttechnik, Stuttgart (Germany)

    2006-09-15

We present the measured performance of a new generation of semitransparent amorphous silicon position detectors. They have a large sensitive area (30×30 mm²) and show good properties such as a high response (about 20 mA/W), an intrinsic position resolution better than 3 μm, a spatial-point reconstruction precision better than 10 μm, deflection angles smaller than 10 μrad and a transmission power in the visible and NIR higher than 70%.

  16. Large Size High Performance Transparent Amorphous Silicon Sensors for Laser Beam Position Detection and Monitoring

    International Nuclear Information System (INIS)

    Calderon, A.; Martinez Rivero, C.; Matorras, F.; Rodrigo, T.; Sobron, M.; Vila, I.; Virto; Alberdi, J.; Arce, P.; Barcala, J. M.; Calvo, E.; Ferrando, A.; Josa, M. I.; Luque, J. M.; Molinero, A.; Navarrete, J.; Oller, J. C.; Kohler, C.; Lutz, B.; Schubert, M. B.

    2006-01-01

We present the measured performance of a new generation of semitransparent amorphous silicon position detectors. They have a large sensitive area (30 x 30 mm²) and show good properties such as a high response (about 20 mA/W), an intrinsic position resolution better than 3 μm, a spatial point reconstruction precision better than 10 μm, deflection angles smaller than 10 μrad and a transmission power in the visible and NIR higher than 70%. In addition, multipoint alignment monitoring, using up to five sensors lined along a light path of about 5 meters, can be achieved with a resolution better than 20 μm. (Author)

  17. The welfare implications of large litter size in the domestic pig II: management factors

    DEFF Research Database (Denmark)

    Baxter, E.M.; Rutherford, K.M.D.; D'Eath, R.B.

    2013-01-01

    routinely exceeds the ability of individual sows to successfully rear all the piglets (ie viable piglets outnumber functional teats). Such interventions include: tooth reduction; split suckling; cross-fostering; use of nurse sow systems and early weaning, including split weaning; and use of artificial...... rearing systems. These practices raise welfare questions for both the piglets and sow and are described and discussed in this review. In addition, possible management approaches which might mitigate health and welfare issues associated with large litters are identified. These include early intervention...

  18. The main postulates of adaptive correction of distortions of the wave front in large-size optical systems

    Directory of Open Access Journals (Sweden)

    V. V. Sychev

    2014-01-01

Full Text Available In the development of optical telescopes, the main trend has always been to increase the penetrating power of the telescope. A practical way to achieve this is to improve image quality (reducing the angular size of the image under real distorting conditions) and to increase the diameter of the primary mirror. This is counteracted by the various distorting factors, or interference, that arise during real-time use of telescopes, as well as by the complexity of manufacturing large mirrors.It is shown that the most effective way to deal with the influence of distorting factors on image quality in the telescope is first to minimize them (through the choice of telescope site, a rational optical scheme, new materials and technologies, design improvements, mirror unloading, mounting choice, etc.) and then to compensate the remaining distortions adaptively.It should be noted that the domestic concept for designing large-sized telescopes allows, in our opinion, the most efficient way to do this: abandoning the creation of an "absolutely rigid and well-ordered" design that keeps the telescope optics passively aligned under operating conditions. Instead, the design need only keep residual deformations at a level whose effect can be efficiently compensated by the adaptive system, using the segmented elements of the primary mirror and the secondary mirror as a corrector.It has been found that in transmission optical systems delivering laser power to a remote object, it is necessary not only to overcome the distorting factors inherent in optical information systems but also to overcome a number of new difficulties. The main ones have been identified as follows:• the influence of laser radiation on the structure components and the propagation medium and, as a consequence, the opposite effect of the structure components and the propagation

  19. Numerical modeling of deformation and vibrations in the construction of large-size fiberglass cooling tower fan

    Directory of Open Access Journals (Sweden)

    Fanisovich Shmakov Arthur

    2016-01-01

    Full Text Available This paper presents the results of numerical modeling of deformation processes and the analysis of the fundamental frequencies of the construction of a large-size fiberglass cooling tower fan. The components of the stress-strain state of the structure were obtained from imported gas-dynamic and thermal loads, together with the fundamental vibration modes. Based on the analysis of the fundamental frequencies, constructive solutions have been proposed to reduce the probability of failure under the action of aeroelastic forces.

  20. Microstructural Control via Copious Nucleation Manipulated by In Situ Formed Nucleants: Large-Sized and Ductile Metallic Glass Composites.

    Science.gov (United States)

    Song, Wenli; Wu, Yuan; Wang, Hui; Liu, Xiongjun; Chen, Houwen; Guo, Zhenxi; Lu, Zhaoping

    2016-10-01

    A novel strategy to control the precipitation behavior of the austenitic phase, and to obtain large-sized, transformation-induced, plasticity-reinforced bulk metallic glass matrix composites, with good tensile properties, is proposed. By inducing heterogeneous nucleation of the transformable reinforcement via potent nucleants formed in situ, the characteristics of the austenitic phase are well manipulated. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Vacuum system for applying reflective coatings on large-size optical components using the method of magnetron sputtering

    Science.gov (United States)

    Azerbaev, Alexander A.; Abdulkadyrov, Magomed A.; Belousov, Sergey P.; Ignatov, Aleksandr N.; Mukhammedzyanov, Timur R.

    2016-10-01

    A vacuum system for the deposition of reflective coatings on large-size optical components up to 4.0 m in diameter, using the method of magnetron sputtering, was built at JSC LZOS. The technological process for deposition of a reflective Al coating with a protective SiO2 layer was designed and approved. After climatic tests, the lifetime of such a coating was estimated at 30 years. A coating thickness uniformity of ±5% was achieved over the maximum diameter of 4.0 m.

  2. IMPACTS OF OWN BRANDS STRATEGY ON MANUFACTURER – RETAILER RELATIONSHIP: CASE STUDIES IN LARGE SIZE COMPANIES

    Directory of Open Access Journals (Sweden)

    Renato Telles

    2012-05-01

    Full Text Available The study of the marketing and production strategy of own-brand products requires an understanding of both the benefits gained by the retailers and manufacturers involved in this activity and the effects of this strategy on the relationship between these companies. Accordingly, a qualitative and descriptive study was conducted, based on six case studies: three of large product manufacturers and three of large retailers. The aim of this study is threefold: (1) to analyze the expectations of manufacturers and retailers regarding their current own-brand operations, (2) to analyze their expectations regarding future own-brand operations, and (3) to analyze the influence of own-brand adoption on the relationship between manufacturers and retailers. The results indicate that (a) current operations are primarily based on economic reasons (market share and use of spare capacity), (b) future operations are perceived from a marketing perspective (survival), and (c) the adoption of an own-brand strategy does not harm the relationship between the parties, manufacturers and retailers alike, contrary to what earlier studies indicated.

  3. Validating hierarchical verbal autopsy expert algorithms in a large data set with known causes of death.

    Science.gov (United States)

    Kalter, Henry D; Perin, Jamie; Black, Robert E

    2016-06-01

    Physician assessment historically has been the most common method of analyzing verbal autopsy (VA) data. Recently, the World Health Organization endorsed two automated methods, Tariff 2.0 and InterVA-4, which promise greater objectivity and lower cost. A disadvantage of the Tariff method is that it requires a training data set from a prior validation study, while InterVA relies on clinically specified conditional probabilities. We undertook to validate the hierarchical expert algorithm analysis of VA data, an automated, intuitive, deterministic method that does not require a training data set. Using Population Health Metrics Research Consortium study hospital source data, we compared the primary causes of 1629 neonatal and 1456 1-59 month-old child deaths from VA expert algorithms arranged in a hierarchy to their reference standard causes. The expert algorithms were held constant, while five prior and one new "compromise" neonatal hierarchy, and three former child hierarchies were tested. For each comparison, the reference standard data were resampled 1000 times within the range of cause-specific mortality fractions (CSMF) for one of three approximated community scenarios in the 2013 WHO global causes of death, plus one random mortality cause proportions scenario. We utilized CSMF accuracy to assess overall population-level validity, and the absolute difference between VA and reference standard CSMFs to examine particular causes. Chance-corrected concordance (CCC) and Cohen's kappa were used to evaluate individual-level cause assignment. Overall CSMF accuracy for the best-performing expert algorithm hierarchy was 0.80 (range 0.57-0.96) for neonatal deaths and 0.76 (0.50-0.97) for child deaths. Performance for particular causes of death varied, with fairly flat estimated CSMF over a range of reference values for several causes. Performance at the individual diagnosis level was also less favorable than that for overall CSMF (neonatal: best CCC = 0.23, range 0
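The population-level metric used in this record, CSMF accuracy, has a standard closed form: one minus the total absolute error of the predicted cause fractions, normalized by the worst error achievable given the true fractions. A minimal sketch (the cause names and fractions below are hypothetical, not study data):

```python
def csmf_accuracy(true_csmf, pred_csmf):
    """Cause-specific mortality fraction (CSMF) accuracy.

    1 - sum(|pred - true|) / (2 * (1 - min(true))): equals 1 for a
    perfect match and 0 for the worst possible set of fractions.
    """
    err = sum(abs(pred_csmf[c] - true_csmf[c]) for c in true_csmf)
    worst = 2.0 * (1.0 - min(true_csmf.values()))
    return 1.0 - err / worst

# Hypothetical three-cause example (fractions sum to 1).
true_f = {"sepsis": 0.5, "asphyxia": 0.3, "prematurity": 0.2}
pred_f = {"sepsis": 0.4, "asphyxia": 0.4, "prematurity": 0.2}
print(round(csmf_accuracy(true_f, pred_f), 3))  # → 0.875
```

Resampling the reference standard over a range of CSMFs, as done in the study, then amounts to re-evaluating this metric over many drawn `true_f` vectors.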

  4. Prognostic significance of tumor size of small lung adenocarcinomas evaluated with mediastinal window settings on computed tomography.

    Directory of Open Access Journals (Sweden)

    Yukinori Sakao

    Full Text Available BACKGROUND: We aimed to clarify that the size of the lung adenocarcinoma evaluated using the mediastinal window on computed tomography is an important and useful modality for predicting invasiveness, lymph node metastasis and prognosis in small adenocarcinoma. METHODS: We evaluated 176 patients with small lung adenocarcinomas (diameter, 1-3 cm) who underwent standard surgical resection. Tumours were examined using computed tomography with thin section conditions (1.25 mm thick on high-resolution computed tomography) with tumour dimensions evaluated under two settings: lung window and mediastinal window. We also determined the patient age, gender, preoperative nodal status, tumour size, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and pathological status (lymphatic vessel, vascular vessel or pleural invasion). Recurrence-free survival was used for prognosis. RESULTS: Lung window, mediastinal window, tumour disappearance ratio and preoperative nodal status were significant predictive factors for recurrence-free survival in univariate analyses. Areas under the receiver operator curves for recurrence were 0.76, 0.73 and 0.65 for mediastinal window, tumour disappearance ratio and lung window, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant predictive factors for lymph node metastasis in univariate analyses; areas under the receiver operator curves were 0.61, 0.76, 0.72 and 0.66, for lung window, mediastinal window, tumour disappearance ratio and preoperative serum carcinoembryonic antigen levels, respectively. 
Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant factors for lymphatic vessel, vascular vessel or pleural invasion in univariate analyses; areas under the receiver operator curves were 0
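The areas under the receiver operator curves reported in this record can be computed directly from raw predictor values via the Mann-Whitney rank identity: the AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch with hypothetical values (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic; ties count one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical mediastinal-window tumour sizes (cm):
recurrence = [2.1, 1.8, 2.5]           # patients with recurrence
no_recurrence = [0.9, 1.2, 1.8, 0.7]   # patients without recurrence
print(auc(recurrence, no_recurrence))
```

This naive O(n·m) loop is fine at the study's scale (176 patients); larger cohorts would sort once and use rank sums instead.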

  5. Prognostic Significance of Tumor Size of Small Lung Adenocarcinomas Evaluated with Mediastinal Window Settings on Computed Tomography

    Science.gov (United States)

    Sakao, Yukinori; Kuroda, Hiroaki; Mun, Mingyon; Uehara, Hirofumi; Motoi, Noriko; Ishikawa, Yuichi; Nakagawa, Ken; Okumura, Sakae

    2014-01-01

    Background We aimed to clarify that the size of the lung adenocarcinoma evaluated using mediastinal window on computed tomography is an important and useful modality for predicting invasiveness, lymph node metastasis and prognosis in small adenocarcinoma. Methods We evaluated 176 patients with small lung adenocarcinomas (diameter, 1–3 cm) who underwent standard surgical resection. Tumours were examined using computed tomography with thin section conditions (1.25 mm thick on high-resolution computed tomography) with tumour dimensions evaluated under two settings: lung window and mediastinal window. We also determined the patient age, gender, preoperative nodal status, tumour size, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and pathological status (lymphatic vessel, vascular vessel or pleural invasion). Recurrence-free survival was used for prognosis. Results Lung window, mediastinal window, tumour disappearance ratio and preoperative nodal status were significant predictive factors for recurrence-free survival in univariate analyses. Areas under the receiver operator curves for recurrence were 0.76, 0.73 and 0.65 for mediastinal window, tumour disappearance ratio and lung window, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant predictive factors for lymph node metastasis in univariate analyses; areas under the receiver operator curves were 0.61, 0.76, 0.72 and 0.66, for lung window, mediastinal window, tumour disappearance ratio and preoperative serum carcinoembryonic antigen levels, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant factors for lymphatic vessel, vascular vessel or pleural invasion in univariate analyses; areas under the receiver operator curves were 0.60, 0.81, 0

  6. High-throughput film-densitometry: An efficient approach to generate large data sets

    Energy Technology Data Exchange (ETDEWEB)

    Typke, Dieter; Nordmeyer, Robert A.; Jones, Arthur; Lee, Juyoung; Avila-Sakar, Agustin; Downing, Kenneth H.; Glaeser, Robert M.

    2004-07-14

    A film-handling machine (robot) has been built which can, in conjunction with a commercially available film densitometer, exchange and digitize over 300 electron micrographs per day. Implementation of robotic film handling effectively eliminates the delay and tedium associated with digitizing images when data are initially recorded on photographic film. The modulation transfer function (MTF) of the commercially available densitometer is significantly worse than that of a high-end, scientific microdensitometer. Nevertheless, its signal-to-noise ratio (S/N) is quite excellent, allowing substantial restoration of the output to "near-to-perfect" performance. Due to the large area of the standard electron microscope film that can be digitized by the commercial densitometer (up to 10,000 × 13,680 pixels with an appropriately coded holder), automated film digitization offers a fast and inexpensive alternative to high-end CCD cameras as a means of acquiring large amounts of image data in electron microscopy.
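The restoration described, which exploits a good S/N to compensate a poor MTF, can be pictured as Wiener-style division of the spectrum by the MTF, damped where the MTF approaches the noise floor. An illustrative 1-D sketch; the Gaussian MTF and SNR value are assumptions for the demo, not the paper's measured response:

```python
import numpy as np

def mtf_restore(signal, mtf, snr=100.0):
    """Wiener-style restoration: divide the spectrum by the MTF,
    damped where the MTF is small relative to the noise level."""
    S = np.fft.rfft(signal)
    H = np.asarray(mtf, dtype=float)       # MTF sampled at rfft frequencies
    W = H / (H**2 + 1.0 / snr**2)          # Wiener filter, noise power ~ 1/snr^2
    return np.fft.irfft(S * W, n=len(signal))

# A step edge blurred by a low-pass MTF, then restored.
n = 64
x = np.zeros(n); x[n // 2:] = 1.0
freqs = np.fft.rfftfreq(n)
H = np.exp(-(freqs / 0.2) ** 2)            # hypothetical Gaussian MTF
blurred = np.fft.irfft(np.fft.rfft(x) * H, n=n)
restored = mtf_restore(blurred, H)
print(bool(np.linalg.norm(restored - x) < np.linalg.norm(blurred - x)))  # → True
```

The damping term is what makes the high S/N essential: with a noisy scan, dividing out small MTF values would amplify noise instead of restoring detail.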

  7. Desorption of large molecules with light-element clusters: Effects of cluster size and substrate nature

    Energy Technology Data Exchange (ETDEWEB)

    Delcorte, Arnaud, E-mail: arnaud.delcorte@uclouvain.be [Institute of Condensed Matter and Nanosciences - Bio and Soft Matter, Universite catholique de Louvain, Croix du Sud, 1 bte 3, B-1348 Louvain-la-Neuve (Belgium); Garrison, Barbara J. [Department of Chemistry, Penn State University, University Park, PA 16802 (United States)

    2011-07-15

    This contribution focuses on the conditions required to desorb a large hydrocarbon molecule using light-element clusters. The test molecule is a 7.5 kDa coil of polystyrene (PS61). Several projectiles are compared, from C60 to 110 kDa organic droplets, and two substrates are used, amorphous polyethylene and mono-crystalline gold. Different aiming points and incidence angles are examined. Under specific conditions, 10 keV nanodrops can desorb PS61 intact from a gold substrate and from a soft polyethylene substrate. The prevalent mechanism for the desorption of intact and 'cold' molecules is one in which the molecules are washed away by the projectile constituents and entrained in their flux, with an emission angle close to ∼70°. The effects of the different parameters on the dynamics and the underlying physics are discussed in detail and the predictions of the model are compared with other published studies.

  8. Desorption of large molecules with light-element clusters: Effects of cluster size and substrate nature

    International Nuclear Information System (INIS)

    Delcorte, Arnaud; Garrison, Barbara J.

    2011-01-01

    This contribution focuses on the conditions required to desorb a large hydrocarbon molecule using light-element clusters. The test molecule is a 7.5 kDa coil of polystyrene (PS61). Several projectiles are compared, from C60 to 110 kDa organic droplets, and two substrates are used, amorphous polyethylene and mono-crystalline gold. Different aiming points and incidence angles are examined. Under specific conditions, 10 keV nanodrops can desorb PS61 intact from a gold substrate and from a soft polyethylene substrate. The prevalent mechanism for the desorption of intact and 'cold' molecules is one in which the molecules are washed away by the projectile constituents and entrained in their flux, with an emission angle close to ∼70°. The effects of the different parameters on the dynamics and the underlying physics are discussed in detail and the predictions of the model are compared with other published studies.

  9. MMSW. A large-size micromegas quadruplet prototype. Design and construction

    Energy Technology Data Exchange (ETDEWEB)

    Kuger, Fabian; Sidiropoulou, Ourania [Julius Maximilians Universitaet, Wuerzburg (Germany); European Organization for Nuclear Research (CERN), Geneva (Switzerland); Bianco, Michele; Danielsson, Hans; Degrange, Jordan; Oliveira, Rui de; Farina, Eduardo; Iengo, Paolo; Perez Gomez, Francisco; Sekhniaidze, Givi; Sforza, Federico; Vergain, Maurice; Wotschack, Joerg [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Duedder, Andreas; Lin, Tai-Hua; Schott, Matthias [Johannes Gutenberg-Universitaet, Mainz (Germany)

    2016-07-01

    Two micromegas detector quadruplets with an area of 0.5 m² (MMSW) have been recently constructed and tested at CERN and the University of Mainz. They serve as prototypes for the planned upgrade project of the ATLAS muon system. Their design is based on the resistive-strip technology and thus renders the detectors spark tolerant. The applied 'mechanically floating' mesh design allows for large-area Micromegas construction and facilitates detector cleaning before assembly. Each quadruplet comprises four detection layers with 1024 readout strips and a strip pitch of 415 μm. In two out of the four layers the strips are inclined by ±1.5° to allow for the measurement of a second coordinate. We present the detector concept and report on the experience gained during the detector construction.

  10. Statistical homogeneity tests applied to large data sets from high energy physics experiments

    Science.gov (United States)

    Trusina, J.; Franc, J.; Kůs, V.

    2017-12-01

    Homogeneity tests are used in high energy physics to verify simulated Monte Carlo samples, i.e. to check whether they follow the same distribution as the data measured by the particle detector. The Kolmogorov-Smirnov, χ², and Anderson-Darling tests are the most widely used techniques for assessing sample homogeneity. Since MC generators produce plenty of entries from different models, each entry has to be re-weighted to obtain the same sample size as the measured data. One approach to homogeneity testing is binning; if we do not want to lose any information, we can instead apply generalized tests based on weighted empirical distribution functions. In this paper, we propose such generalized weighted homogeneity tests and introduce some of their asymptotic properties. We present results based on a numerical analysis which focuses on estimating the type-I error and power of the tests. Finally, we present an application of our homogeneity tests to data from the DØ experiment at Fermilab.
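A Kolmogorov-Smirnov-type statistic on weighted empirical distribution functions, of the kind this record describes, can be sketched as below. This is a naive O(n²) illustration of the statistic itself, not the authors' implementation, and it omits the asymptotic calibration the paper studies:

```python
def weighted_ks(x, wx, y, wy):
    """Sup-distance between two weighted empirical CDFs.

    x, y: sample values; wx, wy: per-entry weights (e.g. from
    re-weighting MC generator entries to the measured sample size).
    """
    sx, sy = sum(wx), sum(wy)
    pts = sorted(set(x) | set(y))
    d = 0.0
    for t in pts:
        fx = sum(w for v, w in zip(x, wx) if v <= t) / sx
        fy = sum(w for v, w in zip(y, wy) if v <= t) / sy
        d = max(d, abs(fx - fy))
    return d

# Identical samples with equal weights: distance 0.
print(weighted_ks([1, 2, 3], [1, 1, 1], [1, 2, 3], [1, 1, 1]))  # → 0.0
# Disjoint samples: maximal distance 1.
print(weighted_ks([0, 1], [1, 1], [5, 6], [1, 1]))              # → 1.0
```

With unit weights this reduces to the ordinary two-sample KS statistic; the weights are what let one compare a re-weighted MC sample against data without binning.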

  11. Processing large sensor data sets for safeguards: the knowledge generation system.

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Maikel A.; Smartt, Heidi Anne; Matthews, Robert F.

    2012-04-01

    Modern nuclear facilities, such as reprocessing plants, present inspectors with significant challenges due in part to the sheer amount of equipment that must be safeguarded. The Sandia-developed and patented Knowledge Generation system was designed to automatically analyze large amounts of safeguards data to identify anomalous events of interest by comparing sensor readings with those expected from a process of interest and operator declarations. This paper describes a demonstration of the Knowledge Generation system using simulated accountability tank sensor data to represent part of a reprocessing plant. The demonstration indicated that Knowledge Generation has the potential to address several problems critical to the future of safeguards. It could be extended to facilitate remote inspections and trigger random inspections. Knowledge Generation could analyze data to establish trust hierarchies, to facilitate safeguards use of operator-owned sensors.

  12. Three-dimensional nanostructure determination from a large diffraction data set recorded using scanning electron nanodiffraction

    Directory of Open Access Journals (Sweden)

    Yifei Meng

    2016-09-01

    Full Text Available A diffraction-based technique is developed for the determination of three-dimensional nanostructures. The technique employs high-resolution and low-dose scanning electron nanodiffraction (SEND to acquire three-dimensional diffraction patterns, with the help of a special sample holder for large-angle rotation. Grains are identified in three-dimensional space based on crystal orientation and on reconstructed dark-field images from the recorded diffraction patterns. Application to a nanocrystalline TiN thin film shows that the three-dimensional morphology of columnar TiN grains of tens of nanometres in diameter can be reconstructed using an algebraic iterative algorithm under specified prior conditions, together with their crystallographic orientations. The principles can be extended to multiphase nanocrystalline materials as well. Thus, the tomographic SEND technique provides an effective and adaptive way of determining three-dimensional nanostructures.
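The abstract's "algebraic iterative algorithm" family is classically represented by the Kaczmarz/ART update, which solves a linear projection system by cyclically projecting onto each measurement's hyperplane. A toy sketch on a 2×2 system; the actual SEND reconstruction operates on stacks of dark-field images with prior constraints, which this deliberately omits:

```python
def kaczmarz(A, b, n_sweeps=50):
    """Solve Ax = b by cyclic row projections (classic ART):
    x <- x + (b_i - a_i . x) / |a_i|^2 * a_i."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            dot = sum(a * v for a, v in zip(a_i, x))
            norm2 = sum(a * a for a in a_i)
            scale = (b_i - dot) / norm2
            x = [v + scale * a for v, a in zip(x, a_i)]
    return x

# Tiny consistent system with solution x = (1, 2).
A = [[1.0, 0.0], [1.0, 1.0]]
b = [1.0, 3.0]
x = kaczmarz(A, b)
print([round(v, 6) for v in x])  # → [1.0, 2.0]
```

For consistent systems the iterates converge to a solution; in tomography each row of `A` encodes one ray or projection sample, which is why the method scales to large sparse problems.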

  13. A polymer, random walk model for the size-distribution of large DNA fragments after high linear energy transfer radiation

    Science.gov (United States)

    Ponomarev, A. L.; Brenner, D.; Hlatky, L. R.; Sachs, R. K.

    2000-01-01

    DNA double-strand breaks (DSBs) produced by densely ionizing radiation are not located randomly in the genome: recent data indicate DSB clustering along chromosomes. Stochastic DSB clustering at large scales, from >100 Mbp downwards, is modeled using computer simulations and analytic equations. A random-walk, coarse-grained polymer model for chromatin is combined with a simple track-structure model in Monte Carlo software called DNAbreak and is applied to data on alpha-particle irradiation of V-79 cells. The chromatin model neglects molecular details but systematically incorporates an increase in average spatial separation between two DNA loci as the number of base-pairs between the loci increases. Fragment-size distributions obtained using DNAbreak match data on large fragments about as well as distributions previously obtained with a less mechanistic approach. Dose-response relations, linear at small doses of high linear energy transfer (LET) radiation, are obtained. They are found to be non-linear when the dose becomes so large that there is a significant probability of overlapping or close juxtaposition, along one chromosome, of different DSB clusters from different tracks. The non-linearity is more evident for large fragments than for small. The DNAbreak results furnish an example of the RLC (randomly located clusters) analytic formalism, which generalizes the broken-stick fragment-size distribution of the random-breakage model that is often applied to low-LET data.
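The random-breakage baseline that the RLC formalism generalizes is easy to simulate: placing breaks uniformly at random along a chromosome yields the broken-stick fragment-size distribution (approximately exponential fragment sizes when breaks are numerous). A minimal Monte Carlo sketch with hypothetical parameters:

```python
import random

def random_breakage(length_mbp, n_breaks, rng):
    """Place n_breaks uniformly on [0, length] and return fragment sizes."""
    cuts = sorted(rng.uniform(0.0, length_mbp) for _ in range(n_breaks))
    edges = [0.0] + cuts + [length_mbp]
    return [b - a for a, b in zip(edges, edges[1:])]

rng = random.Random(1)
frags = random_breakage(100.0, 20, rng)   # 100 Mbp chromosome, 20 DSBs
print(len(frags), round(sum(frags), 6))   # 21 fragments totalling 100.0 Mbp
```

The clustered (high-LET) case departs from this baseline precisely because DSBs arrive in correlated groups along a track rather than independently, which is what the DNAbreak simulations capture.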

  14. INNOVATIVE TECHNIQUES AND TECHNOLOGY APPLICATION IN MANAGEMENT OF REMOTE HANDLED AND LARGE SIZED MIXED WASTE FORMS

    International Nuclear Information System (INIS)

    BLACKFORD LT

    2008-01-01

    CH2M HILL Hanford Group, Inc. (CH2M HILL) plays a critical role in Hanford Site cleanup for the U.S. Department of Energy, Office of River Protection (ORP). CH2M HILL is responsible for the management of 177 tanks containing 53 million gallons of highly radioactive wastes generated from weapons production activities from 1943 through 1990. In that time, 149 single-shell tanks, ranging in capacity from 50,000 gallons to 500,000 gallons, and 28 double-shell tanks with a capacity of 1 million gallons each, were constructed and filled with toxic liquid wastes and sludges. The cleanup mission includes removing these radioactive waste solids from the single-shell tanks to double-shell tanks for staging as feed to the Waste Treatment Plant (WTP) on the Hanford Site, for vitrification of the wastes and disposal on the Hanford Site and in the Yucca Mountain repository. Concentrated efforts in retrieving residual solids and sludges from the single-shell tanks began in 2003; the first tank retrieved was C-106 in the 200 East Area of the site. The retrieval process requires installation of modified sluicing systems, vacuum systems, and pumping systems into existing tank risers. Inherent in this process is the removal of existing pumps, thermocouples, and agitating and monitoring equipment from the tank to be retrieved. Historically, these types of equipment have been extremely difficult to manage from the aspect of radiological dose, size, and weight, as well as their attendant operating and support systems such as electrical distribution and control panels, filter systems, and mobile retrieval systems. Significant effort and expense were required to manage this new waste stream, and over time several events resulted that were determined to be both unsafe for workers and potentially unsound for protection of the environment. 
Over the last four years, processes and systems have been developed that reduce worker exposures to these hazards, eliminate violations

  15. Study of the pressing operation of large-sized tiles using X-ray absorption

    International Nuclear Information System (INIS)

    Amoros, J. L.; Mallol, G.; Llorens, D.; Boix, J.; Arnau, J. M.; Feliu, C.; Cerisuelo, J. A.; Gargallo, J. J.

    2010-01-01

    An apparatus for X-ray non-destructive inspection of the bulk density distribution in large ceramic tiles has been designed, built and patented. This technique has many advantages compared with other methods: it allows tile bulk density distribution to be mapped and is neither destructive nor toxic, provided the X-ray tube and detector area are shielded to prevent leakage. In the present study, this technique, whose technical feasibility and accuracy had been verified in previous studies, has been used to scan ceramic tiles formed under different industrial conditions, modifying press working parameters. The use of high-precision laser telemeters allows tile thicknesses to be mapped, facilitating the interpretation of manufacturing defects produced in pressing, which cannot be interpreted by just measuring bulk density. The bulk density distributions obtained in the same unfired and fired tiles are also compared, a possibility afforded only by this measurement method, since it is non-destructive. The comparison of both unfired and fired tile bulk density distributions allows the influence of the pressing and firing stages on tile end porosity to be individually identified. (Author) 12 refs.

  16. Theory of resistive magnetohydrodynamic instabilities excited by energetic trapped particles in large-size tokamaks

    International Nuclear Information System (INIS)

    Biglari, H.

    1987-01-01

    A theory describing the excitation of resistive magnetohydrodynamic instabilities by a population of energetic particles trapped in regions of adverse curvature in tokamaks is presented. The theory's principal motivation is the observation that the high magnetic-field strengths and large geometric dimensions characteristic of present-generation thermonuclear fusion devices place them in a frequency regime in which the precessional drift frequency of the auxiliary hot-ion species falls, in order of magnitude, below a typical inverse resistive interchange time scale, so that the inclusion of resistive dissipation effects becomes important. Destabilization of the resistive internal kink mode by these suprathermal particles is first investigated. Using variational techniques, a generalized dispersion relation governing such modes, which recovers ideal theory in the appropriate limit, is derived and analyzed using Nyquist-diagrammatic techniques. An important implication of the theory for present-generation fusion devices is that they will be stable to fishbone activity. The interaction of energetic particles with resistive interchange-ballooning modes is then taken up. A population of hot particles, deeply trapped on the adverse-curvature side in tokamaks, can resonantly destabilize the resistive interchange mode, which is stable in their absence because of favorable average curvature. Both modes differ from their usual resistive magnetohydrodynamic counterparts in their destabilization mechanism

  17. Design and Construction of Large Size Micromegas Chambers for the ATLAS Upgrade of the Muon Spectrometer

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00380308; The ATLAS collaboration

    2016-01-01

    Large area Micromegas detectors will be employed for the first time in high-energy physics experiments. A total surface area of about 150 m² of the forward regions (pseudo-rapidity coverage 1.3 < |η| < 2.7) of the Muon Spectrometer of the ATLAS detector at LHC will be equipped with 8-layer Micromegas modules. Each module extends over a surface from 2 to 3 m² for a total active area of 1200 m². Together with the small strip Thin Gap Chambers they will compose the two New Small Wheels (NSW), which will replace the innermost stations of the ATLAS endcap muon tracking system in the 2018/19 shutdown. In order to achieve a 15% transverse momentum resolution for 1 TeV muons, in addition to an excellent intrinsic position resolution, the mechanical precision of each plane of the assembled module must be 30 μm along the precision coordinate and 80 μm perpendicular to the chamber. All readout planes are segmented into strips with a pitch of ...

  18. Design and Construction of Large Size Micromegas Chambers for the ATLAS Upgrade of the Muon Spectrometer

    CERN Document Server

    Jeanneau, Fabien; The ATLAS collaboration

    2015-01-01

    Large area Micromegas detectors will be employed for the first time in high-energy physics experiments. A total surface of about 150 m2 of the forward regions of the Muon Spectrometer of the ATLAS detector at LHC will be equipped with 8-layer Micromegas modules. Each module extends over a surface from 2 to 3 m2 for a total active area of 1200 m2. Together with the small strip Thin Gap Chambers they will compose the two New Small Wheels, which will replace the innermost stations of the ATLAS endcap muon tracking system in the 2018/19 shutdown. In order to achieve a 15% transverse momentum resolution for 1 TeV muons, in addition to an excellent intrinsic resolution, the mechanical precision of each plane of the assembled module must be as good as 30 μm along the precision coordinate and 80 μm perpendicular to the chamber. In the prototyping towards the final configuration two similar quadruplets with dimensions 1.2×0.5 m2 have been built with the same structure as foreseen for the NSW upgrade. It represents ...

  19. Design and Construction of Large Size Micromegas Chambers for the ATLAS Upgrade of the Muon Spectrometer

    CERN Document Server

    Jeanneau, Fabien; The ATLAS collaboration

    2015-01-01

    Large area Micromegas detectors will be employed for the first time in high-energy physics experiments. A total surface of about 150 m2 of the forward regions of the Muon Spectrometer of the ATLAS detector at LHC will be equipped with 8-layer Micromegas modules. Each module extends over a surface from 2 to 3 m2 for a total active area of 1200 m2. Together with the small strip Thin Gap Chambers they will compose the two New Small Wheels, which will replace the innermost stations of the ATLAS endcap muon tracking system in the 2018/19 shutdown. In order to achieve a 15% transverse momentum resolution for 1 TeV muons, in addition to an excellent intrinsic resolution, the mechanical precision of each plane of the assembled module must be as good as 30 μm along the precision coordinate and 80 μm perpendicular to the chamber. All readout planes are segmented into strips with a pitch of 400 μm for a total of 4096 strips. In two of the four planes the strips are inclined by 1.5° and provide a measurement of the...

  20. Design and Construction of Large Size Micromegas Chambers for the Upgrade of the ATLAS Muon Spectrometer

    CERN Document Server

    Lösel, Philipp; Müller, Ralph

    2015-01-01

    Large area Micromegas detectors will be employed for the first time in high-energy physics experiments. A total surface of about 150 m² of the forward regions of the Muon Spectrometer of the ATLAS detector at LHC will be equipped with 8-layer Micromegas modules. Each layer covers more than 2 m² for a total active area of 1200 m². Together with the small strip Thin Gap Chambers they will compose the two New Small Wheels, which will replace the innermost stations of the ATLAS endcap muon tracking system in the 2018/19 shutdown. In order to achieve a 15% transverse momentum resolution for 1 TeV muons, in addition to an excellent intrinsic resolution, the mechanical precision of each plane of the assembled module must be as good as 30 μm along the precision coordinate and 80 μm perpendicular to the chamber. The design and construction procedure of the Micromegas modules will be presented, as well as the design for the assembly ...

  1. Design and Construction of Large Size Micromegas Chambers for the ATLAS Upgrade of the Muon Spectrometer

    CERN Document Server

    Losel, Philipp Jonathan; The ATLAS collaboration

    2014-01-01

    Large area Micromegas detectors will be employed for the first time in high-energy physics experiments. A total surface of about 150 m² of the forward regions of the Muon Spectrometer of the ATLAS detector at LHC will be equipped with 8-layer Micromegas modules. Each module extends over a surface from 2 to 3 m² for a total active area of 1200 m². Together with the small strip Thin Gap Chambers they will compose the two New Small Wheels, which will replace the innermost stations of the ATLAS endcap muon tracking system in the 2018/19 shutdown. In order to achieve a 15% transverse momentum resolution for 1 TeV muons, in addition to an excellent intrinsic resolution, the mechanical precision of each plane of the assembled module must be as good as 30 μm along the precision coordinate and 80 μm perpendicular to the chamber. The design and construction procedure of the Micromegas modules will be presented, as well as the design for the assembly of modules onto the New Small Wheel. Emphasis will ...

  2. Large-sized soda ban as an alternative to soda tax.

    Science.gov (United States)

    Min, Hery Michelle

    2013-01-01

    This Note examines New York City's Sugary Drinks Portion Cap Rule (Soda Ban), which was originally set to become effective March 12, 2013. The New York County Supreme Court's decision in New York Statewide Coalition of Hispanic Chambers of Commerce v. New York City Department of Health and Mental Hygiene suspended the Soda Ban on March 11, 2013. The First Department of the Appellate Division of New York State Supreme Court affirmed the suspension on July 30, 2013. However, the complex economic policy and constitutional issues arising from the proposed Soda Ban deserve as much attention as the ultimate result of the legal challenge to the ban. Both courts struck down the Soda Ban on the grounds that it violated the separation of powers doctrine. The lower court further held that the Soda Ban was arbitrary and capricious. This Note does not focus solely on the holdings of the two courts, but takes a broader approach in analyzing the issues involved in the Soda Ban. By comparing and contrasting tobacco products with sugary beverages, this Note explains why the public seems to find the Soda Ban less appealing than tobacco regulations. Specifically, this Note addresses how the failed attempts of numerous states and cities to implement soda taxes demonstrate the complexity of policies geared toward curbing obesity; how fundamental values, such as health, fairness, efficiency, and autonomy factor into obesity policies; and the fact that legislatures and courts are struggling to determine the scope of public health law intervention. The Note explores how the Soda Ban, despite its judicial suspension, could represent a stepping-stone in combating the obesity epidemic.

  3. Simulation of reflecting surface deviations of centimeter-band parabolic space radiotelescope (SRT) with the large-size mirror

    Science.gov (United States)

    Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.

    2017-11-01

    The realization of astrophysical research requires the development of high-sensitivity centimeter-band parabolic space radiotelescopes (SRT) with large-size mirrors. Structurally, an SRT with a mirror size of more than 10 m can be realized as a deployable rigid structure. Mesh structures of such size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope with a 10 m diameter mirror is currently being developed in Russia within the "SPECTR - R" program. The external dimensions of the telescope exceed the size of the existing thermo-vacuum chambers used to verify the SRT reflecting-surface accuracy under the action of space environment factors. Numerical simulation therefore becomes the basis for accepting the chosen designs. Such modeling should build on experimental characterization of the basic structural materials and elements of the future reflector. This article considers computational modeling of the reflecting-surface deviations of a large-sized deployable centimeter-band space reflector during its orbital operation. The factors that determine the deviations are analyzed, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation faults; deformations caused by the behavior of composite materials in space). A finite-element model and a complex of methods are developed that allow the reflecting-surface deviations caused by all these factors to be modeled computationally, taking into account deviation correction by the spacecraft orientation system. Modeling results for two modes of SRT operation (orientation to the Sun) are presented.

  4. When David beats Goliath: the advantage of large size in interspecific aggressive contests declines over evolutionary time.

    Directory of Open Access Journals (Sweden)

    Paul R Martin

    Full Text Available Body size has long been recognized to play a key role in shaping species interactions. For example, while small species thrive in a diversity of environments, they typically lose aggressive contests for resources with larger species. However, numerous examples exist of smaller species dominating larger species during aggressive interactions, suggesting that the evolution of traits can allow species to overcome the competitive disadvantage of small size. If these traits accumulate as lineages diverge, then the advantage of large size in interspecific aggressive interactions should decline with increased evolutionary distance. We tested this hypothesis using data on the outcomes of 23,362 aggressive interactions among 246 bird species pairs involving vultures at carcasses, hummingbirds at nectar sources, and antbirds and woodcreepers at army ant swarms. We found the advantage of large size declined as species became more evolutionarily divergent, and smaller species were more likely to dominate aggressive contests when interacting with more distantly-related species. These results appear to be caused by both the evolution of traits in smaller species that enhanced their abilities in aggressive contests, and the evolution of traits in larger species that were adaptive for other functions, but compromised their abilities to compete aggressively. Specific traits that may provide advantages to small species in aggressive interactions included well-developed leg musculature and talons, enhanced flight acceleration and maneuverability, novel fighting behaviors, and traits associated with aggression, such as testosterone and muscle development. Traits that may have hindered larger species in aggressive interactions included the evolution of morphologies for tree trunk foraging that compromised performance in aggressive contests away from trunks, and the evolution of migration. Overall, our results suggest that fundamental trade-offs, such as those

  5. New large solar photocatalytic plant: set-up and preliminary results.

    Science.gov (United States)

    Malato, S; Blanco, J; Vidal, A; Fernández, P; Cáceres, J; Trincado, P; Oliveira, J C; Vincent, M

    2002-04-01

    A European industrial consortium called SOLARDETOX has been created as the result of an EC-DGXII BRITE-EURAM-III-financed project on solar photocatalytic detoxification of water. The project objective was to develop a simple, efficient and commercially competitive water-treatment technology, based on compound parabolic collector (CPC) solar collectors and TiO2 photocatalysis, to make possible easy design and installation. The design, set-up and preliminary results of the main project deliverable, the first European industrial solar detoxification treatment plant, is presented. This plant has been designed for the batch treatment of 2 m3 of water with a 100 m2 collector-aperture area and aqueous aerated suspensions of polycrystalline TiO2 irradiated by sunlight. Fully automatic control reduces operation and maintenance manpower. Plant behaviour has been compared (using dichloroacetic acid and cyanide at 50 mg l(-1) initial concentration as model compounds) with the small CPC pilot plants installed at the Plataforma Solar de Almería several years ago. The first results with high-content cyanide (1 g l(-1)) waste water are presented and plant treatment capacity is calculated.

  6. Extraction of tacit knowledge from large ADME data sets via pairwise analysis.

    Science.gov (United States)

    Keefer, Christopher E; Chang, George; Kauffman, Gregory W

    2011-06-15

    Pharmaceutical companies routinely collect data across multiple projects for common ADME endpoints. Although at the time of collection the data is intended for use in decision making within a specific project, knowledge can be gained by data mining the entire cross-project data set for patterns of structure-activity relationships (SAR) that may be applied to any project. One such data mining method is pairwise analysis. This method has the advantage of being able to identify small structural changes that lead to significant changes in activity. In this paper, we describe the process for full pairwise analysis of our high-throughput ADME assays routinely used for compound discovery efforts at Pfizer (microsomal clearance, passive membrane permeability, P-gp efflux, and lipophilicity). We also describe multiple strategies for the application of these transforms in a prospective manner during compound design. Finally, a detailed analysis of the activity patterns in pairs of compounds that share the same molecular transformation reveals multiple types of transforms from an SAR perspective. These include bioisosteres, additives, multiplicatives, and a type we call switches as they act to either turn on or turn off an activity. Copyright © 2011 Elsevier Ltd. All rights reserved.
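
    The pairwise bookkeeping described above can be sketched in a few lines. In the sketch below, all transforms and activity values are hypothetical (not Pfizer data): matched pairs are grouped by their shared structural transformation, and the activity shift each transform produces is summarized by its mean and its number of supporting pairs.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical matched pairs: (transform label, activity of A, activity of B).
# In a real pipeline the transform would come from matched-molecular-pair
# fragmentation of the structures; here it is just an illustrative label.
pairs = [
    ("H>>F", 1.20, 0.40), ("H>>F", 0.90, 0.30), ("H>>F", 1.10, 0.55),
    ("H>>OMe", 0.50, 1.40), ("H>>OMe", 0.60, 1.10),
]

# Group the activity differences (B - A) by transform.
deltas = defaultdict(list)
for transform, act_a, act_b in pairs:
    deltas[transform].append(act_b - act_a)

# Summarize each transform: mean activity shift and number of supporting pairs.
summary = {t: (round(mean(d), 2), len(d)) for t, d in deltas.items()}
print(summary)
```

    A transform with a consistently large negative or positive mean shift is a candidate "switch" in the paper's terminology; additive and bioisosteric transforms would show small shifts.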

  7. Separation of large DNA molecules by applying pulsed electric field to size exclusion chromatography-based microchip

    Science.gov (United States)

    Azuma, Naoki; Itoh, Shintaro; Fukuzawa, Kenji; Zhang, Hedong

    2018-02-01

    Through electrophoresis driven by a pulsed electric field, we succeeded in separating large DNA molecules with an electrophoretic microchip based on size exclusion chromatography (SEC), which was proposed in our previous study. The conditions of the pulsed electric field required to achieve the separation were determined by numerical analyses using our originally proposed separation model. Guided by the numerical results, we succeeded in separating large DNA molecules (λ DNA and T4 DNA) within 1600 s, which was approximately half the time achieved under a direct electric field in our previous study. Our SEC-based electrophoresis microchip will be an effective tool to meet the growing demand for faster and more convenient separation of large DNA molecules, especially in the field of epidemiological research of infectious diseases.

  8. Multiscale virtual particle based elastic network model (MVP-ENM) for normal mode analysis of large-sized biomolecules.

    Science.gov (United States)

    Xia, Kelin

    2017-12-20

    In this paper, a multiscale virtual particle based elastic network model (MVP-ENM) is proposed for the normal mode analysis of large-sized biomolecules. The multiscale virtual particle (MVP) model is proposed for the discretization of biomolecular density data. With this model, large-sized biomolecular structures can be coarse-grained into virtual particles such that a balance between model accuracy and computational cost can be achieved. An elastic network is constructed by assuming "connections" between virtual particles. The connection is described by a special harmonic potential function, which considers the influence from both the mass distributions and distance relations of the virtual particles. Two independent models, i.e., the multiscale virtual particle based Gaussian network model (MVP-GNM) and the multiscale virtual particle based anisotropic network model (MVP-ANM), are proposed. It has been found that in the Debye-Waller factor (B-factor) prediction, the results from our MVP-GNM with a high resolution are as good as the ones from GNM. Even with low resolutions, our MVP-GNM can still capture the global behavior of the B-factor very well with mismatches predominantly from the regions with large B-factor values. Further, it has been demonstrated that the low-frequency eigenmodes from our MVP-ANM are highly consistent with the ones from ANM even with very low resolutions and a coarse grid. Finally, the great advantage of MVP-ANM model for large-sized biomolecules has been demonstrated by using two poliovirus virus structures. The paper ends with a conclusion.
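
    The B-factor prediction that the MVP-GNM is benchmarked against can be illustrated with the standard GNM itself. The sketch below is a toy under stated assumptions (random points stand in for C-alpha coordinates; it is the plain GNM, not the multiscale virtual-particle variant): it builds the Kirchhoff connectivity matrix with a distance cutoff and takes the diagonal of its pseudo-inverse as relative B-factors.

```python
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 30.0, size=(60, 3))  # toy "C-alpha" coordinates (Angstrom)
cutoff = 10.0                                  # typical GNM contact cutoff

# Kirchhoff (connectivity) matrix: -1 for pairs within the cutoff,
# with the row degree on the diagonal, so every row sums to zero.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
gamma = -(d < cutoff).astype(float)
np.fill_diagonal(gamma, 0.0)
np.fill_diagonal(gamma, -gamma.sum(axis=1))

# Relative B-factors are proportional to the diagonal of the pseudo-inverse.
b = np.diag(np.linalg.pinv(gamma))
print(b.round(3))
```

    The MVP models replace the atom-level nodes above with density-derived virtual particles and a mass- and distance-weighted harmonic potential, but the linear-algebra pipeline is the same.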

  9. Can wide consultation help with setting priorities for large-scale biodiversity monitoring programs?

    Directory of Open Access Journals (Sweden)

    Frédéric Boivin

    Full Text Available Climate and other global change phenomena affecting biodiversity require monitoring to track ecosystem changes and guide policy and management actions. Designing a biodiversity monitoring program is a difficult task that requires making decisions that often lack consensus due to budgetary constraints. As monitoring programs require long-term investment, they also require strong and continuing support from all interested parties. As such, stakeholder consultation is key to identify priorities and make sound design decisions that have as much support as possible. Here, we present the results of a consultation conducted to serve as an aid for designing a large-scale biodiversity monitoring program for the province of Québec (Canada). The consultation took the form of a survey with 13 discrete choices involving tradeoffs in respect to design priorities and 10 demographic questions (e.g., age, profession). The survey was sent to thousands of individuals having expected interests and knowledge about biodiversity and was completed by 621 participants. Overall, consensuses were few and it appeared difficult to create a design fulfilling the priorities of the majority. Most participants wanted 1) a monitoring design covering the entire territory and focusing on natural habitats; 2) a focus on species related to ecosystem services, on threatened and on invasive species. The only demographic characteristic that was related to the type of prioritization was the declared level of knowledge in biodiversity (null to high), but even then the influence was quite small.

  10. Can wide consultation help with setting priorities for large-scale biodiversity monitoring programs?

    Science.gov (United States)

    Boivin, Frédéric; Simard, Anouk; Peres-Neto, Pedro

    2014-01-01

    Climate and other global change phenomena affecting biodiversity require monitoring to track ecosystem changes and guide policy and management actions. Designing a biodiversity monitoring program is a difficult task that requires making decisions that often lack consensus due to budgetary constraints. As monitoring programs require long-term investment, they also require strong and continuing support from all interested parties. As such, stakeholder consultation is key to identify priorities and make sound design decisions that have as much support as possible. Here, we present the results of a consultation conducted to serve as an aid for designing a large-scale biodiversity monitoring program for the province of Québec (Canada). The consultation took the form of a survey with 13 discrete choices involving tradeoffs in respect to design priorities and 10 demographic questions (e.g., age, profession). The survey was sent to thousands of individuals having expected interests and knowledge about biodiversity and was completed by 621 participants. Overall, consensuses were few and it appeared difficult to create a design fulfilling the priorities of the majority. Most participants wanted 1) a monitoring design covering the entire territory and focusing on natural habitats; 2) a focus on species related to ecosystem services, on threatened and on invasive species. The only demographic characteristic that was related to the type of prioritization was the declared level of knowledge in biodiversity (null to high), but even then the influence was quite small.

  11. A large set of newly created interspecific Saccharomyces hybrids increases aromatic diversity in lager beers.

    Science.gov (United States)

    Mertens, Stijn; Steensels, Jan; Saels, Veerle; De Rouck, Gert; Aerts, Guido; Verstrepen, Kevin J

    2015-12-01

    Lager beer is the most consumed alcoholic beverage in the world. Its production process is marked by a fermentation conducted at low (8 to 15°C) temperatures and by the use of Saccharomyces pastorianus, an interspecific hybrid between Saccharomyces cerevisiae and the cold-tolerant Saccharomyces eubayanus. Recent whole-genome-sequencing efforts revealed that the currently available lager yeasts belong to one of only two archetypes, "Saaz" and "Frohberg." This limited genetic variation likely reflects that all lager yeasts descend from only two separate interspecific hybridization events, which may also explain the relatively limited aromatic diversity between the available lager beer yeasts compared to, for example, wine and ale beer yeasts. In this study, 31 novel interspecific yeast hybrids were developed, resulting from large-scale robot-assisted selection and breeding between carefully selected strains of S. cerevisiae (six strains) and S. eubayanus (two strains). Interestingly, many of the resulting hybrids showed a broader temperature tolerance than their parental strains and reference S. pastorianus yeasts. Moreover, they combined a high fermentation capacity with a desirable aroma profile in laboratory-scale lager beer fermentations, thereby successfully enriching the currently available lager yeast biodiversity. Pilot-scale trials further confirmed the industrial potential of these hybrids and identified one strain, hybrid H29, which combines a fast fermentation, high attenuation, and the production of a complex, desirable fruity aroma. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  12. Investigating the Variability in Cumulus Cloud Number as a Function of Subdomain Size and Organization using large-domain LES

    Science.gov (United States)

    Neggers, R.

    2017-12-01

    Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to be able to conceptually and practically deal with this situation. A potential way forward is schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include i) the introduction of scale-awareness at the foundation of the scheme, and ii) the possibility to apply size-filtering of parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape and its range, but also the variability in cloud number that can appear due to i) subsampling effects and ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a certain size is found to decrease with the subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes. This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, favoring or inhibiting their formation (as illustrated by the attached figure of cloud mask
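
    The sub-domain counting underlying such an analysis can be sketched on a toy random cloud field. The sketch below is my own illustration, not the study's code: hypothetical cloud centers (clouds treated as points, ignoring their sizes) are scattered uniformly, counted in square subdomains of several sizes, and the relative standard deviation of the counts is computed. For an unorganized (spatially random) field this relative variability follows an inverse-linear dependence on subdomain size; organization would enhance it.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 512                                      # domain side (grid units)
centers = rng.uniform(0, L, size=(2000, 2))  # hypothetical cloud-center positions

def relative_count_std(sub):
    """Std/mean of cloud number over square subdomains of side `sub`."""
    nbins = L // sub
    ix = np.clip((centers[:, 0] // sub).astype(int), 0, nbins - 1)
    iy = np.clip((centers[:, 1] // sub).astype(int), 0, nbins - 1)
    counts = np.zeros((nbins, nbins))
    np.add.at(counts, (ix, iy), 1)           # histogram the centers per subdomain
    return counts.std() / counts.mean()

rel = {sub: relative_count_std(sub) for sub in (32, 64, 128)}
print({k: round(v, 3) for k, v in rel.items()})
```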

  13. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
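
    The folded allele frequency spectrum used as a summary statistic here is straightforward to compute from unpolarized SNP data. The sketch below uses toy random haplotypes, not PopSizeABC itself: alternative-allele counts are folded into minor-allele counts and histogrammed.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy haplotype matrix: rows = 30 haploid sequences, columns = 500 SNPs,
# entries 0/1 for reference/alternative allele (hypothetical data).
haps = rng.integers(0, 2, size=(30, 500))

n = haps.shape[0]
alt = haps.sum(axis=0)                  # alternative-allele count per SNP
minor = np.minimum(alt, n - alt)        # fold: unpolarized minor-allele count
afs = np.bincount(minor, minlength=n // 2 + 1)

print(afs)  # afs[k] = number of SNPs with minor-allele count k
```

    Folding avoids having to know which allele is ancestral, which is exactly why the method can run on unpolarized (and unphased) SNP calls.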

  14. Design and Construction of Large Size Micromegas Chambers for the ATLAS Upgrade of the Muon Spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2015-07-01

    Large area Micromegas detectors will be employed for the first time in high-energy physics experiments. A total surface of about 150 m{sup 2} of the forward regions of the Muon Spectrometer of the ATLAS detector at LHC will be equipped with 8-layer Micromegas modules. Each module extends over a surface from 2 to 3 m{sup 2} for a total active area of 1200 m{sup 2}. Together with the small strip Thin Gap Chambers they will compose the two New Small Wheels, which will replace the innermost stations of the ATLAS end-cap muon tracking system in the 2018/19 shutdown. In order to achieve a 15% transverse momentum resolution for 1 TeV muons, in addition to an excellent intrinsic resolution, the mechanical precision of each plane of the assembled module must be as good as 30 μm along the precision coordinate and 80 μm perpendicular to the chamber. In the prototyping towards the final configuration two similar quadruplets with dimensions 1.2 x 0.5 m{sup 2} have been built with the same structure as foreseen for the NSW upgrade. It represents the first example of a Micromegas quadruplet ever built, realized using the resistive-strip technology and decoupling the amplification mesh from the readout structure. All readout planes are segmented into strips with a pitch of 400 μm for a total of 4096 strips. In two of the four planes the strips are inclined by 1.5 deg. and provide a measurement of the second coordinate. The design and construction procedure of the Micromegas modules will be presented, as well as the design for the assembly of modules onto the New Small Wheel. Emphasis will be given on the methods developed to achieve the challenging mechanical precision. Measurements of deformation on chamber prototypes as a function of thermal gradients, gas over-pressure and internal stress (mesh tension and module fixation on supports) will be also shown in comparison to simulation. These tests were essential in the development of the final design in order to minimize the

  15. Decomposing wage distributions on a large data set - a quantile regression analysis of the gender wage gap

    DEFF Research Database (Denmark)

    Albæk, Karsten; Brink Thomsen, Lars

    This paper presents and implements a procedure that makes it possible to decompose wage distributions on large data sets. We replace bootstrap sampling in the standard Machado-Mata procedure with ‘non-replacement subsampling’, which is more suitable for the linked employer-employee data applied in this paper. Decompositions show that most of the glass ceiling is related to segregation in the form of either composition effects or different returns to males and females. A counterfactual wage distribution without differences in the constant terms (or ‘discrimination’) implies substantial changes in gender wage differences in the lower part of the wage distribution.
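
    The sampling substitution at the heart of the procedure, non-replacement subsampling in place of bootstrap resampling, can be sketched on toy data. The wage array, sizes, and distribution below are hypothetical; the actual procedure feeds such samples into quantile regressions at random quantiles, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)
wages = rng.lognormal(mean=3.0, sigma=0.5, size=1_000_000)  # toy wage data

m = 10_000  # working-sample size for each pass

# Standard Machado-Mata step: a bootstrap sample drawn with replacement ...
boot = rng.choice(wages, size=m, replace=True)

# ... replaced here by non-replacement subsampling, which never duplicates
# an observation -- the property that suits linked employer-employee data.
idx = rng.choice(wages.size, size=m, replace=False)
sub = wages[idx]

print(round(boot.mean(), 2), round(sub.mean(), 2))
```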

  16. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae).

    Science.gov (United States)

    Krahulcová, Anna; Trávnícek, Pavel; Krahulec, František; Rejmánek, Marcel

    2017-04-01

    Aesculus L. (horse chestnut, buckeye) is a genus of 12-19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and analyse the relationship between genome size and seed mass in this genus. Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to just one reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 0.955 pg 2C⁻¹ in A. parviflora to 1.275 pg 2C⁻¹ in A. glabra var. glabra. The chromosome number of 2n = 40 seems to be conclusively the universal 2n number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also within woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between the 2C values and seed masses in examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is a consistent positive association between larger genome size and larger seed mass within individual lineages. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved.
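
    The contrast between the pooled and within-clade correlations reported here is easy to reproduce on made-up numbers. In the sketch below, all values are hypothetical and chosen only to mimic the pattern: seed mass rises with genome size inside each of two clades, yet the pooled correlation is negative because the clade offsets run the other way.

```python
import numpy as np

# Hypothetical clade data (2C values in pg, seed masses in g), invented
# purely to illustrate how within-group and pooled correlations can differ.
genome = {
    "clade_A": np.array([0.96, 0.99, 1.02]),
    "clade_B": np.array([1.20, 1.24, 1.28]),
}
seed = {
    "clade_A": np.array([12.0, 14.0, 16.0]),
    "clade_B": np.array([5.0, 6.5, 8.0]),
}

# Within each clade: positive association between genome size and seed mass.
within = {c: np.corrcoef(genome[c], seed[c])[0, 1] for c in genome}

# Pooled across clades: the clade offsets reverse the sign.
pooled = np.corrcoef(
    np.concatenate(list(genome.values())),
    np.concatenate(list(seed.values())),
)[0, 1]
print(within, round(pooled, 2))
```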

  17. Job-related diseases and occupations within a large workers' compensation data set.

    Science.gov (United States)

    Leigh, J P; Miller, T R

    1998-03-01

    The objective of this report is to describe workers' job-related diseases and the occupations associated with those diseases. The methods include aggregation and analysis of job-related disease and occupation data from the Bureau of Labor Statistics' Supplementary Data System (SDS) for 1985 and 1986--the last years of data available with workers' compensation categories: death, permanent total, permanent partial, and temporary total and partial. Diseases are ranked according to their contribution to the four workers' compensation (WC) categories and also ranked within occupations according to the number of cases. Occupations are ranked according to their contribution to specific diseases within one of the four categories. The following diseases comprise the greatest numbers of deaths: heart attacks, asbestosis, silicosis, and stroke. Within the permanent total category, the diseases with the greatest contributions are heart attack, silicosis, strokes, and inflammation of the joints. For the permanent partial category, they are hearing loss, inflammation of joints, carpal tunnel syndrome, and heart attacks. For the temporary total and partial category, they are: inflammation of joints, carpal tunnel syndrome, dermatitis, and toxic poisoning. Hearing loss or inflammation of joints are associated with more than 300 occupations. Circulatory diseases comprise a larger share of job-related diseases than is generally acknowledged. Occupations contributing the most heart attack deaths are truck drivers, managers, janitors, supervisors, firefighters, and laborers. Ratios of numbers of deaths to numbers of disabilities are far higher for illnesses than injuries. Occupations that are consistent in their high ranking on most lists involving a variety of conditions include nonconstruction laborers, janitors, and construction laborers. The large SDS, though dated, provides a tentative national look at the broad spectrum of occupational diseases as defined by WC and the

  18. Impact of problem-based learning in a large classroom setting: student perception and problem-solving skills.

    Science.gov (United States)

    Klegeris, Andis; Hurren, Heather

    2011-12-01

    Problem-based learning (PBL) can be described as a learning environment where the problem drives the learning. This technique usually involves learning in small groups, which are supervised by tutors. It is becoming evident that PBL in a small-group setting has a robust positive effect on student learning and skills, including better problem-solving skills and an increase in overall motivation. However, very little research has been done on the educational benefits of PBL in a large classroom setting. Here, we describe a PBL approach (using tutorless groups) that was introduced as a supplement to standard didactic lectures in University of British Columbia Okanagan undergraduate biochemistry classes consisting of 45-85 students. PBL was chosen as an effective method to assist students in learning biochemical and physiological processes. By monitoring student attendance and using informal and formal surveys, we demonstrated that PBL has a significant positive impact on student motivation to attend and participate in the course work. Student responses indicated that PBL is superior to traditional lecture format with regard to the understanding of course content and retention of information. We also demonstrated that student problem-solving skills are significantly improved, but additional controlled studies are needed to determine how much PBL exercises contribute to this improvement. These preliminary data indicated several positive outcomes of using PBL in a large classroom setting, although further studies aimed at assessing student learning are needed to further justify implementation of this technique in courses delivered to large undergraduate classes.

  19. MUSI: an integrated system for identifying multiple specificity from very large peptide or nucleic acid data sets.

    Science.gov (United States)

    Kim, Taehyung; Tyndel, Marc S; Huang, Haiming; Sidhu, Sachdev S; Bader, Gary D; Gfeller, David; Kim, Philip M

    2012-03-01

    Peptide recognition domains and transcription factors play crucial roles in cellular signaling. They bind linear stretches of amino acids or nucleotides, respectively, with high specificity. Experimental techniques that assess the binding specificity of these domains, such as microarrays or phage display, can retrieve thousands of distinct ligands, providing detailed insight into binding specificity. In particular, the advent of next-generation sequencing has recently increased the throughput of such methods by several orders of magnitude. These advances have helped reveal the presence of distinct binding specificity classes that co-exist within a set of ligands interacting with the same target. Here, we introduce a software system called MUSI that can rapidly analyze very large data sets of binding sequences to determine the relevant binding specificity patterns. Our pipeline provides two major advances. First, it can detect previously unrecognized multiple specificity patterns in any data set. Second, it offers integrated processing of very large data sets from next-generation sequencing machines. The results are visualized as multiple sequence logos describing the different binding preferences of the protein under investigation. We demonstrate the performance of MUSI by analyzing recent phage display data for human SH3 domains as well as microarray data for mouse transcription factors.
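
    The per-position residue frequencies behind the sequence logos that MUSI outputs can be sketched directly. The toy peptides below are invented; the point is that pooling two specificity classes blurs the position frequencies, while splitting them (as MUSI-style clustering would) restores sharp motifs.

```python
from collections import Counter

# Toy aligned ligand set with two obvious specificity classes
# (class 1: P...P peptides; class 2: R...K peptides) -- made-up sequences.
peptides = ["PAAP", "PGTP", "PSVP", "RQDK", "RNEK", "RTGK"]

def position_frequencies(seqs):
    """Per-position residue frequencies -- the counts behind a sequence logo."""
    length = len(seqs[0])
    return [
        {aa: n / len(seqs) for aa, n in Counter(s[i] for s in seqs).items()}
        for i in range(length)
    ]

pooled = position_frequencies(peptides)
# Pooling both classes blurs position 0: neither residue dominates.
print(pooled[0])
# Splitting by class recovers one sharp motif per specificity pattern.
print(position_frequencies(peptides[:3])[0], position_frequencies(peptides[3:])[0])
```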

  20. PeptideNavigator: An interactive tool for exploring large and complex data sets generated during peptide-based drug design projects.

    Science.gov (United States)

    Diller, Kyle I; Bayden, Alexander S; Audie, Joseph; Diller, David J

    2018-01-01

    There is growing interest in peptide-based drug design and discovery. Due to their relatively large size, polymeric nature, and chemical complexity, the design of peptide-based drugs presents an interesting "big data" challenge. Here, we describe an interactive computational environment, PeptideNavigator, for naturally exploring the tremendous amount of information generated during a peptide drug design project. The purpose of PeptideNavigator is the presentation of large and complex experimental and computational data sets, particularly 3D data, so as to enable multidisciplinary scientists to make optimal decisions during a peptide drug discovery project. PeptideNavigator provides users with numerous viewing options, such as scatter plots, sequence views, and sequence frequency diagrams. These views allow for the collective visualization and exploration of many peptides and their properties, ultimately enabling the user to focus on a small number of peptides of interest. To drill down into the details of individual peptides, PeptideNavigator provides users with a Ramachandran plot viewer and a fully featured 3D visualization tool. Each view is linked, allowing the user to seamlessly navigate from collective views of large peptide data sets to the details of individual peptides with promising property profiles. Two case studies, based on MHC-1A activating peptides and MDM2 scaffold design, are presented to demonstrate the utility of PeptideNavigator in the context of disparate peptide-design projects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. How do you assign persistent identifiers to extracts from large, complex, dynamic data sets that underpin scholarly publications?

    Science.gov (United States)

    Wyborn, Lesley; Car, Nicholas; Evans, Benjamin; Klump, Jens

    2016-04-01

    Persistent identifiers in the form of a Digital Object Identifier (DOI) are becoming more mainstream, assigned at both the collection and dataset level. For static datasets, this is a relatively straightforward matter. However, many new data collections are dynamic, with new data being appended, models and derivative products being revised with new data, or the data itself revised as processing methods are improved. Further, because data collections are becoming accessible as services, researchers can log in and dynamically create user-defined subsets for specific research projects: they can also easily mix and match data from multiple collections, each of which can have a complex history. Inevitably, extracts from such dynamic data sets underpin scholarly publications, and this presents new challenges. The National Computational Infrastructure (NCI) has been encountering these issues and making progress towards addressing them. The NCI is a large node of the Research Data Services initiative (RDS) of the Australian Government's research infrastructure, which currently makes available over 10 PBytes of priority research collections, ranging from geosciences, geophysics, environment, and climate, through to astronomy, bioinformatics, and social sciences. Data are replicated to, or are produced at, NCI and then processed there to higher-level data products or directly analysed. Individual datasets range from multi-petabyte computational models and large volume raster arrays, down to gigabyte size, ultra-high resolution datasets. To facilitate access, maximise reuse and enable integration across the disciplines, datasets have been organized on a platform called the National Environmental Research Data Interoperability Platform (NERDIP). Combined, the NERDIP data collections form a rich and diverse asset for researchers: their co-location and standardization optimises the value of existing data, and forms a new resource to underpin data-intensive Science. New publication

  2. Effects of hippocampal lesions on the monkey's ability to learn large sets of object-place associations.

    Science.gov (United States)

    Belcher, Annabelle M; Harrington, Rebecca A; Malkova, Ludise; Mishkin, Mortimer

    2006-01-01

    Earlier studies found that recognition memory for object-place associations was impaired in patients with relatively selective hippocampal damage (Vargha-Khadem et al., Science 1997; 277:376-380), but was unaffected after selective hippocampal lesions in monkeys (Malkova and Mishkin, J Neurosci 2003; 23:1956-1965). A potentially important methodological difference between the two studies is that the patients were required to remember a set of 20 object-place associations for several minutes, whereas the monkeys had to remember only two such associations at a time, and only for a few seconds. To approximate more closely the task given to the patients, we trained monkeys on several successive sets of 10 object-place pairs each, with each set requiring learning across days. Despite the increased associative memory demands, monkeys given hippocampal lesions were unimpaired relative to their unoperated controls, suggesting that differences other than set size and memory duration underlie the different outcomes in the human and animal studies. (c) 2005 Wiley-Liss, Inc.

  3. Hypopigmentation Induced by Frequent Low-Fluence, Large-Spot-Size QS Nd:YAG Laser Treatments.

    Science.gov (United States)

    Wong, Yisheng; Lee, Siong See Joyce; Goh, Chee Leok

    2015-12-01

    The Q-switched 1064-nm neodymium-doped yttrium aluminum garnet (QS 1064-nm Nd:YAG) laser is increasingly used for nonablative skin rejuvenation or "laser toning" for melasma. Multiple and frequent low-fluence, large-spot-size treatments are used to achieve laser toning, and these treatments are associated with the development of macular hypopigmentation as a complication. We present a case series of three patients who developed guttate hypomelanotic macules on the face after receiving laser toning treatment with QS 1064-nm Nd:YAG.

  4. Optimal Siting and Sizing of Energy Storage System for Power Systems with Large-scale Wind Power Integration

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Huang, Shaojun

    2015-01-01

    This paper proposes algorithms for optimal siting and sizing of Energy Storage System (ESS) for the operation planning of power systems with large-scale wind power integration. The ESS in this study aims to mitigate the wind power fluctuations during the interval between two rolling Economic Dispatches (EDs) in order to maintain generation-load balance. The charging and discharging of ESS is optimized considering operation cost of conventional generators, capital cost of ESS and transmission losses. The statistics from simulated system operations are then coupled to the planning process to determine the…
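The fluctuation-mitigation role described here can be illustrated with a simple rule-based dispatch sketch: charge when wind output exceeds the ED set-point, discharge when it falls below, subject to power and energy limits. This rule, the one-hour time steps, and all parameter values are illustrative assumptions; the paper formulates the problem as a cost-minimizing optimization:

```python
def smooth_wind(wind, forecast, capacity, p_max, soc0=0.5):
    """Rule-based ESS dispatch (illustrative, not the paper's optimization).

    wind, forecast: MW per one-hour step; capacity: MWh; p_max: MW.
    Charges on surplus, discharges on deficit, within power and energy limits.
    Returns the smoothed power delivered to the grid and the final state of charge."""
    soc = soc0 * capacity
    delivered = []
    for w, f in zip(wind, forecast):
        surplus = w - f                                # deviation from ED set-point
        p = max(-p_max, min(p_max, surplus))           # power-rating limit
        p = max(-soc, min(capacity - soc, p))          # energy limits (1 h steps)
        soc += p
        delivered.append(w - p)                        # wind minus ESS charging power
    return delivered, soc

delivered, soc = smooth_wind([10, 14, 6, 10], [10, 10, 10, 10],
                             capacity=5, p_max=3)
```

The delivered profile deviates less from the ED set-point than the raw wind does, which is the generation-load balancing effect the abstract describes.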

  5. Optical and thermal performance of large-size parabolic-trough solar collectors from outdoor experiments: A test method and a case study

    International Nuclear Information System (INIS)

    Valenzuela, Loreto; López-Martín, Rafael; Zarza, Eduardo

    2014-01-01

    This article presents an outdoor test method to evaluate the optical and thermal performance of parabolic-trough collectors of large size (length ≥ 100 m), similar to those currently installed in solar thermal power plants. Optical performance in line-focus collectors is defined by three parameters, peak-optical efficiency and longitudinal and transversal incidence angle modifiers. In parabolic-troughs, the transversal incidence angle modifier is usually assumed equal to 1, and the incidence angle modifier is referred to the longitudinal incidence angle modifier, which is a factor less than or equal to 1 and must be quantified. These measurements are performed by operating the collector at low fluid temperatures for reducing heat losses. Thermal performance is measured during tests at various operating temperatures, which are defined within the working temperature range of the solar field, and for the condition of maximum optical response. Heat losses are measured from both the experiments performed to measure the overall efficiency and the experiments done by operating the collector to ensure that absorber pipes are not exposed to concentrated solar radiation. The set of parameters describing the performance of a parabolic-trough collector of large size has been measured following the test procedures proposed and explained in the article. - Highlights: • Outdoor test procedures of parabolic-trough solar collector (PTC) of large size working at high temperature are described. • Optical performance measured with cold fluid temperature and thermal performance measured in the complete temperature range. • Experimental data obtained in the testing of a PTC prototype are explained
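The parameters obtained from such tests (peak optical efficiency, longitudinal incidence angle modifier, heat-loss coefficients) are commonly combined into a quasi-steady collector performance model. A minimal sketch, with illustrative coefficient values that are not those of the tested collector:

```python
def iam(theta_deg, b0=0.0003, b1=0.00003):
    """Longitudinal incidence angle modifier K(theta) <= 1.
    Polynomial form and coefficients are illustrative assumptions."""
    return max(0.0, 1.0 - b0 * theta_deg - b1 * theta_deg ** 2)

def collector_efficiency(theta_deg, delta_t, irradiance,
                         eta_peak=0.75, a1=0.04, a2=0.0003):
    """Quasi-steady efficiency: optical gain scaled by K(theta) minus heat losses.

    delta_t: fluid temperature above ambient (K); irradiance: DNI (W/m^2).
    All coefficients are illustrative, not measured values."""
    losses = (a1 * delta_t + a2 * delta_t ** 2) / irradiance
    return eta_peak * iam(theta_deg) - losses
```

This matches the test logic in the abstract: optical parameters are isolated at low fluid temperature (where the loss term is negligible), and heat-loss coefficients are fitted from the higher-temperature runs.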

  6. DNMT1 is associated with cell cycle and DNA replication gene sets in diffuse large B-cell lymphoma.

    Science.gov (United States)

    Loo, Suet Kee; Ab Hamid, Suzina Sheikh; Musa, Mustaffa; Wong, Kah Keng

    2018-01-01

    Dysregulation of DNA (cytosine-5)-methyltransferase 1 (DNMT1) is associated with the pathogenesis of various types of cancer. It has been previously shown that DNMT1 is frequently expressed in diffuse large B-cell lymphoma (DLBCL), however its functions remain to be elucidated in the disease. In this study, we gene expression profiled (GEP) shRNA targeting DNMT1(shDNMT1)-treated germinal center B-cell-like DLBCL (GCB-DLBCL)-derived cell line (i.e. HT) compared with non-silencing shRNA (control shRNA)-treated HT cells. Independent gene set enrichment analysis (GSEA) performed using GEPs of shRNA-treated HT cells and primary GCB-DLBCL cases derived from two publicly-available datasets (i.e. GSE10846 and GSE31312) produced three separate lists of enriched gene sets for each gene sets collection from Molecular Signatures Database (MSigDB). Subsequent Venn analysis identified 268, 145 and six consensus gene sets from analyzing gene sets in C2 collection (curated gene sets), C5 sub-collection [gene sets from gene ontology (GO) biological process ontology] and Hallmark collection, respectively to be enriched in positive correlation with DNMT1 expression profiles in shRNA-treated HT cells, GSE10846 and GSE31312 datasets [false discovery rate (FDR) 0.8) with DNMT1 expression and significantly downregulated (log fold-change <-1.35; p<0.05) following DNMT1 silencing in HT cells. These results suggest the involvement of DNMT1 in the activation of cell cycle and DNA replication in DLBCL cells. Copyright © 2017 Elsevier GmbH. All rights reserved.
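The Venn analysis step, keeping only gene sets enriched in all three GEP-derived lists, reduces to a three-way set intersection. A minimal sketch with hypothetical gene-set names (the real lists come from GSEA of the shRNA-treated HT cells and the GSE10846 and GSE31312 datasets):

```python
# Hypothetical enriched gene-set names for each of the three analyses
enriched_ht       = {"CELL_CYCLE", "DNA_REPLICATION", "E2F_TARGETS", "APOPTOSIS"}
enriched_gse10846 = {"CELL_CYCLE", "DNA_REPLICATION", "E2F_TARGETS", "MYC_TARGETS"}
enriched_gse31312 = {"CELL_CYCLE", "DNA_REPLICATION", "E2F_TARGETS", "HYPOXIA"}

# Consensus gene sets: enriched in all three lists
consensus_sets = enriched_ht & enriched_gse10846 & enriched_gse31312
```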

  7. Can interface features affect aggression resulting from violent video game play? An examination of realistic controller and large screen size.

    Science.gov (United States)

    Kim, Ki Joon; Sundar, S Shyam

    2013-05-01

    Aggressiveness attributed to violent video game play is typically studied as a function of the content features of the game. However, can interface features of the game also affect aggression? Guided by the General Aggression Model (GAM), we examine the controller type (gun replica vs. mouse) and screen size (large vs. small) as key technological aspects that may affect the state aggression of gamers, with spatial presence and arousal as potential mediators. Results from a between-subjects experiment showed that a realistic controller and a large screen display induced greater aggression, presence, and arousal than a conventional mouse and a small screen display, respectively, and confirmed that trait aggression was a significant predictor of gamers' state aggression. Contrary to GAM, however, arousal showed no effects on aggression; instead, presence emerged as a significant mediator.

  8. Measurement of liquid mixing characteristics in large-sized ion exchange column for isotope separation by stepwise response method

    International Nuclear Information System (INIS)

    Fujine, Sachio; Saito, Keiichiro; Iwamoto, Kazumi; Itoi, Toshiaki.

    1981-07-01

    Liquid mixing in a large-sized ion exchange column for isotope separation was measured by the step-wise response method, using NaCl solution as a tracer. A 50 cm diameter column was packed with an ion exchange resin of 200 μm mean diameter. Experiments were carried out for several types of distributor and collector, which were attached to each end of the column. The smallest mixing was observed for the perforated-plate type of collector, coupled with a minimum stagnant volume above the ion exchange resin bed. The 50 cm diameter column exhibited better liquid-mixing characteristics than the 2 cm diameter column, for which good lithium isotope separation performance had already been confirmed. These results indicate that a large increase in throughput is attainable by scaling up the column diameter while keeping the same isotope separation performance as the 2 cm diameter column. (author)

  9. Large-size, high-uniformity, random silver nanowire networks as transparent electrodes for crystalline silicon wafer solar cells.

    Science.gov (United States)

    Xie, Shouyi; Ouyang, Zi; Jia, Baohua; Gu, Min

    2013-05-06

    Metal nanowire networks are emerging as next-generation transparent electrodes for photovoltaic devices. We demonstrate the application of random silver nanowire networks as the top electrode on crystalline silicon wafer solar cells. The dependence of transmittance and sheet resistance on the surface coverage is measured. Superior optical and electrical properties are observed due to the large-size, highly uniform nature of these networks. When applying the nanowire networks to the solar cells with an optimized two-step annealing process, we achieved an enhancement of up to 19% in energy conversion efficiency. The detailed analysis reveals that the enhancement is mainly caused by the improved electrical properties of the solar cells due to the silver nanowire networks. Our result reveals that silver nanowire networks are a promising alternative transparent electrode technology for crystalline silicon wafer solar cells.

  10. The Viking viewer for connectomics: scalable multi-user annotation and summarization of large volume data sets.

    Science.gov (United States)

    Anderson, J R; Mohammed, S; Grimm, B; Jones, B W; Koshevoy, P; Tasdizen, T; Whitaker, R; Marc, R E

    2011-01-01

    Modern microscope automation permits the collection of vast amounts of continuous anatomical imagery in both two and three dimensions. These large data sets present significant challenges for data storage, access, viewing, annotation and analysis. The cost and overhead of collecting and storing the data can be extremely high. Large data sets quickly exceed an individual's capability for timely analysis and present challenges in efficiently applying transforms, if needed. Finally annotated anatomical data sets can represent a significant investment of resources and should be easily accessible to the scientific community. The Viking application was our solution created to view and annotate a 16.5 TB ultrastructural retinal connectome volume and we demonstrate its utility in reconstructing neural networks for a distinctive retinal amacrine cell class. Viking has several key features. (1) It works over the internet using HTTP and supports many concurrent users limited only by hardware. (2) It supports a multi-user, collaborative annotation strategy. (3) It cleanly demarcates viewing and analysis from data collection and hosting. (4) It is capable of applying transformations in real-time. (5) It has an easily extensible user interface, allowing addition of specialized modules without rewriting the viewer. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.

  11. Eco-friendly preparation of large-sized graphene via short-circuit discharge of lithium primary battery.

    Science.gov (United States)

    Kang, Shaohong; Yu, Tao; Liu, Tingting; Guan, Shiyou

    2018-02-15

    We propose, for the first time, a large-sized graphene preparation method based on short-circuit discharge of a lithium-graphite primary battery. LiCx is obtained through lithium-ion intercalation into the graphite cathode of this primary battery. Graphene was then acquired by chemical reaction between LiCx and stripping agents, with dispersion under sonication. The obtained graphene was characterized by Raman spectroscopy, X-ray diffraction (XRD), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), atomic force microscopy (AFM) and scanning electron microscopy (SEM). The results indicate that the as-prepared graphene has a large size and few defects, and is monolayer or less than three layers thick. The quality of the graphene is significantly improved compared to previously reported electrochemical methods. The yield of graphene can reach 8.76% when the ratio of H2O to NMP is 3:7. This method provides a potential solution for the recycling of waste lithium ion batteries. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Knowledge and theme discovery across very large biological data sets using distributed queries: a prototype combining unstructured and structured data.

    Directory of Open Access Journals (Sweden)

    Uma S Mudunuri

    As the discipline of biomedical science continues to apply new technologies capable of producing unprecedented volumes of noisy and complex biological data, it has become evident that available methods for deriving meaningful information from such data are simply not keeping pace. In order to achieve useful results, researchers require methods that consolidate, store and query combinations of structured and unstructured data sets efficiently and effectively. As we move towards personalized medicine, the need to combine unstructured data, such as medical literature, with large amounts of highly structured and high-throughput data such as human variation or expression data from very large cohorts, is especially urgent. For our study, we investigated a likely biomedical query using the Hadoop framework. We ran queries using native MapReduce tools we developed as well as other open source and proprietary tools. Our results suggest that the available technologies within the Big Data domain can reduce the time and effort needed to utilize and apply distributed queries over large datasets in practical clinical applications in the life sciences domain. The methodologies and technologies discussed in this paper set the stage for a more detailed evaluation that investigates how various data structures and data models are best mapped to the proper computational framework.
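The distributed-query pattern the study evaluates follows the classic MapReduce shape: a mapper emits key-value pairs from each record, and a reducer aggregates the values per key. A minimal in-memory sketch (the gene symbols and abstracts below are hypothetical; the study's real pipeline runs on Hadoop across distributed nodes):

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal in-memory MapReduce skeleton: map each record to (key, value)
    pairs, group values by key, then reduce each group."""
    groups = defaultdict(list)
    for rec in records:
        for key, value in mapper(rec):
            groups[key].append(value)
    return {k: reducer(k, vs) for k, vs in groups.items()}

# Hypothetical query: count literature mentions per gene symbol,
# the kind of unstructured-text aggregation the abstract describes
abstracts = [
    "BRCA1 variants and TP53 interactions",
    "TP53 pathway analysis",
    "BRCA1 expression in cohorts",
]
genes = {"BRCA1", "TP53"}
mentions = map_reduce(
    abstracts,
    mapper=lambda text: [(g, 1) for g in genes if g in text],
    reducer=lambda k, vs: sum(vs),
)
```

On Hadoop, the same mapper/reducer pair would be distributed across nodes, with the framework handling the grouping (shuffle) phase.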

  13. Prospects for the domestic production of large-sized cast blades and vanes for industrial gas turbines

    Science.gov (United States)

    Kazanskiy, D. A.; Grin, E. A.; Klimov, A. N.; Berestevich, A. I.

    2017-10-01

    Russian experience in the production of large-sized cast blades and vanes for industrial gas turbines is analyzed for the past decades. It is noted that the production of small- and medium-sized blades and vanes made of Russian alloys using technologies for aviation, marine, and gas-pumping turbines cannot be scaled for industrial gas turbines. It is shown that, in order to provide manufacturability under large-scale casting from domestic nickel alloys, it is necessary to solve complex problems in changing their chemical composition, to develop new casting technologies and to optimize the heat treatment modes. An experience of PAO NPO Saturn in manufacturing the blades and vanes made of ChS88U-VI and IN738-LC foundry nickel alloys for the turbines of the GTE-110 gas turbine unit is considered in detail. Potentialities for achieving adopted target parameters for the mechanical properties of working blades cast from ChS88UM-VI modified alloy are established. For the blades made of IN738-LC alloy manufactured using the existing foundry technology, a complete compliance with the requirements of normative and technical documentation has been established. Currently, in Russia, the basis of the fleet of gas turbine plants is composed by foreign turbines, and, for the implementation of the import substitution program, one can use the positive experience of PAO NPO Saturn in casting blades from IN738-LC alloy based on a reverse engineering technique. A preliminary complex of studies of the original manufacturer's blades should be carried out, involving, first of all, the determination of geometric size using modern measurement methods as well as the studies on the chemical compositions of the used materials (base metal and protective coatings). Further, verifying the constructed calculation models based on the obtained data, one could choose available domestic materials that would meet the operating conditions of the blades according to their heat resistance and corrosion resistance.

  14. Development of a composite large-size SiPM (assembled matrix) based modular detector cluster for MAGIC

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, A., E-mail: ahahn@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); Mazin, D., E-mail: mazin@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); Institute for Cosmic Ray Research, The University of Tokyo, 5-1-5 Kashiwa-no-Ha, Kashiwa City, Chiba 277–8582 (Japan); Bangale, P., E-mail: priya@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); Dettlaff, A., E-mail: todettl@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); Fink, D., E-mail: fink@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); Grundner, F., E-mail: grundner@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); Haberer, W., E-mail: haberer@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); Maier, R., E-mail: rma@mpp.mpg.de [Max Planck Institute for Physics (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany); and others

    2017-02-11

    The MAGIC collaboration operates two 17 m diameter Imaging Atmospheric Cherenkov Telescopes (IACTs) on the Canary Island of La Palma. Each of the two telescopes is currently equipped with a photomultiplier tube (PMT) based imaging camera. Due to the advances in the development of Silicon Photomultipliers (SiPMs), they are becoming a widely used alternative to PMTs in many research fields including gamma-ray astronomy. Within the Otto-Hahn group at the Max Planck Institute for Physics, Munich, we are developing a SiPM based detector module for a possible upgrade of the MAGIC cameras and also for future experiments as, e.g., the Large Size Telescopes (LST) of the Cherenkov Telescope Array (CTA). Because of the small size of individual SiPM sensors (6 mm×6 mm) with respect to the 1-inch diameter PMTs currently used in MAGIC, we use a custom-made matrix of SiPMs to cover the same detection area. We developed an electronic circuit to actively sum up and amplify the SiPM signals. Existing non-imaging hexagonal light concentrators (Winston cones) used in MAGIC have been modified for the angular acceptance of the SiPMs by using C++ based ray tracing simulations. The first prototype based detector module includes seven channels and was installed into the MAGIC camera in May 2015. We present the results of the first prototype and its performance as well as the status of the project and discuss its challenges. - Highlights: • The design of the first SiPM large-size IACT pixel is described. • The simulation of the light concentrators is presented. • The temperature stability of the detector module is demonstrated. • The calibration procedure of SiPM device in the field is described.

  15. An Examination of Teachers' Perceptions and Practice when Teaching Large and Reduced-Size Classes: Do Teachers Really Teach Them in the Same Way?

    Science.gov (United States)

    Harfitt, Gary James

    2012-01-01

    Class size research suggests that teachers do not vary their teaching strategies when moving from large to smaller classes. This study draws on interviews and classroom observations of three experienced English language teachers working with large and reduced-size classes in Hong Kong secondary schools. Findings from the study point to subtle…

  16. Optimum sample length for estimating anchovy size distribution and the proportion of juveniles per fishing set for the Peruvian purse-seine fleet

    Directory of Open Access Journals (Sweden)

    Rocío Joo

    2017-04-01

    The length distribution of catches represents a fundamental source of information for estimating growth and spatio-temporal dynamics of cohorts. The length distribution of the catch is estimated from samples of caught individuals. This work studies the optimum number of individuals to sample at each fishing set in order to obtain a representative estimate of the length distribution and the proportion of juveniles in the fishing set. To that end, we use anchovy (Engraulis ringens) length data from different fishing sets recorded by at-sea observers of the On-board Observers Program of the Peruvian Marine Research Institute. Finally, we propose an optimum sample size for obtaining robust estimates of length and juvenile proportion. Though this work is applied to the anchovy fishery, the procedure can be applied to any fishery, for either on-board or inland biometric measurements.
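For a proportion such as the fraction of juveniles per fishing set, a standard sample-size formula with finite-population correction gives a first approximation of how many individuals to measure. This textbook formula is illustrative only; the paper derives its optimum from empirical length data rather than from this expression:

```python
import math

def sample_size_proportion(p=0.5, e=0.05, z=1.96, population=None):
    """Individuals needed to estimate a proportion (e.g. juveniles per set)
    within margin of error e at confidence level z (1.96 ~ 95%).

    p=0.5 is the conservative worst case. When the total number of
    individuals in the fishing set is known, the finite-population
    correction reduces the required sample."""
    n0 = z * z * p * (1 - p) / (e * e)
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)
```

For example, a set of 1000 individuals needs a smaller sample than the infinite-population figure, which is the kind of trade-off an optimum sampling protocol balances against measurement effort on board.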

  17. PACOM: A Versatile Tool for Integrating, Filtering, Visualizing, and Comparing Multiple Large Mass Spectrometry Proteomics Data Sets.

    Science.gov (United States)

    Martínez-Bartolomé, Salvador; Medina-Aunon, J Alberto; López-García, Miguel Ángel; González-Tejedo, Carmen; Prieto, Gorka; Navajas, Rosana; Salazar-Donate, Emilio; Fernández-Costa, Carolina; Yates, John R; Albar, Juan Pablo

    2018-04-06

    Mass-spectrometry-based proteomics has evolved into a high-throughput technology in which numerous large-scale data sets are generated from diverse analytical platforms. Furthermore, several scientific journals and funding agencies have emphasized the storage of proteomics data in public repositories to facilitate its evaluation, inspection, and reanalysis. (1) As a consequence, public proteomics data repositories are growing rapidly. However, tools are needed to integrate multiple proteomics data sets to compare different experimental features or to perform quality control analysis. Here, we present a new Java stand-alone tool, Proteomics Assay COMparator (PACOM), that is able to import, combine, and simultaneously compare numerous proteomics experiments to check the integrity of the proteomic data as well as verify data quality. With PACOM, the user can detect sources of error that may have been introduced in any step of a proteomics workflow and that influence the final results. Data sets can be easily compared and integrated, and data quality and reproducibility can be visually assessed through a rich set of graphical representations of proteomics data features as well as a wide variety of data filters. Its flexibility and easy-to-use interface make PACOM a unique tool for daily use in a proteomics laboratory. PACOM is available at https://github.com/smdb21/pacom.

  18. From mantle to critical zone: A review of large and giant sized deposits of the rare earth elements

    Directory of Open Access Journals (Sweden)

    M.P. Smith

    2016-05-01

    The rare earth elements are unusual when defining giant-sized ore deposits, as resources are often quoted as total rare earth oxide, but the importance of a deposit may be related to the grade for individual elements or a limited group of them. Taking the total REE resource, only one currently known deposit (Bayan Obo) would class as giant (>1.7 × 10⁷ tonnes contained metal), but a range of others classify as large (>1.7 × 10⁶ tonnes). With the exception of unclassified resource estimates from the Olympic Dam IOCG deposit, all of these deposits are related to alkaline igneous activity – either carbonatites or agpaitic nepheline syenites. The total resource in these deposits must relate to the scale of the primary igneous source, but the grade is a complex function of igneous source, magmatic crystallisation, hydrothermal modification and supergene enrichment during weathering. Isotopic data suggest that the sources conducive to the formation of large REE deposits are developed in subcontinental lithospheric mantle, enriched in trace elements either by plume activity, or by previous subduction. The reactivation of such enriched mantle domains in relatively restricted geographical areas may have played a role in the formation of some of the largest deposits (e.g. Bayan Obo). Hydrothermal activity involving fluids from magmatic to meteoric sources may result in the redistribution of the REE and increases in grade, depending on primary mineralogy and the availability of ligands. Weathering and supergene enrichment of carbonatite has played a role in the formation of the highest grade deposits at Mount Weld (Australia) and Tomtor (Russia). For the individual REE with the current highest economic value (Nd and the HREE), the boundaries for the large and giant size classes are two orders of magnitude lower, and deposits enriched in these metals (agpaitic systems, ion adsorption deposits) may have significant economic impact in the near future.

  19. THE INFRARED SPECTRA OF VERY LARGE IRREGULAR POLYCYCLIC AROMATIC HYDROCARBONS (PAHs): OBSERVATIONAL PROBES OF ASTRONOMICAL PAH GEOMETRY, SIZE, AND CHARGE

    International Nuclear Information System (INIS)

    Bauschlicher, Charles W.; Peeters, Els; Allamandola, Louis J.

    2009-01-01

    The mid-infrared (IR) spectra of six large, irregular polycyclic aromatic hydrocarbons (PAHs) with formulae (C₈₄H₂₄–C₁₂₀H₃₆) have been computed using density functional theory (DFT). Trends in the dominant band positions and intensities are compared to those of large, compact PAHs as a function of geometry, size, and charge. Irregular edge moieties that are common in terrestrial PAHs, such as bay regions and rings with quartet hydrogens, are shown to be uncommon in astronomical PAHs. As for all PAHs comprised solely of C and H reported to date, mid-IR emission from irregular PAHs fails to produce a strong CC stretch band at 6.2 μm, the position characteristic of the important, class A astronomical PAH spectra. Earlier studies showed that inclusion of nitrogen within a PAH shifts this to 6.2 μm for PAH cations. Here we show that this band shifts to 6.3 μm in nitrogenated PAH anions, close to the position of the CC stretch in class B astronomical PAH spectra. Thus, nitrogenated PAHs may be important in all sources and the peak position of the CC stretch near 6.2 μm appears to directly reflect the PAH cation to anion ratio. Large irregular PAHs exhibit features at 7.8 μm but lack them near 8.6 μm. Hence, the 7.7 μm astronomical feature is produced by a mixture of small and large PAHs while the 8.6 μm band can only be produced by large compact PAHs. As with the CC stretch, the position and profile of these bands reflect the PAH cation to anion ratio.

  20. Comparison of Hounsfield units by changing in size of physical area and setting size of region of interest by using the CT phantom made with a 3D printer

    International Nuclear Information System (INIS)

    Seung, Youl Hun

    2015-01-01

    In this study, we observed the change in Hounsfield units (HU) caused by variation in the size of the physical area and in the set size of the region of interest (ROI), with a focus on kVp and mAs. A four-channel multi-detector computed tomography scanner was used to acquire transverse axial images and HU values. A three-dimensional printer of the fused deposition modeling (FDM) type was used to produce the phantom, which was designed as a cylinder containing symmetrically located circular holes of 33 mm, 24 mm, 19 mm, 16 mm, and 9 mm. The holes were filled with a mixture of iodine contrast agent and distilled water. Images were acquired at 90 kVp, 120 kVp, and 140 kVp and at 50 mAs, 100 mAs, and 150 mAs, respectively. ImageJ was used to measure the HU of the ROIs in the acquired images. As a result, kVp was confirmed to affect HU more than mAs. The results also suggest that the smaller the physical area, the lower the HU, even in a material of uniform density, and the smaller the set ROI size, the higher the HU. Therefore, setting the ROI to its maximum size is the best way to keep the HU variation caused by changes in physical area size and ROI setting within 5 HU.

  1. Comparison of Hounsfield units by changing in size of physical area and setting size of region of interest by using the CT phantom made with a 3D printer

    Energy Technology Data Exchange (ETDEWEB)

    Seung, Youl Hun [Dept. of Radiological Science, Cheongju University, Cheongju (Korea, Republic of)

    2015-12-15

    In this study, we observed the change in Hounsfield units (HU) caused by variation in the size of the physical area and in the set size of the region of interest (ROI), with a focus on kVp and mAs. A four-channel multi-detector computed tomography scanner was used to acquire transverse axial images and HU values. A three-dimensional printer of the fused deposition modeling (FDM) type was used to produce the phantom, which was designed as a cylinder containing symmetrically located circular holes of 33 mm, 24 mm, 19 mm, 16 mm, and 9 mm. The holes were filled with a mixture of iodine contrast agent and distilled water. Images were acquired at 90 kVp, 120 kVp, and 140 kVp and at 50 mAs, 100 mAs, and 150 mAs, respectively. ImageJ was used to measure the HU of the ROIs in the acquired images. As a result, kVp was confirmed to affect HU more than mAs. The results also suggest that the smaller the physical area, the lower the HU, even in a material of uniform density, and the smaller the set ROI size, the higher the HU. Therefore, setting the ROI to its maximum size is the best way to keep the HU variation caused by changes in physical area size and ROI setting within 5 HU.
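    The partial-volume behaviour this record describes (a smaller physical area lowers the measured HU; a smaller central ROI raises it) can be illustrated numerically. The sketch below is not the study's phantom geometry or scanner model; the disc size, HU values, and box-blur kernel are illustrative assumptions standing in for finite scanner resolution.

```python
import numpy as np

# Illustrative partial-volume demo (assumed geometry, not the study's phantom):
# a disc of 300 HU in a -50 HU background, blurred by a 9x9 box filter
# standing in for finite scanner resolution.
size, radius, hu_in, hu_bg = 201, 40, 300.0, -50.0
y, x = np.mgrid[:size, :size] - size // 2
disc = np.where(x**2 + y**2 <= radius**2, hu_in, hu_bg)

k = 9  # box-blur kernel width (pixels)
pad = np.pad(disc, k // 2, mode="edge")
blurred = sum(pad[i:i + size, j:j + size]
              for i in range(k) for j in range(k)) / k**2

def roi_mean(img, r):
    """Mean value inside a centred circular ROI of radius r pixels."""
    return img[x**2 + y**2 <= r**2].mean()

small, large = roi_mean(blurred, 10), roi_mean(blurred, 38)
print(small > large)  # True: the small central ROI avoids the blurred rim
```

    The larger ROI takes in more of the blurred rim and therefore reports a lower mean, matching the record's observation that a smaller set ROI size yields a higher HU.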

  2. Scaling of the Urban Water Footprint: An Analysis of 65 Mid- to Large-Sized U.S. Metropolitan Areas

    Science.gov (United States)

    Mahjabin, T.; Garcia, S.; Grady, C.; Mejia, A.

    2017-12-01

    Scaling laws have been shown to be relevant to a range of disciplines including biology, ecology, hydrology, and physics, among others. Recently, scaling was shown to be important for understanding and characterizing cities. For instance, it was found that urban infrastructure (water supply pipes and electrical wires) tends to scale sublinearly with city population, implying that large cities are more efficient. In this study, we explore the scaling of the water footprint of cities. The water footprint is a measure of water appropriation that considers both the direct and indirect (virtual) water use of a consumer or producer. Here we compute the water footprint of 65 mid- to large-sized U.S. metropolitan areas, accounting for direct and indirect water uses associated with agricultural and industrial commodities, and residential and commercial water uses. We find that the urban water footprint, computed as the sum of the water footprint of consumption and production, exhibits sublinear scaling with an exponent of 0.89. This suggests the possibility of large cities being more water-efficient than small ones. To further assess this result, we conduct additional analysis by accounting for international flows, and the effects of green water and city boundary definition on the scaling. The analysis confirms the scaling and provides additional insight about its interpretation.
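    A sublinear scaling of the kind reported here has the form Y = c·P^β with β ≈ 0.89, and such exponents are conventionally estimated by ordinary least squares on log-transformed data. The sketch below illustrates that procedure on synthetic data; the populations, footprints, and prefactor are invented for illustration, not the study's values.

```python
import numpy as np

# Synthetic illustration (invented data): estimate beta in Y = c * P**beta
# by ordinary least squares on log-log data.
rng = np.random.default_rng(0)
population = rng.uniform(2e5, 9e6, size=65)             # 65 "cities"
footprint = 3.0 * population**0.89 * rng.lognormal(0.0, 0.05, 65)

# OLS fit of log(Y) = log(c) + beta * log(P)
beta, log_c = np.polyfit(np.log(population), np.log(footprint), 1)
print(beta < 1.0)  # True: sublinear scaling
```

    An exponent below 1 means the footprint grows more slowly than population, i.e. larger cities use less water per capita, which is the interpretation the abstract draws.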

  3. Nutrition screening tools: Does one size fit all? A systematic review of screening tools for the hospital setting

    NARCIS (Netherlands)

    van Bokhorst-de van der Schueren, M.A.E.; Guaitoli, P.R.; Jansma, E.P.; de Vet, H.C.W.

    2014-01-01

    Background & aims: Numerous nutrition screening tools for the hospital setting have been developed. The aim of this systematic review is to study construct or criterion validity and predictive validity of nutrition screening tools for the general hospital setting. Methods: A systematic review of

  4. Rhesus monkeys (Macaca mulatta) show robust primacy and recency in memory for lists from small, but not large, image sets.

    Science.gov (United States)

    Basile, Benjamin M; Hampton, Robert R

    2010-02-01

    The combination of primacy and recency produces a U-shaped serial position curve typical of memory for lists. In humans, primacy is often thought to result from rehearsal, but there is little evidence for rehearsal in nonhumans. To further evaluate the possibility that rehearsal contributes to primacy in monkeys, we compared memory for lists of familiar stimuli (which may be easier to rehearse) to memory for unfamiliar stimuli (which are likely difficult to rehearse). Six rhesus monkeys saw lists of five images drawn from either large, medium, or small image sets. After presentation of each list, memory for one item was assessed using a serial probe recognition test. Across four experiments, we found robust primacy and recency with lists drawn from small and medium, but not large, image sets. This finding is consistent with the idea that familiar items are easier to rehearse and that rehearsal contributes to primacy, warranting further study of the possibility of rehearsal in monkeys. However, alternative interpretations are also viable and are discussed. Copyright 2009 Elsevier B.V. All rights reserved.

  5. Theoretical simulation and analysis of large size BMP-LSC by 3D Monte Carlo ray tracing model

    International Nuclear Information System (INIS)

    Zhang Feng; Zhang Ning-Ning; Yan Sen; Song Sun; Jun Bao; Chen Gao; Zhang Yi

    2017-01-01

    Luminescent solar concentrators (LSCs) can reduce the area of solar cells by collecting light from a large area and concentrating the captured light onto relatively small-area photovoltaic (PV) cells, thereby reducing the cost of PV electricity generation. LSCs with bottom-facing cells (BMP-LSC) can collect both direct and indirect light, further improving the efficiency of the PV cells. However, it is hard to analyze the effect of each parameter by experiment because too many parameters are involved in the BMP-LSC. In this paper, all the physical processes of light transmission and collection in the BMP-LSC were analyzed. A three-dimensional Monte Carlo ray tracing program was developed to study the transmission of photons in the LSC. A large-size LSC was simulated, and the effects of dye concentration, LSC thickness, cell area, and cell distance were systematically analyzed. (paper)
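    The core of a Monte Carlo ray-tracing model like the one described reduces to a per-photon loop: absorption by the dye, re-emission with some quantum yield, and trapping by total internal reflection outside the escape cone. The sketch below is a deliberately minimal illustration of that loop, not the authors' 3D program; the absorption probability, quantum yield, and refractive index are assumed values, and a trapped photon is simply counted as collected.

```python
import math
import random

# Minimal per-photon Monte Carlo loop for an LSC (illustrative parameters).
# A photon is absorbed by the dye, re-emitted isotropically with some
# quantum yield, and counted as collected if total internal reflection
# traps it inside the light guide.
def lsc_collection_fraction(n_photons=100_000, absorb_prob=0.8,
                            quantum_yield=0.95, refractive_index=1.5,
                            seed=1):
    random.seed(seed)
    # photons escape through the faces when |cos(theta)| > cos(theta_critical)
    cos_crit = math.sqrt(1.0 - 1.0 / refractive_index**2)
    collected = 0
    for _ in range(n_photons):
        if random.random() > absorb_prob:      # not absorbed by the dye
            continue
        if random.random() > quantum_yield:    # lost non-radiatively
            continue
        cos_theta = random.uniform(-1.0, 1.0)  # isotropic re-emission
        if abs(cos_theta) < cos_crit:          # outside the escape cone
            collected += 1
    return collected / n_photons

print(lsc_collection_fraction())
```

    A full model would add dye re-absorption, matrix attenuation, and the geometry of the bottom-mounted cells; the point here is only the event-by-event structure that makes parameter sweeps cheap compared with experiment.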

  6. Phylogenetic relationships of hexaploid large-sized barbs (genus Labeobarbus, Cyprinidae) based on mtDNA data.

    Science.gov (United States)

    Tsigenopoulos, Costas S; Kasapidis, Panagiotis; Berrebi, Patrick

    2010-08-01

    The phylogenetic relationships among species of the Labeobarbus genus (Teleostei, Cyprinidae) which comprises large body-sized hexaploid taxa were inferred using complete cytochrome b mitochondrial gene sequences. Molecular data suggest two main evolutionary groups which roughly correspond to a Northern (Middle East and Northwest Africa) and a sub-Saharan lineage. The splitting of the African hexaploids from their Asian ancestors and their subsequent diversification on the African continent occurred in the Late Miocene, a period in which other cyprinins also invaded Africa and radiated in the Mediterranean region. Finally, systematic implications of these results to the taxonomic validity of genera or subgenera such as Varicorhinus, Kosswigobarbus, Carasobarbus and Capoeta are further discussed. Copyright 2010 Elsevier Inc. All rights reserved.

  7. How do low dispersal species establish large range sizes? The case of the water beetle Graphoderus bilineatus

    DEFF Research Database (Denmark)

    Iversen, Lars Lønsmann; Rannap, Riinu; Thomsen, Philip Francis

    2013-01-01

    important than species phylogeny or local spatial attributes. In this study we used the water beetle Graphoderus bilineatus a philopatric species of conservation concern in Europe as a model to explain large range size and to support effective conservation measures for such species that also have limited...... systems and wetlands which used to be highly connected throughout the central plains of Europe. Our data suggest that a broad habitat niche can prevent landscape elements from becoming barriers for species like G. bilineatus. Therefore, we question the usefulness of site protection as conservation...... measures for G. bilineatus and similar philopatric species. Instead, conservation actions should be focused at the landscape level to ensure a long-term viability of such species across their range....

  8. Chemical and electrochemical synthesis of nano-sized TiO{sub 2} anatase for large-area photon conversion

    Energy Technology Data Exchange (ETDEWEB)

    Babasaheb, Raghunath Sankapal; Shrikrishna, Dattatraya Sartale; Lux-Steiner, M.Ch.; Ennaoui, A. [Hahn-Meitner-Institut, Div. of Solar Energy Research, Berlin (Germany)

    2006-05-15

    We report on the synthesis of nanocrystalline titanium dioxide thin films and powders by chemical and electrochemical deposition methods. Both methods are simple, inexpensive and suitable for large-scale production. Air-annealing of the films and powders at T = 500 °C leads to densely packed nanometer-sized anatase TiO{sub 2} particles. The obtained layers are characterized by different methods such as X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS) and atomic force microscopy (AFM). The titanium dioxide TiO{sub 2} (anatase) phase with (101) preferred orientation has been obtained for the films deposited on glass, indium-doped tin oxide (ITO) and quartz substrates. The powder obtained as a byproduct consists of TiO{sub 2} with the anatase phase as well. (authors)

  9. Chemical and electrochemical synthesis of nano-sized TiO2 anatase for large-area photon conversion

    International Nuclear Information System (INIS)

    Babasaheb, Raghunath Sankapal; Shrikrishna, Dattatraya Sartale; Lux-Steiner, M.Ch.; Ennaoui, A.

    2006-01-01

    We report on the synthesis of nanocrystalline titanium dioxide thin films and powders by chemical and electrochemical deposition methods. Both methods are simple, inexpensive and suitable for large-scale production. Air-annealing of the films and powders at T = 500 °C leads to densely packed nanometer-sized anatase TiO2 particles. The obtained layers are characterized by different methods such as X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS) and atomic force microscopy (AFM). The titanium dioxide TiO2 (anatase) phase with (101) preferred orientation has been obtained for the films deposited on glass, indium-doped tin oxide (ITO) and quartz substrates. The powder obtained as a byproduct consists of TiO2 with the anatase phase as well. (authors)

  10. Growth of large size lithium niobate single crystals of high quality by tilting-mirror-type floating zone method

    Energy Technology Data Exchange (ETDEWEB)

    Sarker, Abdur Razzaque, E-mail: razzaque_ru2000@yahoo.com [Department of Physics, University of Rajshahi (Bangladesh)

    2016-05-15

    Large-size, high-quality LiNbO{sub 3} single crystals were grown successfully by the tilting-mirror-type floating zone (TMFZ) technique. The grown crystals were characterized by X-ray diffraction, etch pit density measurement, impedance analysis, vibrating sample magnetometry (VSM) and UV-visible spectrometry. The effect of mirror tilting during growth on the structural, electrical and optical properties and the defect density of the LiNbO{sub 3} crystals was investigated. It was found that the defect density in the crystals was reduced by tilting the mirror in the TMFZ method. Chemical analysis revealed that the grown crystals were of high quality with uniform composition. The single crystals grown by the TMFZ method contain no low-angle grain boundaries, indicating that they can be used for high-efficiency optoelectronic devices. (author)

  11. Synthesis of a large-sized mesoporous phosphosilicate thin film through evaporation-induced polymeric micelle assembly.

    Science.gov (United States)

    Li, Yunqi; Bastakoti, Bishnu Prasad; Imura, Masataka; Suzuki, Norihiro; Jiang, Xiangfen; Ohki, Shinobu; Deguchi, Kenzo; Suzuki, Madoka; Arai, Satoshi; Yamauchi, Yusuke

    2015-01-01

    A triblock copolymer, poly(styrene-b-2-vinyl pyridine-b-ethylene oxide) (PS-b-P2VP-b-PEO), was used as a soft template to synthesize large-sized mesoporous phosphosilicate thin films. The kinetically frozen PS core stabilizes the micelles. The strong interaction of the inorganic precursors with the P2VP shell enables the fabrication of highly robust phosphosilicate walls, and the PEO helps the orderly packing of the micelles during solvent evaporation. The molar ratio of phosphoric acid to tetraethyl orthosilicate is crucial to achieving the final mesostructure. The insertion of phosphorus species into the siloxane network is studied by 29Si and 31P MAS NMR spectroscopy. The mesoporous phosphosilicate films exhibit steady cell adhesion properties and show great promise as excellent materials in bone-growth engineering applications. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Easy and General Synthesis of Large-Sized Mesoporous Rare-Earth Oxide Thin Films by 'Micelle Assembly'.

    Science.gov (United States)

    Li, Yunqi; Bastakoti, Bishnu Prasad; Imura, Masataka; Dai, Pengcheng; Yamauchi, Yusuke

    2015-12-01

    Large-sized (ca. 40 nm) mesoporous Er2O3 thin films are synthesized by using a triblock copolymer, poly(styrene-b-2-vinyl pyridine-b-ethylene oxide) (PS-b-P2VP-b-PEO), as a pore-directing agent. Each block makes a different contribution, and the molar ratio of PVP/Er3+ is crucial to guiding the resultant mesoporous structure. An easy and general method is proposed and used to prepare a series of mesoporous rare-earth oxide (Sm2O3, Dy2O3, Tb2O3, Ho2O3, Yb2O3, and Lu2O3) thin films with potential uses in electronics and optical devices. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. A large set of potential past, present and future hydro-meteorological time series for the UK

    Directory of Open Access Journals (Sweden)

    B. P. Guillod

    2018-01-01

    Full Text Available Hydro-meteorological extremes such as drought and heavy precipitation can have large impacts on society and the economy. With potentially increasing risks associated with such events due to climate change, properly assessing the associated impacts and uncertainties is critical for adequate adaptation. However, the application of risk-based approaches often requires large sets of extreme events, which are not commonly available. Here, we present such a large set of hydro-meteorological time series for recent past and future conditions for the United Kingdom based on weather@home 2, a modelling framework consisting of a global climate model (GCM) driven by observed or projected sea surface temperature (SST) and sea ice, which is downscaled to 25 km over the European domain by a regional climate model (RCM). Sets of 100 time series are generated for each of (i) a historical baseline (1900–2006), (ii) five near-future scenarios (2020–2049) and (iii) five far-future scenarios (2070–2099). The five scenarios in each future time slice all follow the Representative Concentration Pathway 8.5 (RCP8.5) and sample the range of sea surface temperature and sea ice changes from CMIP5 (Coupled Model Intercomparison Project Phase 5) models. Validation of the historical baseline highlights good performance for temperature and potential evaporation, but substantial seasonal biases in mean precipitation, which are corrected using a linear approach. For extremes in low precipitation over a long accumulation period (> 3 months) and shorter-duration high precipitation (1–30 days), the time series generally represent past statistics well. Future projections show small precipitation increases in winter but large decreases in summer on average, leading to an overall drying, consistent with the most recent UK Climate Projections (UKCP09) but larger in magnitude than the latter. Both drought and high-precipitation events are projected to increase in frequency and

  14. A large set of potential past, present and future hydro-meteorological time series for the UK

    Science.gov (United States)

    Guillod, Benoit P.; Jones, Richard G.; Dadson, Simon J.; Coxon, Gemma; Bussi, Gianbattista; Freer, James; Kay, Alison L.; Massey, Neil R.; Sparrow, Sarah N.; Wallom, David C. H.; Allen, Myles R.; Hall, Jim W.

    2018-01-01

    Hydro-meteorological extremes such as drought and heavy precipitation can have large impacts on society and the economy. With potentially increasing risks associated with such events due to climate change, properly assessing the associated impacts and uncertainties is critical for adequate adaptation. However, the application of risk-based approaches often requires large sets of extreme events, which are not commonly available. Here, we present such a large set of hydro-meteorological time series for recent past and future conditions for the United Kingdom based on weather@home 2, a modelling framework consisting of a global climate model (GCM) driven by observed or projected sea surface temperature (SST) and sea ice, which is downscaled to 25 km over the European domain by a regional climate model (RCM). Sets of 100 time series are generated for each of (i) a historical baseline (1900-2006), (ii) five near-future scenarios (2020-2049) and (iii) five far-future scenarios (2070-2099). The five scenarios in each future time slice all follow the Representative Concentration Pathway 8.5 (RCP8.5) and sample the range of sea surface temperature and sea ice changes from CMIP5 (Coupled Model Intercomparison Project Phase 5) models. Validation of the historical baseline highlights good performance for temperature and potential evaporation, but substantial seasonal biases in mean precipitation, which are corrected using a linear approach. For extremes in low precipitation over a long accumulation period (> 3 months) and shorter-duration high precipitation (1-30 days), the time series generally represent past statistics well. Future projections show small precipitation increases in winter but large decreases in summer on average, leading to an overall drying, consistent with the most recent UK Climate Projections (UKCP09) but larger in magnitude than the latter. Both drought and high-precipitation events are projected to increase in frequency and intensity in most regions.
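    The "linear approach" to bias correction mentioned in this record is, in its simplest form, a rescaling of modelled values so that their climatological mean matches observations. Below is a minimal sketch of that idea; the numbers are invented, and the actual weather@home 2 procedure may differ in detail (e.g. being applied per month or per season).

```python
# Minimal sketch of a linear (mean-scaling) bias correction for
# precipitation; numbers are invented illustrations.
def linear_bias_correct(modelled, observed_mean):
    """Rescale modelled values so their mean matches the observed mean."""
    factor = observed_mean / (sum(modelled) / len(modelled))
    return [m * factor for m in modelled]

obs_mean = 85.0                          # mm/month, observed climatology
model = [60.0, 72.0, 55.0, 80.0, 64.0]   # biased model output
corrected = linear_bias_correct(model, obs_mean)
print(round(sum(corrected) / len(corrected), 1))  # 85.0
```

    A multiplicative correction like this preserves zeros and relative anomalies, which is why it is commonly preferred over an additive offset for precipitation.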

  15. Long-term resource variation and group size: A large-sample field test of the Resource Dispersion Hypothesis

    Directory of Open Access Journals (Sweden)

    Morecroft Michael D

    2001-07-01

    Full Text Available Abstract Background The Resource Dispersion Hypothesis (RDH) proposes a mechanism for the passive formation of social groups where resources are dispersed, even in the absence of any benefits of group living per se. Despite supportive modelling, it lacks empirical testing. The RDH predicts that, rather than Territory Size (TS) increasing monotonically with Group Size (GS) to account for increasing metabolic needs, TS is constrained by the dispersion of resource patches, whereas GS is independently limited by their richness. We conducted multiple-year tests of these predictions using data from the long-term study of badgers Meles meles in Wytham Woods, England. The study has long failed to identify direct benefits from group living and, consequently, alternative explanations for their large group sizes have been sought. Results TS was not consistently related to resource dispersion, nor was GS consistently related to resource richness. Results differed according to data groupings and whether territories were mapped using minimum convex polygons or traditional methods. Habitats differed significantly in resource availability, but there was also evidence that food resources may be spatially aggregated within habitat types as well as between them. Conclusions This is, we believe, the largest ever test of the RDH and builds on the long-term project that initiated part of the thinking behind the hypothesis. Support for the predictions was mixed and depended on year and the method used to map territory borders. We suggest that within-habitat patchiness, as well as model assumptions, should be further investigated for improved tests of the RDH in the future.

  16. The potential of natural gas use including cogeneration in large-sized industry and commercial sector in Peru

    International Nuclear Information System (INIS)

    Gonzales Palomino, Raul; Nebra, Silvia A.

    2012-01-01

    In recent years there have been several discussions on a greater use of natural gas nationwide. Moreover, there have been several announcements by the private and public sectors regarding the construction of new pipelines to supply natural gas to the Peruvian southern and central-north markets. This paper presents future scenarios for the use of natural gas in the large-sized industrial and commercial sectors of the country based on different hypotheses on developments in the natural gas industry, national economic growth, energy prices, technological changes and investment decisions. First, the paper estimates the market potential and characterizes the energy consumption. It then selects technological alternatives for the use of natural gas and carries out an energy and economic analysis, including economic feasibility. Finally, the potential use of natural gas is calculated through nine different scenarios. The use of natural gas in cogeneration systems is presented as an alternative that contributes to the installed power capacity of the country. Considering the introduction of cogeneration in the optimistic–advanced scenario and assuming that all of its conditions would be put into practice, in 2020 the share of cogeneration in electricity production in Peru would be 9.9%. - Highlights: ► This paper presents future scenarios for the use of natural gas in the large-sized industrial and commercial sectors of Peru. ► The potential use of natural gas is calculated through nine different scenarios. ► The scenarios were based on different hypotheses on developments in the natural gas industry, national economic growth, energy prices, technological changes and investment decisions. ► We estimated the market potential and characterized the energy consumption, and made a selection of technological alternatives for the use of natural gas.

  17. FR-type radio sources in COSMOS: relation of radio structure to size, accretion modes and large-scale environment

    Science.gov (United States)

    Vardoulaki, Eleni; Faustino Jimenez Andrade, Eric; Delvecchio, Ivan; Karim, Alexander; Smolčić, Vernesa; Magnelli, Benjamin; Bertoldi, Frank; Schinnener, Eva; Sargent, Mark; Finoguenov, Alexis; VLA COSMOS Team

    2018-01-01

    The radio sources associated with active galactic nuclei (AGN) can exhibit a variety of radio structures, from simple to more complex, giving rise to a variety of classification schemes. The question which still remains open, given that deeper surveys are revealing new populations of radio sources, is whether this plethora of radio structures can be attributed to the physical properties of the host or to the environment. Here we present an analysis of the radio structure of radio-selected AGN from the VLA-COSMOS Large Project at 3 GHz (JVLA-COSMOS; Smolčić et al.) in relation to: 1) their linear projected size, 2) the Eddington ratio, and 3) the environment their hosts lie within. We classify these as FRI (jet-like) and FRII (lobe-like) based on the FR-type classification scheme, and compare them to a sample of jet-less radio AGN in JVLA-COSMOS. We measure their linear projected sizes using a semi-automatic machine learning technique. Their Eddington ratios are calculated from X-ray data available for COSMOS. As environmental probes we take the X-ray groups (hundreds of kpc) and the density fields (~Mpc scale) in COSMOS. We find that FRII radio sources are on average larger than FRIs, which agrees with the literature. But contrary to past studies, we find no dichotomy in FR objects in JVLA-COSMOS given their Eddington ratios, as on average they exhibit similar values. Furthermore, our results show that the large-scale environment does not explain the observed dichotomy in lobe- and jet-like FR-type objects, as both types are found in similar environments, but it does affect the shape of the radio structure, introducing bends for objects closer to the centre of an X-ray group.

  18. Development of a composite large-size SiPM (assembled matrix) based modular detector cluster for MAGIC

    Science.gov (United States)

    Hahn, A.; Mazin, D.; Bangale, P.; Dettlaff, A.; Fink, D.; Grundner, F.; Haberer, W.; Maier, R.; Mirzoyan, R.; Podkladkin, S.; Teshima, M.; Wetteskind, H.

    2017-02-01

    The MAGIC collaboration operates two 17 m diameter Imaging Atmospheric Cherenkov Telescopes (IACTs) on the Canary Island of La Palma. Each of the two telescopes is currently equipped with a photomultiplier tube (PMT) based imaging camera. Due to advances in the development of Silicon Photomultipliers (SiPMs), they are becoming a widely used alternative to PMTs in many research fields, including gamma-ray astronomy. Within the Otto-Hahn group at the Max Planck Institute for Physics, Munich, we are developing a SiPM-based detector module for a possible upgrade of the MAGIC cameras and also for future experiments such as the Large Size Telescopes (LST) of the Cherenkov Telescope Array (CTA). Because of the small size of individual SiPM sensors (6 mm × 6 mm) with respect to the 1-inch diameter PMTs currently used in MAGIC, we use a custom-made matrix of SiPMs to cover the same detection area. We developed an electronic circuit to actively sum up and amplify the SiPM signals. Existing non-imaging hexagonal light concentrators (Winston cones) used in MAGIC have been modified for the angular acceptance of the SiPMs by using C++ based ray-tracing simulations. The first prototype detector module includes seven channels and was installed in the MAGIC camera in May 2015. We present the results of the first prototype and its performance, as well as the status of the project, and discuss its challenges.

  19. Out-coupling membrane for large-size organic light-emitting panels with high efficiency and improved uniformity

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Lei, E-mail: dinglei@sust.edu.cn [College of Electrical and Information Engineering, Shaanxi University of Science and Technology, Xi’an, Shaanxi 710021 (China); Wang, Lu-Wei [College of Electrical and Information Engineering, Shaanxi University of Science and Technology, Xi’an, Shaanxi 710021 (China); Zhou, Lei, E-mail: zhzhlei@gmail.com [Faculty of Mathematics and Physics, Huaiyin Institute of Technology, Huai'an 223003 (China); Zhang, Fang-hui [College of Electrical and Information Engineering, Shaanxi University of Science and Technology, Xi’an, Shaanxi 710021 (China)]

    2016-12-15

    Highlights: • An out-coupling membrane embedded with a scattering film of SiO{sub 2} spheres and polyethylene terephthalate (PET) plastic was successfully developed for 150 × 150 mm{sup 2} OLEDs. • A remarkable enhancement in efficiency was achieved in the OLEDs with the out-coupling membrane. • The uniformity of the large-size GOLED lighting panel is remarkably improved. - Abstract: An out-coupling membrane embedded with a scattering film of SiO{sub 2} spheres and polyethylene terephthalate (PET) plastic was successfully developed for 150 × 150 mm{sup 2} green OLEDs. Compared with a reference OLED panel, an approximately 1-fold enhancement in the forward emission was obtained with an out-coupling membrane adhered to the surface of the external glass substrate of the panel. Moreover, it was verified that the emission color at different viewing angles can be stabilized without apparent spectral distortion. In particular, the uniformity of the large-area OLEDs was greatly improved. Theoretical calculation clarified that the improved performance of the lighting panels is primarily attributed to the effect of particle scattering.

  20. Facile synthesis of uniform large-sized InP nanocrystal quantum dots using tris(tert-butyldimethylsilyl)phosphine

    Science.gov (United States)

    2012-01-01

    Colloidal III-V semiconductor nanocrystal quantum dots [NQDs] have attracted interest because they have reduced toxicity compared with II-VI compounds. However, the study and application of III-V semiconductor nanocrystals are limited by difficulties in their synthesis. In particular, it is difficult to control nucleation because the molecular bonds in III-V semiconductors are highly covalent. A synthetic approach to InP NQDs is presented using newly synthesized organometallic phosphorus [P] precursors with different functional moieties while preserving the P-Si bond. Introducing bulky side chains improved the stability while facilitating InP formation with strong confinement in a relatively low temperature regime (210°C to 300°C). Further shell coating with ZnS resulted in highly luminescent core-shell materials. The design and synthesis of P precursors for high-quality InP NQDs were conducted for the first time, and we were able to control the nucleation by varying the reactivity of the P precursors, thereby achieving uniform large-sized InP NQDs. This opens the way for the large-scale production of high-quality Cd-free nanocrystal quantum dots. PMID:22289352

  1. Statistical process control charts for attribute data involving very large sample sizes: a review of problems and solutions.

    Science.gov (United States)

    Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard

    2013-04-01

    The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation, which is attributable to the underlying process, and special-cause variation, which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (e.g., the number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common- and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider both within- and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
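    The problem this record reviews can be made concrete with a small numerical sketch. With very large subgroups, the classical p-chart limits p̄ ± 3·sqrt(p̄(1−p̄)/nᵢ) collapse toward the centre line; Laney's p′-chart widens them by σ_z, the between-subgroup variation estimated from moving ranges of the standardised points. The data below are synthetic proportions, not healthcare data, and the constant 1.128 is the standard unbiasing factor for moving ranges of two.

```python
import numpy as np

# Synthetic proportions (not healthcare data) illustrating why classical
# p-chart limits fail at very large subgroup sizes, and Laney's p' fix.
rng = np.random.default_rng(7)
n = np.full(24, 50_000)                # 24 subgroups, very large n
p = rng.normal(0.10, 0.004, size=24)   # extra between-subgroup variation
p_bar = np.average(p, weights=n)

sigma_p = np.sqrt(p_bar * (1 - p_bar) / n)  # within-subgroup (binomial) sd

# classical p-chart upper limit: very tight when n is huge
ucl_classical = p_bar + 3 * sigma_p

# Laney p'-chart: standardise, estimate sigma_z from the average moving
# range, then widen the limits by that factor
z = (p - p_bar) / sigma_p
sigma_z = np.mean(np.abs(np.diff(z))) / 1.128
ucl_laney = p_bar + 3 * sigma_p * sigma_z

print(sigma_z > 1.0)  # True: limits widen to absorb between-subgroup variation
```

    When σ_z is well above 1, the classical limits understate the real process variation, and most points would falsely signal special causes, which is exactly the failure mode the review describes.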

  2. Large superconducting conductors and joints for fusion magnets: From conceptual design to test at full size scale

    International Nuclear Information System (INIS)

    Ciazynski, D.; Duchateau, J.L.; Decool, P.; Libeyre, P.; Turck, B.

    2001-01-01

    A new kind of superconducting conductor, using the so-called cable-in-conduit concept, is emerging, mainly driven by fusion activity. It should be noted that at present no large Nb3Sn magnet in the world is operating using this concept. The difficulty of this technology, which has now been studied for 20 years, is that it must integrate major progress in multiple interconnected new fields such as: large numbers (1000) of superconducting strands, high current conductors (50 kA), forced-flow cryogenics, Nb3Sn technology, low-loss conductors in pulsed operation, high current connections, high voltage insulation (10 kV), and economical and industrial feasibility. CEA was heavily involved over the last 10 years in this development, which took place in the framework of the NET and ITER technological programs. One major milestone was reached in 1998-1999 with the successful tests by our Association of three full size conductor and connection samples in the Sultan facility (Villigen, Switzerland). (author)

  3. Testing Probation Outcomes in an Evidence-Based Practice Setting: Reduced Caseload Size and Intensive Supervision Effectiveness

    Science.gov (United States)

    Jalbert, Sarah Kuck; Rhodes, William; Flygare, Christopher; Kane, Michael

    2010-01-01

    Probation and parole professionals argue that supervision outcomes would improve if caseloads were reduced below commonly achieved standards. Criminal justice researchers are skeptical because random assignment and strong observation studies have failed to show that criminal recidivism falls with reductions in caseload sizes. One explanation is…

  4. Efficacy of formative evaluation using a focus group for a large classroom setting in an accelerated pharmacy program.

    Science.gov (United States)

    Nolette, Shaun; Nguyen, Alyssa; Kogan, David; Oswald, Catherine; Whittaker, Alana; Chakraborty, Arup

    2017-07-01

    Formative evaluation is a process utilized to improve communication between students and faculty. This evaluation method makes it possible to address pertinent issues in a timely manner; however, implementation of formative evaluation can be a challenge, especially in a large classroom setting. The purpose of this study was to determine whether a student-based focus group, used for mediated formative evaluation, is a viable option for improving the efficacy of communication between an instructor and students, as well as time management, in a large classroom setting. Out of 140 total students, six students were selected to form a focus group - one from each of six total sections of the classroom. Each focus group representative was responsible for collecting all the questions from students of their corresponding sections and submitting them to the instructor two to three times a day. Responses from the instructor were either passed back to pertinent students by the focus group representatives or addressed directly with students by the instructor. This study was conducted using a fifteen-question survey after the focus group model had been utilized for one month. A printed copy of the survey was distributed in the class by student investigators. Questions were of varying types, including Likert scale, yes/no, and open-ended response. One hundred forty surveys were administered, and 90 complete responses were collected. Surveys showed that 93.3% of students found that use of the focus group made them more likely to ask questions for understanding. The surveys also showed 95.5% of students found utilizing the focus group for questions allowed for better understanding of difficult concepts. General open-ended answer portions of the survey showed that most students found the focus group allowed them to ask questions more easily since they did not feel intimidated by asking in front of the whole class. No correlation was found between demographic characteristics and survey responses. 
This may

  5. A comparison of accuracy validation methods for genomic and pedigree-based predictions of swine litter size traits using Large White and simulated data.

    Science.gov (United States)

    Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T

    2018-02-01

    The objective of this study was to compare and determine the optimal validation method when comparing accuracy from single-step GBLUP (ssGBLUP) to traditional pedigree-based BLUP. Field data included six litter size traits. Simulated data included ten replicates designed to mimic the field data in order to determine the method that was closest to the true accuracy. Data were split into training and validation sets. The methods used were as follows: (i) theoretical accuracy derived from the prediction error variance (PEV) of the direct inverse (iLHS), (ii) approximated accuracies from the accf90(GS) program in the BLUPF90 family of programs (Approx), (iii) correlation between predictions and the single-step GEBVs from the full data set (GEBV_Full), (iv) correlation between predictions and the corrected phenotypes of females from the full data set (Y_c), (v) correlation from method iv divided by the square root of the heritability (Y_ch) and (vi) correlation between sire predictions and the average of their daughters' corrected phenotypes (Y_cs). Accuracies from iLHS increased from 0.27 to 0.37 (37%) in the Large White. Approximation accuracies were very consistent and close in absolute value (0.41 to 0.43). Both iLHS and Approx were much less variable than the corrected phenotype methods (ranging from 0.04 to 0.27). On average, simulated data showed an increase in accuracy from 0.34 to 0.44 (29%) using ssGBLUP. Both iLHS and Y_ch approximated the increase well, 0.30 to 0.46 and 0.36 to 0.45, respectively. GEBV_Full performed poorly in both data sets and is not recommended. Results suggest that for within-breed selection, theoretical accuracy using PEV was consistent and accurate. When direct inversion is infeasible to get the PEV, correlating predictions to the corrected phenotypes divided by the square root of heritability is adequate given a large enough validation data set. © 2017 Blackwell Verlag GmbH.
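    Method (v) above — correlating predictions with corrected phenotypes and dividing by the square root of heritability — can be sketched as follows. The function name and the simulated check are illustrative, not taken from the study (assumes NumPy):

```python
import numpy as np

def validation_accuracy(ebv, y_corrected, h2):
    """Approximate accuracy of (G)EBVs in a validation set:
    corr(EBV, corrected phenotype) / sqrt(h2).
    Dividing by sqrt(h2) rescales the correlation with noisy
    phenotypes toward the correlation with true breeding values."""
    r = np.corrcoef(ebv, y_corrected)[0, 1]
    return r / np.sqrt(h2)
```

    As the abstract notes, this estimator is only adequate given a large enough validation set; with few animals, the sampling error of the correlation dominates.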

  6. Prognostic Significance of Tumor Size of Small Lung Adenocarcinomas Evaluated with Mediastinal Window Settings on Computed Tomography

    OpenAIRE

    Sakao, Yukinori; Kuroda, Hiroaki; Mun, Mingyon; Uehara, Hirofumi; Motoi, Noriko; Ishikawa, Yuichi; Nakagawa, Ken; Okumura, Sakae

    2014-01-01

    BACKGROUND: We aimed to clarify whether the size of a lung adenocarcinoma evaluated using mediastinal window settings on computed tomography is an important and useful measure for predicting invasiveness, lymph node metastasis and prognosis in small adenocarcinomas. METHODS: We evaluated 176 patients with small lung adenocarcinomas (diameter, 1-3 cm) who underwent standard surgical resection. Tumours were examined using computed tomography with thin-section conditions (1.25 mm thick on high-resolution ...

  7. Effect of Modifying Intervention Set Size with Acquisition Rate Data among Students Identified with a Learning Disability

    Science.gov (United States)

    Haegele, Katherine; Burns, Matthew K.

    2015-01-01

    The amount of information that students can successfully learn and recall at least 1 day later is called an acquisition rate (AR) and is unique to the individual student. The current study extended previous drill rehearsal research with word recognition by (a) using students identified with a learning disability in reading, (b) assessing set sizes…

  8. Effects of loading frequency on fatigue crack growth mechanisms in α/β Ti microstructure with large colony size

    International Nuclear Information System (INIS)

    Sansoz, F.; Ghonem, H.

    2003-01-01

    This paper deals with crack tip/microstructure interactions at 520 deg. C in lamellar Ti-6Al-2Sn-4Zr-2Mo-0.1Si (Ti6242) alloy under different fatigue loading frequencies. A series of heat treatments were performed in order to produce large colony microstructures that vary in their lamellar and colony size. Fatigue crack growth (FCG) experiments were conducted on these microstructures at loading frequencies of 10 and 0.05 Hz. The lower frequency was explored with and without imposing a 5 min hold-time at the peak stress level during each loading cycle. Results show that the crack growth behavior is sensitive to the loading frequency. For the same microstructure, the crack growth rate is found to be lower at 10 than at 0.05 Hz. The addition of a hold-time, however, did not alter the FCG rate, indicating that creep strain during one loading cycle does not contribute significantly to the crack growth process. It is also shown that variations in lamella and colony size have no effect on the FCG rate except for the early stage of crack propagation. Scanning electron microscope examinations are performed on the fracture surface in order to identify the relevant crack growth mechanisms with respect to the loading frequency and the microstructure details. Quasi-cleavage of the α/β colonies along strong planar shear bands is shown to be a major mode of failure under all test conditions. At a loading frequency of 10 Hz, the crack path proceeds arbitrarily along planes either perpendicular or parallel to the long axis of α lamellae, while at 0.05 Hz, parallel-to-lamellae crack paths become favored. Corresponding differences in crack growth behavior are examined in terms of slip emission at the crack tip and interactions with the microstructure details.

  9. Accelerating solidification process simulation for large-sized system of liquid metal atoms using GPU with CUDA

    Energy Technology Data Exchange (ETDEWEB)

    Jie, Liang [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China); Li, KenLi, E-mail: lkl@hnu.edu.cn [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China); National Supercomputing Center in Changsha, 410082 (China); Shi, Lin [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China); Liu, RangSu [School of Physics and Micro Electronic, Hunan University, Changshang, 410082 (China); Mei, Jing [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China)

    2014-01-15

    Molecular dynamics simulation is a powerful tool to simulate and analyze complex physical processes and phenomena at the atomic scale for predicting the natural time-evolution of a system of atoms. Precise simulation of physical processes has strong requirements both on the simulation size and the computing timescale; therefore, finding available computing resources is crucial to accelerate computation. Recently, general-purpose graphics processing units (GPGPUs) have been utilized for general-purpose computing due to their high floating-point arithmetic performance, wide memory bandwidth and enhanced programmability. Targeting the most time-consuming components of MD simulations of liquid metal solidification processes, this paper presents a fine-grained spatial decomposition method to accelerate the update of neighbor lists and the calculation of interaction forces by taking advantage of modern graphics processing units (GPUs), enlarging the simulation to a system involving 10,000,000 atoms. In addition, a number of evaluations and tests are discussed, ranging from executions on different precision-enabled CUDA versions, over various types of GPU (NVIDIA 480GTX, 580GTX and M2050), to CPU clusters with different numbers of CPU cores. The experimental results demonstrate that GPU-based calculations are typically 9∼11 times faster than the corresponding sequential execution and approximately 1.5∼2 times faster than 16-core CPU cluster implementations. On the basis of the simulated results, comparisons between the theoretical and experimental results were carried out, showing good agreement between the two, and more complete and larger cluster structures were observed in the actual macroscopic materials. Moreover, different nucleation and evolution mechanisms of nano-clusters and nano-crystals formed in the processes of metal solidification are observed with large-sized
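    The core of such a spatial decomposition is binning atoms into cells whose side is at least the cutoff radius, so each atom only tests candidates in the 27 surrounding cells instead of all N atoms. A CPU sketch of the idea (the paper's CUDA kernels are not reproduced here; the cubic periodic box, function name, and parameters are our assumptions, NumPy assumed):

```python
import numpy as np
from collections import defaultdict

def cell_list_pairs(pos, box, rcut):
    """Neighbor pairs within rcut via cell-list spatial decomposition
    in a cubic periodic box of side `box` (assumes box >= 3 * rcut):
    O(N) binning replaces the O(N^2) all-pairs search that dominates
    MD neighbor-list updates."""
    ncell = int(box // rcut)          # cells per side, each side >= rcut
    side = box / ncell
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[tuple((p // side).astype(int) % ncell)].append(i)
    pairs = set()
    for (cx, cy, cz), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    key = ((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
                    for i in members:
                        for j in cells.get(key, ()):
                            if i < j:
                                d = pos[i] - pos[j]
                                d -= box * np.round(d / box)   # minimum image
                                if d @ d < rcut * rcut:
                                    pairs.add((i, j))
    return sorted(pairs)
```

    On a GPU, each cell (or atom) maps naturally onto a thread block; that parallelization of the binning and force loops is the part the paper accelerates.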

  10. The Influence of Function, Topography, and Setting on Noncontingent Reinforcement Effect Sizes for Reduction in Problem Behavior: A Meta-Analysis of Single-Case Experimental Design Data

    Science.gov (United States)

    Ritter, William A.; Barnard-Brak, Lucy; Richman, David M.; Grubb, Laura M.

    2018-01-01

    Richman et al. ("J Appl Behav Anal" 48:131-152, 2015) completed a meta-analytic analysis of single-case experimental design data on noncontingent reinforcement (NCR) for the treatment of problem behavior exhibited by individuals with developmental disabilities. Results showed that (1) NCR produced very large effect sizes for reduction in…

  11. Some problems raised by the operation of large nuclear turbo-generator sets. Solutions proposed for the protection of large size generators

    International Nuclear Information System (INIS)

    Chaumienne, J.-P.

    1976-01-01

    The operating requirements of nuclear power stations call for relays with ever increasing performance. This urges the development of new electronic systems while giving high importance to their reliability. To provide for easy application and monitoring of the relays, even when the turbo-generator unit is operating, a new cubicle design is considered which offers maximum safety and flexibility in use [fr

  12. Growing vertical ZnO nanorod arrays within graphite: efficient isolation of large size and high quality single-layer graphene.

    Science.gov (United States)

    Ding, Ling; E, Yifeng; Fan, Louzhen; Yang, Shihe

    2013-07-18

    We report a unique strategy for efficiently exfoliating large size and high quality single-layer graphene directly from graphite into DMF dispersions by growing ZnO nanorod arrays between the graphene layers in graphite.

  13. The ambient dose equivalent at flight altitudes: a fit to a large set of data using a Bayesian approach

    International Nuclear Information System (INIS)

    Wissmann, F; Reginatto, M; Moeller, T

    2010-01-01

    The problem of finding a simple, generally applicable description of worldwide measured ambient dose equivalent rates at aviation altitudes between 8 and 12 km is difficult to solve due to the large variety of functional forms and parametrisations that are possible. We present an approach that uses Bayesian statistics and Monte Carlo methods to fit mathematical models to a large set of data and to compare the different models. About 2500 data points measured in the periods 1997-1999 and 2003-2006 were used. Since the data cover wide ranges of barometric altitude, vertical cut-off rigidity and phases of solar cycle 23, we developed functions which depend on these three variables. Whereas the dependence on the vertical cut-off rigidity is described by an exponential, the dependences on barometric altitude and solar activity may be approximated by linear functions in the ranges under consideration. Therefore, a simple Taylor expansion was used to define different models and to investigate the relevance of the different expansion coefficients. With the method presented here, it is possible to obtain probability distributions for each expansion coefficient and thus to extract reliable uncertainties even for the evaluated dose rate. The resulting function agrees well with new measurements made at fixed geographic positions and during long-haul flights covering a wide range of latitudes.
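    The functional dependence described — exponential in vertical cut-off rigidity, first-order (linear) Taylor terms in barometric altitude and solar activity — can be sketched as follows. The coefficients are hypothetical placeholders, not the paper's fitted values:

```python
import numpy as np

def dose_rate(h, rc, w, a0, a1, a2, a3):
    """Illustrative ambient dose equivalent rate model: linear in
    barometric altitude h and solar-activity index w (first-order
    Taylor terms), exponential in vertical cut-off rigidity rc.
    The coefficients a0..a3 are hypothetical fit parameters."""
    return (a0 + a1 * h + a2 * w) * np.exp(-a3 * rc)
```

    In the Bayesian setting of the paper, each coefficient carries a posterior distribution rather than a point estimate, which is what yields reliable uncertainties on the evaluated dose rate.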

  14. Percutaneous Ethanol Injection of Unresectable Medium-to-Large-Sized Hepatomas Using a Multipronged Needle: Efficacy and Safety

    International Nuclear Information System (INIS)

    Ho, C.S.; Kachura, J.R.; Gallinger, S.; Grant, D.; Greig, P.; McGilvray, I.; Knox, J.; Sherman, M.; Wong, F.; Wong, D.

    2007-01-01

    Fine needles with an end hole or multiple side holes have traditionally been used for percutaneous ethanol injection (PEI) of hepatomas. This study retrospectively evaluates the safety and efficacy of PEI of unresectable medium-to-large (3.5-9 cm) hepatomas using a multipronged needle and with conscious sedation. Twelve patients, eight men and four women (age 51-77 years; mean: 69) received PEI for hepatomas, mostly subcapsular or exophytic in location with average tumor size of 5.6 cm (range: 3.5-9.0 cm). Patients were consciously sedated and an 18G retractable multipronged needle (Quadrafuse needle; Rex Medical, Philadelphia, PA) was used for injection under real-time ultrasound guidance. By varying the length of the prongs and rotating the needle, the alcohol was widely distributed within the tumor. The progress of ablation was monitored by contrast-enhanced ultrasound, computed tomography (CT) or magnetic resonance imaging (MRI) after each weekly injection and within a month after the final (third) injection and 3 months thereafter. An average total of 63 mL (range: 20-154 ml) of alcohol was injected per patient in an average of 2.3 sessions. Contrast-enhanced CT, ultrasound, or MRI was used to determine the degree of necrosis. Complete necrosis was noted in eight patients (67%), near-complete necrosis (90-99%) in two (16.7%), and partial success (50-89%) in two (16.7%). Follow-up in the first 9 months showed local recurrence in two patients and new lesions in another. There was no mortality. One patient developed renal failure, liver failure, and localized perforation of the stomach. He responded to medical treatment and surgery was not required for the perforation. One patient had severe postprocedural abdominal pain and fever, and another had transient hyperbilirubinemia; both recovered with conservative treatment. 
PEI with a multipronged needle is a new, safe, and efficacious method for treating medium-to-large-sized hepatocellular carcinoma under conscious sedation.

  15. WebViz:A Web-based Collaborative Interactive Visualization System for large-Scale Data Sets

    Science.gov (United States)

    Yuen, D. A.; McArthur, E.; Weiss, R. M.; Zhou, J.; Yao, B.

    2010-12-01

    WebViz is a web-based application designed to conduct collaborative, interactive visualizations of large data sets for multiple users, allowing researchers situated all over the world to utilize the visualization services offered by the University of Minnesota’s Laboratory for Computational Sciences and Engineering (LCSE). This ongoing project has been developed over the last 3 1/2 years. The motivation behind WebViz lies primarily with the need to parse through an increasing amount of data produced by the scientific community as a result of larger and faster multicore and massively parallel computers coming to the market, including the use of general-purpose GPU computing. WebViz allows these large data sets to be visualized online by anyone with an account. The application allows users to save time and resources by visualizing data ‘on the fly’, wherever he or she may be located. By leveraging AJAX via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide users with a remote, web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota. LCSE’s custom hierarchical volume rendering software provides high resolution visualizations on the order of 15 million pixels and has been employed for visualizing data from simulations ranging from astrophysics to geophysical fluid dynamics. In the current version of WebViz, we have implemented a highly extensible back-end framework built around HTTP "server push" technology. The web application is accessible via a variety of devices including netbooks, iPhones, and other web- and JavaScript-enabled cell phones. Features in the current version include the ability for users to (1) securely log in, (2) launch multiple visualizations, (3) conduct collaborative visualization sessions, (4) delegate control aspects of a visualization to others, and (5) engage in collaborative chats with other users within the user interface.

  16. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    Full Text Available Abstract In this paper, we consider a size-dependent renewal risk model with stopping time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method substantially relies on a martingale for the structure of our models.
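    For orientation, precise large-deviation results for subexponential claim sizes classically take the following form (this is the standard formulation from the literature, under appropriate conditions, not the paper's exact theorem). With $S(t)$ the aggregate claims, $\mu$ the mean claim size and $\lambda(t)=\mathbb{E}\,N(t)$ the mean claim number,

```latex
\Pr\bigl(S(t) - \mu\,\lambda(t) > x\bigr) \;\sim\; \lambda(t)\,\overline{F}(x),
\qquad t \to \infty,
```

    holding uniformly for $x \ge \gamma\,\lambda(t)$ for any fixed $\gamma > 0$, where $\overline{F}$ denotes the tail of the claim-size distribution $F$.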

  17. Evolution of A-Type Macrosegregation in Large Size Steel Ingot After Multistep Forging and Heat Treatment

    Science.gov (United States)

    Loucif, Abdelhalim; Ben Fredj, Emna; Harris, Nathan; Shahriari, Davood; Jahazi, Mohammad; Lapierre-Boire, Louis-Philippe

    2018-06-01

    A-type macrosegregation refers to the channel-like chemical heterogeneities that can form during solidification in large size steel ingots. In this research, a combination of experiment and simulation was used to study the influence of open die forging parameters on the evolution of A-type macrosegregation patterns during multistep forging of a 40 metric ton (MT) cast, high-strength steel ingot. Macrosegregation patterns were determined experimentally by macroetching along the longitudinal axis of the forged and heat-treated ingot. Mass spectroscopy, on more than 900 samples, was used to determine the chemical composition map of the entire longitudinally sectioned surface. The FORGE NxT 1.1 finite element modeling code was used to predict the effect of forging sequences on the morphological evolution of A-type macrosegregation patterns. For this purpose, grain flow variables were defined and implemented in a large-scale finite element modeling code to describe oriented grains and A-type segregation patterns. Examination of the A-type macrosegregation showed four to five parallel continuous channels located nearly symmetrically about the axis of the forged ingot. In some regions, the A-type patterns became curved or took on a wavy form, in contrast to their straight shape in the as-cast state. Mass spectrometry analysis of the main alloying elements (C, Mn, Ni, Cr, Mo, Cu, P, and S) revealed that carbon, manganese, and chromium were the most segregated alloying elements in A-type macrosegregation patterns. The observed differences were analyzed using thermodynamic calculations, which indicated that changes in the chemical composition of the liquid metal can affect the primary solidification mode and the segregation intensity of the alloying elements. Finite element modeling simulation results showed very good agreement with the experimental observations, thereby allowing for the quantification of the influence of temperature and deformation on the evolution of the shape of the

  18. How great white sharks nourish their embryos to a large size: evidence of lipid histotrophy in lamnoid shark reproduction.

    Science.gov (United States)

    Sato, Keiichi; Nakamura, Masaru; Tomita, Taketeru; Toda, Minoru; Miyamoto, Kei; Nozu, Ryo

    2016-09-15

    The great white shark (Carcharodon carcharias) exhibits viviparous and oophagous reproduction. A 4950 mm total length (TL) gravid female accidentally caught by fishermen in the Okinawa Prefecture, Southern Japan carried six embryos (543-624 mm TL, three in each uterus). Both uteri contained copious amounts of yellowish viscous uterine fluid (over 79.2 litres in the left uterus), nutrient eggs and broken egg cases. The embryos had yolk stomachs that had ruptured, the mean volume of which was approximately 197.9 ml. Embryos had about 20 rows of potentially functional teeth in the upper and lower jaws. Periodic acid Schiff (PAS)-positive substances were observed on the surface and in the cytoplasm of the epithelial cells, and large, secretory, OsO4-oxidized lipid droplets of various sizes were distributed on the surface of the villous string epithelium on the uterine wall. Histological examination of the uterine wall showed it to consist of villi, similar to the trophonemata of Dasyatidae rays, suggesting that the large amount of fluid found in the uterus of the white shark was likely required for embryo nutrition. We conclude that: (1) the lipid-rich fluid is secreted from the uterine epithelium only in early gestation before the onset of oophagy, (2) the embryos probably use the abundant uterine fluid and encased nutrient eggs for nutrition at this stage of their development, and (3) the uterine fluid is the major source of embryonic nutrition before oophagy onset. This is the first record of the lipid histotrophy of reproduction among all shark species. © 2016. Published by The Company of Biologists Ltd.

  19. How great white sharks nourish their embryos to a large size: evidence of lipid histotrophy in lamnoid shark reproduction

    Directory of Open Access Journals (Sweden)

    Keiichi Sato

    2016-09-01

    Full Text Available The great white shark (Carcharodon carcharias) exhibits viviparous and oophagous reproduction. A 4950 mm total length (TL) gravid female accidentally caught by fishermen in the Okinawa Prefecture, Southern Japan carried six embryos (543-624 mm TL, three in each uterus). Both uteri contained copious amounts of yellowish viscous uterine fluid (over 79.2 litres in the left uterus), nutrient eggs and broken egg cases. The embryos had yolk stomachs that had ruptured, the mean volume of which was approximately 197.9 ml. Embryos had about 20 rows of potentially functional teeth in the upper and lower jaws. Periodic acid Schiff (PAS)-positive substances were observed on the surface and in the cytoplasm of the epithelial cells, and large, secretory, OsO4-oxidized lipid droplets of various sizes were distributed on the surface of the villous string epithelium on the uterine wall. Histological examination of the uterine wall showed it to consist of villi, similar to the trophonemata of Dasyatidae rays, suggesting that the large amount of fluid found in the uterus of the white shark was likely required for embryo nutrition. We conclude that: (1) the lipid-rich fluid is secreted from the uterine epithelium only in early gestation before the onset of oophagy, (2) the embryos probably use the abundant uterine fluid and encased nutrient eggs for nutrition at this stage of their development, and (3) the uterine fluid is the major source of embryonic nutrition before oophagy onset. This is the first record of the lipid histotrophy of reproduction among all shark species.

  20. Long‐term creep rates on the Hayward Fault: evidence for controls on the size and frequency of large earthquakes

    Science.gov (United States)

    Lienkaemper, James J.; McFarland, Forrest S.; Simpson, Robert W.; Bilham, Roger; Ponce, David A.; Boatwright, John; Caskey, S. John

    2012-01-01

    The Hayward fault (HF) in California exhibits large (Mw 6.5–7.1) earthquakes with short recurrence times (161±65 yr), probably kept short by a 26%–78% aseismic release rate (including postseismic). Its interseismic release rate varies locally over time, as we infer from many decades of surface creep data. Earliest estimates of creep rate, primarily from infrequent surveys of offset cultural features, revealed distinct spatial variation in rates along the fault, but no detectable temporal variation. Since the 1989 Mw 6.9 Loma Prieta earthquake (LPE), monitoring on 32 alinement arrays and 5 creepmeters has greatly improved the spatial and temporal resolution of creep rate. We now identify significant temporal variations, mostly associated with local and regional earthquakes. The largest rate change was a 6‐yr cessation of creep along a 5‐km length near the south end of the HF, attributed to a regional stress drop from the LPE, ending in 1996 with a 2‐cm creep event. North of there near Union City starting in 1991, rates apparently increased by 25% above pre‐LPE levels on a 16‐km‐long reach of the fault. Near Oakland in 2007 an Mw 4.2 earthquake initiated a 1–2 cm creep event extending 10–15 km along the fault. Using new better‐constrained long‐term creep rates, we updated earlier estimates of depth to locking along the HF. The locking depths outline a single, ∼50‐km‐long locked or retarded patch with the potential for an Mw∼6.8 event equaling the 1868 HF earthquake. We propose that this inferred patch regulates the size and frequency of large earthquakes on HF.
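    The abstract's Mw ∼ 6.8 estimate for the ~50-km locked patch is consistent with the standard moment-magnitude relation Mw = (2/3)(log10 M0 − 9.1) with M0 = μAD. A quick check (the 10 km width, 1 m slip, and rigidity below are our illustrative assumptions, not values from the paper):

```python
import math

def moment_magnitude(length_m, width_m, slip_m, mu=3.0e10):
    """Moment magnitude from seismic moment M0 = mu * A * D (in N·m):
    Mw = (2/3) * (log10(M0) - 9.1). Rigidity mu defaults to a typical
    crustal value of 30 GPa."""
    m0 = mu * length_m * width_m * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)
```

    A 50 km x 10 km patch slipping about 1 m yields Mw near 6.7, in line with the 1868-sized event inferred for the fault.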

  1. Disposable swim diaper retention of Cryptosporidium-sized particles on human subjects in a recreational water setting.

    Science.gov (United States)

    Amburgey, James E; Anderson, J Brian

    2011-12-01

    Cryptosporidium is a chlorine-resistant protozoan parasite responsible for the majority of waterborne disease outbreaks in recreational water venues in the USA. Swim diapers are commonly used by diaper-aged children participating in aquatic activities. This research was intended to evaluate disposable swim diapers for retaining 5-μm diameter polystyrene microspheres, which were used as non-infectious surrogates for Cryptosporidium oocysts. A hot tub recirculating water without a filter was used for this research. The microsphere concentration in the water was monitored at regular intervals following introduction of microspheres inside of a swim diaper while a human subject undertook normal swim/play activities. Microsphere concentrations in the bulk water showed that the majority (50-97%) of Cryptosporidium-sized particles were released from the swim diaper within 1 to 5 min regardless of the swim diaper type or configuration. After only 10 min of play, 77-100% of the microspheres had been released from all swim diapers tested. This research suggests that the swim diapers commonly used by diaper-aged children in swimming pools and other aquatic activities are of limited value in retaining Cryptosporidium-sized particles. Improved swim diaper solutions are necessary to efficiently retain pathogens and effectively safeguard public health in recreational water venues.

  2. Small size today, aquarium dumping tomorrow: sales of juvenile non-native large fish as an important threat in Brazil

    Directory of Open Access Journals (Sweden)

    André L. B. Magalhães

    2017-12-01

    Full Text Available ABSTRACT Informal sales of large-bodied non-native aquarium fishes (known as “tankbusters”) are increasing among Brazilian hobbyists. In this study, we surveyed this non-regulated trade on Facebook® from May 2012 to September 2016, systematically collecting information about the fishes available for trading: species, family, common/scientific names, native range, juvenile length, behavior, and number of specimens available in five geographical regions of Brazil. We also assessed the invasion risk of the most frequently sold species using the Fish Invasiveness Screening Test (FIST). We found 93 taxa belonging to 35 families. Cichlidae was the dominant family, and most species were native to South America. All species are sold at very small sizes (< 10.0 cm), and most display aggressive behavior. The hybrid Amphilophus trimaculatus × Amphilophus citrinellus, Astronotus ocellatus, Uaru amphiacanthoides, Osteoglossum bicirrhosum, Cichla piquiti, Pangasianodon hypophthalmus, Datnioides microlepis and Cichla kelberi were the main species available. The southeast region showed the greatest trading activity. Based on biological traits, the FIST indicated that Arapaima gigas, C. kelberi and C. temensis are high-risk species in terms of biological invasions via aquarium dumping. We suggest management strategies such as trade regulations, monitoring, euthanasia and educational programs to prevent further introductions via aquarium dumping.

  3. Characterisation of Late Bronze Age large size shield nails by EDXRF, micro-EDXRF and X-ray digital radiography

    International Nuclear Information System (INIS)

    Figueiredo, E.; Araujo, M.F.; Silva, R.J.C.; Senna-Martinez, J.C.; Ines Vaz, J.L.

    2011-01-01

    In the present study, six exceptional large size metallic nails, a dagger and a sickle from the Late Bronze Age archaeological site of Figueiredo das Donas (Central Portugal) have been analysed by EDXRF, micro-EDXRF and X-ray digital radiography for the study of material composition and fabrication technology. The combination of these analytical and examination techniques showed that all artefacts are made of bronze with As, Sb and Pb impurities, and that the nails were most likely manufactured using the casting-on technique. These results reinforce the use of binary bronze during the Late Bronze Age in the region, and the incorporation of new fabrication technologies that resulted from ancient spheres of interaction. - Highlights: → EDXRF, micro-EDXRF and X-ray digital radiography in cultural heritage studies. → Archaeometallurgical study of a Late Bronze Age artefact collection from Portugal. → Practice of a specific and traditional bronze metallurgy. → Appearance of technological innovations such as the casting-on technique.

  4. Boys who pee the farthest have a large hollow head, a thin skin, and medium-size manhood

    Science.gov (United States)

    Attinger, Daniel; Lee, Vincent

    2016-11-01

    Following a recent trend of scientific studies on artwork, we study here the thermodynamics of a jetting thermometer made of ceramic, related to the Chinese tea culture. The thermometer represents a boy who "urinates" shortly after hot water is poured onto his head. A long jetting distance indicates that the water is hot enough to brew tea. Here, a thermofluid model describes the jetting phenomenon of that pee-pee boy. The study demonstrates how thermal expansion of an interior air pocket causes jetting. The validity of assumptions underlying the Hagen-Poiseuille flow is discussed for a urethra of finite length. A thermodynamic potential is shown to define maximum jetting velocity. Seven optimization criteria to maximize jetting distance are provided, including two dimensionless numbers. The dimensionless numbers are obtained by comparing the time scales of the internal pressure buildup due to heating, with that of pressure relief due to jetting. Optimization results show that longer jets are produced by large individuals, with low body mass index, with a boyhood of medium size inclined at an angle π/4. Analogies are drawn with pissing contests among humans and lobsters. The study ends by noting similitudes of working principle between that politically incorrect thermometer and Galileo Galilei's thermoscope.

  5. Ni foam assisted synthesis of high quality hexagonal boron nitride with large domain size and controllable thickness

    Science.gov (United States)

    Ying, Hao; Li, Xiuting; Li, Deshuai; Huang, Mingqiang; Wan, Wen; Yao, Qian; Chen, Xiangping; Wang, Zhiwei; Wu, Yanqing; Wang, Le; Chen, Shanshan

    2018-04-01

    The scalable synthesis of two-dimensional (2D) hexagonal boron nitride (h-BN) is of great interest for its numerous applications in novel electronic devices. Highly-crystalline h-BN films, with single-crystal sizes up to hundreds of microns, are demonstrated via a novel Ni foam assisted technique reported here for the first time. The nucleation density of h-BN domains can be significantly reduced due to the high boron solubility, as well as the large specific surface area of the Ni foam. The crystalline structure of the h-BN domains is found to be well aligned with, and therefore strongly dependent upon, the underlying Pt lattice orientation. Growth-time dependent experiments confirm the presence of a surface mediated self-limiting growth mechanism for monolayer h-BN on the Pt substrate. However, utilizing remote catalysis from the Ni foam, bilayer h-BN films can be synthesized, breaking the self-limiting effect. This work provides further understanding of the mechanisms involved in the growth of h-BN and proposes a facile synthesis technique that may be applied to further applications in which control over the crystal alignment and the number of layers is crucial.

  6. Theoretical simulation and analysis of large size BMP-LSC by 3D Monte Carlo ray tracing model

    Institute of Scientific and Technical Information of China (English)

    Feng Zhang; Ning-Ning Zhang; Yi Zhang; Sen Yan; Song Sun; Jun Bao; Chen Gao

    2017-01-01

    Luminescent solar concentrators (LSC) can reduce the area of solar cells by collecting light from a large area and concentrating the captured light onto relatively small area photovoltaic (PV) cells, thereby reducing the cost of PV electricity generation. LSCs with bottom-facing cells (BMP-LSC) can collect both direct light and indirect light, further improving the efficiency of the PV cells. However, it is hard to analyze the effect of each parameter by experiment because there are too many parameters involved in the BMP-LSC. In this paper, all the physical processes of light transmission and collection in the BMP-LSC were analyzed. A three-dimensional Monte Carlo ray tracing program was developed to study the transmission of photons in the LSC. A larger-size LSC was simulated, and the effects of dye concentration, LSC thickness, cell area, and cell distance were systematically analyzed.
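
The photon bookkeeping behind such a ray-tracing model can be illustrated with a minimal Monte Carlo sketch (our illustration, not the paper's program; all parameter values are hypothetical): a photon entering the slab is absorbed by the dye with a Beer-Lambert probability, re-emitted isotropically with some quantum yield, and either escapes through the top/bottom faces or is trapped by total internal reflection and counted as collected by the bottom-mounted cell.

```python
import math
import random

def simulate_lsc(n_photons=20000, thickness=0.5, alpha=2.0,
                 quantum_yield=0.95, n_refr=1.5, seed=1):
    """Toy photon balance for an LSC slab with a bottom-mounted cell.

    alpha is an assumed dye absorption coefficient (1/cm) and thickness
    the slab depth (cm).  A photon entering the top face is absorbed
    with a Beer-Lambert probability, re-emitted isotropically with the
    given quantum yield, and counted as collected when it falls outside
    the escape cone (total internal reflection guides it to the cell).
    """
    rng = random.Random(seed)
    # cosine of the critical angle: rays with |cos(theta)| above this
    # value leave through the top or bottom face
    cos_crit = math.sqrt(1.0 - 1.0 / n_refr**2)
    collected = 0
    for _ in range(n_photons):
        if rng.random() > 1.0 - math.exp(-alpha * thickness):
            continue  # transmitted through the slab without absorption
        if rng.random() > quantum_yield:
            continue  # non-radiative loss in the dye
        cos_theta = rng.uniform(-1.0, 1.0)  # isotropic re-emission
        if abs(cos_theta) > cos_crit:
            continue  # re-emitted into the escape cone and lost
        collected += 1  # trapped ray, assumed to reach the PV cell
    return collected / n_photons

efficiency = simulate_lsc()
```

With these made-up parameters the collected fraction lands near the product of the absorption probability (1 − e^(−αd) ≈ 0.63), the quantum yield (0.95) and the trapped solid-angle fraction (≈ 0.75), i.e. roughly 0.45; a full simulation like the paper's would additionally track ray geometry, edge losses, reabsorption and matrix absorption.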

  7. Depletion of trophy large-sized sharks populations of the Argentinean coast, south-western Atlantic: insights from fishers' knowledge

    Directory of Open Access Journals (Sweden)

    Alejo Irigoyen

    Full Text Available Abstract Globally, sharks are impacted by a wide range of human activities, resulting in many populations being depleted. The trophy large-sized sharks of the Argentinean coast, the sand tiger Carcharias taurus, the copper shark Carcharhinus brachyurus and the sevengill shark Notorynchus cepedianus, have been under intense sport and artisanal fishing since the 1950s. However, current and historical information for assessing the status of their populations is scarce. The aim of this work was to analyze the conservation status of these species through the gathering of expert fishermen's knowledge (FK) in semi-structured interviews. The perceived change in abundance between the beginning and the end of the fishermen's careers revealed a critical status for the species studied (mean declines between -77 and -90%). Furthermore, a best day's catch analysis reinforces this result in the case of the sand tiger shark. The school shark Galeorhinus galeus was included in this work with the objective of contrasting FK with the formal information available from catch-per-unit-effort (CPUE) time series. Although the two sources of information are not directly comparable, both show declines of approximately 80%. The critical conservation situation of the studied species calls for urgent management action, particularly for the sand tiger shark, which could become regionally extinct before stakeholders react.

  8. Mammals of medium and large size in Santa Rita do Sapucaí, Minas Gerais, southeastern Brazil

    Directory of Open Access Journals (Sweden)

    Eduardo, A. A.

    2009-01-01

    Full Text Available The diversity of Brazilian vertebrates is regarded as among the highest in the world. However, this biological diversity is still mostly unknown and a good part of it is seriously threatened by human activities. This study aimed to inventory the medium and large size mammals present in the Reserva Biológica de Santa Rita do Sapucaí, an Atlantic forest reserve located in Santa Rita do Sapucaí, southeastern Brazil. Sand-plots, photographic traps and searches for animal tracks on pre-existent trails in the area were carried out once every two months between May 2006 and February 2007. The sand-plots and tracks were inspected during five consecutive days per sampling. We obtained 108 records of 15 species, mostly of carnivorans. Two confirmed species are threatened with extinction in Brazil (Callithrix aurita and Leopardus pardalis). The results suggest that the sampled reserve has high species richness and plays an important role in the conservation of mammals in this landscape, including species threatened with extinction.

  9. Development of 3D Visualization Technology for Medium-and Large-sized Radioactive Metal Wastes from Decommissioning Nuclear Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Lee, A Rim; Park, Chan Hee; Lee, Jung Min; Kim, Rinah; Moon, Joo Hyun [Dongguk Univ., Gyongju (Korea, Republic of)

    2013-10-15

    The most important consideration in decommissioning nuclear facilities and nuclear power plants is to carry out the process safely and at reduced cost. In order to better decommission nuclear facilities and nuclear power plants, a database of radioactive waste from the decontamination and decommissioning of nuclear facilities should be constructed. This database is described herein, from the radioactive nuclides to the shapes of the components of nuclear facilities, and representative results of the status and analysis are presented. With the increase in the number of nuclear facilities at the end of their useful life, the demand for decommissioning technologies will continue to grow for years to come. This analysis of medium- and large-sized radioactive metal wastes and the 3D visualization technology for the radioactive metal wastes using 3D scanning are planned to be used for constructing the databases. The databases are expected to be used in the development of basic technologies for decommissioning nuclear facilities.

  10. Consecutive Short-Scan CT for Geological Structure Analog Models with Large Size on In-Situ Stage.

    Science.gov (United States)

    Yang, Min; Zhang, Wen; Wu, Xiaojun; Wei, Dongtao; Zhao, Yixin; Zhao, Gang; Han, Xu; Zhang, Shunli

    2016-01-01

    For the analysis of interior geometry and property changes of a large-sized analog model during loading or injection of another medium (water or oil) in a non-destructive way, a consecutive X-ray computed tomography (XCT) short-scan method is developed to realize in-situ tomography imaging. With this method, the X-ray tube and detector rotate 270° around the center of the guide rail synchronously, alternating between positive and negative directions during translation, until all the needed cross-sectional slices are obtained. Compared with traditional industrial XCTs, this method solves the winding problems of the high-voltage cables and oil-cooling service pipes during rotation, and simplifies the installation of the high-voltage generator and the cooling system. Furthermore, hardware costs are significantly decreased. This kind of scanner has higher spatial resolution and penetrating ability than medical XCTs. To obtain an effective sinogram that matches the rotation angles accurately, a structural-similarity-based method is applied to eliminate invalid projection data that do not contribute to the image reconstruction. Finally, on the basis of the geometrical symmetry of fan-beam CT scanning, a complete sinogram covering the full 360° range is produced and a standard filtered back-projection (FBP) algorithm is applied to reconstruct artifact-free images.
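
The geometrical symmetry exploited in such sinogram completion is the fan-beam conjugate-ray identity: the ray at source angle β and fan angle γ samples the same line as the ray at (β + π + 2γ, −γ), so unmeasured views of a short scan can be filled from measured conjugates. A small sketch (our illustration, not the paper's code; the stand-in projection function and source radius are hypothetical) verifies the identity numerically:

```python
import math
import random

RADIUS = 3.0  # assumed source-to-rotation-centre distance (arbitrary units)

def line_of_ray(beta, gamma):
    """Unoriented line sampled by the ray at source angle beta and fan
    angle gamma: direction angle theta and signed distance t from the
    rotation centre."""
    return beta + gamma, RADIUS * math.sin(gamma)

def projection(beta, gamma):
    """Stand-in sinogram value: any function of the unoriented line only
    (here a smooth function of the invariants t*cos(theta), t*sin(theta),
    which are unchanged when theta -> theta + pi and t -> -t)."""
    theta, t = line_of_ray(beta, gamma)
    return math.exp(-t * t) + t * math.cos(theta) + t * math.sin(theta)

# conjugate-ray identity: p(beta, gamma) == p(beta + pi + 2*gamma, -gamma)
rng = random.Random(0)
max_err = 0.0
for _ in range(1000):
    beta = rng.uniform(0.0, 2.0 * math.pi)
    gamma = rng.uniform(-0.4, 0.4)  # fan half-angle of about 23 degrees
    conj = projection(beta + math.pi + 2.0 * gamma, -gamma)
    max_err = max(max_err, abs(projection(beta, gamma) - conj))
```

In a real short-scan workflow the same identity is applied on a discrete (β, γ) grid, with interpolation where the conjugate view falls between sampled source angles.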

  11. Signal formation processes in Micromegas detectors and quality control for large size detector construction for the ATLAS new small wheel

    Energy Technology Data Exchange (ETDEWEB)

    Kuger, Fabian

    2017-07-31

    and production feasibility resulted in the selection of the proposed mesh by the NSW community and its full-scale industrial manufacturing. The successful completion of both tasks marked important milestones towards the construction of large-size Micromegas detectors, clearing the path for NSW series production.

  12. Signal formation processes in Micromegas detectors and quality control for large size detector construction for the ATLAS new small wheel

    International Nuclear Information System (INIS)

    Kuger, Fabian

    2017-01-01

    and production feasibility resulted in the selection of the proposed mesh by the NSW community and its full-scale industrial manufacturing. The successful completion of both tasks marked important milestones towards the construction of large-size Micromegas detectors, clearing the path for NSW series production.

  13. Nutrition screening tools: does one size fit all? A systematic review of screening tools for the hospital setting.

    Science.gov (United States)

    van Bokhorst-de van der Schueren, Marian A E; Guaitoli, Patrícia Realino; Jansma, Elise P; de Vet, Henrica C W

    2014-02-01

    Numerous nutrition screening tools for the hospital setting have been developed. The aim of this systematic review is to study the construct or criterion validity and the predictive validity of nutrition screening tools for the general hospital setting. A systematic review of English, French, German, Spanish, Portuguese and Dutch articles identified via MEDLINE, Cinahl and EMBASE (from inception to 2 February 2012). Additional studies were identified by checking the reference lists of identified manuscripts. Search terms included key words for malnutrition, screening or assessment instruments, and terms for the hospital setting and adults. Data were extracted independently by 2 authors. Only studies expressing the (construct, criterion or predictive) validity of a tool were included. 83 studies (32 screening tools) were identified: 42 studies on construct or criterion validity versus a reference method and 51 studies on predictive validity for outcomes (i.e. length of stay, mortality or complications). None of the tools performed consistently well in establishing the patients' nutritional status. For the elderly, MNA performed fair to good; for adults, MUST performed fair to good. SGA, NRS-2002 and MUST performed well in predicting outcome in approximately half of the studies reviewed in adults, but not in older patients. No single screening or assessment tool is capable of both adequate nutrition screening and prediction of poor nutrition-related outcomes. Development of new tools seems redundant and will most probably not lead to new insights. New studies comparing different tools within one patient population are required. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.

  14. Contribution of large-sized primary sensory neuronal sensitization to mechanical allodynia by upregulation of hyperpolarization-activated cyclic nucleotide gated channels via cyclooxygenase 1 cascade.

    Science.gov (United States)

    Sun, Wei; Yang, Fei; Wang, Yan; Fu, Han; Yang, Yan; Li, Chun-Li; Wang, Xiao-Liang; Lin, Qing; Chen, Jun

    2017-02-01

    Under physiological conditions, small- and medium-sized dorsal root ganglion (DRG) neurons are believed to mediate nociceptive behavioral responses to painful stimuli. However, it has recently been found that a number of large-sized neurons are also involved in nociceptive transmission under neuropathic conditions. Nonetheless, the underlying mechanisms by which large-sized DRG neurons mediate nociception are poorly understood. In the present study, the role of large-sized neurons in bee venom (BV)-induced mechanical allodynia and the underlying mechanisms were investigated. Behaviorally, it was found that mechanical allodynia was still evoked by BV injection in rats in which the transient receptor potential vanilloid 1-positive DRG neurons had been chemically deleted. Electrophysiologically, in vitro patch clamp recordings of large-sized neurons showed hyperexcitability in these neurons. Interestingly, the firing pattern of these neurons was changed from phasic to tonic under the BV-inflamed state. It has been suggested that hyperpolarization-activated cyclic nucleotide gated channels (HCN) expressed in large-sized DRG neurons contribute importantly to repetitive firing. So we examined the roles of HCNs in BV-induced mechanical allodynia. Consistent with the overexpression of HCN1/2 detected by immunofluorescence, the HCN-mediated hyperpolarization-activated cation current (I_h) was significantly increased in the BV-treated samples. Pharmacological experiments demonstrated that the hyperexcitability and upregulation of I_h in large-sized neurons were mediated by the cyclooxygenase-1 (COX-1)-prostaglandin E2 pathway. This is evident from the fact that a COX-1 inhibitor significantly attenuated the BV-induced mechanical allodynia. These results suggest that BV can excite large-sized DRG neurons at least in part by increasing I_h through activation of COX-1. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. THE INFLUENCE OF NET PROFIT, OPERATING CASH FLOW, INVESTMENT OPPORTUNITY SET AND FIRM SIZE ON CASH DIVIDENDS (A Case Study of Manufacturing Companies Listed on the Indonesia Stock Exchange, 2010-2012)

    Directory of Open Access Journals (Sweden)

    Luluk Muhimatul Ifada

    2014-12-01

    Full Text Available This study aimed to investigate the influence of net profit, operating cash flow, investment opportunity set, and firm size on cash dividends. The sample of this research consists of manufacturing companies listed on the Indonesia Stock Exchange (BEI) in the period 2010-2012, as published at www.idx.co.id and listed in the Indonesia Capital Market Directory (ICMD). There are 28 companies that meet the specified criteria. The analysis method is multiple regression analysis with a significance level of 5%. The conclusions of this research are based on the t-statistic results. The results proved that net profit has a significantly positive influence on cash dividends, and operating cash flow has a significantly positive influence on cash dividends. Investment opportunity set has no significant influence on cash dividends and is negatively correlated with them. Firm size has no significant influence on cash dividends but is positively correlated with them.

  16. Patient data and patient rights: Swiss healthcare stakeholders' ethical awareness regarding large patient data sets - a qualitative study.

    Science.gov (United States)

    Mouton Dorey, Corine; Baumann, Holger; Biller-Andorno, Nikola

    2018-03-07

    There is a growing interest in aggregating more biomedical and patient data into large health data sets for research and public benefit. However, collecting and processing patient data raises new ethical issues regarding patients' rights, social justice and trust in public institutions. The aim of this empirical study is to gain an in-depth understanding of the awareness of possible ethical risks and corresponding obligations among those who are involved in projects using patient data, i.e. healthcare professionals, regulators and policy makers. We used a qualitative design to examine Swiss healthcare stakeholders' experiences and perceptions of ethical challenges with regard to patient data in real-life settings where clinical registries are sponsored, created and/or used. Semi-structured interviews were carried out with 22 participants (11 physicians, 7 policy-makers, 4 ethics committee members) between July 2014 and January 2015. The interviews were audio-recorded, transcribed, coded and analysed using a thematic method derived from Grounded Theory. All interviewees were concerned as a matter of priority with the need for legal and operating norms for the collection and use of data, whereas less interest was shown in issues regarding patient agency, the need for reciprocity, and shared governance in the management and use of clinical registries' patient data. This observed asymmetry highlights a possible tension between public and research interests on the one hand, and the recognition of patients' rights and citizens' involvement on the other. Advocating further health-related data sharing on the grounds of research and public interest, without due regard for the perspective of patients and donors, could run the risk of fostering distrust towards healthcare data collections. Ultimately, this could diminish the expected social benefits. However, rather than setting patient rights against public interest, new ethical approaches could strengthen both

  17. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic.

    Science.gov (United States)

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.
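
As a toy illustration of how the maximum-cluster-size parameter bounds the candidate set of a scan statistic, the following sketch (ours, not the paper's code; the region counts are made up) scans contiguous 1D windows using Kulldorff's Poisson log-likelihood ratio:

```python
import math

def poisson_llr(c, e, total):
    """Kulldorff's Poisson log-likelihood ratio for a window with c
    observed and e expected cases out of `total` cases overall
    (zero when the window shows no excess)."""
    if c <= e:
        return 0.0
    llr = c * math.log(c / e)
    if total - c > 0:
        llr += (total - c) * math.log((total - c) / (total - e))
    return llr

def scan_1d(counts, max_size):
    """Scan every contiguous window of up to max_size regions; the
    maximum-cluster-size parameter bounds the candidate windows."""
    total = sum(counts)
    expected_per_region = total / len(counts)  # uniform null model
    best_llr, best_window = 0.0, None
    for start in range(len(counts)):
        c = 0
        for width in range(1, max_size + 1):
            if start + width > len(counts):
                break
            c += counts[start + width - 1]
            llr = poisson_llr(c, width * expected_per_region, total)
            if llr > best_llr:
                best_llr, best_window = llr, (start, width)
    return best_llr, best_window

# made-up counts with an elevated stretch at regions 3-4
counts = [2, 3, 2, 12, 14, 3, 2, 2]
best_llr, best_window = scan_1d(counts, max_size=3)
```

Measures such as MCS-P then compare how well different choices of `max_size` perform over many data sets, rather than evaluating a single scan as here.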

  18. How Well Does Fracture Set Characterization Reduce Uncertainty in Capture Zone Size for Wells Situated in Sedimentary Bedrock Aquifers?

    Science.gov (United States)

    West, A. C.; Novakowski, K. S.

    2005-12-01

    beyond a threshold concentration within the specified time. Aquifers are simulated by drawing the random spacings and apertures from specified distributions. Predictions are made of capture zone size assuming various degrees of knowledge of these distributions, with the parameters of the horizontal fractures being estimated using simulated hydraulic tests and a maximum likelihood estimator. The uncertainty is evaluated by calculating the variance in the capture zone size estimated in multiple realizations. The results show that despite good strategies for estimating the parameters of the horizontal fractures, the uncertainty in capture zone size is enormous, mostly due to the lack of available information on vertical fractures. Also, at realistic distances (less than ten kilometers) and using realistic transmissivity distributions for the horizontal fractures, the uptake of solute from fractures into the matrix cannot be relied upon to protect the production well from contamination.

  19. Particle size distributions of lead measured in battery manufacturing and secondary smelter facilities and implications in setting workplace lead exposure limits.

    Science.gov (United States)

    Petito Boyce, Catherine; Sax, Sonja N; Cohen, Joel M

    2017-08-01

    Inhalation plays an important role in exposures to lead in airborne particulate matter in occupational settings, and particle size determines where and how much of the airborne lead is deposited in the respiratory tract and how much is subsequently absorbed into the body. Although some occupational airborne lead particle size data have been published, limited information is available reflecting current workplace conditions in the U.S. To address this data gap, the Battery Council International (BCI) conducted workplace monitoring studies at nine lead acid battery manufacturing facilities (BMFs) and five secondary smelter facilities (SSFs) across the U.S. This article presents the results of the BCI studies, focusing on the particle size distributions calculated from Personal Marple Impactor sampling data and the particle deposition estimates in each of the three major respiratory tract regions derived using the Multiple-Path Particle Dosimetry model. The BCI data showed the presence of predominantly larger-sized particles in the work environments evaluated, with average mass median aerodynamic diameters (MMADs) ranging from 21-32 µm for the three BMF job categories and from 15-25 µm for the five SSF job categories tested. The BCI data also indicated that the percentage of lead mass measured at the sampled facilities in the submicron range was generally small. The estimated average percentages of lead mass in the submicron range for the tested job categories ranged from 0.8-3.3% at the BMFs and from 0.44-6.1% at the SSFs. Variability was observed in the particle size distributions across job categories and facilities, and sensitivity analyses were conducted to explore this variability. The BCI results were compared with results reported in the scientific literature. Screening-level analyses were also conducted to explore the overall degree of lead absorption potentially associated with the observed particle size distributions and to identify key issues

  20. Spatial fingerprints of community structure in human interaction network for an extensive set of large-scale regions.

    Directory of Open Access Journals (Sweden)

    Zsófia Kallus

    Full Text Available Human interaction networks inferred from country-wide telephone activity recordings were recently used to redraw political maps by projecting their topological partitions into geographical space. The results showed remarkable spatial cohesiveness of the network communities and a significant overlap between the redrawn and the administrative borders. Here we present a similar analysis based on one of the most popular online social networks represented by the ties between more than 5.8 million of its geo-located users. The worldwide coverage of their measured activity allowed us to analyze the large-scale regional subgraphs of entire continents and an extensive set of examples for single countries. We present results for North and South America, Europe and Asia. In our analysis we used the well-established method of modularity clustering after an aggregation of the individual links into a weighted graph connecting equal-area geographical pixels. Our results show fingerprints of both of the opposing forces of dividing local conflicts and of uniting cross-cultural trends of globalization.
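
The modularity score that the clustering method above maximizes can be computed directly. A small self-contained sketch (our illustration; the toy "pixel" graph and its weights are hypothetical) evaluates Newman's Q for a weighted graph and shows that the community-aligned split scores higher than merging everything into one community:

```python
from itertools import combinations

def modularity(edges, partition):
    """Newman modularity Q for an undirected weighted graph given as
    {(u, v): weight} and a node -> community assignment:
    Q = (1/2m) * sum_ij [A_ij - k_i k_j / 2m] * delta(c_i, c_j)."""
    two_m = 2.0 * sum(edges.values())
    degree = {}
    for (u, v), w in edges.items():
        degree[u] = degree.get(u, 0.0) + w
        degree[v] = degree.get(v, 0.0) + w
    q = 0.0
    for u, v in combinations(degree, 2):
        if partition[u] != partition[v]:
            continue
        w = edges.get((u, v), edges.get((v, u), 0.0))
        q += 2.0 * (w - degree[u] * degree[v] / two_m)  # pairs (u,v) and (v,u)
    for u in degree:  # diagonal terms: A_uu = 0, null model still applies
        q -= degree[u] ** 2 / two_m
    return q / two_m

# toy 'geographical pixel' graph: two tight clusters, one weak cross-link
edges = {('a', 'b'): 3, ('a', 'c'): 3, ('b', 'c'): 3,
         ('d', 'e'): 3, ('d', 'f'): 3, ('e', 'f'): 3,
         ('c', 'd'): 1}
split = {'a': 0, 'b': 0, 'c': 0, 'd': 1, 'e': 1, 'f': 1}
merged = {n: 0 for n in split}
q_split, q_merged = modularity(edges, split), modularity(edges, merged)
```

Modularity clustering searches over partitions for the one maximizing Q; at country scale this is done with fast heuristics rather than the exhaustive pairwise sum used here.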

  1. Spatial fingerprints of community structure in human interaction network for an extensive set of large-scale regions.

    Science.gov (United States)

    Kallus, Zsófia; Barankai, Norbert; Szüle, János; Vattay, Gábor

    2015-01-01

    Human interaction networks inferred from country-wide telephone activity recordings were recently used to redraw political maps by projecting their topological partitions into geographical space. The results showed remarkable spatial cohesiveness of the network communities and a significant overlap between the redrawn and the administrative borders. Here we present a similar analysis based on one of the most popular online social networks represented by the ties between more than 5.8 million of its geo-located users. The worldwide coverage of their measured activity allowed us to analyze the large-scale regional subgraphs of entire continents and an extensive set of examples for single countries. We present results for North and South America, Europe and Asia. In our analysis we used the well-established method of modularity clustering after an aggregation of the individual links into a weighted graph connecting equal-area geographical pixels. Our results show fingerprints of both of the opposing forces of dividing local conflicts and of uniting cross-cultural trends of globalization.

  2. Assessing the impact of large-scale computing on the size and complexity of first-principles electromagnetic models

    International Nuclear Information System (INIS)

    Miller, E.K.

    1990-01-01

    There is a growing need to determine the electromagnetic performance of increasingly complex systems at ever higher frequencies. The ideal approach would be some appropriate combination of measurement, analysis, and computation so that system design and assessment can be achieved to a needed degree of accuracy at some acceptable cost. Both measurement and computation benefit from the continuing growth in computer power that, since the early 1950s, has increased by a factor of more than a million in speed and storage. For example, a CRAY2 has an effective throughput (not the clock rate) of about 10^11 floating-point operations (FLOPs) per hour compared with the approximate 10^5 provided by the UNIVAC-1. The purpose of this discussion is to illustrate the computational complexity of modeling large (in wavelengths) electromagnetic problems. In particular the author makes the point that simply relying on faster computers for increasing the size and complexity of problems that can be modeled is less effective than might be anticipated from this raw increase in computer throughput. He suggests that rather than depending on faster computers alone, various analytical and numerical alternatives need development for reducing the overall FLOP count required to acquire the information desired. One approach is to decrease the operation count of the basic model computation itself, by reducing the order of the frequency dependence of the various numerical operations or their multiplying coefficients. Another is to decrease the number of model evaluations that are needed, an example being the number of frequency samples required to define a wideband response, by using an auxiliary model of the expected behavior. 11 refs., 5 figs., 2 tabs

  3. Upper limits to americium concentration in large sized sodium-cooled fast reactors loaded with metallic fuel

    International Nuclear Information System (INIS)

    Zhang, Youpeng; Wallenius, Janne

    2014-01-01

    Highlights: • The americium transmutation capability of the Integral Fast Reactor was investigated. • The impact of americium introduction was parameterized by applying SERPENT Monte Carlo calculations. • Higher americium content in metallic fuel leads to a power penalty, preserving consistent safety margins. - Abstract: Transient analyses of a large sized sodium-cooled reactor loaded with metallic fuel modified by different fractions of americium have been performed. Unprotected loss-of-offsite power, unprotected loss-of-flow and unprotected transient-over-power accidents were simulated with the SAS4A/SASSYS code based on the geometrical model of an IFR with a power rating of 2500 MWth, using safety parameters obtained with the SERPENT Monte Carlo code. The Ti-modified austenitic D9 steel, having higher creep rupture strength, was considered as the cladding and structural material apart from the ferritic/martensitic HT9 steel. For the reference case of U–12Pu–1Am–10Zr fuel at EOEC, the margin to fuel melt during a design basis condition UTOP is about 50 K for a maximum linear rating of 30 kW/m. In order to maintain a margin of 50 K to fuel failure, the linear power rating has to be reduced by ∼3% and 6% for 2 wt.% and 3 wt.% Am introduction into the fuel respectively. Hence, an Am concentration of 2–3 wt.% in the fuel would lead to a power penalty of 3–6%, permitting a consumption rate of 3.0–5.1 kg Am/TWh(th). This consumption rate is significantly higher than the one previously obtained for oxide fuelled SFRs

  4. Analysis of Large Seeds from Three Different Medicago truncatula Ecotypes Reveals a Potential Role of Hormonal Balance in Final Size Determination of Legume Grains

    Directory of Open Access Journals (Sweden)

    Kaustav Bandyopadhyay

    2016-09-01

    Full Text Available Legume seeds are important as a protein and oil source for the human diet. Understanding how their final seed size is determined is crucial for improving crop yield. In this study, we analyzed seed development in three accessions of the model legume Medicago truncatula displaying contrasting seed sizes. By comparing two large-seed accessions to the reference accession A17, we described mechanisms associated with large seed size determination and potential factors modulating final seed size. We observed that early events during embryogenesis had a major impact on final seed size, and that delayed heart-stage embryo development resulted in large seeds. We also observed that the difference in seed growth rate was mainly due to a difference in embryo cell number, implicating a role for the cell division rate. The large-seed accessions could be explained by an extended period of cell division due to a longer embryogenesis phase. Consistent with our observations and recent reports, the auxin (IAA) to abscisic acid (ABA) ratio could be a key determinant of cell division regulation at the end of embryogenesis. Overall, our study highlights that the timing of events occurring during early seed development plays a decisive role in final seed size determination.

  5. Ecosystem size structure response to 21st century climate projection: large fish abundance decreases in the central North Pacific and increases in the California Current.

    Science.gov (United States)

    Woodworth-Jefcoats, Phoebe A; Polovina, Jeffrey J; Dunne, John P; Blanchard, Julia L

    2013-03-01

    Output from an earth system model is paired with a size-based food web model to investigate the effects of climate change on the abundance of large fish over the 21st century. The earth system model, forced by the Intergovernmental Panel on Climate Change (IPCC) Special report on emission scenario A2, combines a coupled climate model with a biogeochemical model including major nutrients, three phytoplankton functional groups, and zooplankton grazing. The size-based food web model includes linkages between two size-structured pelagic communities: primary producers and consumers. Our investigation focuses on seven sites in the North Pacific, each highlighting a specific aspect of projected climate change, and includes top-down ecosystem depletion through fishing. We project declines in large fish abundance ranging from 0 to 75.8% in the central North Pacific and increases of up to 43.0% in the California Current (CC) region over the 21st century in response to change in phytoplankton size structure and direct physiological effects. We find that fish abundance is especially sensitive to projected changes in large phytoplankton density and our model projects changes in the abundance of large fish being of the same order of magnitude as changes in the abundance of large phytoplankton. Thus, studies that address only climate-induced impacts to primary production without including changes to phytoplankton size structure may not adequately project ecosystem responses. © 2012 Blackwell Publishing Ltd.

  6. Setting up fuel supply strategies for large-scale bio-energy projects using agricultural and forest residues. A methodology for developing countries

    International Nuclear Information System (INIS)

    Junginger, M.

    2000-08-01

    The objective of this paper is to develop a coherent methodology for setting up fuel supply strategies for large-scale biomass-conversion units. This method explicitly takes into account risks and uncertainties regarding availability and costs in relation to time. This paper aims at providing general guidelines, which are not country-specific. These guidelines cannot provide 'perfect fit' solutions, but aim to give general help to overcome barriers and to set up supply strategies. It will mainly focus on residues from the agricultural and forestry sector. This study focuses on electricity production, or combined electricity and heat production (CHP), with plant scales between 10 and 40 MWe. This range is chosen due to economies of scale. In large-scale plants the benefits of increased efficiency outweigh increased transportation costs, allowing a lower price per kWh, which in turn may allow higher biomass costs. However, fuel-supply risks tend to get higher with increasing plant size, which makes it more important to assess them for large(r) conversion plants. Although the methodology does not focus on a specific conversion technology, it should be stressed that the technology must be able to handle a wide variety of biomass fuels with different characteristics, because many biomass residues are not available year-round and various fuels are needed for a constant supply. The methodology allows for comparing different technologies (with known investment and operational and maintenance costs from literature) and evaluation of different fuel supply scenarios. In order to demonstrate the methodology, a case study was carried out for the north-eastern part of Thailand (Isaan), an agricultural region. The research was conducted in collaboration with the Regional Wood Energy Development Programme in Asia (RWEDP), a project of the UN Food and Agricultural Organization (FAO) in Bangkok, Thailand. In Section 2 of this paper the methodology will be presented. In Section 3 the economic

  7. Empirical Mining of Large Data Sets Already Helps to Solve Practical Ecological Problems; A Panoply of Working Examples (Invited)

    Science.gov (United States)

    Hargrove, W. W.; Hoffman, F. M.; Kumar, J.; Spruce, J.; Norman, S. P.

    2013-12-01

    Here we present diverse examples where empirical mining and statistical analysis of large data sets have already been shown to be useful for a wide variety of practical decision-making problems within the realm of large-scale ecology. Because a full understanding and appreciation of particular ecological phenomena are possible only after hypothesis-directed research regarding the existence and nature of those processes, some ecologists may feel that purely empirical data harvesting may represent a less-than-satisfactory approach. Restricting ourselves exclusively to process-driven approaches, however, may actually slow progress, particularly for more complex or subtle ecological processes. We may not be able to afford the delays caused by such directed approaches. Rather than attempting to formulate and ask every relevant question correctly, empirical methods allow trends, relationships and associations to emerge freely from the data themselves, unencumbered by a priori theories, ideas and prejudices that have been imposed upon them. Although they cannot directly demonstrate causality, empirical methods can be extremely efficient at uncovering strong correlations with intermediate "linking" variables. In practice, these correlative structures and linking variables, once identified, may provide sufficient predictive power to be useful themselves. Such correlation "shadows" of causation can be harnessed by, e.g., Bayesian Belief Nets, which bias ecological management decisions, made with incomplete information, toward favorable outcomes. Empirical data-harvesting also generates a myriad of testable hypotheses regarding processes, some of which may even be correct. Statistical regionalizations based on quantitative multivariate similarity have lent insights into carbon eddy-flux direction and magnitude, wildfire biophysical conditions, phenological ecoregions useful for vegetation type mapping and monitoring, forest disease risk maps (e.g., sudden oak

  8. Automatic reduction of large X-ray fluorescence data-sets applied to XAS and mapping experiments

    International Nuclear Information System (INIS)

    Martin Montoya, Ligia Andrea

    2017-02-01

    In this thesis two automatic methods for the reduction of large fluorescence data sets are presented. The first method is proposed in the framework of BioXAS experiments. The challenge of this experiment is to deal with samples in ultra-dilute concentrations where the signal-to-background ratio is low. The experiment is performed in fluorescence-mode X-ray absorption spectroscopy with a 100-pixel high-purity Ge detector. The first step consists of reducing 100 fluorescence spectra into one. In this step, outliers are identified by means of the shot noise. Furthermore, a fitting routine whose model includes Gaussian functions for the fluorescence lines and exponentially modified Gaussian (EMG) functions for the scattering lines (with long tails at lower energies) is proposed to extract the line of interest from the fluorescence spectrum. Additionally, the fitting model has an EMG function for each scattering line (elastic and inelastic) at incident energies where they start to be discerned. At these energies, the data reduction is done per detector column to include the angular dependence of scattering. In the second part of this thesis, an automatic method for text separation on palimpsests is presented. Scanning X-ray fluorescence is performed on the parchment, where a spectrum per scanned point is collected. Within this method, each spectrum is treated as a vector forming a basis which is to be transformed so that the basis vectors are the spectra of each ink. Principal Component Analysis is employed as an initial guess of the sought basis. This basis is further transformed by means of an optimization routine that maximizes the contrast and minimizes the non-negative entries in the spectra. The method is tested on original and self-made palimpsests.
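The palimpsest method's starting point, treating each scanned point's spectrum as a vector and using Principal Component Analysis as an initial guess of the ink basis, can be sketched on synthetic data (the emission-line positions, mixing weights, and noise level below are invented for illustration; the contrast/non-negativity optimization that refines the basis is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for scanning-XRF data: two "inks" with distinct
# fluorescence spectra (Gaussian emission lines), mixed per scanned point.
energy = np.linspace(0.0, 10.0, 200)

def emission_line(center, width=0.3):
    return np.exp(-0.5 * ((energy - center) / width) ** 2)

ink_a = emission_line(3.0) + 0.5 * emission_line(6.5)  # assumed undertext ink
ink_b = emission_line(4.5) + 0.8 * emission_line(8.0)  # assumed overtext ink

# Each scanned point yields a non-negative mixture of the two inks plus noise.
weights = rng.uniform(0.0, 1.0, size=(500, 2))
spectra = weights @ np.vstack([ink_a, ink_b])
spectra += rng.normal(0.0, 0.01, size=spectra.shape)

# PCA via SVD of the mean-centred data: the leading components span the same
# subspace as the ink spectra and serve as the initial basis that would then
# be refined by maximizing contrast and penalizing negative entries.
centred = spectra - spectra.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
basis_guess = vt[:2]  # initial guess of the two ink spectra (up to sign/mixing)
print(f"variance explained by 2 components: {explained[:2].sum():.3f}")
```

With essentially rank-2 data, nearly all variance is captured by the first two components, which is what justifies using PCA only as a starting point: the components themselves are orthogonal mixtures, not the physical ink spectra.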

  9. Patterns of Limnohabitans Microdiversity across a Large Set of Freshwater Habitats as Revealed by Reverse Line Blot Hybridization

    Science.gov (United States)

    Jezbera, Jan; Jezberová, Jitka; Kasalický, Vojtěch; Šimek, Karel; Hahn, Martin W.

    2013-01-01

    Among abundant freshwater Betaproteobacteria, only a few groups are considered to be of central ecological importance. One of them is the well-studied genus Limnohabitans and mainly its R-BT subcluster, investigated previously mainly by fluorescence in situ hybridization methods. Based on sequences from a large Limnohabitans culture collection, we designed 18 RLBH (Reverse Line Blot Hybridization) probes specific for different groups within the genus Limnohabitans by targeting diagnostic sequences on their 16S–23S rRNA ITS regions. The developed probes together covered 92% of the available isolates. This set of probes was applied to environmental DNA originating from 161 different European standing freshwater habitats to reveal the microdiversity (intra-genus) patterns of the Limnohabitans genus along a pH gradient. Investigated habitats differed in various physicochemical parameters, and represented a very broad range of standing freshwater habitats. The Limnohabitans microdiversity, assessed as the number of RLBH-defined groups detected, increased significantly along the gradient of rising pH of habitats. Fourteen out of 18 probes returned detection signals that allowed predictions on the distribution of distinct Limnohabitans groups. Most probe-defined Limnohabitans groups showed preferences for alkaline habitats, one for acidic, and some seemed to lack preferences. Complete niche separation was indicated for some of the probe-targeted groups. Moreover, bimodal distributions observed for some groups of Limnohabitans suggested further niche separation between genotypes within the same probe-defined group. Statistical analyses suggested that different environmental parameters such as pH, conductivity, oxygen and altitude influenced the distribution of distinct groups. The results of our study do not support the hypothesis that the wide ecological distribution of Limnohabitans bacteria in standing freshwater habitats results from generalist adaptations of these bacteria


  11. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
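The joint extrapolation can be sketched as follows, assuming (as a simplification of the paper's scaling analysis) that the leading corrections to a large-deviation estimator ψ decay as 1/t in simulation time and 1/N in population size; the numerical values below are synthetic stand-ins for measured cloning-algorithm estimates:

```python
import numpy as np

# Synthetic estimator values obeying the assumed scaling ansatz
# psi(t, N) ≈ psi_inf + a/t + b/N, as if measured from cloning runs.
psi_inf_true, a_true, b_true = -0.25, 1.7, 3.4
times = np.array([50.0, 100.0, 200.0, 400.0])
sizes = np.array([100.0, 200.0, 400.0, 800.0])

t_grid, n_grid = np.meshgrid(times, sizes, indexing="ij")
rng = np.random.default_rng(1)
psi = psi_inf_true + a_true / t_grid + b_true / n_grid
psi += rng.normal(0.0, 1e-4, size=psi.shape)  # small statistical noise

# Least-squares fit of psi against [1, 1/t, 1/N]; the intercept is the
# joint infinite-time, infinite-size extrapolation of the estimator.
design = np.column_stack(
    [np.ones(psi.size), 1.0 / t_grid.ravel(), 1.0 / n_grid.ravel()]
)
(psi_inf, a_fit, b_fit), *_ = np.linalg.lstsq(design, psi.ravel(), rcond=None)
print(f"extrapolated psi_inf = {psi_inf:.4f}")
```

The intercept of the fit is the extrapolated estimator; in practice one would first check that the measured corrections are indeed linear in 1/t and 1/N before trusting the extrapolation.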


  13. HOW THE ROCKY FLATS ENVIRONMENTAL TECHNOLOGY SITE DEVELOPED A NEW WASTE PACKAGE USING A POLYUREA COATING THAT IS SAFELY AND ECONOMICALLY ELIMINATING SIZE REDUCTION OF LARGE ITEMS

    International Nuclear Information System (INIS)

    Dorr, Kent A.; Hogue, Richard S.; Kimokeo, Margaret K.

    2003-01-01

    One of the major challenges involved in closing the Rocky Flats Environmental Technology Site (RFETS) is the disposal of extremely large pieces of contaminated production equipment and building debris. Past practice has been to size reduce the equipment into pieces small enough to fit into approved, standard waste containers. Size reducing this equipment is extremely expensive, and exposes workers to high-risk tasks, including significant industrial, chemical, and radiological hazards. RFETS has developed a waste package using a Polyurea coating for shipping large contaminated objects. The cost and schedule savings have been significant

  14. The definition of basic parameters of the set of small-sized equipment for preparation of dry mortar for various applications

    Directory of Open Access Journals (Sweden)

    Emelyanova Inga

    2017-01-01

    Full Text Available Based on an information search and a review of the scientific literature, unsolved issues in the preparation of dry construction mixtures on the construction site have been identified. The designs of existing technological complexes for the production of dry construction mixtures are considered, and their main drawbacks for use on a construction site are identified. On the basis of this research, designs of technological sets of small-sized equipment for preparing dry construction mixtures on the construction site are proposed. The proposed technological kits are based on new designs of concrete mixers operating in cascade mode. A technique for calculating the main parameters of the technological sets of equipment is proposed, depending on the base machine of the kit.

  15. Brief report: large individual variation in outcomes of autistic children receiving low-intensity behavioral interventions in community settings.

    Science.gov (United States)

    Kamio, Yoko; Haraguchi, Hideyuki; Miyake, Atsuko; Hiraiwa, Mikio

    2015-01-01

    Despite widespread awareness of the necessity of early intervention for children with autism spectrum disorders (ASDs), evidence is still limited, in part due to the complex nature of ASDs. This exploratory study aimed to examine the change across time in young children with autism and their mothers, who received less intensive early interventions with and without applied behavior analysis (ABA) methods in community settings in Japan. Eighteen children with autism (mean age: 45.7 months; range: 28-64 months) received ABA-based treatment (a median of 3.5 hours per week; an interquartile range of 2-5.6 hours per week) and/or eclectic treatment-as-usual (TAU) (a median of 3.1 hours per week; an interquartile range of 2-5.6 hours per week). Children's outcomes were severity of autistic symptoms, cognitive functioning, and internalizing and externalizing behavior after 6 months (a median of 192 days; an interquartile range of 178-206 days). In addition, maternal parenting stress at 6-month follow-up and maternal depression at 1.5-year follow-up (a median of 512 days; an interquartile range of 358-545 days) were also examined. Large individual variations were observed for a broad range of children's and mothers' outcomes. Neither ABA nor TAU hours per week were significantly associated with an improvement in core autistic symptoms. A significant improvement was observed only for internalizing problems, irrespective of the type, intensity or monthly cost of treatment received. Higher ABA cost per month (a median of 1,188 USD; an interquartile range of 538-1,888 USD) was associated with less improvement in language-social DQ (a median of 9; an interquartile range of −6.75 to 23.75). To determine an optimal program for each child with ASD in areas with poor ASD resources, further controlled studies are needed that assess a broad range of predictive and outcome variables focusing on both individual characteristics and treatment components.

  16. Group heterogeneity increases the risks of large group size: a longitudinal study of productivity in research groups.

    Science.gov (United States)

    Cummings, Jonathon N; Kiesler, Sara; Bosagh Zadeh, Reza; Balakrishnan, Aruna D

    2013-06-01

    Heterogeneous groups are valuable, but differences among members can weaken group identification. Weak group identification may be especially problematic in larger groups, which, in contrast with smaller groups, require more attention to motivating members and coordinating their tasks. We hypothesized that as groups increase in size, productivity would decrease with greater heterogeneity. We studied the longitudinal productivity of 549 research groups varying in disciplinary heterogeneity, institutional heterogeneity, and size. We examined their publication and citation productivity before their projects started and 5 to 9 years later. Larger groups were more productive than smaller groups, but their marginal productivity declined as their heterogeneity increased, either because their members belonged to more disciplines or to more institutions. These results provide evidence that group heterogeneity moderates the effects of group size, and they suggest that desirable diversity in groups may be better leveraged in smaller, more cohesive units.

  17. The underestimated role of temperature-oxygen relationship in large-scale studies on size-to-temperature response.

    Science.gov (United States)

    Walczyńska, Aleksandra; Sobczyk, Łukasz

    2017-09-01

    The observation that ectotherm size decreases with increasing temperature (temperature-size rule; TSR) has been widely supported. This phenomenon intrigues researchers because neither its adaptive role nor the conditions under which it is realized are well defined. In light of recent theoretical and empirical studies, oxygen availability is an important candidate for understanding the adaptive role behind TSR. However, this hypothesis is still undervalued in TSR studies at the geographical level. We reanalyzed previously published data about the TSR pattern in diatoms sampled from Icelandic geothermal streams, which concluded that diatoms were an exception to the TSR. Our goal was to incorporate oxygen as a factor in the analysis and to examine whether this approach would change the results. Specifically, we expected that the strength of size response to cold temperatures would be different than the strength of response to hot temperatures, where the oxygen limitation is strongest. By conducting a regression analysis for size response at the community level, we found that diatoms from cold, well-oxygenated streams showed no size-to-temperature response, those from intermediate temperature and oxygen conditions showed reverse TSR, and diatoms from warm, poorly oxygenated streams showed significant TSR. We also distinguished the roles of oxygen and nutrition in TSR. Oxygen is a driving factor, while nutrition is an important factor that should be controlled for. Our results show that if the geographical or global patterns of TSR are to be understood, oxygen should be included in the studies. This argument is important especially for predicting the size response of ectotherms facing climate warming.

  18. Chemical Characterization and Source Apportionment of Size Fractionated Atmospheric Aerosols, and, Evaluating Student Attitudes and Learning in Large Lecture General Chemistry Classes

    Science.gov (United States)

    Allen, Gregory Harold

    Chemical speciation of size-fractionated atmospheric aerosols was investigated using laser desorption time-of-flight mass spectrometry (LD TOF-MS), and source apportionment was carried out using carbon-14 accelerator mass spectrometry (14C AMS). Sample collection was carried out using the Davis Rotating-drum Unit for Monitoring impact analyzer in Davis, Colfax, and Yosemite, CA. Ambient atmospheric aerosols collected during the winters of 2010/11 and 2011/12 showed a significant difference in the types of compounds found in the small and large particle size ranges. The difference was due to the increased number of oxidized carbon species that were found in the small particle size ranges but not in the large ones. Overall, the ambient atmospheric aerosols collected during the winter in Davis, CA had an average fraction modern of F14C = 0.753 ± 0.006, indicating that the majority of the size-fractionated particles originated from biogenic sources. Samples collected during the King Fire in Colfax, CA were used to determine the contribution of biomass burning (wildfire) aerosols. Factor analysis was used to reduce the ions found in the LD TOF-MS analysis of the King Fire samples. The final factor analysis generated a total of four factors that explained an overall 83% of the variance in the data set. Two of the factors correlated heavily with increased smoke events during the sample period. The increased smoke events produced a large number of highly oxidized organic aerosols (OOA2) and aromatic compounds that are indicative of biomass burning organic aerosols (WBOA). The signal intensities of the factors generated in the King Fire data were investigated in samples collected in Yosemite and Davis, CA to look at the impact of biomass burning on ambient atmospheric aerosols. In both comparison sample collections, the OOA2 and WBOA factors increased during biomass burning events located near the sampling sites. The correlation

  19. Earthquake resistance of cracked concrete embedded with large size rebars (D43 and D57) in nuclear containment

    Energy Technology Data Exchange (ETDEWEB)

    Kan, Y-C. [Chaoyang Univ. of Tech., Taichung, Taiwan (China); Pei, K-C. [Inst. of Nuclear Energy Research, Taiwan (China)

    2014-07-01

    The bond behavior of varying sizes of re-bar (D19, D32, D43 and D57) embedded in concrete under cyclic load in a pullout test was investigated through a series of experiments and monitored in real time by acoustic emission (AE). Parallel tests of specimens with an existing crack were also conducted to observe the behavior of cracked concrete subjected to cyclic load. The detailed acoustic information can be used for analyzing and comparing the effects of concrete with varying re-bar sizes. The results provide useful information for evaluating the safety of NPP RC structures subjected to cyclic load. (author)

  20. Quantification of population sizes of large herbivores and their long-term functional role in ecosystems using dung fungal spores

    NARCIS (Netherlands)

    Baker, Ambroise G.; Cornelissen, Perry; Bhagwat, Shonil A.; Vera, Frans M.W.; Willis, Katherine J.

    2016-01-01

    The relationship between large herbivore numbers and landscape cover over time is poorly understood. There are two schools of thought: one views large herbivores as relatively passive elements upon the landscape and the other as ecosystem engineers driving vegetation succession. The latter

  1. Superficially porous particles with 1000Å pores for large biomolecule high performance liquid chromatography and polymer size exclusion chromatography.

    Science.gov (United States)

    Wagner, Brian M; Schuster, Stephanie A; Boyes, Barry E; Shields, Taylor J; Miles, William L; Haynes, Mark J; Moran, Robert E; Kirkland, Joseph J; Schure, Mark R

    2017-03-17

    To maintain mass transport and column efficiency, solutes must have free access to particle pores so that they can interact with the stationary phase. To ensure this feature, HPLC separations should use particles whose pores are sufficiently large to accommodate the solute without restricted diffusion. This paper describes the design and properties of superficially porous (also called Fused-Core®, core-shell or porous-shell) particles with very large (1000Å) pores specifically developed for separating very large biomolecules and polymers. Separations of DNA fragments, monoclonal antibodies, large proteins and large polystyrene standards are used to illustrate the utility of these particles for efficient, high-resolution applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae)

    Czech Academy of Sciences Publication Activity Database

    Krahulcová, Anna; Trávníček, Pavel; Krahulec, František; Rejmánek, M.

    2017-01-01

    Vol. 119, No. 6 (2017), pp. 957-964. ISSN 0305-7364. Institutional support: RVO:67985939. Keywords: Aesculus * chromosome number * genome size * phylogeny * seed mass. Subject RIV: EF - Botanics. OBOR OECD: Plant sciences, botany. Impact factor: 4.041, year: 2016

  3. Autologous chondrocyte implantation: Is it likely to become a saviour of large-sized and full-thickness cartilage defect in young adult knee?

    Science.gov (United States)

    Zhang, Chi; Cai, You-Zhi; Lin, Xiang-Jin

    2016-05-01

    A literature review of the first-, second- and third-generation autologous chondrocyte implantation (ACI) techniques for the treatment of large-sized (>4 cm²) and full-thickness knee cartilage defects in young adults was conducted, examining the current literature on features, clinical scores, complications, magnetic resonance imaging (MRI) and histological outcomes, rehabilitation and cost-effectiveness. The review covered the main medical databases to evaluate the studies concerning ACI treatment of large-sized and full-thickness knee cartilage defects in young adults. The ACI technique has been shown to relieve symptoms and improve functional assessment in large-sized (>4 cm²) and full-thickness knee articular cartilage defects of young adults at short- and medium-term follow-up. In addition, low-level evidence demonstrated its efficacy and durability at long-term follow-up after implantation. Furthermore, MRI and histological evaluations provided evidence that the graft can return to nearly normal cartilage via ACI techniques. Clinical outcomes tend to be similar across the different ACI techniques, but the third-generation technique offers a simplified procedure, a lower complication rate and better graft quality. ACI, building on the experience of cell-based therapy and its high potential to regenerate hyaline-like tissue, represents a clinical advance in the treatment of large-sized and full-thickness knee cartilage defects. Level of evidence: IV.

  4. Evidence from a Large Sample on the Effects of Group Size and Decision-Making Time on Performance in a Marketing Simulation Game

    Science.gov (United States)

    Treen, Emily; Atanasova, Christina; Pitt, Leyland; Johnson, Michael

    2016-01-01

    Marketing instructors using simulation games as a way of inducing some realism into a marketing course are faced with many dilemmas. Two important quandaries are the optimal size of groups and how much of the students' time should ideally be devoted to the game. Using evidence from a very large sample of teams playing a simulation game, the study…

  5. Some problems raised by the operation of large nuclear turbo-generator sets. Automatic control system for steam turbo-generator units

    International Nuclear Information System (INIS)

    Cecconi, F.

    1976-01-01

    The design of an appropriate automatic system was found to be useful to improve the control of large size turbo-generator units so as to provide easy and efficient control and monitoring. The experience of the manufacturer of these turbo-generator units allowed a system well suited for this function to be designed [fr

  6. Do sex, body size and reproductive condition influence the thermal preferences of a large lizard? A study in Tupinambis merianae.

    Science.gov (United States)

    Cecchetto, Nicolas Rodolfo; Naretto, Sergio

    2015-10-01

    Body temperature is a key factor in physiological processes, influencing lizard performance, and life history traits are expected to generate variability in thermal preferences among individuals. Gender, body size and reproductive condition may impose specific requirements on preferred body temperatures. If these three factors entail different physiological functions and thermal requirements, then the preferred temperature may represent a compromise that optimizes these physiological functions. Therefore, the body temperatures that lizards select in a controlled environment may reflect a temperature that maximizes their physiological needs. The tegu lizard Tupinambis merianae is one of the largest lizards in South America and shows wide ontogenetic variation in body size as well as sexual dimorphism. In the present study we evaluated the intraspecific variability of thermal preferences in T. merianae. We determined the selected body temperature and the rate at which males and females attain their selected temperature, in relation to body size and reproductive condition. We also compared behavior in the thermal gradient between males and females and between reproductive conditions. Our study shows that T. merianae selected body temperatures within a narrow range in the laboratory thermal gradient, with 36.24±1.49°C being the preferred temperature. We observed no significant differences in thermal preferences between sexes, body sizes or reproductive conditions. Accordingly, we suggest that the evaluated categories of T. merianae have similar thermal requirements. Males showed higher heating rates than females, and reproductive females showed higher rates than non-reproductive ones. Moreover, males and reproductive females showed more dynamic behavior in the thermal gradient. Therefore, even though they achieve the same selected temperature, they do so differently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Refractory Inclusion Size Distribution and Fabric Measured in a Large Slab of the Allende CV3 Chondrite

    Science.gov (United States)

    Srinivasan, P.; Simon, Justin I.; Cuzzi, J. N.

    2013-01-01

    Aggregate textures of chondrites reflect the accretion of early-formed particles in the solar nebula. Explanations for the size and density variations of particle populations found among chondrites are debated. Differences could have arisen from formation in different locations in the nebula, and/or they could have been caused by a sorting process [1]. Many ideas on the cause of chondrule sorting have been proposed, including sorting by mass [2,3], by X-winds [4], by turbulent concentration [5], and by photophoresis [6]. However, few similar studies have been conducted for Ca-, Al-rich inclusions (CAIs). These particles are known to have formed early, and their distribution could attest to the early stages of Solar System (ESS) history. Unfortunately, CAIs are not as common in chondrites as chondrules, which reduces the usefulness of studies restricted to a few thin sections. Furthermore, the largest CAIs are generally much larger than chondrules and are therefore rarely present in most studied chondrite thin sections. This study attempts a more representative sampling of the CAI population in the Allende chondrite by investigating a two-decimeter-sized slab.

  8. A Large Size Chimeric Highly Immunogenic Peptide Presents Multistage Plasmodium Antigens as a Vaccine Candidate System against Malaria.

    Science.gov (United States)

    Lozano, José Manuel; Varela, Yahson; Silva, Yolanda; Ardila, Karen; Forero, Martha; Guasca, Laura; Guerrero, Yuly; Bermudez, Adriana; Alba, Patricia; Vanegas, Magnolia; Patarroyo, Manuel Elkin

    2017-11-01

    Rational strategies for obtaining malaria vaccine candidates should include not only a proper selection of target antigens for antibody stimulation, but also a versatile molecular design based on ordering the right pieces from the complex pathogen molecular puzzle towards more active and functional immunogens. Classical Plasmodium falciparum antigens regarded as vaccine candidates were selected as model targets in this study. Among all possibilities we chose epitopes of PfCSP, STARP, MSA1 and Pf155/RESA, from pre-erythrocyte and erythrocyte stages respectively, for designing a large 82-residue chimeric immunogen. A number of options aimed at diminishing steric hindrance during synthesis were assessed based on standard Fmoc chemistry, such as building-block orthogonal ligation, pseudo-proline and microwave-assisted procedures; the large chimeric target was thus produced, characterized and immunologically tested. Antigenicity and functional in vivo efficacy tests of the large-chimera formulations, administered alone or as antigen mixtures, demonstrated the stimulation of high antibody titers, showing strong correlation with protection and parasite clearance in vaccinated BALB/c mice after lethal challenge with both P. berghei-ANKA and P. yoelii 17XL malaria strains. In addition, the 3D structural features of the large chimera encouraged us to propose these rationally designed large synthetic molecules as reliable vaccine candidate-presenting systems.

  9. A Large Size Chimeric Highly Immunogenic Peptide Presents Multistage Plasmodium Antigens as a Vaccine Candidate System against Malaria

    Directory of Open Access Journals (Sweden)

    José Manuel Lozano

    2017-11-01

    Full Text Available Rational strategies for obtaining malaria vaccine candidates should include not only a proper selection of target antigens for antibody stimulation, but also a versatile molecular design based on ordering the right pieces from the complex pathogen molecular puzzle towards more active and functional immunogens. Classical Plasmodium falciparum antigens regarded as vaccine candidates were selected as model targets in this study. Among all possibilities we chose epitopes of PfCSP, STARP, MSA1 and Pf155/RESA, from pre-erythrocyte and erythrocyte stages respectively, for designing a large 82-residue chimeric immunogen. A number of options aimed at diminishing steric hindrance during synthesis were assessed based on standard Fmoc chemistry, such as building-block orthogonal ligation, pseudo-proline and microwave-assisted procedures; the large chimeric target was thus produced, characterized and immunologically tested. Antigenicity and functional in vivo efficacy tests of the large-chimera formulations, administered alone or as antigen mixtures, demonstrated the stimulation of high antibody titers, showing strong correlation with protection and parasite clearance in vaccinated BALB/c mice after lethal challenge with both P. berghei-ANKA and P. yoelii 17XL malaria strains. In addition, the 3D structural features of the large chimera encouraged us to propose these rationally designed large synthetic molecules as reliable vaccine candidate-presenting systems.

  10. Modeling of the evolution of bubble size distribution of gas-liquid flow inside a large vertical pipe. Influence of bubble coalescence and breakup models

    International Nuclear Information System (INIS)

    Liao, Yixiang; Lucas, Dirk

    2011-01-01

    The range of gas-liquid flow applications in today's technology is immensely wide. Important examples can be found in chemical reactors, boiling and condensation equipment, and nuclear reactors. In gas-liquid flows, the bubble size distribution plays an important role in the phase structure and interfacial exchange behavior. It is therefore necessary to take into account the dynamic change of the bubble size distribution to obtain good predictions in CFD. An efficient 1D Multi-Bubble-Size-Class Test Solver was introduced in Lucas et al. (2001) for simulating the development of the flow structure along a vertical pipe. The model considers a large number of bubble classes. It solves the radial profiles of liquid and gas velocities, bubble-size-class-resolved gas fraction profiles, and turbulence parameters on the basis of the bubble size distribution present at a given axial position. The evolution of the flow along the height is assumed to be caused solely by the progress of bubble coalescence and break-up, resulting in a bubble size distribution that changes in the axial direction. In this model, the bubble coalescence and breakup models are decisive for reasonable predictions of the bubble size distribution. Many bubble coalescence and breakup models have been proposed in the literature. However, obvious discrepancies exist among them; for example, the daughter bubble size distributions predicted by different breakup models differ greatly, as reviewed in our previous publications (Liao and Lucas, 2009a; 2010). It is therefore necessary to compare and evaluate typical bubble coalescence and breakup models commonly used in the literature. This work thus compares several typical bubble coalescence and breakup models and discusses in detail the ability of the Test Solver to predict the evolution of the bubble size distribution. (orig.)
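
    The coalescence/break-up dynamics described in this abstract can be illustrated with a toy discrete population balance. The sketch below is not the 1D Test Solver of Lucas et al.; the size classes, the constant coalescence kernel `C` and the breakup rate `B` are assumed placeholder values, chosen only to show how the two mechanisms reshape the distribution along the pipe axis while conserving total gas volume.

```python
import numpy as np

# Toy 1D population balance sketch (NOT the Test Solver from the abstract):
# a discrete bubble-size distribution n[i] evolves along the pipe axis under
# simple constant-kernel coalescence and equal-size binary breakup.
# Class volumes, kernel C and breakup rate B are assumed placeholder values.

classes = np.array([1.0, 2.0, 3.0, 4.0])   # bubble volume of each class (arb. units)
C = 1e-3                                   # constant coalescence kernel (assumed)
B = 5e-3                                   # breakup rate per bubble (assumed)

def step(n, dz=1.0):
    """Advance the size distribution n over one axial step dz (explicit Euler)."""
    dn = np.zeros_like(n)
    k = len(n)
    # coalescence: class i + class j -> class with volume v_i + v_j
    for i in range(k):
        for j in range(i, k):
            if i + j + 1 < k:              # target class i+j+1 has volume (i+1)+(j+1)
                rate = C * n[i] * n[j]
                dn[i] -= rate
                dn[j] -= rate
                dn[i + j + 1] += rate
    # binary breakup of even-volume classes into two equal halves
    for i in range(1, k, 2):               # classes with volume 2, 4, ...
        rate = B * n[i]
        dn[i] -= rate
        dn[(i - 1) // 2] += 2 * rate
    return n + dz * dn

n = np.array([100.0, 10.0, 1.0, 0.0])      # inlet distribution (assumed)
for _ in range(5):                          # march along five axial steps
    n = step(n)
# both mechanisms conserve the total gas volume (n * classes).sum()
print(n, (n * classes).sum())
```

    Real coalescence and breakup kernels depend on turbulence, bubble size and liquid properties, which is exactly where the models compared in the paper diverge; this skeleton only fixes the bookkeeping into which such kernels would be substituted.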

  11. Does Decision Quality (Always) Increase with the Size of Information Samples? Some Vicissitudes in Applying the Law of Large Numbers

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov

    2006-01-01

    Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…

  12. Studies on the welding of heavy-section ASTM A542 Cl. 1 steel for large-sized pressure vessels

    International Nuclear Information System (INIS)

    Shimizu, Shigeki; Aota, Toshiichi; Kasahara, Masayuki

    1977-01-01

    ASTM A542, Cl. 1 steel was developed and standardized recently; it offers better high-temperature strength and toughness than the conventionally used A387, Grade 22 steel, and its application to large pressure vessels is therefore planned. This steel is a low-alloy steel, and in thick sections the risk of cracking in the welded part is high. Multiple annealing treatments are required to prevent weld cracking, relieve residual stress and soften the hardened zone, but cracking may also occur during stress-relief annealing. In this study, Tekken-type cracking tests were carried out with coated-electrode welding, and restraint cracking tests with submerged-arc welding, on A542, Cl. 1 and A387, Grade 22 steels; the weld-cracking behavior was thus investigated and optimal welding conditions were selected. Stress-relief annealing cracking tests of both steels were also carried out, and methods of preventing such cracking were studied. The optimal stress-relief annealing conditions were selected, and the cracking mechanism was clarified. The mechanical properties of joints welded and stress-relieved under the selected conditions were confirmed. (Kako, I.)

  13. Electrochemical machining of internal built-up surfaces of large-sized vessels for nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Ryabchenko, N N; Pulin, V Ya [Vsesoyuznyj Proektno-Tekhnologicheskij Inst. Atomnogo Mashinostroeniya i Kotlostroeniya, Rostov-na-Donu (USSR)

    1977-01-01

    Electrochemical machining (ECM) has been employed for finishing of mechanically processed inner surfaces of large lateral parts of construction bodies with a welded 0Kh18N10T steel overlayer. The finishing technology developed reduces the surface roughness from 10 μm to the standard 2.5 μm at a machining efficiency of 2-4 m² per hour.

  14. Structure and properties of large-sized forged disk of alloy type KhN73MBTYu-VD(EhI 698-VD)

    International Nuclear Information System (INIS)

    Sudakov, V.S.

    1994-01-01

    Investigation results are presented for the structure and mechanical properties of a serial large-sized forged disk, 1100 mm in diameter, produced from alloy type EhI 698-VD and tested after standard heat treatment and isothermal ageing at operating temperature. Chemical composition studies revealed no macroheterogeneity. In the central cross-section the macrostructure is free of pores, inclusions, delamination and variation in grain size. The metal of the disk possesses high long-term rupture strength and creep resistance at 650-700 deg C.

  15. Brief report: large individual variation in outcomes of autistic children receiving low-intensity behavioral interventions in community settings

    OpenAIRE

    Kamio, Yoko; Haraguchi, Hideyuki; Miyake, Atsuko; Hiraiwa, Mikio

    2015-01-01

    Background Despite widespread awareness of the necessity of early intervention for children with autism spectrum disorders (ASDs), evidence is still limited, in part, due to the complex nature of ASDs. This exploratory study aimed to examine the change across time in young children with autism and their mothers, who received less intensive early interventions with and without applied behavior analysis (ABA) methods in community settings in Japan. Methods Eighteen children with autism (mean ag...

  16. Study on characteristics of response to nodal vibration in a main hull of a large-size ferry boat; Ogata feri no shusentai yodo oto tokusei ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Takimoto, T; Yamamoto, A; Kasuda, T; Yanagi, K [Mitsubishi Heavy Industries, Ltd., Tokyo (Japan)

    1996-04-10

    Demand for reduction of vibration and noise in large-size ferry boats has become more stringent in recent years, while the vibration exciting forces of main engines and propellers are increasing with rising speed and horsepower. A large-size ferry boat uses an intermediate-speed diesel engine, whose exciting frequencies are high. Therefore, the characteristics of the response of the main hull to nodal vibration induced by the primary internal moment of the main engine were examined for a large-size ferry boat mounting an intermediate-speed main engine, using detailed vibration calculations together with vibration experiments and measurements on an actual ship. An estimation equation for the natural frequency of the vertical two-node vibration of the main hull was set up by idealizing the whole ship as a beam of uniform cross section, following Todd's equation, so that the effect of the rigidity of the long structure can be evaluated. Its parameters were derived by the least squares method, using the natural frequencies measured on large-size ferry boats A through E. The results may be summarized as follows: the estimation equation has an error of about 5% for the natural frequency of the vertical two-node vibration of the main hull, and an error of about 30% for the vertical acceleration at the end of the stern. 2 refs., 11 figs., 1 tab.
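
    The parameter-fitting step described in this abstract can be sketched as an ordinary least-squares fit of a Todd-type estimation formula. The ship data below are hypothetical placeholders, not ships A through E from the study, and the exact form of the formula is assumed for illustration; the block only shows the fitting mechanics.

```python
import numpy as np

# Illustrative sketch of the least-squares fitting step described above.
# A Todd-type formula estimates the two-node vertical natural frequency as
#   N = a * sqrt(B * D**3 / (W * L**3)) + b
# and the coefficients a, b are fitted to measured frequencies.
# All ship data below are hypothetical placeholders, NOT ships A-E from
# the study, and the fitted values are purely illustrative.

ships = np.array([
    # breadth B (m), depth D (m), displacement W (t), length L (m), measured N (cpm)
    [25.0, 14.0, 12000.0, 170.0, 95.0],
    [27.0, 15.0, 15000.0, 185.0, 88.0],
    [24.0, 13.5, 11000.0, 165.0, 99.0],
    [26.0, 14.5, 14000.0, 180.0, 90.0],
])
B, D, W, L, N = ships.T

x = np.sqrt(B * D**3 / (W * L**3))           # Todd-type stiffness/mass term
A = np.column_stack([x, np.ones_like(x)])    # design matrix for N = a*x + b
(a, b), *_ = np.linalg.lstsq(A, N, rcond=None)

pred = a * x + b
err = np.abs(pred - N) / N                   # relative estimation error per ship
print(f"a = {a:.3g}, b = {b:.3g}, max relative error = {err.max():.1%}")
```

    With real measured data, the maximum relative error of such a fit is what the abstract reports as the roughly 5% accuracy of the estimation equation.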

  17. MANUFACTURING AND CONTINUOUS IMPROVEMENT PERFORMANCE LEVEL IN PLANTS OF MEXICO; A COMPARATIVE ANALYSIS AMONG LARGE AND MEDIUM SIZE PLANTS

    OpenAIRE

    Carlos Monge; Jesús Cruz

    2015-01-01

    A random and statistically significant sample of 40 medium-sized (12) and large (28) manufacturing plants in Apodaca, Mexico was surveyed using a structured and validated questionnaire to investigate their level of implementation of lean manufacturing, sustainable manufacturing, continuous improvement, operational efficiency and environmental responsibility. Notably, performance in the mentioned philosophies was found to be low in both categories of plants, howe...

  18. Fabrication of epoxy composites with large-pore sized mesoporous silica and investigation of their thermal expansion.

    Science.gov (United States)

    Suzuki, Norihiro; Kiba, Shosuke; Yamauchi, Yusuke

    2012-02-01

    We fabricate epoxy composites with low thermal expansion by using mesoporous silica particles with a large pore diameter (around 10 nm) as inorganic fillers. From a simple calculation, almost all the mesopores are estimated to be completely filled with the epoxy polymer. The coefficient of linear thermal expansion (CTE) of the obtained epoxy composites decreases proportionally with increasing mesoporous silica content.
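
    The proportional CTE decrease reported here is what a simple rule-of-mixtures estimate predicts. The sketch below is a generic first-order model, not the calculation used in the paper; the epoxy and silica CTE values are typical handbook figures assumed for illustration.

```python
# Hedged illustration: rule-of-mixtures estimate of the composite CTE,
# showing why loading epoxy with low-expansion silica lowers the expansion
# roughly in proportion to the filler content. Numeric CTE values are
# assumed typical figures, not values from the paper.

def cte_rule_of_mixtures(cte_matrix, cte_filler, vol_frac_filler):
    """Volume-weighted (rule-of-mixtures) estimate of the composite CTE."""
    return (1 - vol_frac_filler) * cte_matrix + vol_frac_filler * cte_filler

cte_epoxy = 65e-6    # 1/K, typical neat epoxy (assumed)
cte_silica = 0.5e-6  # 1/K, fused silica (assumed)

for phi in (0.0, 0.2, 0.4):
    cte = cte_rule_of_mixtures(cte_epoxy, cte_silica, phi)
    print(f"filler volume fraction {phi:.1f}: CTE = {cte * 1e6:.1f} ppm/K")
```

    The linear dependence on the filler volume fraction is the point: any pore volume left unfilled would count as extra matrix, which is why complete pore filling matters for the composite's expansion.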

  19. Growth dynamics of the threatened Caribbean staghorn coral Acropora cervicornis: influence of host genotype, symbiont identity, colony size, and environmental setting.

    Science.gov (United States)

    Lirman, Diego; Schopmeyer, Stephanie; Galvan, Victor; Drury, Crawford; Baker, Andrew C; Baums, Iliana B

    2014-01-01

    The drastic decline in the abundance of Caribbean acroporid corals (Acropora cervicornis, A. palmata) has prompted the listing of this genus as threatened as well as the development of a regional propagation and restoration program. Using in situ underwater nurseries, we documented the influence of coral genotype, symbiont identity, colony size, and propagation method on the growth and branching patterns of staghorn corals in Florida and the Dominican Republic. Individual tracking of >1700 nursery-grown staghorn fragments and colonies from 37 distinct genotypes (identified using microsatellites) in Florida and the Dominican Republic revealed a significant positive relationship between size and growth, but a decreasing rate of productivity with increasing size. Pruning vigor (enhanced growth after fragmentation) was documented even in colonies that lost 95% of their coral tissue/skeleton, indicating that high productivity can be maintained within nurseries by sequentially fragmenting corals. A significant effect of coral genotype was documented for corals grown in a common-garden setting, with fast-growing genotypes growing up to an order of magnitude faster than slow-growing genotypes. Algal-symbiont identity established using qPCR techniques showed that clade A (likely Symbiodinium A3) was the dominant symbiont type for all coral genotypes, except for one coral genotype in the DR and two in Florida that were dominated by clade C, with A- and C-dominated genotypes having similar growth rates. The threatened Caribbean staghorn coral is capable of extremely fast growth, with annual productivity rates exceeding 5 cm of new coral produced for every cm of existing coral. This species benefits from high fragment survivorship coupled with the pruning vigor experienced by the parent colonies after fragmentation. These life-history characteristics make A. cervicornis a successful candidate nursery species and provide optimism for the potential role that active propagation…

  20. Growth dynamics of the threatened Caribbean staghorn coral Acropora cervicornis: influence of host genotype, symbiont identity, colony size, and environmental setting.

    Directory of Open Access Journals (Sweden)

    Diego Lirman

    Full Text Available BACKGROUND: The drastic decline in the abundance of Caribbean acroporid corals (Acropora cervicornis, A. palmata) has prompted the listing of this genus as threatened as well as the development of a regional propagation and restoration program. Using in situ underwater nurseries, we documented the influence of coral genotype, symbiont identity, colony size, and propagation method on the growth and branching patterns of staghorn corals in Florida and the Dominican Republic. METHODOLOGY/PRINCIPAL FINDINGS: Individual tracking of >1700 nursery-grown staghorn fragments and colonies from 37 distinct genotypes (identified using microsatellites) in Florida and the Dominican Republic revealed a significant positive relationship between size and growth, but a decreasing rate of productivity with increasing size. Pruning vigor (enhanced growth after fragmentation) was documented even in colonies that lost 95% of their coral tissue/skeleton, indicating that high productivity can be maintained within nurseries by sequentially fragmenting corals. A significant effect of coral genotype was documented for corals grown in a common-garden setting, with fast-growing genotypes growing up to an order of magnitude faster than slow-growing genotypes. Algal-symbiont identity established using qPCR techniques showed that clade A (likely Symbiodinium A3) was the dominant symbiont type for all coral genotypes, except for one coral genotype in the DR and two in Florida that were dominated by clade C, with A- and C-dominated genotypes having similar growth rates. CONCLUSION/SIGNIFICANCE: The threatened Caribbean staghorn coral is capable of extremely fast growth, with annual productivity rates exceeding 5 cm of new coral produced for every cm of existing coral. This species benefits from high fragment survivorship coupled with the pruning vigor experienced by the parent colonies after fragmentation. These life-history characteristics make A. cervicornis a successful candidate…