WorldWideScience

Sample records for processes showing large

  1. Identification of low order models for large scale processes

    NARCIS (Netherlands)

    Wattamwar, S.K.

    2010-01-01

    Many industrial chemical processes are complex, multi-phase and large-scale in nature. These processes are characterized by various nonlinear physicochemical effects and fluid flows. Such processes often show the coexistence of fast and slow dynamics during their time evolution. The increasing demand

  2. [Dual process in large number estimation under uncertainty].

    Science.gov (United States)

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies of number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants' verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such a deliberative System 2 process on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  3. A KPI-based process monitoring and fault detection framework for large-scale processes.

    Science.gov (United States)

    Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Yang, Xu; Ding, Steven X; Peng, Kaixiang

    2017-05-01

    Large-scale processes, consisting of multiple interconnected subprocesses, are commonly encountered in industrial systems, and their performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches have not been developed within a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes, which considers the static and dynamic relationships between process and KPI variables. For the static case, a least-squares-based approach is developed that provides an explicit link with least-squares regression and gives better performance than partial least squares. For the dynamic case, using the kernel representation of each subprocess, an instrumental variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods.
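
    As a concrete illustration of the static case, the sketch below fits a least-squares map from process variables to KPIs on fault-free data and flags samples whose prediction residual is abnormally large. It is a minimal reading of the idea, not the authors' implementation; the synthetic data, threshold quantile and variable names are all assumptions.

      # Minimal sketch: static KPI fault detection via least squares.
      # Illustrative only; not the implementation of the cited paper.
      import numpy as np

      def residual_statistic(X, Y, Theta, Sigma_inv):
          """Row-wise squared Mahalanobis norm of the KPI residual."""
          R = Y - X @ Theta
          return np.einsum('ij,jk,ik->i', R, Sigma_inv, R)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 6))                                      # process variables
      Y = X @ rng.normal(size=(6, 2)) + 0.1 * rng.normal(size=(500, 2))  # KPIs

      # Offline: train on fault-free data, set a 99% residual threshold.
      Theta, *_ = np.linalg.lstsq(X, Y, rcond=None)
      Sigma_inv = np.linalg.inv(np.cov((Y - X @ Theta).T))
      stats = residual_statistic(X, Y, Theta, Sigma_inv)
      threshold = np.quantile(stats, 0.99)

      # Online: a new sample is flagged when its statistic exceeds the threshold.
      faults = stats > threshold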

  4. Large transverse momentum hadronic processes

    International Nuclear Information System (INIS)

    Darriulat, P.

    1977-01-01

    The possible relations between deep inelastic leptoproduction and large transverse momentum (p_T) processes in hadronic collisions are usually considered in the framework of the quark-parton picture. Experiments observing the structure of the final state in proton-proton collisions producing at least one large transverse momentum particle have led to the following conclusions: a large fraction of the produced particles is unaffected by the large-p_T process; the other products are correlated with the large-p_T particle and, depending upon the sign of the scalar product, can be separated into two groups, the "towards-movers" and the "away-movers". The experimental evidence favouring such a picture is reviewed, and the properties of each of the three groups (underlying normal event, towards-movers and away-movers) are discussed. Some phenomenological interpretations are presented. The exact nature of away- and towards-movers must be further investigated, and their apparent jet structure has to be confirmed. Angular correlations between leading away- and towards-movers are very informative. Quantum number flow, both within the set of away- and towards-movers and between it and the underlying normal event, is predicted to behave very differently in different models

  5. Listeria monocytogenes strains show large variations in competitive growth in mixed culture biofilms and suspensions with bacteria from food processing environments.

    Science.gov (United States)

    Heir, Even; Møretrø, Trond; Simensen, Andreas; Langsrud, Solveig

    2018-06-20

    Interactions and competition between resident bacteria in food processing environments could affect their ability to survive, grow and persist in microhabitats and niches in the food industry. In this study, the competitive ability of L. monocytogenes strains grown together in separate culture mixes with other L. monocytogenes (L. mono mix), L. innocua (Listeria mix), Gram-negative bacteria (Gram- mix) and a multigenera mix (Listeria + Gram- mix) was investigated in biofilms on stainless steel and in suspensions at 12 °C. The mixed cultures included resident bacteria from processing surfaces in the meat and salmon industries, represented by L. monocytogenes (n = 6), L. innocua (n = 5) and Gram-negative bacteria (n = 6; Acinetobacter sp., Pseudomonas fragi, Pseudomonas fluorescens, Serratia liquefaciens, Stenotrophomonas maltophilia). Despite being hampered in growth in mixed cultures, L. monocytogenes established itself in biofilms, with counts at day nine between 7.3 and 9.0 log per coupon, the lowest counts occurring in the Listeria + Gram- mix, which was dominated by Pseudomonas. Specific L. innocua strains inhibited growth of L. monocytogenes strains to different degrees, an inhibition that was further enhanced by the background Gram-negative microbiota. In these multispecies and multibacteria cultures, the competitive growth effects led to the dominance of a strongly competitive L. monocytogenes strain that was only slightly inhibited by L. innocua and showed strong competitive abilities in mixed cultures with resident Gram-negative bacteria. The results indicate complex patterns of bacterial interactions and L. monocytogenes inhibition in the multibacteria cultures that only partially depend on cell contact and likely involve various antagonistic and bacterial tolerance mechanisms. The study indicates large variations among L. monocytogenes strains in their competitiveness under multibacterial culture conditions, which should be considered in further studies towards an understanding of L. monocytogenes

  6. Efficient querying of large process model repositories

    NARCIS (Netherlands)

    Jin, Tao; Wang, Jianmin; La Rosa, M.; Hofstede, ter A.H.M.; Wen, Lijie

    2013-01-01

    Recent years have seen an increased uptake of business process management technology in industry. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business

  7. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments. The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  8. Processing graded feedback: electrophysiological correlates of learning from small and large errors.

    Science.gov (United States)

    Luft, Caroline Di Bernardi; Takase, Emilio; Bhattacharya, Joydeep

    2014-05-01

    Feedback processing is important for learning and therefore may affect the consolidation of skills. Considerable research demonstrates electrophysiological differences between correct and incorrect feedback, but how we learn from small versus large errors is usually overlooked. This study investigated electrophysiological differences when processing small or large error feedback during a time estimation task. Data from high-learners and low-learners were analyzed separately. In both high- and low-learners, large error feedback was associated with higher feedback-related negativity (FRN), and small error feedback was associated with a larger P300 and increased amplitude over the motor-related areas of the left hemisphere. In addition, small error feedback induced larger desynchronization in the alpha and beta bands, with distinctly different topographies between the two learning groups: the high-learners showed a more localized decrease in beta power over the left frontocentral areas, and the low-learners showed a widespread reduction in alpha power following small error feedback. Furthermore, only the high-learners showed an increase in phase synchronization between the midfrontal and left central areas. Importantly, this synchronization was correlated with how well the participants consolidated the estimation of the time interval. Thus, although large errors were associated with higher FRN, small errors were associated with larger oscillatory responses, an effect that was more evident in the high-learners. Altogether, our results suggest an important role of the motor areas in the processing of error feedback for skill consolidation.

  9. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    Science.gov (United States)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention on networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by an affinity propagation clustering algorithm. Each cluster can be regarded as a subsystem. The inputs of each subsystem are then selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
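
    The two-stage decomposition can be pictured with standard tools, as in the sketch below: affinity propagation clusters the controlled variables, and a per-input canonical correlation screen selects each subsystem's inputs. The synthetic data, the correlation cutoff and the use of scikit-learn are assumptions for illustration, not the authors' code.

      # Sketch: cluster outputs, then screen inputs by canonical correlation.
      import numpy as np
      from sklearn.cluster import AffinityPropagation
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(1)
      Y = rng.normal(size=(200, 8))     # controlled (output) variables
      U = rng.normal(size=(200, 12))    # candidate process (input) variables

      # Stage 1: partition outputs by their pairwise correlation pattern.
      clusters = AffinityPropagation(random_state=0).fit(np.corrcoef(Y.T)).labels_

      # Stage 2: keep inputs strongly canonically correlated with each subsystem.
      for k in np.unique(clusters):
          Yk = Y[:, clusters == k]
          keep = []
          for j in range(U.shape[1]):
              cca = CCA(n_components=1).fit(U[:, [j]], Yk)
              u, v = cca.transform(U[:, [j]], Yk)
              if abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]) > 0.3:  # assumed cutoff
                  keep.append(j)
          print(f"subsystem {k}: outputs {list(np.where(clusters == k)[0])}, inputs {keep}")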

  10. Large Data at Small Universities: Astronomical processing using a computer classroom

    Science.gov (United States)

    Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen

    2016-06-01

    The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these resources is sometimes difficult for research groups working at smaller universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the university, the resource impact on the investigator is generally low. By using open-source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an "embarrassingly parallel" manner, gains in speed are accomplished without requiring the investigator to write routines using highly specialized methodology. We demonstrate this concept applied to (1) photometry of large-format images and (2) statistical significance tests for X-ray light curve analysis. In these scenarios, we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that the usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
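
    On a single multi-core machine, the "embarrassingly parallel" pattern needs nothing beyond the standard library, as in the hedged sketch below; the file names and the per-image routine are placeholders. Distributing the same map across many classroom machines additionally requires some form of networked task queue, which the abstract does not specify.

      # Sketch: map an independent per-file analysis over idle cores.
      from multiprocessing import Pool

      def process_image(path):
          """Stand-in for a per-image routine (e.g. aperture photometry)."""
          # ... load the image at `path` and measure sources ...
          return path, 0.0  # placeholder result

      if __name__ == "__main__":
          paths = [f"frame_{i:04d}.fits" for i in range(100)]  # hypothetical files
          with Pool(processes=8) as pool:
              results = pool.map(process_image, paths)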

  11. Large scale processing of dielectric electroactive polymers

    DEFF Research Database (Denmark)

    Vudayagiri, Sindhu

    Efficient processing techniques are vital to the success of any manufacturing industry. The processing techniques determine the quality of the products and thus to a large extent the performance and reliability of the products that are manufactured. The dielectric electroactive polymer (DEAP...

  12. On Building and Processing of Large Digitalized Map Archive

    Directory of Open Access Journals (Sweden)

    Milan Simunek

    2011-07-01

    A long list of problems needs to be solved during long-term work on a virtual model of Prague, the aim of which is to show the historical development of the city in virtual reality. This paper presents an integrated solution for digitalizing, cataloguing and processing a large number of maps from different periods and from a variety of sources. A specialized GIS software application was developed to allow for fast georeferencing (using an evolutionary algorithm), for cataloguing in an internal database, and subsequently for an easy lookup of relevant maps. The maps could then be processed further to serve as a main input for proper modeling of the changing face of the city through time.

  13. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced-dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activity. We investigate, for individual sites, the probability with which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs. large scale).
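
    The dimension-reduction-and-clustering step can be pictured with off-the-shelf tools, as in the sketch below, which embeds flattened moisture-flux fields with kernel PCA and clusters the embedding. The synthetic data, kernel choice and cluster count are assumptions; the study's supervised variant is not reproduced.

      # Sketch: embed pre-flood flux fields, then cluster the embedding.
      import numpy as np
      from sklearn.decomposition import KernelPCA
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(2)
      fields = rng.normal(size=(300, 50 * 40))   # flattened flux-divergence maps

      embedding = KernelPCA(n_components=3, kernel="rbf").fit_transform(fields)
      labels = KMeans(n_clusters=4, n_init=10).fit_predict(embedding)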

  14. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif; Orakzai, Faisal Moeen; Abdelaziz, Ibrahim; Khayyat, Zuhair

    2017-01-07

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  15. Large forging manufacturing process

    Science.gov (United States)

    Thamboo, Samuel V.; Yang, Ling

    2002-01-01

    A process for forging large components of Alloy 718 material so that the components do not exhibit abnormal grain growth includes the steps of: a) providing a billet with an average grain size between ASTM 0 and ASTM 3; b) heating the billet to a temperature of between 1750 °F and 1800 °F; c) upsetting the billet to obtain a component part with a minimum strain of 0.125 in at least selected areas of the part; d) reheating the component part to a temperature between 1750 °F and 1800 °F; e) upsetting the component part to a final configuration such that said selected areas receive no strains between 0.01 and 0.125; f) solution treating the component part at a temperature of between 1725 °F and 1750 °F; and g) aging the component part over predetermined times at different temperatures. A modified process achieves abnormal grain growth in selected areas of a component where desirable.

  16. Large deviations for Gaussian processes in Hölder norm

    International Nuclear Information System (INIS)

    Fatalov, V R

    2003-01-01

    Exact asymptotic representations of large-deviation probabilities are proved for Gaussian processes in the Hölder norm. The following classes of processes are considered: the Wiener process, the Brownian bridge, fractional Brownian motion, and stationary Gaussian processes with power-law covariance function. The investigation uses the method of double sums for Gaussian fields
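
    For reference, the Hölder norm of exponent α in which such asymptotics are typically stated is, for a function x on [0, 1],

      \|x\|_\alpha = |x(0)| + \sup_{0 \le s < t \le 1} \frac{|x(t) - x(s)|}{(t - s)^\alpha}, \qquad 0 < \alpha < 1,

    which is stronger than the uniform norm, so large-deviation results in this norm refine the classical sup-norm statements.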

  17. Increasing a large petrochemical company efficiency by improvement of decision making process

    OpenAIRE

    Kirin Snežana D.; Nešić Lela G.

    2010-01-01

    The paper presents the results of research conducted in a large petrochemical company, in a state under transition, with the aim of shedding light on the decision-making process from the aspect of the personal characteristics of the employees, so that the results can be used to improve the decision-making process and increase company efficiency. The research was conducted by a survey, i.e. by filling out a questionnaire specially made for this purpose, in real conditions, during working hours. The sample of...

  18. The Very Large Array Data Processing Pipeline

    Science.gov (United States)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/submillimeter Array (ALMA) for both interferometric and single-dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface for verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven-year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500-hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science-ready data products. The pipeline is developed as part of the CASA software package by an

  19. Microarray Data Processing Techniques for Genome-Scale Network Inference from Large Public Repositories.

    Science.gov (United States)

    Chockalingam, Sriram; Aluru, Maneesha; Aluru, Srinivas

    2016-09-19

    Pre-processing of microarray data is a well-studied problem. Furthermore, all popular platforms come with their own recommended best practices for differential analysis of genes. However, for genome-scale network inference using microarray data collected from large public repositories, these methods filter out a considerable number of genes. This is primarily due to the effects of aggregating a diverse array of experiments with different technical and biological scenarios. Here we introduce a pre-processing pipeline suitable for inferring genome-scale gene networks from large microarray datasets. We show that partitioning of the available microarray datasets according to biological relevance into tissue- and process-specific categories significantly extends the limits of downstream network construction. We demonstrate the effectiveness of our pre-processing pipeline by inferring genome-scale networks for the model plant Arabidopsis thaliana using two different construction methods and a collection of 11,760 Affymetrix ATH1 microarray chips. Our pre-processing pipeline and the datasets used in this paper are made available at http://alurulab.cc.gatech.edu/microarray-pp.

  1. Large-Deviation Results for Discriminant Statistics of Gaussian Locally Stationary Processes

    Directory of Open Access Journals (Sweden)

    Junichi Hirukawa

    2012-01-01

    This paper discusses the large-deviation principle of discriminant statistics for Gaussian locally stationary processes. First, large-deviation theorems for quadratic forms and the log-likelihood ratio for a Gaussian locally stationary process with a mean function are proved. Their asymptotics are described by large-deviation rate functions. Second, we consider situations where the processes are misspecified as stationary. In these misspecified cases, we formally construct the log-likelihood-ratio discriminant statistics and derive large-deviation theorems for them. Since these are complicated, they are evaluated and illustrated by numerical examples. We find that misspecifying the process as stationary seriously affects discrimination.
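
    As a reminder of the framework, a family of statistics {T_n} obeys a large-deviation principle with rate function I when, informally,

      \lim_{n \to \infty} \frac{1}{n} \log P(T_n \in A) = -\inf_{x \in A} I(x)

    for suitably regular sets A (with the usual upper and lower bounds for closed and open sets); the rate function then quantifies how fast misclassification probabilities decay.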

  2. CRISPR transcript processing: a mechanism for generating a large number of small interfering RNAs

    Directory of Open Access Journals (Sweden)

    Djordjevic Marko

    2012-07-01

    Background: CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated sequences) is a recently discovered prokaryotic defense system against foreign DNA, including viruses and plasmids. The CRISPR cassette is transcribed as a continuous transcript (pre-crRNA), which is processed by Cas proteins into small RNA molecules (crRNAs) that are responsible for defense against invading viruses. Experiments in E. coli report that overexpression of cas genes generates a large number of crRNAs from only a few pre-crRNAs. Results: We here develop a minimal model of CRISPR processing, which we parameterize based on available experimental data. From the model, we show that the system can generate a large amount of crRNA based on only a small decrease in the amount of pre-crRNA. The relationship between the decrease of pre-crRNAs and the increase of crRNAs corresponds to strong linear amplification. Interestingly, this strong amplification crucially depends on fast non-specific degradation of pre-crRNA by an unidentified nuclease. We show that overexpression of cas genes above a certain level does not result in a further increase of crRNA, but that this saturation can be relieved if the rate of CRISPR transcription is increased. We furthermore show that a small increase of the CRISPR transcription rate can substantially decrease the extent of cas gene activation necessary to achieve a desired amount of crRNA. Conclusions: The simple mathematical model developed here is able to explain existing experimental observations on CRISPR transcript processing in Escherichia coli. The model shows that a competition between specific pre-crRNA processing and non-specific degradation determines the steady-state levels of crRNA and is responsible for strong linear amplification of crRNAs when cas genes are overexpressed. The model further shows how the disappearance of only a few pre-crRNA molecules normally present in the cell can lead to a large (two
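
    A two-species caricature of such a model makes the amplification argument concrete, as sketched below: pre-crRNA is produced at a constant rate and lost to both specific processing and non-specific degradation, while crRNA is produced by processing and decays slowly. All rate constants here are illustrative assumptions, not the paper's fitted parameters.

      # Sketch: minimal pre-crRNA -> crRNA processing model.
      from scipy.integrate import solve_ivp

      k_t = 10.0   # pre-crRNA transcription rate
      k_p = 5.0    # specific processing rate (grows with Cas level)
      k_n = 1.0    # non-specific pre-crRNA degradation rate
      d   = 0.01   # slow crRNA decay rate

      def rhs(t, y):
          pre, cr = y
          return [k_t - (k_p + k_n) * pre, k_p * pre - d * cr]

      sol = solve_ivp(rhs, (0.0, 1000.0), [0.0, 0.0])
      print(sol.y[:, -1])  # approaches pre* = k_t/(k_p+k_n), cr* = k_p*pre*/d

    Because d is small, the steady-state ratio cr*/pre* = k_p/d is large: a modest drop in pre-crRNA as k_p rises buys a large gain in crRNA, which is the linear-amplification effect described above.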

  3. Drell–Yan process at Large Hadron Collider

    Indian Academy of Sciences (India)

    the Drell–Yan process [1] first studied with muon final states. In Standard .... Two large-statistics sets of signal events, based on the value of the dimuon invariant mass, .... quality control criteria are applied to this globally reconstructed muon.

  4. Hierarchical optimal control of large-scale nonlinear chemical processes.

    Science.gov (United States)

    Ramezani, Mohammad Hossein; Sadati, Nasser

    2009-01-01

    In this paper, a new approach is presented for the optimal control of large-scale chemical processes. In this approach, the chemical process is decomposed into smaller sub-systems at the first level, with a coordinator at the second level, for which a two-level hierarchical control strategy is designed. Each sub-system in the first level can then be solved separately, using any conventional optimization algorithm. In the second level, the solutions obtained from the first level are coordinated using a new gradient-type strategy, which is updated by the error of the coordination vector. The proposed algorithm is used to solve the optimal control problem of a complex nonlinear continuous stirred-tank reactor (CSTR), and its solution is compared with that obtained using the centralized approach. The simulation results show the efficiency and capability of the proposed hierarchical approach in finding the optimal solution, compared with the centralized method.
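
    The coordination loop can be illustrated on a toy problem, as below: each first-level sub-system solves its own problem for a given coordination (price) variable, and the second level corrects that variable in proportion to the coordination error. The quadratic sub-problems and step size are invented for the sketch and are not the paper's formulation.

      # Sketch: two-level gradient-type coordination on a toy problem.
      import numpy as np

      def solve_subsystem(a, p):
          """First level: minimise 0.5*a*u**2 + p*u, solved in closed form."""
          return -p / a

      a = np.array([1.0, 2.0, 4.0])   # sub-system curvatures (assumed)
      target = 3.0                    # coupling: inputs must sum to target
      p = 0.0                         # coordination variable
      for _ in range(200):            # second level: gradient-type updates
          u = np.array([solve_subsystem(ai, p) for ai in a])
          p += 0.2 * (u.sum() - target)   # correct by the coordination error
      # After convergence, the decoupled solves satisfy the coupling constraint.
      print(u, u.sum())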

  5. Informational support of the investment process in a large city economy

    Directory of Open Access Journals (Sweden)

    Tamara Zurabovna Chargazia

    2016-12-01

    Large cities possess sufficient potential to participate in investment processes at both the national and international levels. A potential investor's awareness of the possibilities and prospects of a city's development is of great importance in making a decision. By providing a potential investor with relevant, concise and reliable information, the local authorities increase the intensity of the investment process in the city economy, and vice versa. Our hypothesis is that a large city administration can substantially activate the investment processes in the economy of the corresponding territorial entity by using information-provision tools. The purpose of this article is to develop measures for the improvement of the investment portal of a large city as an important instrument of information provision, which will make it possible to stimulate investment processes at the level under analysis. The reasons for the unsatisfactory provision of information on the investment process in a large city economy are analyzed in depth; national and international experience in this sphere is studied; advantages and disadvantages of the information provision for the investment process in the economy of the city of Makeyevka are considered; and the investment portals of different cities are compared. Technical approaches for improving the investment portal of a large city are suggested. The research results can be used to improve the investment policy of large cities.

  6. Resin infusion of large composite structures modeling and manufacturing process

    Energy Technology Data Exchange (ETDEWEB)

    Loos, A.C. [Michigan State Univ., Dept. of Mechanical Engineering, East Lansing, MI (United States)

    2006-07-01

    The resin infusion processes, resin transfer molding (RTM), resin film infusion (RFI) and vacuum-assisted resin transfer molding (VARTM), are cost-effective techniques for the fabrication of complex-shaped composite structures. The dry fibrous preform is placed in the mold, consolidated, resin impregnated and cured in a single-step process. The fibrous preforms are often constructed near net shape using highly automated textile processes such as knitting, weaving and braiding. In this paper, the infusion processes RTM, RFI and VARTM are discussed along with the advantages of each technique compared with traditional composite fabrication methods such as prepreg tape lay-up and autoclave cure. The large number of processing variables and the complex material behavior during infiltration and cure make experimental optimization of the infusion processes costly and inefficient. Numerical models have been developed which can be used to simulate the resin infusion processes. The model formulation and solution procedures for the VARTM process are presented. A VARTM process simulation of a carbon fiber preform is presented to demonstrate the type of information that can be generated by the model and to compare the model predictions with experimental measurements. Overall, the predicted flow front positions, resin pressures and preform thicknesses agree well with the measured values. The results of the simulation show the potential cost and performance benefits that can be realized by using a simulation model as part of the development process.

  7. The power of event-driven analytics in Large Scale Data Processing

    CERN Multimedia

    CERN. Geneva; Marques, Paulo

    2011-01-01

    FeedZai is a software company specialized in creating high-throughput, low-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differs from previous history, creating timely business intelligence updated to the second. The tool does predictive analytics and trend analysis, displaying data on real-time web-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition, in the Digital Models category, being considered one of the "top-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic of large-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse in...

  8. Storage process of large solid radioactive wastes

    International Nuclear Information System (INIS)

    Morin, Bruno; Thiery, Daniel.

    1976-01-01

    A process for the storage of large-size solid radioactive waste, consisting of contaminated objects such as cartridge filters, metal swarf, tools, etc., whereby such waste is incorporated in a thermohardening resin at room temperature, after prior addition of at least one inert filler to the resin. Cross-linking of the resin is then brought about.

  9. A large-scale circuit mechanism for hierarchical dynamical processing in the primate cortex

    OpenAIRE

    Chaudhuri, Rishidev; Knoblauch, Kenneth; Gariel, Marie-Alice; Kennedy, Henry; Wang, Xiao-Jing

    2015-01-01

    We developed a large-scale dynamical model of the macaque neocortex, which is based on recently acquired directed- and weighted-connectivity data from tract-tracing experiments, and which incorporates heterogeneity across areas. A hierarchy of timescales naturally emerges from this system: sensory areas show brief, transient responses to input (appropriate for sensory processing), whereas association areas integrate inputs over time and exhibit persistent activity (suitable for decision-makin...

  10. Neutral processes forming large clones during colonization of new areas.

    Science.gov (United States)

    Rafajlović, M; Kleinhans, D; Gulliksson, C; Fries, J; Johansson, D; Ardehed, A; Sundqvist, L; Pereyra, R T; Mehlig, B; Jonsson, P R; Johannesson, K

    2017-08-01

    In species reproducing both sexually and asexually, clones are often more common in recently established populations. Earlier studies have suggested that this pattern arises due to natural selection favouring generally or locally successful genotypes in new environments. Alternatively, as we show here, this pattern may result from neutral processes during species' range expansions. We model a dioecious species expanding into a new area in which all individuals are capable of both sexual and asexual reproduction, and all individuals have equal survival rates and dispersal distances. Even under conditions that favour sexual recruitment in the long run, colonization starts with an asexual wave. After colonization is completed, a sexual wave erodes clonal dominance. If individuals reproduce more than one season, and with only local dispersal, a few large clones typically dominate for thousands of reproductive seasons. Adding occasional long-distance dispersal, more dominant clones emerge, but they persist for a shorter period of time. The general mechanism involved is simple: edge effects at the expansion front favour asexual (uniparental) recruitment where potential mates are rare. Specifically, our model shows that neutral processes (with respect to genotype fitness) during the population expansion, such as random dispersal and demographic stochasticity, produce genotype patterns that differ from the patterns arising in a selection model. The comparison with empirical data from a post-glacially established seaweed species (Fucus radicans) shows that in this case a neutral mechanism is strongly supported.

  11. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  12. RESOURCE SAVING TECHNOLOGICAL PROCESS OF LARGE-SIZE DIE THERMAL TREATMENT

    Directory of Open Access Journals (Sweden)

    L. A. Glazkov

    2009-01-01

    This paper presents the development of a technological process for hardening large-size parts made of die steel. The proposed process applies a water-air mixture instead of the conventional hardening medium, industrial oil. In developing this new technological process it was necessary to solve the following problems: reduction of thermal treatment duration, reduction of power resource expense (natural gas and mineral oil), elimination of fire danger, and increase of the process's ecological efficiency.

  13. Valid knowledge for the professional design of large and complex design processes

    NARCIS (Netherlands)

    Aken, van J.E.

    2004-01-01

    The organization and planning of design processes, which we may regard as design process design, is an important issue. Especially for large and complex design processes, traditional approaches to process design may no longer suffice. The design literature gives quite some design process models. As

  14. Visual analysis of inter-process communication for large-scale parallel computing.

    Science.gov (United States)

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also the communication between the different processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.

  15. Nonterrestrial material processing and manufacturing of large space systems

    Science.gov (United States)

    Von Tiesenhausen, G.

    1979-01-01

    Nonterrestrial processing of materials and manufacturing of large space system components from preprocessed lunar materials at a manufacturing site in space is described. Lunar materials mined and preprocessed at the lunar resource complex will be flown to the space manufacturing facility (SMF), where, together with supplementary terrestrial materials, they will undergo final processing and fabrication into space communication systems, solar cell blankets, radio frequency generators, and electrical equipment. Satellite Power System (SPS) material requirements and lunar material availability and utilization are detailed, and the SMF processing, refining and fabricating facilities, material flow and manpower requirements are described.

  16. Neighborhood diversity of large trees shows independent species patterns in a mixed dipterocarp forest in Sri Lanka.

    Science.gov (United States)

    Punchi-Manage, Ruwan; Wiegand, Thorsten; Wiegand, Kerstin; Getzin, Stephan; Huth, Andreas; Gunatilleke, C V Savitri; Gunatilleke, I A U Nimal

    2015-07-01

    Interactions among neighboring individuals influence plant performance and should create spatial patterns in local community structure. In order to assess the role of large trees in generating spatial patterns in local species richness, we used the individual species-area relationship (ISAR) to evaluate the species richness of trees of different size classes (and dead trees) in circular neighborhoods with varying radius around large trees of different focal species. To reveal signals of species interactions, we compared the ISAR function of the individuals of focal species with that of randomly selected nearby locations. We expected that large trees should strongly affect the community structure of smaller trees in their neighborhood, but that these effects should fade away with increasing size class. Unexpectedly, we found that only few focal species showed signals of species interactions with trees of the different size classes and that this was less likely for less abundant focal species. However, the few and relatively weak departures from independence were consistent with expectations of the effect of competition for space and the dispersal syndrome on spatial patterns. A noisy signal of competition for space found for large trees built up gradually with increasing life stage; it was not yet present for large saplings but detectable for intermediates. Additionally, focal species with animal-dispersed seeds showed higher species richness in their neighborhood than those with gravity- and gyration-dispersed seeds. Our analysis across the entire ontogeny from recruits to large trees supports the hypothesis that stochastic effects dilute deterministic species interactions in highly diverse communities. Stochastic dilution is a consequence of the stochastic geometry of biodiversity in species-rich communities where the identities of the nearest neighbors of a given plant are largely unpredictable. While the outcome of local species interactions is governed for each
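
    The core statistic is easy to state: for each focal tree, count the species present within radius r and average over focal trees. The sketch below computes this naive ISAR on synthetic coordinates; real analyses add edge corrections and null models that are omitted here.

      # Sketch: naive individual species-area relationship (ISAR).
      import numpy as np

      def isar(focal_xy, all_xy, species, r):
          """Mean number of species within distance r of each focal stem."""
          richness = []
          for x, y in focal_xy:
              d = np.hypot(all_xy[:, 0] - x, all_xy[:, 1] - y)
              near = (d > 0) & (d <= r)          # exclude the focal stem itself
              richness.append(len(np.unique(species[near])))
          return np.mean(richness)

      rng = np.random.default_rng(3)
      pts = rng.uniform(0, 500, size=(2000, 2))   # synthetic stem map (m)
      spp = rng.integers(0, 150, size=2000)       # synthetic species ids
      print(isar(pts[:50], pts, spp, r=20.0))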

  17. QCD phenomenology of the large p_T processes

    International Nuclear Information System (INIS)

    Stroynowski, R.

    1979-11-01

    Quantum Chromodynamics (QCD) provides a framework for possible high-accuracy calculations of large-p_T processes. The description of large-transverse-momentum phenomena is introduced in terms of the parton model, and the modifications expected from QCD are described using single-particle distributions as an example. The present status of available data (π, K, p, p-bar, eta, particle ratios, beam ratios, direct photons, nuclear target dependence), the evidence for jets, and the future prospects are reviewed. 80 references, 33 figures, 3 tables

  18. Combined process automation for large-scale EEG analysis.

    Science.gov (United States)

    Sfondouris, John L; Quebedeaux, Tabitha M; Holdgraf, Chris; Musto, Alberto E

    2012-01-01

    Epileptogenesis is a dynamic process producing increased seizure susceptibility. Electroencephalography (EEG) data provide information critical to understanding the evolution of epileptiform changes throughout epileptic foci. We designed an algorithm to facilitate efficient large-scale EEG analysis via linked automation of multiple data processing steps. Using EEG recordings obtained from electrical stimulation studies, the following steps of EEG analysis were automated: (1) alignment and isolation of pre- and post-stimulation intervals, (2) generation of user-defined band frequency waveforms, (3) spike-sorting, (4) quantification of spike and burst data and (5) power spectral density analysis. This algorithm allows for quicker, more efficient EEG analysis.
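
    The "linked automation" idea is just function composition over a batch, as in the hedged sketch below; the stage bodies are placeholders standing in for the five steps above, not the published code.

      # Sketch: chain per-recording analysis stages so a batch runs unattended.
      def align_intervals(rec):  return {"pre": rec, "post": rec}   # step 1
      def band_filter(seg):      return seg                         # step 2
      def sort_spikes(seg):      return []                          # step 3
      def quantify(spikes):      return {"n_spikes": len(spikes)}   # steps 4-5

      PIPELINE = [align_intervals, band_filter, sort_spikes, quantify]

      def run(recordings):
          results = []
          for rec in recordings:
              data = rec
              for stage in PIPELINE:
                  data = stage(data)     # each stage consumes the last output
              results.append(data)
          return results

      print(run(["rat01.edf", "rat02.edf"]))   # hypothetical file names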

  19. Process variations in surface nano geometries manufacture on large area substrates

    DEFF Research Database (Denmark)

    Calaon, Matteo; Hansen, Hans Nørgaard; Tosello, Guido

    2014-01-01

    The need to transport, treat and measure increasingly smaller biomedical samples has pushed the integration of a far-reaching number of nanofeatures over substrate sizes that are large relative to the working-area windows of conventional processes. Dimensional stability of nano fabrication processe

  20. Broadband Reflective Coating Process for Large FUVOIR Mirrors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — ZeCoat Corporation will develop and demonstrate a set of revolutionary coating processes for making broadband reflective coatings suitable for very large mirrors (4+...

  1. A practical process for light-water detritiation at large scales

    Energy Technology Data Exchange (ETDEWEB)

    Boniface, H.A. [Atomic Energy of Canada Limited, Chalk River, ON (Canada); Robinson, J., E-mail: jr@tyne-engineering.com [Tyne Engineering, Burlington, ON (Canada); Gnanapragasam, N.V.; Castillo, I.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    AECL and Tyne Engineering have recently completed a preliminary engineering design for a modest-scale tritium removal plant for light water, intended for installation at AECL's Chalk River Laboratories (CRL). This plant design was based on the Combined Electrolysis and Catalytic Exchange (CECE) technology developed at CRL over many years and demonstrated there and elsewhere. The general features and capabilities of this design have been reported as well as the versatility of the design for separating any pair of the three hydrogen isotopes. The same CECE technology could be applied directly to very large-scale wastewater detritiation, such as the case at Fukushima Daiichi Nuclear Power Station. However, since the CECE process scales linearly with throughput, the required capital and operating costs are substantial for such large-scale applications. This paper discusses some options for reducing the costs of very large-scale detritiation. Options include: Reducing tritium removal effectiveness; Energy recovery; Improving the tolerance of impurities; Use of less expensive or more efficient equipment. A brief comparison with alternative processes is also presented. (author)

  2. Large-scale production of diesel-like biofuels - process design as an inherent part of microorganism development.

    Science.gov (United States)

    Cuellar, Maria C; Heijnen, Joseph J; van der Wielen, Luuk A M

    2013-06-01

    Industrial biotechnology is playing an important role in the transition to a bio-based economy. Currently, however, industrial implementation is still modest, despite the advances made in microorganism development. Given that the fuels and commodity chemicals sectors are characterized by tight economic margins, we propose to address overall process design and efficiency at the start of bioprocess development. While current microorganism development is targeted at product formation and product yield, addressing process design at the start of bioprocess development means that microorganism selection can also be extended to other critical targets for process technology and process-scale implementation, such as enhancing cell separation or increasing cell robustness at operating conditions that favor the overall process. In this paper we follow this approach for the microbial production of diesel-like biofuels. We review current microbial routes with both oleaginous and engineered microorganisms. For the routes leading to extracellular production, we identify the process conditions for large-scale operation. The identified process conditions are then translated into microorganism development targets. We show that microorganism development should be directed at anaerobic production, increasing robustness at extreme process conditions, and tailoring cell surface properties. At the same time, novel process configurations integrating fermentation and product recovery, cell reuse and low-cost technologies for product separation are mandatory. This review provides a state-of-the-art summary of the latest challenges in large-scale production of diesel-like biofuels.

  3. Processing and properties of large grain (RE)BCO

    International Nuclear Information System (INIS)

    Cardwell, D.A.

    1998-01-01

    The potential of high temperature superconductors to generate large magnetic fields and to carry current with low power dissipation at 77 K is particularly attractive for a variety of permanent magnet applications. As a result, large-grain bulk (RE)-Ba-Cu-O ((RE)BCO) materials have been developed by melt-process techniques in an attempt to fabricate practical materials for use in high-field devices. This review outlines the current state of the art in this field of processing, including seeding requirements for the controlled fabrication of these materials and the origin of striking growth features such as the formation of a facet plane around the seed, platelet boundaries and (RE)2BaCuO5 (RE-211) inclusions in the seeded melt-grown microstructure. An observed variation in critical current density in large-grain (RE)BCO samples is accounted for by Sm contamination of the material in the vicinity of the seed and by the development of a non-uniform growth morphology at ∼4 mm from the seed position. (RE)Ba2Cu3O7-δ (RE-123) dendrites are observed to form and broaden preferentially within the a/b plane of the lattice in this growth regime. Finally, trapped fields in excess of 3 T have been reported in irradiated U-doped YBCO, and (RE)1+xBa2-xCu3Oy (RE=Sm, Nd) materials have been observed to carry transport current in fields of up to 10 T at 77 K. This underlines the potential of bulk (RE)BCO materials for practical permanent magnet type applications. (orig.)

  4. Formation of Large-scale Coronal Loops Interconnecting Two Active Regions through Gradual Magnetic Reconnection and an Associated Heating Process

    Science.gov (United States)

    Du, Guohui; Chen, Yao; Zhu, Chunming; Liu, Chang; Ge, Lili; Wang, Bing; Li, Chuanyang; Wang, Haimin

    2018-06-01

    Coronal loops interconnecting two active regions (ARs), called interconnecting loops (ILs), are prominent large-scale structures in the solar atmosphere. They carry a significant amount of magnetic flux and therefore are considered to be an important element of the solar dynamo process. Earlier observations showed that eruptions of ILs are an important source of CMEs. It is generally believed that ILs are formed through magnetic reconnection in the high corona (>150″–200″), and several scenarios have been proposed to explain their brightening in soft X-rays (SXRs). However, the detailed IL formation process has not been fully explored, and the associated energy release in the corona still remains unresolved. Here, we report the complete formation process of a set of ILs connecting two nearby ARs, with successive observations by STEREO-A on the far side of the Sun and by SDO and Hinode on the Earth side. We conclude that ILs are formed by gradual reconnection high in the corona, in line with earlier postulations. In addition, we show evidence that ILs brighten in SXRs and EUVs through heating at or close to the reconnection site in the corona (i.e., through the direct heating process of reconnection), a process that has been largely overlooked in earlier studies of ILs.

  5. 19 CFR 113.75 - Bond conditions for deferral of duty on large yachts imported for sale at United States boat shows.

    Science.gov (United States)

    2010-04-01

    ... yachts imported for sale at United States boat shows. 113.75 Section 113.75 Customs Duties U.S. CUSTOMS... Customs Bond Conditions § 113.75 Bond conditions for deferral of duty on large yachts imported for sale at....C. 1484b for a dutiable large yacht imported for sale at a United States boat show must conform to...

  6. Constructing large scale SCI-based processing systems by switch elements

    International Nuclear Information System (INIS)

    Wu, B.; Kristiansen, E.; Skaali, B.; Bogaerts, A.; Divia, R.; Mueller, H.

    1993-05-01

    The goal of this paper is to study some of the design criteria for the switch elements that form the interconnection of large-scale SCI-based processing systems. The approved IEEE standard 1596 makes it possible to couple up to 64K nodes together. In order to connect thousands of nodes to construct large-scale SCI-based processing systems, one has to interconnect these nodes by switch elements to form different topologies. A summary of the requirements and key points of interconnection networks and switches is presented. Two models of the SCI switch elements are proposed. The authors investigate, with simulations, several examples of systems constructed from 4-switches, and the results are analyzed. Some issues and enhancements are discussed to provide the ideas behind a switch design that can improve performance and reduce latency. 29 refs., 11 figs., 3 tabs

  7. Feasibility of large volume casting cementation process for intermediate level radioactive waste

    International Nuclear Information System (INIS)

    Chen Zhuying; Chen Baisong; Zeng Jishu; Yu Chengze

    1988-01-01

    Recent trends in radioactive waste treatment and disposal, both in China and abroad, are reviewed. The feasibility of the large-volume casting cementation process for treating the intermediate-level radioactive waste from a spent fuel reprocessing plant and disposing of it in shallow land is assessed on the basis of analyses of experimental results (such as formulation studies and measurements of the properties of the solidified radioactive waste). It can be concluded that the large-volume casting cementation process is a promising, safe and economic process. It is feasible to dispose of the intermediate-level radioactive waste from a reprocessing plant if the chosen disposal site has reasonable geological and geographical conditions and some additional effective protection measures are taken

  8. Large-scale membrane transfer process: its application to single-crystal-silicon continuous membrane deformable mirror

    International Nuclear Information System (INIS)

    Wu, Tong; Sasaki, Takashi; Hane, Kazuhiro; Akiyama, Masayuki

    2013-01-01

    This paper describes a large-scale membrane transfer process developed for the construction of large-scale membrane devices via the transfer of continuous single-crystal-silicon membranes from one substrate to another. The technique is applied to fabricating a large-stroke deformable mirror. A bimorph spring array is used to generate a large air gap between the mirror membrane and the electrode. A 1.9 mm × 1.9 mm × 2 µm single-crystal-silicon membrane is successfully transferred to the electrode substrate by Au–Si eutectic bonding and a subsequent all-dry release process. This process provides an effective approach for transferring a free-standing, large, continuous single-crystal-silicon membrane onto a flexible suspension spring array with a large air gap. (paper)

  9. Modelling hydrologic and hydrodynamic processes in basins with large semi-arid wetlands

    Science.gov (United States)

    Fleischmann, Ayan; Siqueira, Vinícius; Paris, Adrien; Collischonn, Walter; Paiva, Rodrigo; Pontes, Paulo; Crétaux, Jean-François; Bergé-Nguyen, Muriel; Biancamaria, Sylvain; Gosset, Marielle; Calmant, Stephane; Tanimoun, Bachir

    2018-06-01

    Hydrological and hydrodynamic models are core tools for simulation of large basins and complex river systems associated with wetlands. Recent studies have pointed towards the importance of online coupling strategies, representing feedbacks between floodplain inundation and vertical hydrology. Especially across semi-arid regions, soil-floodplain interactions can be strong. In this study, we included a two-way coupling scheme in a large scale hydrological-hydrodynamic model (MGB) and tested different model structures, in order to assess which processes are important to simulate in large semi-arid wetlands and how these processes interact with water budget components. To demonstrate the benefits of this coupling in a validation case, the model was applied to the Upper Niger River basin encompassing the Niger Inner Delta, a vast semi-arid wetland in the Sahel. Simulation was carried out from 1999 to 2014 with daily TMPA 3B42 precipitation as forcing, using both in-situ and remotely sensed data for calibration and validation. Model outputs were in good agreement with discharge and water levels at stations both upstream and downstream of the Inner Delta (Nash-Sutcliffe Efficiency (NSE) >0.6 for most gauges), as well as for flooded areas within the Delta region (NSE = 0.6; r = 0.85). Model estimates of annual water losses across the Delta varied between 20.1 and 30.6 km3/yr, while annual evapotranspiration ranged between 760 mm/yr and 1130 mm/yr. Evaluation of model structure indicated that representation of both floodplain channel hydrodynamics (storage, bifurcations, lateral connections) and vertical hydrological processes (floodplain water infiltration into the soil column; evapotranspiration from soil and vegetation and evaporation of open water) is necessary to correctly simulate flood wave attenuation and evapotranspiration along the basin. Two-way coupled models are necessary to better understand processes in large semi-arid wetlands. Finally, such coupled
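
    The record reports Nash-Sutcliffe Efficiency (NSE) scores; for readers unfamiliar with the metric, the sketch below computes the standard definition (the discharge values are made up for illustration):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 = perfect fit, 0 = no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Toy example with synthetic discharge series (illustrative values only)
obs = np.array([120.0, 340.0, 560.0, 410.0, 230.0, 150.0])
sim = np.array([100.0, 360.0, 530.0, 430.0, 250.0, 140.0])
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```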

  10. Children with dyslexia show cortical hyperactivation in response to increasing literacy processing demands

    Directory of Open Access Journals (Sweden)

    Frøydis Morken

    2014-12-01

    This fMRI study aimed to examine how differences in literacy processing demands may affect cortical activation patterns in 11- to 12-year-old children with dyslexia as compared to children with typical reading skills. 11 children with and 18 without dyslexia were assessed using a reading paradigm based on different stages of literacy development. In the analyses, six regions showed an interaction effect between group and condition in a factorial ANOVA. These regions were selected as regions of interest for further analyses. Overall, the dyslexia group showed cortical hyperactivation compared to the typical group. The difference between the groups tended to increase with increasing processing demands. Differences in cortical activation were not reflected in in-scanner reading performance. The six regions fell into three patterns, which are discussed in terms of processing demands, compensatory mechanisms, orthography and contextual facilitation. We conclude that the observed hyperactivation is chiefly a result of compensatory activity, modulated by other factors.

  11. 19 CFR Appendix C to Part 113 - Bond for Deferral of Duty on Large Yachts Imported for Sale at United States Boat Shows

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Bond for Deferral of Duty on Large Yachts Imported... Appendix C to Part 113—Bond for Deferral of Duty on Large Yachts Imported for Sale at United States Boat Shows Bond for Deferral of Duty on Large Yachts Imported for Sale at United States Boat Shows ____, as...

  12. Process γ*γ → σ at large virtuality of γ*

    International Nuclear Information System (INIS)

    Volkov, M.K.; Radzhabov, A.E.; Yudichev, V.L.

    2004-01-01

    The process γ*γ → σ is investigated in the framework of the SU(2) x SU(2) chiral NJL model, where γ*γ are photons with large and small virtuality, respectively, and σ is a scalar meson. The form factor of the process is derived for arbitrary virtuality of γ* in the Euclidean kinematic domain. The asymptotic behavior of this form factor resembles the asymptotic behavior of the γ*γ → π form factor

  13. Individual differences influence two-digit number processing, but not their analog magnitude processing: a large-scale online study.

    Science.gov (United States)

    Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba

    2017-12-23

    Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study of 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.

  14. The testing of thermal-mechanical-hydrological-chemical processes using a large block

    International Nuclear Information System (INIS)

    Lin, W.; Wilder, D.G.; Blink, J.A.; Blair, S.C.; Buscheck, T.A.; Chesnut, D.A.; Glassley, W.E.; Lee, K.; Roberts, J.J.

    1994-01-01

    The radioactive decay heat from nuclear waste packages may, depending on the thermal load, create coupled thermal-mechanical-hydrological-chemical (TMHC) processes in the near-field environment of a repository. A group of tests on a large block (LBT) is planned to provide a timely opportunity to test and calibrate some of the TMHC model concepts. The LBT is advantageous for testing and verifying model concepts because the boundary conditions are controlled, and the block can be characterized before and after the experiment. A block of Topopah Spring tuff of about 3 x 3 x 4.5 m will be sawed and isolated at Fran Ridge, Nevada Test Site. Small blocks of the rock adjacent to the large block will be collected for laboratory testing of some individual thermal-mechanical, hydrological, and chemical processes. A constant load of about 4 MPa will be applied to the top and sides of the large block. The sides will be sealed with moisture and thermal barriers. The large block will be heated with one heater in each borehole and guard heaters on the sides so that a dry-out zone and a condensate zone will exist simultaneously. Temperature, moisture content, pore pressure, chemical composition, stress and displacement will be measured throughout the block during the heating and cool-down phases. The results from the experiments on small blocks and the tests on the large block will provide a better understanding of some concepts of the coupled TMHC processes

  15. Processing and properties of large-sized ceramic slabs

    Energy Technology Data Exchange (ETDEWEB)

    Raimondo, M.; Dondi, M.; Zanelli, C.; Guarini, G.; Gozzi, A.; Marani, F.; Fossa, L.

    2010-07-01

    Large-sized ceramic slabs with dimensions up to 360x120 cm² and thickness down to 2 mm are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability in a rather wide window of firing schedules. The phase composition and compact microstructure of fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performance as well as mechanical and tribological properties conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestows on the slab a certain degree of flexibility, which is emphasized in ceramic-fiberglass composites. These outstanding performances make the large-sized slabs suitable for novel applications: building and construction (new floorings without dismantling the previous paving, ventilated facades, tunnel coverings, insulating panelling), indoor furniture (table tops, doors), and supports for photovoltaic ceramic panels. (Author) 24 refs.

  16. Processing and properties of large-sized ceramic slabs

    International Nuclear Information System (INIS)

    Raimondo, M.; Dondi, M.; Zanelli, C.; Guarini, G.; Gozzi, A.; Marani, F.; Fossa, L.

    2010-01-01

    Large-sized ceramic slabs with dimensions up to 360x120 cm² and thickness down to 2 mm are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability in a rather wide window of firing schedules. The phase composition and compact microstructure of fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performance as well as mechanical and tribological properties conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestows on the slab a certain degree of flexibility, which is emphasized in ceramic-fiberglass composites. These outstanding performances make the large-sized slabs suitable for novel applications: building and construction (new floorings without dismantling the previous paving, ventilated facades, tunnel coverings, insulating panelling), indoor furniture (table tops, doors), and supports for photovoltaic ceramic panels. (Author) 24 refs.

  17. Inverse problem to constrain the controlling parameters of large-scale heat transport processes: The Tiberias Basin example

    Science.gov (United States)

    Goretzki, Nora; Inbar, Nimrod; Siebert, Christian; Möller, Peter; Rosenthal, Eliyahu; Schneider, Michael; Magri, Fabien

    2015-04-01

    Salty and thermal springs exist along the lakeshore of the Sea of Galilee, which covers most of the Tiberias Basin (TB) in the northern Jordan-Dead Sea Transform, Israel/Jordan. As it is the only freshwater reservoir of the entire area, it is important to study the salinisation processes that pollute the lake. Simulations of thermohaline flow along a 35 km NW-SE profile show that meteoric and relic brines are flushed by the regional flow from the surrounding heights and by thermally induced groundwater flow within the faults (Magri et al., 2015). Several model runs with trial and error were necessary to calibrate the hydraulic conductivity of both faults and major aquifers in order to fit temperature logs and spring salinity. It turned out that the hydraulic conductivity of the faults ranges between 30 and 140 m/yr whereas the hydraulic conductivity of the Upper Cenomanian aquifer is as high as 200 m/yr. However, large-scale transport processes also depend on other physical parameters such as thermal conductivity, porosity and the fluid thermal expansion coefficient, which are hardly known. Here, inverse problems (IP) are solved along the NW-SE profile to better constrain the physical parameters (a) hydraulic conductivity, (b) thermal conductivity and (c) thermal expansion coefficient. The PEST code (Doherty, 2010) is applied via the graphical interface FePEST in FEFLOW (Diersch, 2014). The results show that both thermal and hydraulic conductivity are consistent with the values determined by the trial and error calibrations. Besides being an automatic approach that speeds up the calibration process, the IP covers a wide range of parameter values, providing additional solutions not found with the trial and error method. Our study shows that geothermal systems like TB are more comprehensively understood when inverse models are applied to constrain coupled fluid flow processes over large spatial scales. References: Diersch, H.-J.G., 2014. FEFLOW Finite
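
    The inverse-problem idea above can be illustrated with a generic least-squares calibration loop. The sketch below is hypothetical: the forward model, parameter names and observation values are invented stand-ins; a real study would call the groundwater simulator (e.g. FEFLOW via PEST) rather than this toy function.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(params, depth):
    """Hypothetical forward model: predicted temperature (deg C) at a given
    depth (m) as a function of hydraulic conductivity K (m/yr) and thermal
    conductivity lam (W/m/K). Stand-in physics for illustration only."""
    K, lam = params
    return 15.0 + 0.03 * depth / lam + 0.01 * K

depth = np.array([100.0, 300.0, 600.0])
t_obs = np.array([20.1, 27.9, 41.8])  # invented "temperature log" observations

def residuals(params):
    return forward(params, depth) - t_obs

# Bounded least-squares fit, analogous in spirit to an automated PEST run
fit = least_squares(residuals, x0=[100.0, 2.0], bounds=([1.0, 0.5], [500.0, 5.0]))
print("estimated K, lambda:", fit.x)
```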

  18. Large wood mobility processes in low-order Chilean river channels

    Science.gov (United States)

    Iroumé, Andrés; Mao, Luca; Andreoli, Andrea; Ulloa, Héctor; Ardiles, María Paz

    2015-01-01

    Large wood (LW) mobility was studied over several time periods in channel segments of four low-order mountain streams in southern Chile. All wood pieces found within the bankfull channels, and on the streambanks extending into the channel, with diameters greater than 10 cm and lengths greater than 1 m were measured and their positions referenced. Thirty-six percent of the measured wood pieces were tagged to investigate log mobility. All segments were first surveyed in summer and then after consecutive rainy winter periods. Annual LW mobility ranged between 0 and 28%. Eighty-four percent of the moved LW had diameters ≤ 40 cm and 92% had lengths ≤ 7 m. Large wood mobility was higher in periods when the maximum water level (Hmax) exceeded the channel bankfull depth (HBk) than in periods with flows less than HBk, but the difference was not statistically significant. Dimensions of moved LW showed no significant differences between periods with flows exceeding and with flows less than bankfull stage. Statistically significant relationships were found between annual LW mobility (%) and unit stream power (for Hmax) and Hmax/HBk. The mean diameter of transported wood pieces per period was significantly correlated with unit stream power for H15% and H50% (the level above which the flow remains for 15 and 50% of the time, respectively). These results contribute to an understanding of the complexity of LW mobilization processes in mountain streams and can be used to assess and prevent potential damage caused by LW mobilization during floods.
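
    Unit stream power, the predictor used above, has the standard definition omega = rho * g * Q * S / w; a minimal computation (the discharge, slope and width values are illustrative, not from the study):

```python
RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def unit_stream_power(Q, S, w):
    """Unit stream power (W/m^2) for discharge Q (m^3/s),
    energy slope S (dimensionless) and channel width w (m)."""
    return RHO * G * Q * S / w

# Illustrative numbers for a small mountain stream
print(f"{unit_stream_power(Q=8.0, S=0.04, w=6.0):.0f} W/m^2")  # ~523 W/m^2
```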

  19. Process automation system for integration and operation of Large Volume Plasma Device

    International Nuclear Information System (INIS)

    Sugandhi, R.; Srivastava, P.K.; Sanyasi, A.K.; Srivastav, Prabhakar; Awasthi, L.M.; Mattoo, S.K.

    2016-01-01

    Highlights: • Analysis and design of the process automation system for the Large Volume Plasma Device (LVPD). • Data flow modeling for process model development. • Modbus based data communication and interfacing. • Interface software development for subsystem control in LabVIEW. - Abstract: The Large Volume Plasma Device (LVPD) has been successfully contributing towards the understanding of plasma turbulence driven by the Electron Temperature Gradient (ETG), considered a major contributor to plasma loss in fusion devices. The large size of the device imposes certain difficulties in operation, such as access for diagnostics, manual control of subsystems and monitoring of a large number of signals. To achieve integrated operation of the machine, automation is essential for enhanced performance and operational efficiency. Recently, the machine has been undergoing a major upgrade for new physics experiments. The new operation and control system consists of the following: (1) a PXIe based fast data acquisition system for the equipped diagnostics; (2) a Modbus based Process Automation System (PAS) for the subsystem controls and (3) a Data Utilization System (DUS) for efficient storage, processing and retrieval of the acquired data. In the ongoing development, a data flow model of the machine's operation has been developed. As a proof of concept, the following two subsystems have been successfully integrated: (1) the Filament Power Supply (FPS) for heating the W-filament based plasma source and (2) the Probe Positioning System (PPS) for control of 12 linear probe drives over a travel length of 100 cm. The process model of the vacuum production system has been prepared and validated against acquired pressure data. In the next upgrade, all the subsystems of the machine will be integrated in a systematic manner. The automation backbone is based on a 4-wire multi-drop serial interface (RS485) using the Modbus communication protocol. Software is developed on the LabVIEW platform using
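
    The PAS described above communicates over Modbus on RS485; every Modbus RTU frame carries a trailing CRC-16. As a self-contained illustration of the wire format (not the project's LabVIEW code), the sketch below computes the standard Modbus CRC for a read-holding-registers request:

```python
def modbus_crc16(frame: bytes) -> bytes:
    """CRC-16/Modbus (reflected poly 0xA001, init 0xFFFF), little-endian on the wire."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc.to_bytes(2, "little")

# Read 2 holding registers from slave 0x01 starting at address 0x0000
pdu = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])
print((pdu + modbus_crc16(pdu)).hex())  # -> 010300000002c40b
```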

  20. Process automation system for integration and operation of Large Volume Plasma Device

    Energy Technology Data Exchange (ETDEWEB)

    Sugandhi, R., E-mail: ritesh@ipr.res.in; Srivastava, P.K.; Sanyasi, A.K.; Srivastav, Prabhakar; Awasthi, L.M.; Mattoo, S.K.

    2016-11-15

    Highlights: • Analysis and design of the process automation system for the Large Volume Plasma Device (LVPD). • Data flow modeling for process model development. • Modbus based data communication and interfacing. • Interface software development for subsystem control in LabVIEW. - Abstract: The Large Volume Plasma Device (LVPD) has been successfully contributing towards the understanding of plasma turbulence driven by the Electron Temperature Gradient (ETG), considered a major contributor to plasma loss in fusion devices. The large size of the device imposes certain difficulties in operation, such as access for diagnostics, manual control of subsystems and monitoring of a large number of signals. To achieve integrated operation of the machine, automation is essential for enhanced performance and operational efficiency. Recently, the machine has been undergoing a major upgrade for new physics experiments. The new operation and control system consists of the following: (1) a PXIe based fast data acquisition system for the equipped diagnostics; (2) a Modbus based Process Automation System (PAS) for the subsystem controls and (3) a Data Utilization System (DUS) for efficient storage, processing and retrieval of the acquired data. In the ongoing development, a data flow model of the machine's operation has been developed. As a proof of concept, the following two subsystems have been successfully integrated: (1) the Filament Power Supply (FPS) for heating the W-filament based plasma source and (2) the Probe Positioning System (PPS) for control of 12 linear probe drives over a travel length of 100 cm. The process model of the vacuum production system has been prepared and validated against acquired pressure data. In the next upgrade, all the subsystems of the machine will be integrated in a systematic manner. The automation backbone is based on a 4-wire multi-drop serial interface (RS485) using the Modbus communication protocol. Software is developed on the LabVIEW platform using

  1. A mesh density study for application to large deformation rolling process evaluation

    International Nuclear Information System (INIS)

    Martin, J.A.

    1997-12-01

    When addressing large deformation through an elastic-plastic analysis, the mesh density is paramount in determining the accuracy of the solution. However, given the nonlinear nature of the problem, a highly-refined mesh will generally require a prohibitive amount of computer resources. This paper addresses finite element mesh optimization studies, considering accuracy of results and computer resource needs as applied to large deformation rolling processes. In particular, the simulation of the thread rolling manufacturing process is considered using the MARC software package and a Cray C90 supercomputer. The effects of both mesh density and adaptive meshing on final results are evaluated for both indentation of a rigid body to a specified depth and contact rolling along a predetermined length
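
    A common way to quantify how mesh density affects accuracy is a grid convergence study. The sketch below (with made-up stress values, not the paper's data) estimates the observed order of convergence and a Richardson-extrapolated result from three uniformly refined meshes:

```python
import math

# Peak contact stress (MPa) from three meshes, each refined by factor r = 2
# (illustrative values only)
f_coarse, f_medium, f_fine = 412.0, 445.0, 453.0
r = 2.0

# Observed order of convergence and Richardson-extrapolated estimate
p = math.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / math.log(r)
f_exact = f_fine + (f_fine - f_medium) / (r**p - 1)
print(f"observed order p = {p:.2f}, extrapolated value = {f_exact:.1f} MPa")
```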

  2. Deal or No Deal? Decision Making under Risk in a Large-Stake TV Game Show and Related Experiments

    NARCIS (Netherlands)

    M.J. van den Assem (Martijn)

    2008-01-01

    The central theme of this dissertation is the analysis of risky choice. The first two chapters analyze the choice behavior of contestants in a TV game show named “Deal or No Deal” (DOND). DOND provides a unique opportunity to study risk behavior, because it is characterized by very large

  3. Large Scale Gaussian Processes for Atmospheric Parameter Retrieval and Cloud Screening

    Science.gov (United States)

    Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.

    2017-12-01

    Current Earth-observation (EO) applications for image classification have to deal with an unprecedentedly large amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the super-spectral Copernicus Sentinels, EnMAP and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like Worldview-3 also pose big challenges to data processing. The challenge is not only attached to optical sensors but also to infrared sounders and radar images, which have increased in spectral, spatial and temporal resolution. Besides, we should not forget the availability of the extremely large remote sensing data archives already collected by several past missions, such as ENVISAT, Cosmo-SkyMED, Landsat, SPOT, or Seviri/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large scale kernel methods for both atmospheric parameter retrieval and cloud detection, using infrared sounding IASI data and optical Seviri/MSG imagery. We propose novel Gaussian Process (GP) methods to train on problems with millions of instances and a high number of input features. The algorithms cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and Seviri/MSG, and engineered randomized kernel functions and emulation in temperature, moisture and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. Excellent compromises between accuracy and scalability are obtained in all applications.
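
    The random Fourier features mentioned above replace an exact kernel with an explicit low-dimensional feature map, so kernel/GP regression scales linearly in the number of samples. A minimal sketch of the idea (generic RBF-kernel approximation on synthetic data, not the authors' retrieval code):

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(X, n_features=500, gamma=0.5):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2): sample W ~ N(0, 2*gamma)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Toy retrieval problem: ridge regression in the feature space, O(n) in samples
X = rng.normal(size=(10000, 8))                      # e.g. radiance channels
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=10000)   # e.g. temperature proxy
Z = rff(X)
w = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(Z.shape[1]), Z.T @ y)
print("train RMSE:", np.sqrt(np.mean((Z @ w - y) ** 2)))
```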

  4. Large-scale methanol plants. [Based on Japanese-developed process

    Energy Technology Data Exchange (ETDEWEB)

    Tado, Y

    1978-02-01

    A study was made on how to produce methanol economically, as methanol is expected to be a growth item for use as a material for pollution-free energy or for chemical use. The study centered on the following subjects: (1) improvement of thermal economy, (2) improvement of the process, and (3) hardware problems attending the expansion of scale. The results of this study have already been adopted in actual plants with good results, and large-scale methanol plants are about to be realized.

  5. Dyslexic Participants Show Intact Spontaneous Categorization Processes

    Science.gov (United States)

    Nikolopoulos, Dimitris S.; Pothos, Emmanuel M.

    2009-01-01

    We examine the performance of dyslexic participants on an unsupervised categorization task against that of matched non-dyslexic control participants. Unsupervised categorization is a cognitive process critical for conceptual development. Existing research in dyslexia has emphasized perceptual tasks and supervised categorization tasks (for which…

  6. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing trend of computational requirements in JAERI, introduction of a high-speed computer with vector processing capability (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes to the pipelined-vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed with respect to large-scale nuclear codes by examining the following items: 1) The present feature of computational load in JAERI is analyzed by compiling the computer utilization statistics. 2) Vector processing efficiency is estimated for the ten heavily-used nuclear codes by analyzing their dynamic behaviors run on a scalar machine. 3) Vector processing efficiency is measured for the other five nuclear codes by using the current vector processors, FACOM 230-75 APU and CRAY-1. 4) Effectiveness of applying a high-speed vector processor to nuclear codes is evaluated by taking account of the characteristics of JAERI jobs. Problems of vector processors are also discussed from the viewpoints of code performance and ease of use. (author)

  7. Dehydrogenation in large ingot casting process

    International Nuclear Information System (INIS)

    Ubukata, Takashi; Suzuki, Tadashi; Ueda, Sou; Shibata, Takashi

    2009-01-01

    Forging components for nuclear power plants have become larger and larger because, from a safety point of view, the number of weld lines must be decreased. Consequently, they have been manufactured from ingots weighing 200 tons or more. Dehydrogenation is one of the key issues in the large ingot manufacturing process. In the case of ingots of 200 tons or heavier, mold stream degassing (MSD) has been applied for dehydrogenation. Although JSW had developed mold stream degassing by argon (MSD-Ar) as a more effective dehydrogenating practice, MSD-Ar was not applied to these ingots, because conventional refractory materials of the stopper rod for the Ar blowing hole had low durability. In this study, we have developed a new type of stopper rod through modification of both the refractory materials and the stopper rod construction, and have successfully expanded the application range of MSD-Ar up to ingots weighing 330 tons. Compared with conventional MSD, the hydrogen content in ingots after MSD-Ar has decreased by 24 percent, as the dehydrogenation rate of MSD-Ar increased by 34 percent. (author)

  8. Controlled elaboration of large-area plasmonic substrates by plasma process

    International Nuclear Information System (INIS)

    Pugliara, A; Despax, B; Makasheva, K; Bonafos, C; Carles, R

    2015-01-01

    Elaboration of large-area, efficient plasmonic substrates in a controlled way is achieved by combining sputtering of silver nanoparticles (AgNPs) and plasma polymerization of the embedding dielectric matrix in an axially asymmetric, capacitively coupled RF discharge maintained at low gas pressure. The plasma parameters and deposition conditions were optimized according to the optical response of these substrates. Structural and optical characterizations of the samples confirm the efficiency of the process. The obtained results indicate that to deposit a single layer of large and closely spaced AgNPs, a high injected power and short sputtering times must be favoured. The plasma-elaborated plasmonic substrates appear to be very sensitive to any stimuli that affect their plasmonic response. (paper)

  9. Extraterrestrial processing and manufacturing of large space systems. Volume 3: Executive summary

    Science.gov (United States)

    Miller, R. H.; Smith, D. B. S.

    1979-01-01

    Facilities and equipment are defined for refining to commercial grade the lunar material that is delivered to a 'space manufacturing facility' in beneficiated, primary-processed quality. The manufacturing facilities and the equipment for producing elements of large space systems from these materials are also defined, and programmatic assessments of the concepts are provided. In-space production processes for solar cells (by vapor deposition) and arrays, structures and joints, conduits, waveguides, RF equipment radiators, wire cables, converters, and others are described.

  10. A methodology for fault diagnosis in large chemical processes and an application to a multistage flash desalination process: Part II

    International Nuclear Information System (INIS)

    Tarifa, Enrique E.; Scenna, Nicolas J.

    1998-01-01

    In Part I, an efficient method for identifying faults in large processes was presented. The whole plant is divided into sectors by using structural, functional, or causal decomposition. A signed directed graph (SDG) is the model used for each sector. The SDG represents interactions among process variables. This qualitative model is used to carry out qualitative simulation for all possible faults. The output of this step is information about the process behaviour. This information is used to build rules. When a symptom is detected in one sector, its rules are evaluated using on-line data and fuzzy logic to yield the diagnosis. In this paper the proposed methodology is applied to a multiple stage flash (MSF) desalination process. This process is composed of sequential flash chambers. It was designed for a pilot plant that produces drinkable water for a community in Argentina; that is, it is a real case. Due to the large number of variables, recycles, phase changes, etc., this process is a good challenge for the proposed diagnosis method
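
    To make the SDG-plus-rules idea concrete, here is a minimal, hypothetical sketch (the process variables, edge signs and fault are invented, not taken from the MSF plant model): a fault hypothesis is propagated through signed edges to predict the sign of each symptom, and the predicted pattern could then be matched against on-line data with fuzzy logic.

```python
# Minimal signed-directed-graph (SDG) propagation. Edges carry +1/-1 gains;
# a fault hypothesis predicts the sign (+1 / -1) of each downstream variable.
EDGES = {  # hypothetical MSF-like relations: (source, target) -> sign
    ("steam_flow", "brine_temp"): +1,
    ("brine_temp", "flash_rate"): +1,
    ("flash_rate", "distillate_flow"): +1,
    ("fouling", "brine_temp"): -1,
}

def predict_signs(fault, sign=+1):
    """Propagate the fault's sign through the signed edges (graph traversal)."""
    pred, frontier = {fault: sign}, [fault]
    while frontier:
        node = frontier.pop()
        for (src, dst), gain in EDGES.items():
            if src == node and dst not in pred:
                pred[dst] = pred[node] * gain
                frontier.append(dst)
    return pred

# A 'fouling' fault predicts lower brine temperature and lower distillate flow
print(predict_signs("fouling"))
```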

  11. Medical students perceive better group learning processes when large classes are made to seem small.

    Science.gov (United States)

    Hommes, Juliette; Arah, Onyebuchi A; de Grave, Willem; Schuwirth, Lambert W T; Scherpbier, Albert J J A; Bos, Gerard M J

    2014-01-01

    Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups due to students being unfamiliar with fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n=50) as the intervention groups; a control group (n=102) was mixed with the remaining students (the non-randomised group, n∼100) to create one large subset. The setting was the undergraduate curriculum of the Maastricht Medical School, which applies the Problem-Based Learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6-10 weeks. The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset, who hardly enrolled with the same students in formal activities. Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context, and perceptions of the intervention. Formal group learning processes were perceived more positively in the intervention groups from the second study year on, with a mean increase of β=0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention from the first week of the medical curriculum (E-I indexes>-0.69). Interviews tapped mainly positive effects and negligible negative side effects of the intervention. Better group learning processes can be achieved in large medical schools by making large classes seem small.
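
    The E-I indexes quoted above are Krackhardt's external-internal index of group ties, which runs from -1 (all ties inside the own subset) to +1 (all ties outside). A one-line computation makes the scale concrete; the tie counts below are invented for illustration:

```python
def ei_index(external, internal):
    """Krackhardt E-I index: (E - I) / (E + I); -1 = fully internal, +1 = fully external."""
    return (external - internal) / (external + internal)

# Illustrative counts of informal study contacts for one student
print(f"{ei_index(external=4, internal=22):.2f}")  # -0.69, i.e. strongly within-subset
```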

  12. Large 3D resistivity and induced polarization acquisition using the Fullwaver system: towards an adapted processing methodology

    Science.gov (United States)

    Truffert, Catherine; Leite, Orlando; Gance, Julien; Texier, Benoît; Bernard, Jean

    2017-04-01

    Driven by needs in the mineral exploration market for ever faster and easier set-up of large 3D resistivity and induced polarization surveys, autonomous, cableless recording systems have come to the forefront. In contrast to traditional centralized acquisition, this new system permits a completely random distribution of receivers over the survey area, allowing real 3D imaging. This work presents the results of a 3 km² experiment, reaching depths of 600 m, performed with a new type of autonomous distributed receiver: the I&V-Fullwaver. With such a system, the usual drawbacks of laying long cables over large 3D areas - time consumption, lack of accessibility, heavy weight, electromagnetic induction, etc. - disappear. The V-Fullwavers record the entire time series of voltage on two perpendicular axes, for a good determination of data quality, while the I-Fullwaver simultaneously records the injected current. For this survey, despite good assessment of each individual signal quality on each channel of the set of Fullwaver systems, a significant number of negative apparent resistivities and chargeabilities remain in the dataset (around 15%). These values are commonly not taken into account by inversion software, although they may be due to complex geological structures of interest (e.g. linked to the presence of sulfides in the earth). Given that such a distributed recording system aims to produce the best 3D resistivity and IP tomography, how can the 3D inversion be improved? In this work, we present the dataset, the processing chain and the quality control of a large 3D survey. We show that the quality of the selected data is good enough to include it in the inversion processing. We propose a second way of processing, based on the modulus of the apparent resistivity, that stabilizes the inversion. We then discuss the results of both processing approaches. We conclude that an effort could be made on the inclusion of negative apparent resistivities in the inversion.

  13. PLANNING QUALITY ASSURANCE PROCESSES IN A LARGE SCALE GEOGRAPHICALLY SPREAD HYBRID SOFTWARE DEVELOPMENT PROJECT

    Directory of Open Access Journals (Sweden)

    Святослав Аркадійович МУРАВЕЦЬКИЙ

    2016-02-01

    Key points of operational activities in large-scale, geographically spread software development projects are discussed, and the required structure of QA processes in such projects is examined. Up-to-date methods of integrating quality assurance processes into software development processes are given. Existing groups of software development methodologies are reviewed: sequential, agile, and PRINCE2-based. A condensed overview of the quality assurance processes in each group is given, together with a review of common challenges that sequential and agile models face in large, geographically spread hybrid software development projects. Recommendations are given for tackling those challenges. Conclusions are drawn about choosing the best methodology and applying it to a particular project.

  14. Large break frequency for the SRS (Savannah River Site) production reactor process water system

    International Nuclear Information System (INIS)

    Daugherty, W.L.; Awadalla, N.G.; Sindelar, R.L.; Bush, S.H.

    1989-01-01

    The objective of this paper is to present the results and conclusions of an evaluation of the large break frequency for the process water system (primary coolant system), including the piping, reactor tank, heat exchangers, expansion joints and other process water system components. This evaluation was performed to support the ongoing PRA effort and to complement deterministic analyses addressing the credibility of a double-ended guillotine break. This evaluation encompasses three specific areas: the failure probability of large process water piping directly from imposed loads, the indirect failure probability of piping caused by the seismic-induced failure of surrounding structures, and the failure of all other process water components. The first two of these areas are discussed in detail in other papers. This paper primarily addresses the failure frequency of components other than piping, and includes the other two areas as contributions to the overall process water system break frequency

  15. Visualization of the Flux Rope Generation Process Using Large Quantities of MHD Simulation Data

    Directory of Open Access Journals (Sweden)

    Y Kubota

    2013-03-01

    We present a new concept of analysis using visualization of large quantities of simulation data. The time development of 3D objects with high temporal resolution provides the opportunity for scientific discovery. We visualize large quantities of simulation data using the visualization application 'Virtual Aurora', based on AVS (Advanced Visual Systems), and parallel distributed processing on the "Space Weather Cloud" at NICT, based on Gfarm technology. We introduce two results of high temporal resolution visualization: the magnetic flux rope generation process and dayside reconnection, using a system of magnetic field line tracing.
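
    Magnetic field line tracing, the core of such visualization, integrates dx/ds = B/|B| from a seed point. The sketch below does this with RK4 under an assumed dipole field; it is illustrative only, not the Virtual Aurora implementation.

```python
import numpy as np

def dipole_B(r, m=np.array([0.0, 0.0, -1.0])):
    """Magnetic dipole field (unnormalized units)."""
    rn = np.linalg.norm(r)
    return 3.0 * r * np.dot(m, r) / rn**5 - m / rn**3

def trace_field_line(r0, step=0.01, n_steps=5000):
    """RK4 integration of dx/ds = B/|B| from seed point r0."""
    def unit_B(r):
        B = dipole_B(r)
        return B / np.linalg.norm(B)
    pts = [np.asarray(r0, float)]
    for _ in range(n_steps):
        r = pts[-1]
        k1 = unit_B(r)
        k2 = unit_B(r + 0.5 * step * k1)
        k3 = unit_B(r + 0.5 * step * k2)
        k4 = unit_B(r + step * k3)
        pts.append(r + step / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(pts)

line = trace_field_line([1.5, 0.0, 0.0])
print(line.shape)  # (5001, 3): a polyline ready for 3D rendering
```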

  16. Large sample hydrology in NZ: Spatial organisation in process diagnostics

    Science.gov (United States)

    McMillan, H. K.; Woods, R. A.; Clark, M. P.

    2013-12-01

    A key question in hydrology is how to predict the dominant runoff generation processes in any given catchment. This knowledge is vital for a range of applications in forecasting hydrological response and related processes such as nutrient and sediment transport. A step towards this goal is to map dominant processes in locations where data are available. In this presentation, we use data from 900 flow gauging stations and 680 rain gauges in New Zealand to assess hydrological processes. These catchments range in character from rolling pasture, to alluvial plains, to temperate rainforest, to volcanic areas. By taking advantage of so many flow regimes, we harness the benefits of large-sample and comparative hydrology to study patterns and spatial organisation in runoff processes, and their relationship to physical catchment characteristics. The approach we use to assess hydrological processes is based on the concept of diagnostic signatures. Diagnostic signatures in hydrology are targeted analyses of measured data which allow us to investigate specific aspects of catchment response. We apply signatures which target the water balance, the flood response and the recession behaviour. We explore the organisation, similarity and diversity in hydrological processes across the New Zealand landscape, and how these patterns change with scale. We discuss our findings in the context of the strong hydro-climatic gradients in New Zealand, and consider the implications for hydrological model building on a national scale.
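
    Recession behaviour, one of the signature families named above, is often summarized by fitting an exponential recession Q(t) = Q0 * exp(-t/tau) to falling-limb flows; a minimal sketch with synthetic data (not the New Zealand gauging records):

```python
import numpy as np

def recession_constant(q, dt=1.0):
    """Fit Q(t) = Q0 * exp(-t/tau) to a recession limb by linear regression on log Q."""
    t = np.arange(len(q)) * dt
    slope, _ = np.polyfit(t, np.log(q), 1)
    return -1.0 / slope  # tau, in units of dt

# Synthetic 10-day recession limb (tau = 5 days plus small noise)
rng = np.random.default_rng(1)
q = 20.0 * np.exp(-np.arange(10) / 5.0) * (1 + 0.02 * rng.normal(size=10))
print(f"tau = {recession_constant(q):.1f} days")
```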

  17. Simulation research on the process of large scale ship plane segmentation intelligent workshop

    Science.gov (United States)

    Xu, Peng; Liao, Liangchuang; Zhou, Chao; Xue, Rui; Fu, Wei

    2017-04-01

    The large-scale ship plane segmentation intelligent workshop is new, and no related research has been reported in China or abroad. The mode of production must be transformed from the existing Industry 2.0 (or partial Industry 3.0) level, that is, from "human brain analysis and judgment + machine manufacturing" to "machine analysis and judgment + machine manufacturing". In this transformation, a great number of management and technology tasks must be settled, such as workshop structure evolution, development of intelligent equipment, and changes in the business model; together these amount to a reformation of the whole workshop. Process simulation in this project verifies the general layout and process flow of the large-scale ship plane segmentation intelligent workshop and analyzes its working efficiency, which is significant for the next step of the transformation.

  18. Medical Students Perceive Better Group Learning Processes when Large Classes Are Made to Seem Small

    Science.gov (United States)

    Hommes, Juliette; Arah, Onyebuchi A.; de Grave, Willem; Schuwirth, Lambert W. T.; Scherpbier, Albert J. J. A.; Bos, Gerard M. J.

    2014-01-01

    Objective Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups due to students being unfamiliar with fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. Design A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n = 50) as the intervention groups; a control group (n = 102) was mixed with the remaining students (the non-randomised group, n∼100) to create one large subset. Setting The undergraduate curriculum of the Maastricht Medical School, applying the Problem-Based Learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6–10 weeks. Intervention The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset, who hardly enrolled with the same students in formal activities. Main Outcome Measures Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context, and perceptions of the intervention. Results Formal group learning processes were perceived more positively in the intervention groups from the second study year on, with a mean increase of β = 0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention from the first week of the medical curriculum (E-I indexes>−0.69). Interviews tapped mainly positive effects and negligible negative side effects of the intervention. Conclusion Better group learning processes can be

  19. Large-Scale No-Show Patterns and Distributions for Clinic Operational Research

    Directory of Open Access Journals (Sweden)

    Michael L. Davies

    2016-02-01

    Patient no-shows for scheduled primary care appointments are common. Unused appointment slots reduce patient quality of care, access to services and provider productivity while increasing loss to follow-up and medical costs. This paper describes patterns of no-show variation by patient age, gender, appointment age, and type of appointment request for six individual service lines in the United States Veterans Health Administration (VHA). This retrospective observational descriptive project examined 25,050,479 VHA appointments contained in individual-level records for eight years (FY07-FY14) for 555,183 patients. Multifactor analysis of variance (ANOVA) was performed, with no-show rate as the dependent variable, and gender, age group, appointment age, new patient status, and service line as factors. The analyses revealed that males had higher no-show rates than females up to age 65, at which point males and females exhibited similar rates. The average no-show rates decreased with age until 75–79, whereupon rates increased. As appointment age increased, males and new patients had increasing no-show rates. Younger patients are especially prone to no-show as appointment age increases. These findings provide novel information to healthcare practitioners and management scientists to more accurately characterize no-show and attendance rates and the impact of certain patient factors. Future general population data could determine whether findings from VHA data generalize to others.

  20. Large-Scale No-Show Patterns and Distributions for Clinic Operational Research.

    Science.gov (United States)

    Davies, Michael L; Goffman, Rachel M; May, Jerrold H; Monte, Robert J; Rodriguez, Keri L; Tjader, Youxu C; Vargas, Dominic L

    2016-02-16

    Patient no-shows for scheduled primary care appointments are common. Unused appointment slots reduce patient quality of care, access to services and provider productivity while increasing loss to follow-up and medical costs. This paper describes patterns of no-show variation by patient age, gender, appointment age, and type of appointment request for six individual service lines in the United States Veterans Health Administration (VHA). This retrospective observational descriptive project examined 25,050,479 VHA appointments contained in individual-level records for eight years (FY07-FY14) for 555,183 patients. Multifactor analysis of variance (ANOVA) was performed, with no-show rate as the dependent variable, and gender, age group, appointment age, new patient status, and service line as factors. The analyses revealed that males had higher no-show rates than females up to age 65, at which point males and females exhibited similar rates. The average no-show rates decreased with age until 75-79, whereupon rates increased. As appointment age increased, males and new patients had increasing no-show rates. Younger patients are especially prone to no-show as appointment age increases. These findings provide novel information to healthcare practitioners and management scientists to more accurately characterize no-show and attendance rates and the impact of certain patient factors. Future general population data could determine whether findings from VHA data generalize to others.

  1. Large-scale functional networks connect differently for processing words and symbol strings.

    Science.gov (United States)

    Liljeström, Mia; Vartiainen, Johanna; Kujala, Jan; Salmelin, Riitta

    2018-01-01

    Reconfigurations of synchronized large-scale networks are thought to be central neural mechanisms that support cognition and behavior in the human brain. Magnetoencephalography (MEG) recordings together with recent advances in network analysis now allow for sub-second snapshots of such networks. In the present study, we compared frequency-resolved functional connectivity patterns underlying reading of single words and visual recognition of symbol strings. Word reading emphasized coherence in a left-lateralized network with nodes in classical perisylvian language regions, whereas symbol processing recruited a bilateral network, including connections between frontal and parietal regions previously associated with spatial attention and visual working memory. Our results illustrate the flexible nature of functional networks, whereby processing of different form categories, written words vs. symbol strings, leads to the formation of large-scale functional networks that operate at distinct oscillatory frequencies and incorporate task-relevant regions. These results suggest that category-specific processing should be viewed not so much as a local process but as a distributed neural process implemented in signature networks. For words, increased coherence was detected particularly in the alpha (8-13 Hz) and high gamma (60-90 Hz) frequency bands, whereas increased coherence for symbol strings was observed in the high beta (21-29 Hz) and low gamma (30-45 Hz) frequency range. These findings attest to the role of coherence in specific frequency bands as a general mechanism for integrating stimulus-dependent information across brain regions.
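
    Frequency-resolved coherence of the kind reported above can be illustrated with scipy. The sketch below computes band-averaged coherence between two synthetic signals sharing an alpha-band component; the data are toy time series, not MEG source estimates.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000.0                      # sampling rate (Hz), typical for MEG
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)

# Two synthetic 'region' signals sharing a 10 Hz (alpha) component
shared = np.sin(2 * np.pi * 10 * t)
x = shared + rng.normal(size=t.size)
y = 0.7 * shared + rng.normal(size=t.size)

# Welch-based magnitude-squared coherence across frequencies
f, Cxy = coherence(x, y, fs=fs, nperseg=1024)

# Average within bands, e.g. alpha (8-13 Hz) vs. low gamma (30-45 Hz)
for name, lo, hi in [("alpha", 8, 13), ("low gamma", 30, 45)]:
    band = (f >= lo) & (f <= hi)
    print(f"{name}: mean coherence = {Cxy[band].mean():.2f}")
```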

  2. A Proactive Complex Event Processing Method for Large-Scale Transportation Internet of Things

    OpenAIRE

    Wang, Yongheng; Cao, Kening

    2014-01-01

    The Internet of Things (IoT) provides a new way to improve the transportation system. The key issue is how to process the numerous events generated by IoT. In this paper, a proactive complex event processing method is proposed for large-scale transportation IoT. Based on a multilayered adaptive dynamic Bayesian model, a Bayesian network structure learning algorithm using search-and-score is proposed to support accurate predictive analytics. A parallel Markov decision processes model is design...

  3. Processes with large Psub(T) in the quantum chromodynamics

    International Nuclear Information System (INIS)

    Slepchenko, L.A.

    1981-01-01

    The necessary data on deep inelastic processes and hard hadron collision processes, and their interpretation in QCD, are presented. The power-law fall-off of exclusive and inclusive cross sections at large transverse momenta, and the electromagnetic and inelastic form factors (structure functions) of hadrons, are discussed. Scaling violation is considered as a way of taking QCD effects into account. It is shown that at large transverse momenta, deep inelastic lepton-hadron scattering is represented as scattering off a compound system (the hadron) in the impulse approximation. Under a parton-model assumption, a hadron cross section calculated through a renormalized parton structure function is obtained. A proof of factorization in the leading logarithmic approximation of QCD is obtained by means of a quark-gluon diagram technique. The cross section of the hadron reaction in factorized form, analogous to lepton-hadron scattering, is calculated. It is shown that (a) summing the diagrams with gluon emission generates scaling violation in the renormalized structure functions (SF) of quarks and gluons, and a running coupling constant arises simultaneously; and (b) the character of the Bjorken scaling violation of the SF is the same as in deep inelastic lepton scattering. QCD problems that cannot be solved within the framework of perturbation theory are discussed. The evolution of the SF describing the bound state of a hadron and the hadron light cone are studied. Radiative corrections arising in two-loop and higher approximations are evaluated. QCD corrections to the point-like power asymptotics of processes at high energies and momentum transfers are studied using the example of inclusive production of quark and gluon jets. Quark counting rules for the anomalous dimensions of QCD are obtained. It is concluded that the considered limit of the inclusive cross sections is close to

  4. Possible implications of large scale radiation processing of food

    International Nuclear Information System (INIS)

    Zagorski, Z.P.

    1990-01-01

    Large scale irradiation has been discussed in terms of the share of processing cost in the final value of the improved product. Another factor has been taken into account, namely the saturation of the market with the new product. In the case of successful projects, the share of irradiation cost is low and the demand for the better product is covered. The limited availability of sources makes even a modest saturation of the market with correctly irradiated food difficult. The implementation of food preservation needs a decided selection of those kinds of food which comply with all conditions, i.e. acceptance by regulatory bodies, real improvement of quality, and economy. The last condition favours the use of low-energy electron beams. The best fulfilment of the conditions for successful processing is observed in the group of dry foods, expensive spices in particular. (author)

  5. Possible implications of large scale radiation processing of food

    Science.gov (United States)

    Zagórski, Z. P.

    Large scale irradiation has been discussed in terms of the share of processing cost in the final value of the improved product. Another factor has been taken into account, namely the saturation of the market with the new product. In the case of successful projects, the share of irradiation cost is low and the demand for the better product is covered. The limited availability of sources makes even a modest saturation of the market with correctly irradiated food difficult. The implementation of food preservation needs a decided selection of those kinds of food which comply with all conditions, i.e. acceptance by regulatory bodies, real improvement of quality, and economy. The last condition favours the use of low-energy electron beams. The best fulfilment of the conditions for successful processing is observed in the group of dry foods, expensive spices in particular.

  6. Large Eddy Simulation of Cryogenic Injection Processes at Supercritical Pressure

    Science.gov (United States)

    Oefelein, Joseph C.

    2002-01-01

    This paper highlights results from the first of a series of hierarchical simulations aimed at assessing the modeling requirements for application of the large eddy simulation technique to cryogenic injection and combustion processes in liquid rocket engines. The focus is on liquid-oxygen-hydrogen coaxial injectors at a condition where the liquid-oxygen is injected at a subcritical temperature into a supercritical environment. For this situation a diffusion dominated mode of combustion occurs in the presence of exceedingly large thermophysical property gradients. Though continuous, these gradients approach the behavior of a contact discontinuity. Significant real gas effects and transport anomalies coexist locally in colder regions of the flow, with ideal gas and transport characteristics occurring within the flame zone. The current focal point is on the interfacial region between the liquid-oxygen core and the coaxial hydrogen jet where the flame anchors itself.

  7. The key network communication technology in large radiation image cooperative process system

    International Nuclear Information System (INIS)

    Li Zheng; Kang Kejun; Gao Wenhuan; Wang Jingjin

    1998-01-01

    The large container inspection system (LCIS), based on radiation imaging technology, is a powerful tool for customs to check the contents inside a large container without opening it. A distributed image network system is composed of an operation manager station, an image acquisition station, an environment control station, an inspection processing station, a check-in station, a check-out station, and a database station, using advanced network technology. Mass data, such as container image data, general container information, manifest scanning data, commands and status, must be transferred on-line between the different stations. The advanced network communication technology involved is presented

  8. Response of deep and shallow tropical maritime cumuli to large-scale processes

    Science.gov (United States)

    Yanai, M.; Chu, J.-H.; Stark, T. E.; Nitta, T.

    1976-01-01

    The bulk diagnostic method of Yanai et al. (1973) and a simplified version of the spectral diagnostic method of Nitta (1975) are used for a more quantitative evaluation of the response of various types of cumuliform clouds to large-scale processes, using the same data set in the Marshall Islands area for a 100-day period in 1956. The dependence of the cloud mass flux distribution on radiative cooling, large-scale vertical motion, and evaporation from the sea is examined. It is shown that typical radiative cooling rates in the tropics tend to produce a bimodal distribution of mass spectrum exhibiting deep and shallow clouds. The bimodal distribution is further enhanced when the large-scale vertical motion is upward, and a nearly unimodal distribution of shallow clouds prevails when the radiative cooling is compensated by the heating due to the large-scale subsidence. Both deep and shallow clouds are modulated by large-scale disturbances. The primary role of surface evaporation is to maintain the moisture flux at the cloud base.

  9. Large scale production and downstream processing of a recombinant porcine parvovirus vaccine

    NARCIS (Netherlands)

    Maranga, L.; Rueda, P.; Antonis, A.F.G.; Vela, C.; Langeveld, J.P.M.; Casal, J.I.; Carrondo, M.J.T.

    2002-01-01

    Porcine parvovirus (PPV) virus-like particles (VLPs) constitute a potential vaccine for prevention of parvovirus-induced reproductive failure in gilts. Here we report the development of a large scale (25 l) production process for PPV-VLPs with baculovirus-infected insect cells. A low multiplicity of

  10. Large quantity production of carbon and boron nitride nanotubes by mechano-thermal process

    International Nuclear Information System (INIS)

    Chen, Y.; Fitzgerald, J.D.; Chadderton, L.; Williams, J.S.; Campbell, S.J.

    2002-01-01

    Nanotube materials, including carbon and boron nitride, have excellent properties compared with bulk materials. The seamless graphene cylinders with a high length-to-diameter ratio make them superstrong fibres, and large amounts of hydrogen can be stored in nanotubes as a future clean fuel source. These applications require nanotube materials in large quantity. However, nanotube production in large quantity, with fully controlled quality and at low cost, remains a challenge for the most popular synthesis methods such as arc discharge, laser heating, and catalytic chemical decomposition. The discovery of new synthesis methods therefore remains crucial for future industrial application. The new low-temperature mechano-thermal process discovered by the present authors provides an opportunity to develop a commercial method for bulk production. This mechano-thermal process consists of mechanical ball milling followed by thermal annealing. Using this method, both carbon and boron nitride nanotubes have been produced. The mechano-thermal method will be presented at the conference as a new bulk production technique, and the lecture will summarise the main results obtained. In the case of carbon nanotubes, different nanosized structures including multi-walled nanotubes, nanocells, and nanoparticles have been produced in a graphite sample using a mechano-thermal process consisting of mechanical milling at room temperature for up to 150 hours and subsequent thermal annealing at 1400 deg C. Metal particles play an important catalytic role in the formation of the different tubular structures, while the defect structure of the milled graphite appears to be responsible for the formation of small tubes. It is found that the mechanical treatment of graphite powder produces a disordered and microporous structure, which provides nucleation sites for nanotubes as well as free carbon atoms. Multi-walled carbon nanotubes appear to grow via growth of the (002) layers during thermal annealing. In the case of BN

  11. Design Methodology of Process Layout considering Various Equipment Types for Large scale Pyro processing Facility

    International Nuclear Information System (INIS)

    Yu, Seung Nam; Lee, Jong Kwang; Lee, Hyo Jik

    2016-01-01

    At present, each item of process equipment required for integrated processing is being examined, based on experience acquired during the Pyroprocess Integrated Inactive Demonstration Facility (PRIDE) project and considering the requirements and desired performance enhancement of KAPF as a new facility beyond PRIDE. Essentially, KAPF will be required to handle hazardous materials such as spent nuclear fuel, which must be processed in an isolated and shielded area separate from the operator location. Moreover, an inert-gas atmosphere must be maintained because of the radiation and deliquescence of the materials. KAPF must also achieve the goal of significantly increased yearly production beyond that of the previous facility; therefore, several parts of the production line must be automated. This article presents the method considered for the conceptual design of both the production line and the overall layout of the KAPF process equipment. This study proposes a design methodology that can be utilized as a preliminary step in the design of a hot-cell-type, large-scale facility in which the various types of processing equipment operated by the remote handling system are integrated. The proposed methodology covers only part of the overall design procedure and still has various limitations. However, if the designer is required to maximize the efficiency of the installed material-handling system while considering operation restrictions and maintenance conditions, this kind of design process can accommodate the essential components that must be employed simultaneously in a general hot-cell system

  12. Quality Improvement Process in a Large Intensive Care Unit: Structure and Outcomes.

    Science.gov (United States)

    Reddy, Anita J; Guzman, Jorge A

    2016-11-01

    Quality improvement in the health care setting is a complex process, and even more so in the critical care environment. The development of intensive care unit process measures and quality improvement strategies are associated with improved outcomes, but should be individualized to each medical center as structure and culture can differ from institution to institution. The purpose of this report is to describe the structure of quality improvement processes within a large medical intensive care unit while using examples of the study institution's successes and challenges in the areas of stat antibiotic administration, reduction in blood product waste, central line-associated bloodstream infections, and medication errors. © The Author(s) 2015.

  13. APD arrays and large-area APDs via a new planar process

    CERN Document Server

    Farrell, R; Vanderpuye, K; Grazioso, R; Myers, R; Entine, G

    2000-01-01

    A fabrication process has been developed which allows the beveled-edge type of avalanche photodiode (APD) to be made without the need for the artful bevel formation steps. This new process, applicable to both APD arrays and discrete detectors, greatly simplifies manufacture and should lead to significant cost reduction for such photodetectors. This is achieved through a simple innovation that allows isolation around the device or array pixel to be brought into the plane of the surface of the silicon wafer, hence a planar process. A description of the new process is presented along with performance data for a variety of APD device and array configurations. APD array pixel gains in excess of 10 000 have been measured. Array pixel coincidence timing resolution of less than 5 ns has been demonstrated. An energy resolution of 6% for 662 keV gamma-rays using a CsI(Tl) scintillator on a planar-processed large-area APD has been recorded. Discrete APDs with active areas up to 13 cm² have been operated.

  14. Analysis of reforming process of large distorted ring in final enlarging forging

    International Nuclear Information System (INIS)

    Miyazawa, Takeshi; Murai, Etsuo

    2002-01-01

    In the construction of reactors and pressure vessels for oil chemical plants and nuclear power stations, monoblock open-die forged rings are often utilized. Generally, a large forged ring is manufactured by means of enlarging forging with reductions of the wall thickness. During the enlarging process the circular ring is often distorted and becomes elliptical in shape, and controlling the shape of the ring is complicated work. The problem becomes still worse in the forging of larger rings. In order to achieve precision forging of large rings, we have developed a forging method using a v-shaped anvil. The v-shaped anvil is geometrically adjusted to fit the distorted ring to the final circle and automatically reforms the shape of the ring during enlarging forging. This paper analyzes the reforming process of the distorted ring with a computer program based on the finite element method (FEM) and examines its effect on the precision of ring forging. (author)

  15. High-energy, large-momentum-transfer processes: Ladder diagrams in φ³ theory

    International Nuclear Information System (INIS)

    Newton, C.L.J.

    1990-01-01

    Relativistic quantum field theories may help one to understand high-energy, large-momentum-transfer processes, where the center-of-mass energy is much larger than the transverse momentum transfers, which are in turn much larger than the masses of the participating particles. With this possibility in mind, the author studies ladder diagrams in φ³ theory. He shows that in the limit s ≫ |t| ≫ m², the scattering amplitude for the N-rung ladder diagram takes the form s^-1 |t|^-(N-1) times a homogeneous polynomial of degree 2N - 2 in ln s and ln |t|. This polynomial takes different forms depending on the relation of ln |t| to ln s. More precisely, the asymptotic formula for the N-rung ladder diagram has points of non-analyticity when ln |t| = γ ln s for γ = 1/2, 1/3, ..., 1/(N-2)
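
    In display form, the asymptotic behaviour quoted above reads as follows; this is a reconstruction from the abstract's own notation, with the amplitude symbol A_N introduced here only for readability:

```latex
A_N(s,t) \;\sim\; \frac{1}{s\,|t|^{N-1}}\; P_{2N-2}\!\left(\ln s,\ \ln|t|\right),
\qquad s \gg |t| \gg m^2 ,
```

    where P_{2N-2} is a homogeneous polynomial of degree 2N - 2 whose form changes across the rays ln|t| = γ ln s, γ = 1/2, 1/3, ..., 1/(N-2).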

  16. Hydrothermal processes above the Yellowstone magma chamber: Large hydrothermal systems and large hydrothermal explosions

    Science.gov (United States)

    Morgan, L.A.; Shanks, W.C. Pat; Pierce, K.L.

    2009-01-01

    Hydrothermal explosions are violent and dramatic events resulting in the rapid ejection of boiling water, steam, mud, and rock fragments from source craters that range from a few meters up to more than 2 km in diameter; associated breccia can be emplaced as much as 3 to 4 km from the largest craters. Hydrothermal explosions occur where shallow interconnected reservoirs of steam- and liquid-saturated fluids with temperatures at or near the boiling curve underlie thermal fields. Sudden reduction in confining pressure causes fluids to flash to steam, resulting in significant expansion, rock fragmentation, and debris ejection. In Yellowstone, hydrothermal explosions are a potentially significant hazard for visitors and facilities and can damage or even destroy thermal features. The breccia deposits and associated craters formed from hydrothermal explosions are mapped as mostly Holocene (the Mary Bay deposit is older) units throughout Yellowstone National Park (YNP) and are spatially related to the 0.64-Ma Yellowstone caldera and the active Norris-Mammoth tectonic corridor. In Yellowstone, at least 20 large (>100 m in diameter) hydrothermal explosion craters have been identified; the scale of the individual associated events dwarfs similar features in geothermal areas elsewhere in the world. Large hydrothermal explosions in Yellowstone have occurred over the past 16 ka, averaging ~1 every 700 yr; similar events are likely in the future. Our studies of large hydrothermal explosion events indicate: (1) none are directly associated with eruptive volcanic or shallow intrusive events; (2) several historical explosions have been triggered by seismic events; (3) lithic clasts and comingled matrix material that form hydrothermal explosion deposits are extensively altered, indicating that explosions occur in areas subjected to intense hydrothermal processes; (4) many lithic clasts contained in explosion breccia deposits preserve evidence of repeated fracturing

  17. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    Quite a few studies have been conducted in order to implement oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effective effort toward carbon capture and storage. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing …, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in the modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which … calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same …

  18. Effect of Heat Treatment Process on Mechanical Properties and Microstructure of a 9% Ni Steel for Large LNG Storage Tanks

    Science.gov (United States)

    Zhang, J. M.; Li, H.; Yang, F.; Chi, Q.; Ji, L. K.; Feng, Y. R.

    2013-12-01

    In this paper, two different heat treatment processes for a 9% Ni steel for large liquefied natural gas storage tanks were performed in an industrial heating furnace. The former was a special heat treatment process consisting of quenching, intercritical quenching, and tempering (Q-IQ-T); the latter consisted only of quenching and tempering (Q-T). Mechanical properties were measured by tensile testing and Charpy impact testing, and the microstructure was analyzed by optical microscopy, transmission electron microscopy, and x-ray diffraction. The results showed that outstanding mechanical properties were obtained from the Q-IQ-T process in comparison with the Q-T process, and a cryogenic toughness with a Charpy impact energy of 201 J was achieved at 77 K. Microstructure analysis revealed that samples from the Q-IQ-T process had about 9.8% austenite in needle-like martensite, while samples from the Q-T process had only about 0.9% austenite retained in tempered martensite.

  19. Progress in Root Cause and Fault Propagation Analysis of Large-Scale Industrial Processes

    Directory of Open Access Journals (Sweden)

    Fan Yang

    2012-01-01

    In large-scale industrial processes, a fault can easily propagate between process units due to the interconnections of material and information flows. Thus, fault detection and isolation for these processes is concerned first with the root cause and the fault propagation path, before quantitative methods are applied to local models. Process topology and causality, as the key features of the process description, need to be captured from process knowledge and process data. Modelling methods from these two aspects are reviewed in this paper. From process knowledge: structural equation modelling, various causal graphs, rule-based models, and ontological models are summarized. From process data: cross-correlation analysis, Granger causality and its extensions, frequency-domain methods, information-theoretic methods, and Bayesian nets are introduced. Based on these models, inference methods are discussed for finding root causes and fault propagation paths under abnormal situations. Directions for future work are proposed at the end.
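
    As a minimal illustration of one of the data-driven methods listed above, the Python sketch below applies a Granger causality test (via the statsmodels package) to a synthetic pair of process signals in which one drives the other with a two-sample delay. The signals, coefficients, and seed are invented for the example:

```python
# Toy Granger-causality check between two process variables.
# Column order for grangercausalitytests is [effect, candidate cause].
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)      # hypothetical upstream signal (e.g., feed flow)
y = np.zeros(n)             # hypothetical downstream signal (e.g., level)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 2] + 0.1 * rng.normal()

res = grangercausalitytests(np.column_stack([y, x]), maxlag=3, verbose=False)
for lag, (tests, _) in res.items():
    print(f"lag {lag}: p = {tests['ssr_ftest'][1]:.4g}")  # small p -> x Granger-causes y
```

    A small p-value at the true delay supports a directed edge from x to y in the causal graph; in a real plant such pairwise tests are only a building block and must be screened against common causes and feedback loops, which is exactly why the review combines them with topology models.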

  20. Engaging the public with low-carbon energy technologies: Results from a Scottish large group process

    International Nuclear Information System (INIS)

    Howell, Rhys; Shackley, Simon; Mabon, Leslie; Ashworth, Peta; Jeanneret, Talia

    2014-01-01

    This paper presents the results of a large group process conducted in Edinburgh, Scotland investigating public perceptions of climate change and low-carbon energy technologies, specifically carbon dioxide capture and storage (CCS). The quantitative and qualitative results reported show that the participants were broadly supportive of efforts to reduce carbon dioxide emissions, and that there is an expressed preference for renewable energy technologies to be employed to achieve this. CCS was considered in detail during the research due to its climate mitigation potential; results show that the workshop participants were cautious about its deployment. The paper discusses a number of interrelated factors which appear to influence perceptions of CCS; factors such as the perceived costs and benefits of the technology, and people's personal values and trust in others all impacted upon participants’ attitudes towards the technology. The paper thus argues for the need to provide the public with broad-based, balanced and trustworthy information when discussing CCS, and to take seriously the full range of factors that influence public perceptions of low-carbon technologies. - Highlights: • We report the results of a Scottish large group workshop on energy technologies. • There is strong public support for renewable energy and mixed opinions towards CCS. • The workshop was successful in initiating discussion around climate change and energy technologies. • Issues of trust, uncertainty, costs, benefits, values and emotions all inform public perceptions. • Need to take seriously the full range of factors that inform perceptions

  1. COPASutils: an R package for reading, processing, and visualizing data from COPAS large-particle flow cytometers.

    Directory of Open Access Journals (Sweden)

    Tyler C Shimko

    The R package COPASutils provides a logical workflow for the reading, processing, and visualization of data obtained from the Union Biometrica Complex Object Parametric Analyzer and Sorter (COPAS) or the BioSorter large-particle flow cytometers. Data obtained from these powerful experimental platforms can be unwieldy, leading to difficulties in processing and visualizing the data with existing tools. Researchers studying small organisms, such as Caenorhabditis elegans, Anopheles gambiae, and Danio rerio, and using these devices will benefit from this streamlined and extensible R package. COPASutils offers a powerful suite of functions for the rapid processing and analysis of large high-throughput screening data sets.

  2. Large deviations for the Fleming-Viot process with neutral mutation and selection

    OpenAIRE

    Dawson, Donald; Feng, Shui

    1998-01-01

    Large deviation principles are established for the Fleming-Viot processes with neutral mutation and selection, and the corresponding equilibrium measures as the sampling rate goes to 0. All results are first proved for the finite allele model, and then generalized, through the projective limit technique, to the infinite allele model. Explicit expressions are obtained for the rate functions.

  3. Chess masters show a hallmark of face processing with chess.

    Science.gov (United States)

    Boggan, Amy L; Bartlett, James C; Krawczyk, Daniel C

    2012-02-01

    Face processing has several distinctive hallmarks that researchers have attributed either to face-specific mechanisms or to extensive experience distinguishing faces. Here, we examined the face-processing hallmark of selective attention failure--as indexed by the congruency effect in the composite paradigm--in a domain of extreme expertise: chess. Among 27 experts, we found that the congruency effect was equally strong with chessboards and faces. Further, comparing these experts with recreational players and novices, we observed a trade-off: Chess expertise was positively related to the congruency effect with chess yet negatively related to the congruency effect with faces. These and other findings reveal a case of expertise-dependent, facelike processing of objects of expertise and suggest that face and expert-chess recognition share common processes.

  4. High-Temperature-Short-Time Annealing Process for High-Performance Large-Area Perovskite Solar Cells.

    Science.gov (United States)

    Kim, Minjin; Kim, Gi-Hwan; Oh, Kyoung Suk; Jo, Yimhyun; Yoon, Hyun; Kim, Ka-Hyun; Lee, Heon; Kim, Jin Young; Kim, Dong Suk

    2017-06-27

    Organic-inorganic hybrid metal halide perovskite solar cells (PSCs) are attracting tremendous research interest due to their high solar-to-electric power conversion efficiency, a high possibility of cost-effective fabrication, and a certified power conversion efficiency now exceeding 22%. Although many effective methods for their application have been developed over the past decade, their practical transition to large-size devices has been restricted by difficulties in achieving high performance. Here we report on the development of a simple and cost-effective production method with high-temperature, short-time annealing to obtain uniform, smooth, and large-grain domains of perovskite films over large areas. With high-temperature, short-time annealing at 400 °C for 4 s, fast solvent evaporation occurred and a perovskite film with an average domain size of 1 μm was obtained. Solar cells fabricated using this processing technique had a maximum power conversion efficiency exceeding 20% over a 0.1 cm² active area and 18% over a 1 cm² active area. We believe our approach will enable the realization of highly efficient large-area PSCs for practical development with a very simple and short-time procedure. This simple method should lead the field toward the fabrication of uniform large-scale perovskite films, which are necessary for the production of high-efficiency solar cells, and may also be applicable to several other material systems for more widespread practical deployment.

  5. Large-scale continuous process to vitrify nuclear defense waste: operating experience with nonradioactive waste

    International Nuclear Information System (INIS)

    Cosper, M.B.; Randall, C.T.; Traverso, G.M.

    1982-01-01

    The developmental program underway at SRL has demonstrated the vitrification process proposed for the sludge processing facility of the DWPF on a large scale. DWPF design criteria for production rate, equipment lifetime, and operability have all been met. The expected authorization and construction of the DWPF will result in the safe and permanent immobilization of a major quantity of existing high level waste. 11 figures, 4 tables

  6. Large deviations

    CERN Document Server

    Varadhan, S R S

    2016-01-01

    The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.

  7. Preparation by the nano-casting process of novel porous carbons from large pore zeolite templates

    International Nuclear Information System (INIS)

    F Gaslain; J Parmentier; V Valtchev; J Patarin; C Vix Guterl

    2005-01-01

    The development of new and growing industrial applications such as gas storage (e.g. methane or hydrogen) or electric double-layer capacitors has focussed the attention of many research groups. For this kind of application, porous carbons with finely tailored microporosity (i.e. pore diameter ≤ 1 nm) appear to be very promising materials due to their high surface area and their specific pore size distribution. In order to meet these requirements, attention has been paid to the feasibility of preparing microporous carbons by the nano-casting process. Since the sizes and shapes of the pores and walls respectively become the walls and pores of the resultant carbons, using templates with different framework topologies leads to various carbon replicas. Work performed with commercially available zeolites employed as templates [1-4] showed that the most promising candidate is the FAU-type zeolite, a large-pore zeolite with a three-dimensional channel system. The promising results obtained on FAU-type matrices encouraged us to study microporous carbon formation on large-pore zeolites synthesized in our laboratory, such as EMC-1 (International Zeolite Association framework type FAU), zeolite β (BEA) or EMC-2 (EMT). The carbon replicas were prepared largely following the nano-casting method proposed for zeolite Y by the Kyotani research group [4]: either by liquid impregnation of furfuryl alcohol (FA) followed by carbonization, by chemical vapour deposition (CVD) of propylene, or by a combination of these two processes. A heat treatment of the mixed (zeolite/carbon) materials could also follow in order to improve the structural ordering of the carbon. After removal of the inorganic template by an acidic treatment, the carbon materials obtained were characterised by several analytical techniques (XRD, N2 and CO2 adsorption, electron microscopy, etc.). The unique characteristics of these carbons are discussed in detail in this paper and compared to those

  8. Using value stream mapping technique through the lean production transformation process: An implementation in a large-scaled tractor company

    Directory of Open Access Journals (Sweden)

    Mehmet Rıza Adalı

    2017-04-01

    In today's world, manufacturing industries have to sustain their development and continuity in an increasingly competitive environment by decreasing their costs. The first step in the lean production transformation process is to analyze the value-adding and non-value-adding activities. This study aims at applying the concepts of Value Stream Mapping (VSM) in a large-scale tractor company in Sakarya. Waste and process times are identified by mapping the current state of the platform production line. A future state is suggested, with improvements for the elimination of waste and the reduction of lead time, which went from 13.08 to 4.35 days. Analyses of the current and future states support the suggested improvements, and the cycle time of the platform production line is improved by 8%. The results show that VSM is a good alternative for decision-making about change in the production process.

  9. Breeding and Genetics Symposium: really big data: processing and analysis of very large data sets.

    Science.gov (United States)

    Cole, J B; Newman, S; Foertter, F; Aguilar, I; Coffey, M

    2012-03-01

    Modern animal breeding data sets are large and getting larger, due in part to recent availability of high-density SNP arrays and cheap sequencing technology. High-performance computing methods for efficient data warehousing and analysis are under development. Financial and security considerations are important when using shared clusters. Sound software engineering practices are needed, and it is better to use existing solutions when possible. Storage requirements for genotypes are modest, although full-sequence data will require greater storage capacity. Storage requirements for intermediate and results files for genetic evaluations are much greater, particularly when multiple runs must be stored for research and validation studies. The greatest gains in accuracy from genomic selection have been realized for traits of low heritability, and there is increasing interest in new health and management traits. The collection of sufficient phenotypes to produce accurate evaluations may take many years, and high-reliability proofs for older bulls are needed to estimate marker effects. Data mining algorithms applied to large data sets may help identify unexpected relationships in the data, and improved visualization tools will provide insights. Genomic selection using large data requires a lot of computing power, particularly when large fractions of the population are genotyped. Theoretical improvements have made possible the inversion of large numerator relationship matrices, permitted the solving of large systems of equations, and produced fast algorithms for variance component estimation. Recent work shows that single-step approaches combining BLUP with a genomic relationship (G) matrix have similar computational requirements to traditional BLUP, and the limiting factor is the construction and inversion of G for many genotypes. A naïve algorithm for creating G for 14,000 individuals required almost 24 h to run, but custom libraries and parallel computing reduced that to
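
    The genomic relationship matrix G mentioned above can be illustrated with a short NumPy sketch. This is not the authors' code; it follows VanRaden's widely used first method, G = ZZ'/(2 Σ p_j(1 - p_j)), and all dimensions and values are invented:

```python
# Illustrative construction of a genomic relationship matrix G
# from a 0/1/2 SNP genotype matrix (VanRaden 2008, method 1).
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_snp = 200, 5000
p = rng.uniform(0.05, 0.95, n_snp)             # allele frequencies per SNP
M = rng.binomial(2, p, size=(n_ind, n_snp))    # genotype calls coded 0/1/2

Z = M - 2.0 * p                                # centre each SNP column by 2*p_j
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))    # n_ind x n_ind relationship matrix
print(G.shape, round(G.diagonal().mean(), 3))  # diagonal averages close to 1
```

    For the population sizes discussed in the abstract (tens of thousands of genotyped animals), the O(n²m) construction and O(n³) inversion of G are precisely the limiting steps the authors describe, which is what motivates custom libraries and parallel computing.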

  10. The vascular disrupting agent ZD6126 shows increased antitumor efficacy and enhanced radiation response in large, advanced tumors

    International Nuclear Information System (INIS)

    Siemann, Dietmar W.; Rojiani, Amyn M.

    2005-01-01

    Purpose: ZD6126 is a vascular-targeting agent that induces selective effects on the morphology of proliferating and immature endothelial cells by disrupting the tubulin cytoskeleton. The efficacy of ZD6126 was investigated in large vs. small tumors in a variety of animal models. Methods and Materials: Three rodent tumor models (KHT, SCCVII, RIF-1) and three human tumor xenografts (Caki-1, KSY-1, SKBR3) were used. Mice bearing leg tumors ranging in size from 0.1 to 2.0 g were injected intraperitoneally with a single 150 mg/kg dose of ZD6126. The response was assessed by morphologic and morphometric means as well as an in vivo to in vitro clonogenic cell survival assay. To examine the impact of tumor size on the extent of enhancement of radiation efficacy by ZD6126, KHT sarcomas of three different sizes were irradiated locally with a range of radiation doses, and cell survival was determined. Results: All rodent tumors and human tumor xenografts evaluated showed a strong correlation between increasing tumor size and treatment effect as determined by clonogenic cell survival. Detailed evaluation of KHT sarcomas treated with ZD6126 showed a reduction in patent tumor blood vessels of ~20% in small (<0.3 g) tumors and >90% in large (>1.0 g) tumors. Histologic assessment revealed that the extent of tumor necrosis after ZD6126 treatment, although minimal in small KHT sarcomas, became more extensive with increasing tumor size. Clonogenic cell survival after ZD6126 exposure showed a decrease in tumor surviving fraction from approximately 3 × 10^-1 to 1 × 10^-4 with increasing tumor size. When combined with radiotherapy, ZD6126 treatment resulted in little enhancement of the antitumor effect of radiation in small (<0.3 g) tumors but marked increases in cell kill in tumors larger than 1.0 g. Conclusions: Because bulky neoplastic disease is typically the most difficult to manage, the present findings provide further support for the continued development of vascular disrupting agents such as

  11. Accelerated decomposition techniques for large discounted Markov decision processes

    Science.gov (United States)

    Larach, Abdelhadi; Chafik, S.; Daoui, C.

    2017-12-01

    Many hierarchical techniques for solving large Markov decision processes (MDPs) are based on a partition of the state space into strongly connected components (SCCs) that can be classified into levels. In each level, smaller problems named restricted MDPs are solved, and these partial solutions are then combined to obtain the global solution. In this paper, we first propose a novel algorithm, a variant of Tarjan's algorithm, that simultaneously finds the SCCs and the levels to which they belong. Second, a new definition of the restricted MDPs is presented to improve some hierarchical solutions of discounted MDPs using a value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and experimental results are presented to illustrate the benefit of the proposed decomposition algorithms.
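
    The paper's contribution is a single-pass variant of Tarjan's algorithm; as a rough illustration under that caveat, the Python sketch below finds the SCCs with the standard recursive Tarjan procedure and then assigns levels on the condensation DAG (level = longest distance from a source component). All names are invented, and a real MDP state space would need an iterative version to avoid recursion limits:

```python
# SCC decomposition plus level assignment on the condensation DAG.
def tarjan_scc(graph):
    """Standard Tarjan; returns SCCs in reverse topological order."""
    index, low, on_stack, stack = {}, {}, set(), []
    sccs, counter = [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w); low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w); comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

def scc_levels(graph):
    sccs = tarjan_scc(graph)
    comp_of = {v: i for i, comp in enumerate(sccs) for v in comp}
    level = [0] * len(sccs)
    # Tarjan emits SCCs sinks-first, so the reversed order is topological.
    for i in reversed(range(len(sccs))):
        for v in sccs[i]:
            for w in graph.get(v, ()):
                j = comp_of[w]
                if j != i:
                    level[j] = max(level[j], level[i] + 1)
    return sccs, level

# Tiny state-transition graph: {1,2} and {5} feed the recurrent class {3,4}.
g = {1: [2], 2: [1, 3], 3: [4], 4: [3], 5: [3]}
print(scc_levels(g))   # -> SCCs [[4, 3], [2, 1], [5]] with levels [1, 0, 0]
```

    Restricted MDPs would then be solved one level at a time, combining the partial value functions into the global solution as described in the abstract.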

  12. Application of large radiation sources in chemical processing industry

    International Nuclear Information System (INIS)

    Krishnamurthy, K.

    1977-01-01

    Large radiation sources and their application in the chemical processing industry are described. Reference is also made to present developments in this field in India. Radioactive sources, notably 60Co, are employed in the production of wood-plastic and concrete-polymer composites, vulcanised rubbers, polymers, and sulfochlorinated paraffin hydrocarbons, and in a number of other applications which require deep penetration and high reliability of the source. Machine sources of electrons are used in the production of heat-shrinkable plastics, insulation materials for cables, curing of paints, etc. Radiation sources have also been used for sewage hygienisation. As for the scene in India, 60Co sources, gamma chambers and batch irradiators are manufactured. A list of ongoing R and D projects and organisations engaged in research in this field is given. (M.G.B.)

  13. Curbing variations in packaging process through Six Sigma way in a large-scale food-processing industry

    Science.gov (United States)

    Desai, Darshak A.; Kotadiya, Parth; Makwana, Nikheel; Patel, Sonalinkumar

    2015-03-01

    Indian industries need overall operational excellence for sustainable profitability and growth in the present age of global competitiveness. Among the different quality and productivity improvement techniques, Six Sigma has emerged as one of the most effective breakthrough improvement strategies. Though Indian industries are exploring this improvement methodology to their advantage and reaping the benefits, not much has been presented and published regarding the experience of Six Sigma in the food-processing industries. This paper is an effort to exemplify the application of a Six Sigma quality improvement drive to one of the large-scale food-processing sectors in India. The paper discusses the phase-wise implementation of define, measure, analyze, improve, and control (DMAIC) on one of the chronic problems, variations in the weight of milk powder pouches. The paper wraps up with the improvements achieved and the projected bottom-line gain to the unit from the application of the Six Sigma methodology.

  14. Moditored unsaturated soil transport processes as a support for large scale soil and water management

    Science.gov (United States)

    Vanclooster, Marnik

    2010-05-01

    The current societal demand for sustainable soil and water management is very large. The drivers of global and climate change exert many pressures on soil and water ecosystems, endangering appropriate ecosystem functioning. Unsaturated soil transport processes play a key role in the functioning of the soil-water system, as they control the fluxes of water and nutrients from the soil to plants (the pedo-biosphere link), the infiltration flux of precipitated water to groundwater, and the evaporative flux, and hence the feedback from the soil to the climate system. Yet, unsaturated soil transport processes are difficult to quantify, since they are affected by huge variability of the governing properties at different space-time scales and by the intrinsic non-linearity of the transport processes. The incompatibility between the scale at which processes can reasonably be characterized, the scale at which the theoretical process can correctly be described, and the scale at which the soil and water system needs to be managed calls for further development of scaling procedures in unsaturated zone science. It also calls for a better integration of theoretical and modelling approaches to elucidate transport processes at the appropriate scales, compatible with the objective of sustainable soil and water management. Moditoring science, i.e. the interdisciplinary research domain where modelling and monitoring science are linked, is currently evolving significantly in unsaturated zone hydrology. In this presentation, a review of current moditoring strategies and techniques will be given and illustrated for solving large-scale soil and water management problems. This will also allow research needs to be identified in the interdisciplinary domain of modelling and monitoring, and the integration of unsaturated zone science in solving soil and water management issues to be improved. A focus will be given to examples of large-scale soil and water management problems in Europe.

  15. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining.

    Science.gov (United States)

    Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-09-09

    Photogrammetry methods are being used more and more as a 3D technique for large-scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, with measurement traceability provided by precise, off-process, pre-calibrated digital cameras and scale bars. From the 2D target image coordinates, the target 3D coordinates and camera views are jointly computed. One application of photogrammetry is the measurement of raw part surfaces prior to machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time-consuming and user-dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer-grade desktop PC, enabling quasi-real-time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for the highest precision when using low-cost, non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special-purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image, up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g. 0.1 mm error in 1 m) with
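
    To make the bundle-adjustment core concrete, here is a toy SciPy sketch that minimises 2D reprojection residuals. For brevity it refines only a single camera pose against known target coordinates (a PnP-style refinement); full bundle adjustment additionally stacks the target coordinates and the other camera poses into the parameter vector. The focal length, poses, and noise levels are invented:

```python
# Reprojection-error least squares for one camera pose (toy example).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points, rvec, tvec, f):
    """Pinhole projection of world points for camera pose (rvec, tvec)."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = points @ R.T + tvec                # world -> camera coordinates
    return f * cam[:, :2] / cam[:, 2:3]      # perspective division

def residuals(pose, points, observed, f):
    return (project(points, pose[:3], pose[3:], f) - observed).ravel()

rng = np.random.default_rng(3)
targets = rng.uniform(-1.0, 1.0, (30, 3)) + np.array([0.0, 0.0, 5.0])
true_pose = np.array([0.02, -0.01, 0.03, 0.1, -0.05, 0.2])
obs = project(targets, true_pose[:3], true_pose[3:], f=2000.0)
obs += rng.normal(scale=0.3, size=obs.shape)   # ~0.3 px detection noise

sol = least_squares(residuals, x0=np.zeros(6), args=(targets, obs, 2000.0))
print(np.round(sol.x, 3))   # recovered pose, close to true_pose
```

    The in-process approach described in the abstract amounts to solving such a problem incrementally as each new image arrives, with the self-calibration terms (focal length, principal point, lens distortion) appended to the parameter vector.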

  16. Operational experience with large scale biogas production at the Promest manure processing plant in Helmond, the Netherlands

    International Nuclear Information System (INIS)

    Schomaker, A.H.H.M.

    1992-01-01

    In The Netherlands a surplus of 15 million tons of liquid pig manure is produced yearly on intensive pig breeding farms. The Dutch government has set a three-way policy to reduce this manure excess: 1. conversion of animal fodder into a product with fewer and better digestible nutrients; 2. distribution of the surplus to regions with a shortage of animal manure; 3. processing of the remainder of the surplus in large-scale processing plants. The first large-scale plant for the processing of liquid pig manure was put into operation in 1988 as a demonstration plant at Promest in Helmond. The design capacity of this plant is 100,000 tons of pig manure per year. The plant was initiated by the Manure Steering Committee of the province of Noord-Brabant in order to prove at short notice whether large-scale manure processing might contribute to the solution of the manure surplus problem in The Netherlands. This steering committee is a partnership of the national and provincial governments and the agricultural industry. (au)

  17. On conservation of the baryon chirality in the processes with large momentum transfer

    International Nuclear Information System (INIS)

    Ioffe, B.L.

    1976-01-01

    The hypothesis that baryon chirality is conserved in processes with large momentum transfer is suggested, and some arguments in its favour are given. Experimental implications of this assumption for the weak and electromagnetic form factors of transitions within the baryon octet and of the transitions N → Δ, N → Σ* are considered

  18. Recycling process of Mn-Al doped large grain UO2 pellets

    International Nuclear Information System (INIS)

    Nam, Ik Hui; Yang, Jae Ho; Rhee, Young Woo; Kim, Dong Joo; Kim, Jong Hun; Kim, Keon Sik; Song, Kun Woo

    2010-01-01

    To reduce fuel cycle costs and the total mass of spent light water reactor (LWR) fuel, it is necessary to extend the fuel discharge burn-up. Research on fuel pellets focuses on increasing the pellet density and grain size, to increase the uranium content and the high-burnup safety margins for LWRs. KAERI is developing large grain UO2 pellets for the same purpose. Doping with small amounts of additives is used to increase the grain size and the high-temperature deformation of UO2 pellets. Various promising additive candidates have been developed during the last 3 years, and the MnO-Al2O3 doped UO2 fuel pellet is one of the most promising. In commercial UO2 fuel pellet manufacturing, defective UO2 pellets or scraps are produced and should be reused. A common recycling method for defective UO2 pellets or scraps is to oxidize them in air at about 450 °C to make U3O8 powder, which is then added to UO2 powder. In the oxidation of a UO2 pellet, the oxygen propagates along the grain boundaries. The U3O8 formation on the grain boundaries causes a spallation of the grains, so the size and shape of the U3O8 powder depend strongly on the initial grain size of the UO2 pellets. In the case of Mn-Al doped large grain pellets, the average grain size is about 45 μm, about 5 times larger than that of a typical undoped UO2 pellet, which has a grain size of about 8-10 μm. That big difference in grain size is expected to cause a big difference in recycled U3O8 powder morphology. Addition of U3O8 to UO2 leads to a drop in the pellet density, impeding grain growth and forming graph-like pore segregates. Such degradation of the UO2 pellet properties by adding recycled U3O8 powder depends on the U3O8 powder properties. It is therefore necessary to understand the properties of the recycled U3O8 and their effect on the pellet. This paper presents a preliminary result for the recycled U3O8 powder which was obtained by

  19. Grey('s) Identity: Complications of Learning and Becoming in a Popular Television Show

    Science.gov (United States)

    Jubas, Kaela

    2013-01-01

    In this article, the author outlines an analysis of the American show "Grey's Anatomy" as an example of how popular culture represents identity and the process of professional identity construction in a medical workplace, particularly the surgical service of a large urban hospital. In discussing identity, she connects professional identity to…

  20. Hadronic processes with large transfer momenta and quark counting rules in multiparticle dual amplitude

    International Nuclear Information System (INIS)

    Akkelin, S.V.; Kobylinskij, N.A.; Martynov, E.S.

    1989-01-01

    A dual N-particle amplitude satisfying the quark counting rules for processes with large transfer momenta is constructed. The multiparticle channels are shown to give an essential contribution to the power-law decrease of the amplitude in the hard kinematic limit. 19 refs.; 9 figs

  1. Research on Francis Turbine Modeling for Large Disturbance Hydropower Station Transient Process Simulation

    Directory of Open Access Journals (Sweden)

    Guangtao Zhang

    2015-01-01

    In the field of hydropower station transient process simulation (HSTPS), characteristic graph-based iterative hydroturbine models (CGIHM) have been widely used when large disturbance hydroturbine modeling is involved. However, with this model, iteration must be used to calculate speed and pressure, and slow convergence or non-convergence may be encountered for reasons such as a special characteristic graph profile, an inappropriate iterative algorithm, or an inappropriate interpolation algorithm. Other conventional large disturbance hydroturbine models also have disadvantages and are difficult to use widely in HSTPS. Therefore, to obtain an accurate simulation result, a simple method for hydroturbine modeling is proposed. In this method, both the initial operating point and the transfer coefficients of the linear hydroturbine model keep changing during simulation. Hence, the model can reflect the nonlinearity of the hydroturbine and be used for Francis turbine simulation under large disturbance conditions. To validate the proposed method, both large disturbance and small disturbance simulations of a single hydro unit supplying a resistive, isolated load were conducted. The simulation results were shown to be consistent with field test results. Consequently, the proposed method is an attractive option for HSTPS involving Francis turbine modeling under large disturbance conditions.

  2. Large-scale calculations of the beta-decay rates and r-process nucleosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Borzov, I N; Goriely, S [Inst. d'Astronomie et d'Astrophysique, Univ. Libre de Bruxelles, Campus Plaine, Bruxelles (Belgium)]; Pearson, J M [Inst. d'Astronomie et d'Astrophysique, Univ. Libre de Bruxelles, Campus Plaine, Bruxelles (Belgium); Lab. de Physique Nucleaire, Univ. de Montreal, Montreal (Canada)]

    1998-06-01

    An approximation to a self-consistent model of the ground state and β-decay properties of neutron-rich nuclei is outlined. The structure of the β-strength functions in stable and short-lived nuclei is discussed. The results of large-scale calculations of the β-decay rates for spherical and slightly deformed nuclides of relevance to the r-process are analysed and compared with the results of existing global calculations and recent experimental data. (orig.)

  3. ARMA modelling of neutron stochastic processes with large measurement noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Kostic, Lj.; Pesic, M.

    1994-01-01

    An autoregressive moving average (ARMA) model of the neutron fluctuations with large measurement noise is derived from Langevin stochastic equations and validated using time series data obtained during prompt neutron decay constant measurements at the zero power reactor RB in Vinca. Model parameters are estimated using the maximum likelihood (ML) off-line algorithm and an adaptive pole estimation algorithm based on the recursive prediction error method (RPE). The results show that subcriticality can be determined from real data with high measurement noise using a much shorter statistical sample than in standard methods. (author)
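
    A minimal sketch of the modelling idea, assuming the statsmodels package; the decay constant, sampling step, and noise levels below are invented. The point is that an AR(1) signal observed through additive white measurement noise is exactly an ARMA(1,1) process, which is why an ARMA structure suits noisy neutron fluctuation data:

```python
# Fit an ARMA(1,1) model to a synthetic autocorrelated signal
# buried in white measurement noise (not the RB reactor data).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
n, dt = 2000, 0.001          # samples and sampling step [s] (invented)
alpha = -25.0                # hypothetical prompt-neutron decay constant [1/s]
phi = np.exp(alpha * dt)     # AR(1) coefficient of the sampled process

signal = np.zeros(n)
for t in range(1, n):
    signal[t] = phi * signal[t - 1] + rng.normal()
observed = signal + 2.0 * rng.normal(size=n)   # large measurement noise

fit = ARIMA(observed, order=(1, 0, 1)).fit()   # ARMA(1,1): d = 0
print(fit.params)  # the AR pole estimates phi, so alpha ~= ln(phi_hat) / dt
```

    In this setting the estimated AR pole carries the decay constant while the MA part absorbs the measurement noise, which is consistent with the abstract's claim that much shorter statistical samples suffice than in standard, noise-sensitive methods.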

  4. Process optimization of large-scale production of recombinant adeno-associated vectors using dielectric spectroscopy.

    Science.gov (United States)

    Negrete, Alejandro; Esteban, Geoffrey; Kotin, Robert M

    2007-09-01

    A well-characterized manufacturing process for the large-scale production of recombinant adeno-associated vectors (rAAV) for gene therapy applications is required to meet current and future demands for pre-clinical and clinical studies and potential commercialization. Economic considerations argue in favor of suspension culture-based production. Currently, the only feasible method for large-scale rAAV production utilizes baculovirus expression vectors and insect cells in suspension cultures. To maximize yields and achieve reproducibility between batches, online monitoring of various metabolic and physical parameters is useful for characterizing early stages of baculovirus-infected insect cells. In this study, rAAVs were produced at 40-l scale, yielding ~1 × 10^15 particles. During the process, dielectric spectroscopy was performed by real-time scanning at radio frequencies between 300 kHz and 10 MHz, and the corresponding permittivity values were correlated with rAAV production. Both infected and uninfected cultures reached a maximum permittivity value; however, only the permittivity profile of infected cultures reached a second maximum. This effect was correlated with the optimal harvest time for rAAV production. Analysis of rAAV indicated that harvesting at around 48 h post-infection (hpi) and at 72 hpi produced similar quantities of biologically active rAAV. Thus, if operated continuously, the 24-h reduction in the rAAV production process gives sufficient time for an additional 18 runs a year, corresponding to an extra production of ~2 × 10^16 particles. As part of large-scale optimization studies, this new finding will facilitate the bioprocessing scale-up of rAAV and other bioproducts.
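
    A minimal sketch of how the reported harvest criterion could be automated, assuming SciPy; the permittivity trace below is an invented two-bump stand-in for the scanned signal, not the paper's data:

```python
# Flag the second permittivity maximum that the study correlates
# with the optimal rAAV harvest time.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(7)
t = np.linspace(0.0, 96.0, 400)              # hours post-infection (hpi)
permittivity = (np.exp(-(t - 30.0) ** 2 / 120.0)
                + 0.8 * np.exp(-(t - 60.0) ** 2 / 160.0)
                + 0.02 * rng.normal(size=t.size))

peaks, _ = find_peaks(permittivity, prominence=0.1)
if len(peaks) >= 2:
    print(f"second maximum near {t[peaks[1]]:.0f} hpi -> candidate harvest point")
```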

  5. Large-scale self-assembled zirconium phosphate smectic layers via a simple spray-coating process

    Science.gov (United States)

    Wong, Minhao; Ishige, Ryohei; White, Kevin L.; Li, Peng; Kim, Daehak; Krishnamoorti, Ramanan; Gunther, Robert; Higuchi, Takeshi; Jinnai, Hiroshi; Takahara, Atsushi; Nishimura, Riichi; Sue, Hung-Jue

    2014-04-01

    The large-scale assembly of asymmetric colloidal particles is used in creating high-performance fibres. A similar concept is extended to the manufacturing of thin films of self-assembled two-dimensional crystal-type materials with enhanced and tunable properties. Here we present a spray-coating method to manufacture thin, flexible and transparent epoxy films containing zirconium phosphate nanoplatelets self-assembled into a lamellar arrangement aligned parallel to the substrate. The self-assembled mesophase of zirconium phosphate nanoplatelets is stabilized by epoxy pre-polymer and exhibits rheology favourable towards large-scale manufacturing. The thermally cured film forms a mechanically robust coating and shows excellent gas barrier properties at both low- and high humidity levels as a result of the highly aligned and overlapping arrangement of nanoplatelets. This work shows that the large-scale ordering of high aspect ratio nanoplatelets is easier to achieve than previously thought and may have implications in the technological applications for similar materials.

  6. Keeping a large-pupilled eye on high-level visual processing.

    Science.gov (United States)

    Binda, Paola; Murray, Scott O

    2015-01-01

    The pupillary light response has long been considered an elementary reflex. However, evidence now shows that it integrates information from such complex phenomena as attention, contextual processing, and imagery. These discoveries make pupillometry a promising tool for an entirely new application: the study of high-level vision. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Background: Large-scale protein structure alignment, an indispensable tool for structural bioinformatics, poses a tremendous challenge to computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings: We present ppsAlign, a parallel protein structure alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign can take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions: ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues of protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massively parallel computing power of GPUs.

  8. A methodology for fault diagnosis in large chemical processes and an application to a multistage flash desalination process: Part I

    International Nuclear Information System (INIS)

    Tarifa, Enrique E.; Scenna, Nicolas J.

    1998-01-01

    This work presents a new strategy for fault diagnosis in large chemical processes (E.E. Tarifa, Fault diagnosis in complex chemistries plants: plants of large dimensions and batch processes. Ph.D. thesis, Universidad Nacional del Litoral, Santa Fe, 1995). A special decomposition of the plant into sectors is made; afterwards, each sector is studied independently. These steps are carried out off-line and produce vital information for the diagnosis system. This system works on-line and is based on a two-tier strategy. When a fault occurs, the upper level identifies the faulty sector; then the lower level carries out an in-depth study, focused only on the critical sectors, to identify the fault. The loss of information produced by the process partition may cause spurious diagnoses. This problem is overcome at the second level using qualitative simulation and fuzzy logic. In the second part of this work, the new methodology is tested to evaluate its performance in practical cases. A multi-stage flash (MSF) desalination system is chosen because it is a complex system, with many recycles and many variables to be supervised. The steps of the knowledge base generation and all the blocks included in the diagnosis system are analyzed. Evaluation of the diagnosis performance is carried out using a rigorous dynamic simulator

  9. E-health, phase two: the imperative to integrate process automation with communication automation for large clinical reference laboratories.

    Science.gov (United States)

    White, L; Terner, C

    2001-01-01

    The initial efforts of e-health have fallen far short of expectations. They were buoyed by the hype and excitement of the Internet craze but limited by their lack of understanding of important market and environmental factors. E-health now recognizes that legacy systems and processes are important, that there is a technology adoption process that needs to be followed, and that demonstrable value drives adoption. Initial e-health transaction solutions have targeted mostly low-cost problems. These solutions invariably are difficult to integrate into existing systems, typically requiring manual interfacing to supported processes. This limitation in particular makes them unworkable for large volume providers. To meet the needs of these providers, e-health companies must rethink their approaches, appropriately applying technology to seamlessly integrate all steps into existing business functions. E-automation is a transaction technology that automates steps, integration of steps, and information communication demands, resulting in comprehensive automation of entire business functions. We applied e-automation to create a billing management solution for clinical reference laboratories. Large volume, onerous regulations, small margins, and only indirect access to patients challenge large laboratories' billing departments. Couple these problems with outmoded, largely manual systems and it becomes apparent why most laboratory billing departments are in crisis. Our approach has been to focus on the most significant and costly problems in billing: errors, compliance, and system maintenance and management. The core of the design relies on conditional processing, a "universal" communications interface, and ASP technologies. The result is comprehensive automation of all routine processes, driving out errors and costs. Additionally, compliance management and billing system support and management costs are dramatically reduced. The implications of e-automated processes can extend

  10. Large-Scale Consumption and Zero-Waste Recycling Method of Red Mud in Steel Making Process

    Directory of Open Access Journals (Sweden)

    Guoshan Ning

    2018-03-01

    To release the environmental pressure from the massive discharge of bauxite residue (red mud), a novel recycling method for red mud in the steel making process was investigated through high-temperature experiments and thermodynamic analysis. The results showed that after reduction roasting of carbon-bearing red mud pellets at 1100–1200 °C for 12–20 min, metallic pellets were obtained with a metallization ratio of ≥88%. Then, slag-iron separation was achieved from the metallic pellets at 1550 °C, after composition adjustment targeting the primary crystal region of the 12CaO·7Al2O3 phase. After iron removal and composition adjustment, the smelting-separation slag had good smelting performance and desulfurization capability, meeting the demand for a desulfurization flux in the steel making process. The pig iron quality meets the requirements for a high-quality steel making raw material. By virtue of the huge scale and output of the steel industry, a large-scale consumption and zero-waste recycling method for red mud is proposed, comprising roasting of carbon-bearing red mud pellets in a rotary hearth furnace and smelting separation in an electric arc furnace after composition adjustment.

  11. Leveraging human oversight and intervention in large-scale parallel processing of open-source data

    Science.gov (United States)

    Casini, Enrico; Suri, Niranjan; Bradshaw, Jeffrey M.

    2015-05-01

    The popularity of cloud computing along with the increased availability of cheap storage have led to the need to process and transform large volumes of open-source data in parallel. One way to handle such extensive volumes of information properly is to take advantage of distributed computing frameworks like Map-Reduce. Unfortunately, an entirely automated approach that excludes human intervention is often unpredictable and error prone. Highly accurate data processing and decision-making can be achieved by supporting an automatic process through human collaboration, in a variety of environments such as warfare, cyber security and threat monitoring. Although this mutual participation seems easily exploitable, human-machine collaboration in the field of data analysis presents several challenges. First, due to the asynchronous nature of human intervention, it is necessary to verify that once a correction is made, all the necessary reprocessing is carried out down the chain. Second, it is often necessary to minimize the amount of reprocessing in order to optimize the usage of resources of limited availability. To address these strict requirements, this paper introduces improvements to an innovative approach for human-machine collaboration in the processing of large amounts of open-source data in parallel.

  12. Large-scale analyses of synonymous substitution rates can be sensitive to assumptions about the process of mutation.

    Science.gov (United States)

    Aris-Brosou, Stéphane; Bielawski, Joseph P

    2006-08-15

    A popular approach to examine the roles of mutation and selection in the evolution of genomes has been to consider the relationship between codon bias and synonymous rates of molecular evolution. A significant relationship between these two quantities is taken to indicate the action of weak selection on substitutions among synonymous codons. The neutral theory predicts that the rate of evolution is inversely related to the level of functional constraint. Therefore, selection against the use of non-preferred codons among those coding for the same amino acid should result in lower rates of synonymous substitution as compared with sites not subject to such selection pressures. However, reliably measuring the extent of such a relationship is problematic, as estimates of synonymous rates are sensitive to our assumptions about the process of molecular evolution. Previous studies showed the importance of accounting for unequal codon frequencies, in particular when synonymous codon usage is highly biased. Yet, unequal codon frequencies can be modeled in different ways, making different assumptions about the mutation process. Here we conduct a simulation study to evaluate two different ways of modeling uneven codon frequencies and show that both model parameterizations can have a dramatic impact on rate estimates and affect biological conclusions about genome evolution. We reanalyze three large data sets to demonstrate the relevance of our results to empirical data analysis.
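
    Although the abstract does not name them, the two standard ways of modeling unequal codon frequencies in codon substitution models are the F1X4 parameterization (codon frequencies built from one shared set of nucleotide frequencies) and F3X4 (position-specific nucleotide frequencies). The following minimal Python sketch, an illustration rather than the authors' code, shows how the two assumptions produce different equilibrium codon frequencies from the same sequence data:

        from itertools import product

        STOPS = {"TAA", "TAG", "TGA"}  # universal genetic code stop codons

        def codon_frequencies(nuc_freqs, position_specific=False):
            """Equilibrium codon frequencies under F1X4 (one shared nucleotide
            frequency dict) or F3X4 (a list of three per-position dicts)."""
            freqs = {}
            for codon in ("".join(p) for p in product("TCAG", repeat=3)):
                if codon in STOPS:
                    continue
                if position_specific:   # F3X4
                    f = (nuc_freqs[0][codon[0]] * nuc_freqs[1][codon[1]]
                         * nuc_freqs[2][codon[2]])
                else:                   # F1X4
                    f = nuc_freqs[codon[0]] * nuc_freqs[codon[1]] * nuc_freqs[codon[2]]
                freqs[codon] = f
            total = sum(freqs.values())  # renormalize over the 61 sense codons
            return {c: f / total for c, f in freqs.items()}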

  13. Beyond single syllables: large-scale modeling of reading aloud with the Connectionist Dual Process (CDP++) model.

    Science.gov (United States)

    Perry, Conrad; Ziegler, Johannes C; Zorzi, Marco

    2010-09-01

    Most words in English have more than one syllable, yet the most influential computational models of reading aloud are restricted to processing monosyllabic words. Here, we present CDP++, a new version of the Connectionist Dual Process model (Perry, Ziegler, & Zorzi, 2007). CDP++ is able to simulate the reading aloud of mono- and disyllabic words and nonwords, and learns to assign stress in exactly the same way as it learns to associate graphemes with phonemes. CDP++ is able to simulate the monosyllabic benchmark effects its predecessor could, and therefore shows full backwards compatibility. CDP++ also accounts for a number of novel effects specific to disyllabic words, including the effects of stress regularity and syllable number. In terms of database performance, CDP++ accounts for over 49% of the reaction time variance on items selected from the English Lexicon Project, a very large database of several thousand words. With its lexicon of over 32,000 words, CDP++ is therefore a notable example of the successful scaling-up of a connectionist model to a size that more realistically approximates the human lexical system. Copyright © 2010 Elsevier Inc. All rights reserved.

  14. The Faculty Promotion Process. An Empirical Analysis of the Administration of Large State Universities.

    Science.gov (United States)

    Luthans, Fred

    One phase of academic management, the faculty promotion process, is systematically described and analyzed. The study encompasses three parts: (1) the justification of the use of management concepts in the analysis of academic administration; (2) a descriptive presentation of promotion policies and practices in 46 large state universities; and (3)…

  15. Development of polymers for large scale roll-to-roll processing of polymer solar cells

    DEFF Research Database (Denmark)

    Carlé, Jon Eggert

    Conjugated polymers' potential to both absorb light and transport current, together with the prospect of low-cost, large-scale production, has made these materials attractive in solar cell research. The research field of polymer solar cells (PSCs) is rapidly progressing along three lines: improvement of efficiency, improvement of stability, and the introduction of large-scale production methods. All three lines are explored in this work. The thesis describes low band gap polymers and why these are needed: polymers of this type display broader absorption, resulting in better overlap with the solar spectrum and potentially higher current density. Synthesis, characterization and device performance of three series of polymers illustrate how the absorption spectrum of polymers can be manipulated synthetically…

  16. Tomato Fruits Show Wide Phenomic Diversity but Fruit Developmental Genes Show Low Genomic Diversity.

    Directory of Open Access Journals (Sweden)

    Vijee Mohan

    Domestication of tomato has resulted in large diversity in fruit phenotypes. An intensive phenotyping of 127 tomato accessions from 20 countries revealed extensive morphological diversity in fruit traits. The diversity in fruit traits clustered the accessions into nine classes and identified certain promising lines having desirable traits pertaining to total soluble salts (TSS), carotenoids, ripening index, weight and shape. Factor analysis of the morphometric data from Tomato Analyzer showed that fruit shape is a complex trait influenced by several factors. The full variance between round and flat fruit shapes was explained by one discriminant function having a canonical correlation of 0.874 in stepwise discriminant analysis. A set of 10 genes (ACS2, COP1, CYC-B, RIN, MSH2, NAC-NOR, PHOT1, PHYA, PHYB and PSY1) involved in various plant developmental processes was screened for SNP polymorphism by EcoTILLING. The genetic diversity in these genes revealed a total of 36 non-synonymous and 18 synonymous changes, leading to the identification of 28 haplotypes. The average frequency of polymorphism across the genes was 0.038/kb. A significantly negative Tajima's D statistic in two of the genes, ACS2 and PHOT1, indicated the presence of rare alleles at low frequency. Our study indicates that while there is low polymorphic diversity in the genes regulating plant development, the population shows wider phenotypic diversity. Nonetheless, the morphological and genetic diversity of the present collection can be further exploited as potential resources in the future.

  17. SIproc: an open-source biomedical data processing platform for large hyperspectral images.

    Science.gov (United States)

    Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David

    2017-04-10

    There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory requirement becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
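
    SIproc itself is a C++/GPU toolkit, but the core idea, streaming slabs of a disk-resident hyperspectral cube through memory instead of loading the whole image, can be sketched in a few lines of Python (the file name, cube shape and per-block function below are hypothetical placeholders, not SIproc's API):

        import numpy as np

        def stream_apply(path, shape, dtype, block_fn, rows_per_block=1024):
            """Run `block_fn` over a disk-resident hyperspectral cube one slab
            of rows at a time, so the full image never has to fit in RAM."""
            cube = np.memmap(path, dtype=dtype, mode="r", shape=shape)  # lazy view
            results = []
            for r0 in range(0, shape[0], rows_per_block):
                block = np.asarray(cube[r0:r0 + rows_per_block])  # one slab in memory
                results.append(block_fn(block))
            return np.concatenate(results, axis=0)

        # Hypothetical (rows, cols, bands) float32 cube: band-averaged image.
        # mean_img = stream_apply("cube.raw", (20000, 20000, 400), np.float32,
        #                         lambda b: b.mean(axis=2))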

  18. Study of Drell-Yan process in CMS experiment at Large Hadron Collider

    CERN Document Server

    Jindal, Monika

    Proton-proton collisions at the Large Hadron Collider (LHC) mark the beginning of a new era in high energy physics. They enable the possibility of discoveries at the high-energy frontier and also allow the study of Standard Model physics with high precision. The new physics discoveries and the precision measurements can be achieved with highly efficient and accurate detectors like the Compact Muon Solenoid. In this thesis, we report the measurement of the differential production cross-section of the Drell-Yan process, $q\bar{q} \rightarrow Z/\gamma^{*} \rightarrow \mu^{+}\mu^{-}$, in proton-proton collisions at the center-of-mass energy $\sqrt{s} = 7$ TeV using the CMS experiment at the LHC. This measurement is based on the analysis of data corresponding to an integrated luminosity of $\int \mathcal{L}\,dt = 36.0 \pm 1.4$ pb$^{-1}$. The measurement of the production cross-section of the Drell-Yan process provides a first test of the Standard Model in a new energy domain and may reveal exotic physics processes. The Drell…

  19. Investigation of deep inelastic scattering processes involving large p$_{t}$ direct photons in the final state

    CERN Multimedia

    2002-01-01

    This experiment will investigate various aspects of photon-parton scattering and will be performed in the H2 beam of the SPS North Area with high intensity hadron beams up to 350 GeV/c. (a) The directly produced photon yield in deep inelastic hadron-hadron collisions: large p$_{t}$ direct photons from hadronic interactions are presumably the result of a simple annihilation process of quarks and antiquarks or of a QCD-Compton process. The relative contribution of the two processes can be studied by using various incident beam projectiles $\pi^{+}, \pi^{-}, p$ and, in the future, $\bar{p}$. (b) The correlations between directly produced photons and their accompanying hadronic jets: we will examine events with a large p$_{t}$ direct photon for away-side jets. If jets are recognised, their properties will be investigated. Differences between a gluon and a quark jet may become observable by comparing reactions where valence quark annihilations (away-side jet originates from a gluon) dominate over the QCD-Compton…

  20. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    Directory of Open Access Journals (Sweden)

    Qianghui Zhang

    2016-07-01

    Free of the constraints of orbit mechanisms, weather conditions and minimum antenna area, synthetic aperture radar (SAR) carried on a near-space platform is more suitable for sustained large-scene imaging than its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), a novel wide-swath imaging mode that allows the SAR beam to scan along the azimuth, can reduce the echo acquisition time for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, two-step processing (TSP) is first adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications.

  1. Some Examples of Residence-Time Distribution Studies in Large-Scale Chemical Processes by Using Radiotracer Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bullock, R. M.; Johnson, P.; Whiston, J. [Imperial Chemical Industries Ltd., Billingham, Co., Durham (United Kingdom)

    1967-06-15

    The application of radiotracers to determine flow patterns in chemical processes is discussed with particular reference to the derivation of design data from model reactors for translation to large-scale units, the study of operating efficiency and design attainment in established plant, and the rapid identification of various types of process malfunction. The requirements governing the selection of tracers for various types of media are considered, and an example is given of the testing of the behaviour of a typical tracer before use in a particular large-scale process operating at 250 atm and 200 °C. Information which may be derived from flow patterns is discussed, including the determination of mixing parameters, gas hold-up in gas/liquid reactions, and the detection of channelling and stagnant regions. Practical results and their interpretation are given in relation to an olefin hydroformylation reaction system, a process for the conversion of propylene to isopropanol, a moving bed catalyst system for the isomerization of xylenes, and a three-stage gas-liquid reaction system. The use of mean residence-time data for the detection of leakage between reaction vessels and a heat interchanger system is given as an example of the identification of process malfunction. (author)

  2. Extraterrestrial processing and manufacturing of large space systems, volume 1, chapters 1-6

    Science.gov (United States)

    Miller, R. H.; Smith, D. B. S.

    1979-01-01

    Space program scenarios for production of large space structures from lunar materials are defined. The concept of the space manufacturing facility (SMF) is presented. The manufacturing processes and equipment for the SMF are defined and the conceptual layouts are described for the production of solar cells and arrays, structures and joints, conduits, waveguides, RF equipment radiators, wire cables, and converters. A 'reference' SMF was designed and its operation requirements are described.

  3. Large earthquake rupture process variations on the Middle America megathrust

    Science.gov (United States)

    Ye, Lingling; Lay, Thorne; Kanamori, Hiroo

    2013-11-01

    The megathrust fault between the underthrusting Cocos plate and overriding Caribbean plate recently experienced three large ruptures: the August 27, 2012 (Mw 7.3) El Salvador; September 5, 2012 (Mw 7.6) Costa Rica; and November 7, 2012 (Mw 7.4) Guatemala earthquakes. All three events involve shallow-dipping thrust faulting on the plate boundary, but they had variable rupture processes. The El Salvador earthquake ruptured from about 4 to 20 km depth, with a relatively large centroid time of ˜19 s, low seismic moment-scaled energy release, and a depleted teleseismic short-period source spectrum similar to that of the September 2, 1992 (Mw 7.6) Nicaragua tsunami earthquake that ruptured the adjacent shallow portion of the plate boundary. The Costa Rica and Guatemala earthquakes had large slip in the depth range 15 to 30 km, and more typical teleseismic source spectra. Regional seismic recordings have higher short-period energy levels for the Costa Rica event relative to the El Salvador event, consistent with the teleseismic observations. A broadband regional waveform template correlation analysis is applied to categorize the focal mechanisms for larger aftershocks of the three events. Modeling of regional wave spectral ratios for clustered events with similar mechanisms indicates that interplate thrust events have corner frequencies, normalized by a reference model, that increase down-dip from anomalously low values near the Middle America trench. Relatively high corner frequencies are found for thrust events near Costa Rica; thus, variations along strike of the trench may also be important. Geodetic observations indicate trench-parallel motion of a forearc sliver extending from Costa Rica to Guatemala, and low seismic coupling on the megathrust has been inferred from a lack of boundary-perpendicular strain accumulation. The slip distributions and seismic radiation from the large regional thrust events indicate relatively strong seismic coupling near Nicoya, Costa

  4. Resting-state networks associated with cognitive processing show more age-related decline than those associated with emotional processing.

    Science.gov (United States)

    Nashiro, Kaoru; Sakaki, Michiko; Braskie, Meredith N; Mather, Mara

    2017-06-01

    Correlations in activity across disparate brain regions during rest reveal functional networks in the brain. Although previous studies largely agree that there is an age-related decline in the "default mode network," how age affects other resting-state networks, such as emotion-related networks, is still controversial. Here we used a dual-regression approach to investigate age-related alterations in resting-state networks. The results revealed age-related disruptions in functional connectivity in all 5 identified cognitive networks, namely the default mode network, cognitive-auditory, cognitive-speech (or speech-related somatosensory), and right and left frontoparietal networks, whereas such age effects were not observed in the 3 identified emotion networks. In addition, we observed age-related decline in functional connectivity in 3 visual and 3 motor/visuospatial networks. Older adults showed greater functional connectivity in regions outside 4 out of the 5 identified cognitive networks, consistent with the dedifferentiation effect previously observed in task-based functional magnetic resonance imaging studies. Both reduced within-network connectivity and increased out-of-network connectivity were correlated with poor cognitive performance, providing potential biomarkers for cognitive aging. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Large-Scale Sentinel-1 Processing for Solid Earth Science and Urgent Response using Cloud Computing and Machine Learning

    Science.gov (United States)

    Hua, H.; Owen, S. E.; Yun, S. H.; Agram, P. S.; Manipon, G.; Starch, M.; Sacco, G. F.; Bue, B. D.; Dang, L. B.; Linick, J. P.; Malarout, N.; Rosen, P. A.; Fielding, E. J.; Lundgren, P.; Moore, A. W.; Liu, Z.; Farr, T.; Webb, F.; Simons, M.; Gurrola, E. M.

    2017-12-01

    With the increased availability of open SAR data (e.g. Sentinel-1 A/B), new challenges are being faced with processing and analyzing the voluminous SAR datasets to make geodetic measurements. Upcoming SAR missions such as NISAR are expected to generate close to 100 TB per day. The Advanced Rapid Imaging and Analysis (ARIA) project can now generate geocoded unwrapped phase and coherence products from Sentinel-1 TOPS mode data in an automated fashion, using the ISCE software. This capability is currently being exercised on various study sites across the United States and around the globe, including Hawaii, Central California, Iceland and South America. The automated and large-scale SAR data processing and analysis capabilities use cloud computing techniques to speed the computations and provide scalable processing power and storage. Aspects such as how to process these voluminous SLCs and interferograms at global scales, how to keep up with the large daily SAR data volumes, and how to handle the high data rates are being explored. Scene-partitioning approaches in the processing pipeline help in handling global-scale processing up to unwrapped interferograms, with stitching done at a late stage. We have built an advanced science data system with rapid search functions to enable access to the derived data products. Rapid image processing of Sentinel-1 data to interferograms and time series is already being applied to natural hazards including earthquakes, floods, volcanic eruptions, and land subsidence due to fluid withdrawal. We will present the status of the ARIA science data system for generating science-ready data products and the challenges that arise from processing SAR datasets to derived time series data products at large scales. For example, how do we perform large-scale data quality screening on interferograms? What approaches can be used to minimize compute, storage, and data movement costs for time series analysis in the cloud? We will also

  6. Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge

    Science.gov (United States)

    Park, Heon-Joon; Lee, Changyeol

    2017-04-01

    Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted in numerous studies. Among the controlling factors, the gravitational acceleration (g) on the scale models was treated as a constant (Earth's gravity) in most analogue model studies, and only a few studies considered larger gravitational accelerations by using a centrifuge (an apparatus generating a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down factors and accelerated deformation driven by density differences, as in salt diapirs, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST) allows scale models with a surface area of up to 70 by 70 cm under the maximum capacity of 240 g-tons. Using this centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of back-arc basins. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (grant number 2014R1A6A3A04056405).
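
    The payoff of spinning a model comes from the standard centrifuge scaling law used in geotechnical and tectonic modelling: a model run at N times Earth's gravity reproduces prototype self-weight stresses at 1/N of the prototype's linear dimensions. A minimal illustration (the 0.7 m model width is the apparatus limit quoted above; the function itself is ours):

        def prototype_length(model_length_m, g_level):
            """Length scaling in centrifuge modelling: a model at N g represents
            a prototype N times larger while keeping self-weight stresses equal."""
            return model_length_m * g_level

        # A 0.7 m wide model spun at 100 g stands in for a natural
        # section roughly 70 m across.
        print(prototype_length(0.7, 100))  # -> 70.0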

  7. Plasma processing of large curved surfaces for superconducting rf cavity modification

    Directory of Open Access Journals (Sweden)

    J. Upadhyay

    2014-12-01

    Plasma-based surface modification of niobium is a promising alternative to wet etching of superconducting radio frequency (SRF) cavities. We have demonstrated surface layer removal in an asymmetric nonplanar geometry, using a simple cylindrical cavity. The etching rate is highly correlated with the shape of the inner electrode, radio-frequency (rf) circuit elements, gas pressure, rf power, chlorine concentration in the Cl_{2}/Ar gas mixtures, residence time of reactive species, and temperature of the cavity. Using variable radius cylindrical electrodes, large-surface ring-shaped samples, and dc bias in the external circuit, we have measured substantial average etching rates and outlined the possibility of optimizing plasma properties with respect to maximum surface processing effect.

  8. Subpixelic measurement of large 1D displacements: principle, processing algorithms, performances and software.

    Science.gov (United States)

    Guelpa, Valérian; Laurent, Guillaume J; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric

    2014-03-12

    This paper presents a visual measurement method able to sense 1D rigid body displacements with very high resolutions, large ranges and high processing rates. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations, leading to high resolution, while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 µs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated, which is to be compared with the 168 µm measurement range.
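
    The range-extension trick rests on the beat between the twin periods: the difference of the two measured phases varies with the long synthetic period P1·P2/(P2−P1), which removes the 2π ambiguity of the fine phase. A schematic NumPy version of that final step (assuming the two wrapped phases have already been extracted by the Fourier-type analysis; not the authors' released code):

        import numpy as np

        def displacement(phi1, phi2, p1, p2):
            """Absolute 1-D position from two wrapped phases measured on twin
            grids of slightly different periods p1 < p2 (same length units)."""
            beat = p1 * p2 / (p2 - p1)                     # long synthetic period
            coarse = np.mod(phi1 - phi2, 2 * np.pi) / (2 * np.pi) * beat
            fine = phi1 * p1 / (2 * np.pi)                 # position modulo p1
            k = np.round((coarse - fine) / p1)             # integer fringe order
            return k * p1 + fine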

  9. Event processing time prediction at the CMS experiment of the Large Hadron Collider

    International Nuclear Information System (INIS)

    Cury, Samir; Gutsche, Oliver; Kcira, Dorian

    2014-01-01

    The physics event reconstruction is one of the biggest challenges for the computing of the LHC experiments. Among the different tasks that the computing systems of the CMS experiment perform, the reconstruction takes most of the available CPU resources. The reconstruction time of single collisions varies according to event complexity. Measurements were done in order to determine this correlation quantitatively, creating means to predict it based on the data-taking conditions of the input samples. Currently the data processing system splits tasks in groups with the same number of collisions and does not account for variations in the processing time. These variations can be large and can lead to a considerable increase in the time it takes for CMS workflows to finish. The goal of this study was to use estimates on processing time to more efficiently split the workflow into jobs. By considering the CPU time needed for each job, the spread of the job-length distribution in a workflow is reduced.
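
    The proposed improvement amounts to cutting a workflow by predicted CPU time rather than by a fixed number of collisions. A toy version of such a splitter (the prediction function and the per-job time budget are placeholders, not CMS software):

        def split_by_cpu_time(units, predict_seconds, target_job_seconds):
            """Greedily pack work units (e.g. groups of collisions) into jobs
            of roughly equal *predicted* CPU time, not equal event counts."""
            jobs, current, acc = [], [], 0.0
            for unit in units:
                t = predict_seconds(unit)  # model built from data-taking conditions
                if current and acc + t > target_job_seconds:
                    jobs.append(current)   # close the job once the budget is met
                    current, acc = [], 0.0
                current.append(unit)
                acc += t
            if current:
                jobs.append(current)
            return jobs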

  10. High-Rate Fabrication of a-Si-Based Thin-Film Solar Cells Using Large-Area VHF PECVD Processes

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Xunming [University of Toledo; Fan, Qi Hua

    2011-12-31

    The University of Toledo (UT), working in concert with its a-Si-based PV industry partner Xunlight Corporation (Xunlight), has conducted a comprehensive study to develop a large-area (3 ft × 3 ft) VHF PECVD system for high-rate uniform fabrication of silicon absorber layers, and the large-area VHF PECVD processes to achieve high-performance a-Si/a-SiGe or a-Si/nc-Si tandem-junction solar cells, during the period of July 1, 2008 to Dec. 31, 2011, under DOE Award No. DE-FG36-08GO18073. The project had two primary goals: (i) to develop and improve a large-area (3 ft × 3 ft) VHF PECVD system for high-rate fabrication of >= 8 Å/s a-Si and >= 20 Å/s nc-Si or 4 Å/s a-SiGe absorber layers with high uniformity in film thicknesses and in material structures; and (ii) to develop and optimize the large-area VHF PECVD processes to achieve high-performance a-Si/nc-Si or a-Si/a-SiGe tandem-junction solar cells with >= 10% stable efficiency. Our work has met these goals and is summarized in “Accomplishments versus goals and objectives”.

  11. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    Science.gov (United States)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than those of present day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require rapid turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We will present how we enabled fault-tolerant computing in order to achieve large-scale computing as well as operational cost savings.

  12. Process mining in the large : a tutorial

    NARCIS (Netherlands)

    Aalst, van der W.M.P.; Zimányi, E.

    2014-01-01

    Recently, process mining emerged as a new scientific discipline on the interface between process models and event data. On the one hand, conventional Business Process Management (BPM) and Workflow Management (WfM) approaches and tools are mostly model-driven with little consideration for event data.

  13. A framework for the direct evaluation of large deviations in non-Markovian processes

    International Nuclear Information System (INIS)

    Cavallaro, Massimo; Harris, Rosemary J

    2016-01-01

    We propose a general framework to simulate stochastic trajectories with arbitrarily long memory dependence and efficiently evaluate large deviation functions associated with time-extensive observables. This extends the ‘cloning’ procedure of Giardiná et al (2006 Phys. Rev. Lett. 96 120603) to non-Markovian systems. We demonstrate the validity of this method by testing non-Markovian variants of an ion-channel model and the totally asymmetric exclusion process, recovering results obtainable by other means. (letter)

  14. The medicine selection process in four large university hospitals in Brazil: Does the DTC have a role?

    Directory of Open Access Journals (Sweden)

    Elisangela da Costa Lima-Dellamora

    2015-03-01

    Knowledge about evidence-based medicine selection and the role of the Drug and Therapeutics Committee (DTC) is an important topic in the literature but is scarcely discussed in Brazil. Our objective, using a qualitative design, was to analyze the medicine selection process performed in four large university hospitals in the state of Rio de Janeiro. Information was collected from documents, interviews with key informants and direct observations. Two dimensions were analyzed: the structural and organizational aspects of the selection process and the criteria and methods used in medicine selection. The findings showed that the DTC was active in two hospitals. The structure for decision-making was weak. DTC members had little experience in evidence-based selection, and their everyday functions did not influence their participation in DTC activities. The methods used to evaluate evidence were inadequate. The uncritical adoption of new medicines in these complex hospital facilities may be hampering pharmaceutical services, with consequences for the entire health system. Although the qualitative approach considerably limits the extent to which the results can be extrapolated, we believe that our findings may be relevant to other university hospitals in the country.

  15. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    International Nuclear Information System (INIS)

    Arbanas, G.; Dunn, M.E.; Wiarda, D.

    2011-01-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The 235U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)

  16. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    Energy Technology Data Exchange (ETDEWEB)

    Arbanas, G.; Dunn, M.E.; Wiarda, D., E-mail: arbanasg@ornl.gov, E-mail: dunnme@ornl.gov, E-mail: wiardada@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2011-07-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The 235U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)
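
    The speed-up pattern reported here, days with a hand-written triple loop versus about a minute with a tuned gemm, is easy to reproduce in miniature. NumPy dispatches matrix products to whatever BLAS it is linked against, so the following sketch (not SAMMY code; the quoted matrix sizes need roughly 7 GB of RAM) exercises the same library path:

        import time
        import numpy as np

        def timed_matmul(a, b):
            """Multiply two matrices through the linked BLAS gemm routine."""
            t0 = time.perf_counter()
            c = a @ b                    # dispatched to vendor-optimized dgemm
            return c, time.perf_counter() - t0

        a = np.random.rand(16000, 20000)  # dimensions quoted for the RPCM step
        b = np.random.rand(20000, 16000)
        _, seconds = timed_matmul(a, b)
        print(f"dgemm finished in {seconds:.1f} s")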

  17. Risk Aversion in Game Shows

    DEFF Research Database (Denmark)

    Andersen, Steffen; Harrison, Glenn W.; Lau, Morten I.

    2008-01-01

    We review the use of behavior from television game shows to infer risk attitudes. These shows provide evidence when contestants are making decisions over very large stakes, and in a replicated, structured way. Inferences are generally confounded by the subjective assessment of skill in some games, and the dynamic nature of the task in most games. We consider the game shows Card Sharks, Jeopardy!, Lingo, and finally Deal Or No Deal. We provide a detailed case study of the analyses of Deal Or No Deal, since it is suitable for inference about risk attitudes and has attracted considerable attention.

  18. Video game players show more precise multisensory temporal processing abilities.

    Science.gov (United States)

    Donohue, Sarah E; Woldorff, Marty G; Mitroff, Stephen R

    2010-05-01

    Recent research has demonstrated enhanced visual attention and visual perception in individuals with extensive experience playing action video games. These benefits manifest in several realms, but much remains unknown about the ways in which video game experience alters perception and cognition. In the present study, we examined whether video game players' benefits generalize beyond vision to multisensory processing by presenting auditory and visual stimuli within a short temporal window to video game players and non-video game players. Participants performed two discrimination tasks, both of which revealed benefits for video game players: In a simultaneity judgment task, video game players were better able to distinguish whether simple visual and auditory stimuli occurred at the same moment or slightly offset in time, and in a temporal-order judgment task, they revealed an enhanced ability to determine the temporal sequence of multisensory stimuli. These results suggest that people with extensive experience playing video games display benefits that extend beyond the visual modality to also impact multisensory processing.

  19. Geoinformation web-system for processing and visualization of large archives of geo-referenced data

    Science.gov (United States)

    Gordov, E. P.; Okladnikov, I. G.; Titov, A. G.; Shulgina, T. M.

    2010-12-01

    A working model of an information-computational system aimed at scientific research in the area of climate change is presented. The system allows processing and analysis of large archives of geophysical data obtained both from observations and modeling. Accumulated experience in developing information-computational web-systems providing computational processing and visualization of large archives of geo-referenced data was used during the implementation (Gordov et al, 2007; Okladnikov et al, 2008; Titov et al, 2009). The functional capabilities of the system comprise a set of procedures for mathematical and statistical analysis, processing and visualization of data. At present five archives of data are available for processing: the 1st and 2nd editions of the NCEP/NCAR Reanalysis, the ECMWF ERA-40 Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, and the NOAA-CIRES XX Century Global Reanalysis Version I. To provide data processing functionality, a computational modular kernel and a class library providing data access for computational modules were developed. Currently a set of computational modules for climate change indices approved by WMO is available. A special module providing visualization of results and export to Encapsulated PostScript, GeoTIFF and ESRI shape files was also developed. As a technological basis for the representation of cartographical information on the Internet, the GeoServer software conforming to OpenGIS standards is used. Integration of GIS functionality with web-portal software has been performed to provide a basis for the web-portal's development as part of the geoinformation web-system. Such a geoinformation web-system is a next step in the development of applied information-telecommunication systems, offering specialists from various scientific fields unique opportunities to perform reliable analysis of heterogeneous geophysical data using approved computational algorithms. It will allow a wide range of researchers to work with geophysical data without specific programming

  20. Asymptotic description of two metastable processes of solidification for the case of large relaxation time

    International Nuclear Information System (INIS)

    Omel'yanov, G.A.

    1995-07-01

    The non-isothermal Cahn-Hilliard equations in the n-dimensional case (n = 2,3) are considered. The interaction length is proportional to a small parameter, and the relaxation time is proportional to a constant. The asymptotic solutions describing two metastable processes are constructed and justified. The soliton-type solution describes the first stage of separation in an alloy, when a region of "superheated liquid" appears inside the "solid" part. The Van der Waals-type solution describes the free-interface dynamics for large times. The smoothness of the temperature is established for large times, and the Mullins-Sekerka problem describing the free interface is derived. (author). 46 refs
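
    For orientation, and only as a sketch of the standard isothermal limit (the paper treats a temperature-coupled variant with a relaxation term), the Cahn-Hilliard system with small interaction length $\varepsilon$ and a double-well free energy $f$ reads

        $\partial_{t} c = \Delta \mu, \qquad \mu = f'(c) - \varepsilon^{2} \Delta c, \qquad f(c) = \tfrac{1}{4}(c^{2} - 1)^{2}.$

    In the non-isothermal setting this system is coupled to an equation for the temperature, and its sharp-interface limit is what yields the Mullins-Sekerka free-boundary problem derived in the paper.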

  1. Forest landscape models, a tool for understanding the effect of the large-scale and long-term landscape processes

    Science.gov (United States)

    Hong S. He; Robert E. Keane; Louis R. Iverson

    2008-01-01

    Forest landscape models have become important tools for understanding large-scale and long-term landscape (spatial) processes such as climate change, fire, windthrow, seed dispersal, insect outbreak, disease propagation, forest harvest, and fuel treatment, because controlled field experiments designed to study the effects of these processes are often not possible…

  2. In-database processing of a large collection of remote sensing data: applications and implementation

    Science.gov (United States)

    Kikhtenko, Vladimir; Mamash, Elena; Chubarov, Dmitri; Voronina, Polina

    2016-04-01

    Large archives of remote sensing data are now available to scientists, yet the need to work with individual satellite scenes or product files constrains studies that span a wide temporal range or spatial extent. The resources (storage capacity, computing power and network bandwidth) required for such studies are often beyond the capabilities of individual geoscientists. This problem has been tackled before in remote sensing research and has inspired several information systems. Some of them, such as NASA Giovanni [1] and Google Earth Engine, have already proved their utility for science. Analysis tasks involving large volumes of numerical data are not unique to Earth Sciences. Recent advances in data science are enabled by the development of in-database processing engines that bring processing closer to storage, use declarative query languages to facilitate parallel scalability and provide high-level abstraction of the whole dataset. We build on the idea of bridging the gap between file archives containing remote sensing data and databases by integrating files into a relational database as foreign data sources and performing analytical processing inside the database engine. Thereby a higher-level query language can efficiently address problems of arbitrary size: from accessing the data associated with a specific pixel or a grid cell to complex aggregation over spatial or temporal extents over a large number of individual data files. This approach was implemented using PostgreSQL for a Siberian regional archive of satellite data products holding hundreds of terabytes of measurements from multiple sensors and missions taken over a decade-long span. While preserving the original storage layout, and therefore compatibility with existing applications, the in-database processing engine provides a toolkit for provisioning remote sensing data in scientific workflows and applications. The use of SQL, a widely used high-level declarative query language, simplifies interoperability
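
    The practical effect is that an analysis spanning thousands of scenes reduces to one declarative query executed next to the data. A hypothetical client-side example in Python (table, column and bounding-box values are invented for illustration; the engine described is PostgreSQL with archive files exposed as foreign tables):

        import psycopg2  # client side; the heavy lifting stays in PostgreSQL

        QUERY = """
            SELECT grid_id,
                   date_trunc('month', acquired_at) AS month,
                   avg(value)                       AS mean_value
            FROM   sensor_measurements   -- foreign table over archive files
            WHERE  geom && ST_MakeEnvelope(82.0, 50.0, 92.0, 56.0, 4326)
            GROUP  BY grid_id, month
        """

        with psycopg2.connect("dbname=rs_archive") as conn, conn.cursor() as cur:
            cur.execute(QUERY)
            rows = cur.fetchall()  # only the aggregates cross the network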

  3. Large scale synthesis of α-Si3N4 nanowires through a kinetically favored chemical vapour deposition process

    Science.gov (United States)

    Liu, Haitao; Huang, Zhaohui; Zhang, Xiaoguang; Fang, Minghao; Liu, Yan-gai; Wu, Xiaowen; Min, Xin

    2018-01-01

    Understanding the kinetic barrier and driving force for crystal nucleation and growth is decisive for the synthesis of nanowires with controllable yield and morphology. In this research, we developed an effective reaction system to synthesize very large scale α-Si3N4 nanowires (hundreds of milligrams) and carried out a comparative study to characterize the kinetic influence of gas precursor supersaturation and liquid metal catalyst. The phase composition, morphology, microstructure and photoluminescence properties of the as-synthesized products were characterized by X-ray diffraction, Fourier-transform infrared spectroscopy, field emission scanning electron microscopy, transmission electron microscopy and room temperature photoluminescence measurement. The yield of the products relates not only to the reaction temperature (thermodynamic condition) but also to the distribution of gas precursors (kinetic condition). As revealed in this research, by controlling the gas diffusion process, the yield of the nanowire products could be greatly improved. The experimental results indicate that the supersaturation is the dominant factor in the as-designed system rather than the catalyst. With excellent non-flammability and high thermal stability, the large scale α-Si3N4 products would have potential applications to the improvement of strength of high temperature ceramic composites. The photoluminescence spectrum of the α-Si3N4 shows a blue shift which could be valuable for future applications in blue-green emitting devices. There is no doubt that the large scale products are the basis of these applications.

  4. Large critical current density improvement in Bi-2212 wires through the groove-rolling process

    International Nuclear Information System (INIS)

    Malagoli, A; Bernini, C; Braccini, V; Romano, G; Putti, M; Chaud, X; Debray, F

    2013-01-01

    Recently there has been a growing interest in Bi-2212 superconductor round wire for high magnetic field use, despite the fact that an increase of the critical current is still needed to boost its successful use in such applications. Recent studies have demonstrated that the main obstacle to current flow, especially in long wires, is the residual porosity inside these powder-in-tube processed conductors that develops from bubble agglomeration when the Bi-2212 melts. In this work we tried to overcome this issue affecting the wire densification by changing the deformation process. Here we show the effects of groove rolling versus the drawing process on the critical current density J_C and on the microstructure. In particular, groove-rolled multifilamentary wires show a J_C increased by a factor of about 3 with respect to drawn wires prepared with the same Bi-2212 powder and architecture. We think that this approach in the deformation process is able to produce the required improvements both because the superconducting properties are enhanced and because it makes the fabrication process faster and cheaper. (paper)

  5. Large-scale simulation of ductile fracture process of microstructured materials

    International Nuclear Information System (INIS)

    Tian Rong; Wang Chaowei

    2011-01-01

    The promise of computational science in the extreme-scale computing era is to reduce and decompose macroscopic complexities into microscopic simplicities at the expense of high spatial and temporal resolution of computing. In materials science and engineering, the direct combination of 3D microstructure data sets and 3D large-scale simulations provides a unique opportunity for the development of a comprehensive understanding of nano/microstructure-property relationships in order to systematically design materials with specific desired properties. In this paper, we present a framework simulating the ductile fracture process zone in microstructural detail. The experimentally reconstructed microstructural data set is directly embedded into a FE mesh model to improve the simulation fidelity of microstructure effects on fracture toughness. To the best of our knowledge, this is the first time that fracture toughness has been linked to multiscale microstructures in a realistic 3D numerical model in a direct manner. (author)

  6. 7 CFR 201.33 - Seed in bulk or large quantities; seed for cleaning or processing.

    Science.gov (United States)

    2010-01-01

    ... quantities; seed for cleaning or processing. (a) In the case of seed in bulk, the information required under... seeds. (b) Seed consigned to a seed cleaning or processing establishment, for cleaning or processing for... pertaining to such seed show that it is “Seed for processing,” or, if the seed is in containers and in...

  7. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  8. Large deviations and idempotent probability

    CERN Document Server

    Puhalskii, Anatolii

    2001-01-01

    In the view of many probabilists, author Anatolii Puhalskii's research results stand among the most significant achievements in the modern theory of large deviations. In fact, his work marked a turning point in the depth of our understanding of the connections between the large deviation principle (LDP) and well-known methods for establishing weak convergence results. Large Deviations and Idempotent Probability expounds upon the recent methodology of building large deviation theory along the lines of weak convergence theory. The author develops an idempotent (or maxitive) probability theory, introduces idempotent analogues of martingales (maxingales), Wiener and Poisson processes, and Ito differential equations, and studies their properties. The large deviation principle for stochastic processes is formulated as a certain type of convergence of stochastic processes to idempotent processes. The author calls this large deviation convergence. The approach to establishing large deviation convergence uses novel com...

  9. Violation of the factorization theorem in large-angle radiative Bhabha scattering

    International Nuclear Information System (INIS)

    Arbuzov, A.B.; Kuraev, Eh.A.; Shajkhatdenov, B.G.

    1998-01-01

    The lowest order QED radiative corrections to the radiative large-angle Bhabha scattering process in the region where all the kinematical invariants are large compared to the electron mass are considered. We show that the leading logarithmic corrections do not factorize in front of the Born cross section, contrary to the picture assumed in the renormalization group approach. An estimate of the leading and nonleading contributions is given for typical kinematics of the hard process at Φ-factory energies.

  10. Processing large sensor data sets for safeguards : the knowledge generation system.

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Maikel A.; Smartt, Heidi Anne; Matthews, Robert F.

    2012-04-01

    Modern nuclear facilities, such as reprocessing plants, present inspectors with significant challenges due in part to the sheer amount of equipment that must be safeguarded. The Sandia-developed and patented Knowledge Generation system was designed to automatically analyze large amounts of safeguards data to identify anomalous events of interest by comparing sensor readings with those expected from a process of interest and operator declarations. This paper describes a demonstration of the Knowledge Generation system using simulated accountability tank sensor data to represent part of a reprocessing plant. The demonstration indicated that Knowledge Generation has the potential to address several problems critical to the future of safeguards. It could be extended to facilitate remote inspections and trigger random inspections. Knowledge Generation could analyze data to establish trust hierarchies, to facilitate safeguards use of operator-owned sensors.
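
    The core comparison, flagging any time step where a sensor reading departs from what the declared process should produce, can be expressed in a few lines. A schematic stand-in (not the patented system's logic; the names and tolerance are illustrative):

        def anomalous_events(readings, expected, tolerance):
            """Return time indices where a sensor deviates from the signature
            implied by operator declarations by more than `tolerance`."""
            return [t for t, (r, e) in enumerate(zip(readings, expected))
                    if abs(r - e) > tolerance]

        # e.g. accountability-tank level vs. the declared transfer schedule
        # alarms = anomalous_events(tank_level, declared_level, tolerance=0.05)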

  11. Asia-Pacific area shows big gains in processing

    International Nuclear Information System (INIS)

    Vielvoye, R.

    1991-01-01

    This paper reports on the Asia-Pacific region's buoyant refining and petrochemical industries, which are reacting to lessons from the Persian Gulf war. First, and least palatable, is the knowledge that there is no alternative to oil from the Middle East to fuel headlong economic growth. Iraq's Aug. 2, 1990, invasion of Kuwait, resulting in the loss of crude oil from both countries and the flow of products from Kuwait's sophisticated refining complexes, hammered home another valuable lesson. In a crisis, the petroleum industry, and oil exporting countries in particular, will in the short term find it easier to make substitute crude supplies available than to conjure up products from alternative processing capacity. The Japanese, as might be expected, are implementing new policies to take account of this lesson. Japan's tightly controlled refining sector has been told it can expand capacity for the first time in 18 years. And, with the blessing of the Japanese government, a group of companies led by Nippon Oil has agreed to a joint venture with Saudi Arabian Oil Co. that will lead to new refining capacity in Japan and a new export refinery in Saudi Arabia that is likely to be dedicated to the Japanese market.

  12. Nonaqueous processing methods

    International Nuclear Information System (INIS)

    Coops, M.S.; Bowersox, D.F.

    1984-09-01

    A high-temperature process utilizing molten salt extraction from molten metal alloys has been developed for purification of spent power reactor fuels. Experiments with laboratory-scale processing operations show that purification and throughput parameters comparable to the Barnwell Purex process can be achieved by pyrochemical processing in equipment one-tenth the size, with all wastes being discharged as stable metal alloys at greatly reduced volume and disposal cost. This basic technology can be developed for large-scale processing of spent reactor fuels. 13 references, 4 figures

  13. Circumpolar assessment of rhizosphere priming shows limited increase in carbon loss estimates for permafrost soils but large regional variability

    Science.gov (United States)

    Wild, B.; Keuper, F.; Kummu, M.; Beer, C.; Blume-Werry, G.; Fontaine, S.; Gavazov, K.; Gentsch, N.; Guggenberger, G.; Hugelius, G.; Jalava, M.; Koven, C.; Krab, E. J.; Kuhry, P.; Monteux, S.; Richter, A.; Shazhad, T.; Dorrepaal, E.

    2017-12-01

    Predictions of soil organic carbon (SOC) losses in the northern circumpolar permafrost area converge around 15% (± 3% standard error) of the initial C pool by 2100 under the RCP 8.5 warming scenario. Yet none of these estimates consider plant-soil interactions such as the rhizosphere priming effect (RPE). While laboratory experiments have shown that the input of plant-derived compounds can stimulate SOC losses by up to 1200%, the magnitude of the RPE in natural ecosystems is unknown and no methods for upscaling have existed so far. Here we present the first spatially and depth-explicit RPE model (PrimeSCale), which allows estimates of the RPE at large scales. We combine available spatial data (SOC, C/N, GPP, ALT and ecosystem type) and new ecological insights to assess the importance of the RPE at the circumpolar scale. We use a positive saturating relationship between the RPE and belowground C allocation and two ALT-dependent rooting-depth distribution functions (for tundra and boreal forest) to proportionally assign belowground C allocation and RPE to individual soil depth increments. The model makes it possible to take into account reasonable limiting factors on additional SOC losses by RPE, including interactions between spatial and/or depth variation in GPP, plant root density, SOC stocks and ALT. We estimate potential RPE-induced SOC losses at 9.7 Pg C (5–95% CI: 1.5–23.2 Pg C) by 2100 (RCP 8.5). This corresponds to an increase of the current permafrost SOC-loss estimate from 15% of the initial C pool to about 16%. If we apply an additional molar C/N threshold of 20 to account for microbial C limitation as a requirement for the RPE, SOC losses by RPE are further reduced to 6.5 Pg C (5–95% CI: 1.0–16.8 Pg C) by 2100 (RCP 8.5). Although our results show that current estimates of permafrost soil C losses are robust without taking into account the RPE, our model also highlights high-RPE risk in Siberian lowland areas and Alaska north of the Brooks Range. The small overall impact of
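
    The abstract specifies only that the RPE increases with belowground C allocation and saturates; a Michaelis-Menten-type form is one common way to encode such a relationship (our illustration, not necessarily the exact PrimeSCale parameterization):

        $\mathrm{RPE}(C_{\mathrm{bg}}) = \mathrm{RPE}_{\max} \, \dfrac{C_{\mathrm{bg}}}{k_{1/2} + C_{\mathrm{bg}}},$

    where $C_{\mathrm{bg}}$ is the belowground C allocation assigned to a soil depth increment and $k_{1/2}$ a half-saturation constant.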

  14. High-Throughput Tabular Data Processor - Platform independent graphical tool for processing large data sets.

    Science.gov (United States)

    Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz

    2018-01-01

    High-throughput technologies generate considerable amounts of data which often require bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform-independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data that is produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expressions, or command-line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease-predisposing variants in next-generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merging, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility in terms of input file handling provides long-term potential functionality in high-throughput analysis pipelines, as the program is not limited by currently existing applications and data formats. HTDP is available as open-source software (https://github.com/pmadanecki/htdp).
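
    As an illustration of the kind of filter/merge operation HTDP automates through its GUI, the sketch below does the equivalent with pandas; the file names, column names, and thresholds are hypothetical, not part of HTDP:

```python
import pandas as pd

# Hypothetical inputs: a variant table and a coverage table, both tab-delimited.
variants = pd.read_csv("variants.vcf.tsv", sep="\t", comment="#")
coverage = pd.read_csv("coverage.bed", sep="\t",
                       names=["chrom", "start", "end", "depth"])

# External criteria file, one gene symbol per line (analogous to HTDP's
# itemized condition sets for repetitive filtering tasks).
with open("genes_of_interest.txt") as fh:
    genes = {line.strip() for line in fh if line.strip()}

# Filter on gene list and quality, then merge with the coverage table.
filtered = variants[variants["GENE"].isin(genes) & (variants["QUAL"] >= 30)]
merged = filtered.merge(coverage, left_on="CHROM", right_on="chrom")
merged.to_csv("merged_filtered.tsv", sep="\t", index=False)
```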

  15. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adhinarayanan, Vignesh [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Turton, Terece [Univ. of Texas, Austin, TX (United States); Feng, Wu [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Rogers, David Honegger [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance the energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are twofold: using a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.
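
    A minimal sketch of the baseline technique, regular resampling of an unstructured grid, is shown below; the synthetic field, node count, and grid resolution are made-up stand-ins for the simulation data used in the study:

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic unstructured data: 50k scattered node positions with a scalar field.
rng = np.random.default_rng(0)
pts = rng.random((50_000, 2))
vals = np.sin(6 * pts[:, 0]) * np.cos(4 * pts[:, 1])

# Resample onto a coarse regular grid: storage drops from 50k scattered samples
# to nx*ny values, at the cost of interpolation error.
nx, ny = 128, 128
gx, gy = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
regular = griddata(pts, vals, (gx, gy), method="linear")
print(regular.shape, np.nanmax(np.abs(regular)))
```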

  16. Processing and properties of large-sized ceramic slabs

    Directory of Open Access Journals (Sweden)

    Fossa, L.

    2010-10-01

    Full Text Available Large-sized ceramic slabs – with dimensions up to 360x120 cm and thickness down to 2 mm – are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability in a rather wide window of firing schedules. The phase composition and compact microstructure of fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performances as well as mechanical and tribological properties conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestows on the slab a certain degree of flexibility, which is emphasized in ceramic-fiberglass composites. These outstanding performances make the large-sized slabs suitable for novel applications: building and construction (new floorings without dismantling the previous paving, ventilated façades, tunnel coverings, insulating panelling), indoor furniture (table tops, doors), and supports for photovoltaic ceramic panels.

    Large-format slabs have been manufactured, with dimensions up to 360x120 cm and thickness below 2 mm, using innovative fabrication methods, starting from porcelain stoneware compositions and employing wet ball milling, spray drying, slow-rate die-less pressing, fast single-stage drying and firing, and a finishing stage that includes bonding fiberglass to the ceramic support and trimming of the final piece.

  17. Large deviations in stochastic heat-conduction processes provide a gradient-flow structure for heat conduction

    International Nuclear Information System (INIS)

    Peletier, Mark A.; Redig, Frank; Vafayi, Kiamars

    2014-01-01

    We consider three one-dimensional continuous-time Markov processes on a lattice, each of which models the conduction of heat: the family of Brownian Energy Processes with parameter m (BEP(m)), a Generalized Brownian Energy Process, and the Kipnis-Marchioro-Presutti (KMP) process. The hydrodynamic limit of each of these three processes is a parabolic equation, the linear heat equation in the case of the BEP(m) and the KMP, and a nonlinear heat equation for the Generalized Brownian Energy Process with parameter a (GBEP(a)). We prove the hydrodynamic limit rigorously for the BEP(m), and give a formal derivation for the GBEP(a). We then formally derive the pathwise large-deviation rate functional for the empirical measure of the three processes. These rate functionals imply gradient-flow structures for the limiting linear and nonlinear heat equations. We contrast these gradient-flow structures with those for processes describing the diffusion of mass, most importantly the class of Wasserstein gradient-flow systems. The linear and nonlinear heat-equation gradient-flow structures are each driven by entropy terms of the form −log ρ; they involve dissipation or mobility terms of order ρ² for the linear heat equation, and a nonlinear function of ρ for the nonlinear heat equation

  18. Evolution and interaction of large interplanetary streams

    International Nuclear Information System (INIS)

    Whang, Y.C.; Burlaga, L.F.

    1985-02-01

    A computer simulation of the evolution and interaction of large interplanetary streams, based on multi-spacecraft observations and an unsteady, one-dimensional MHD model, is presented. Two events, each observed by two or more spacecraft separated by a distance of the order of 10 AU, were studied. The first simulation is based on the plasma and magnetic field observations made by two radially aligned spacecraft. The second simulation is based on an event observed first by Helios-1 in May 1980 near 0.6 AU and later by Voyager-1 in June 1980 at 8.1 AU. These examples show that the dynamical evolution of large-scale solar wind structures is dominated by the shock process, including the formation, collision, and merging of shocks. The interaction of shocks with stream structures also causes a drastic decrease in the amplitude of the solar wind speed variation with increasing heliocentric distance, and as a result of interactions there is a large variation of shock strengths and shock speeds. The simulation results shed light on the interpretation of the interaction and evolution of large interplanetary streams. Observations were made along a few limited trajectories, but simulation results can supplement these by providing the detailed evolution process for large-scale solar wind structures in the vast region not directly observed. The use of a quantitative nonlinear simulation model including the shock-merging process is crucial in the interpretation of data obtained in the outer heliosphere

  19. On the possibility of the multiple inductively coupled plasma and helicon plasma sources for large-area processes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jin-Won; Lee, Yun-Seong, E-mail: leeeeys@kaist.ac.kr; Chang, Hong-Young [Low-temperature Plasma Laboratory, Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of); An, Sang-Hyuk [Agency of Defense Development, Yuseong-gu, Daejeon 305-151 (Korea, Republic of)

    2014-08-15

    In this study, we attempted to determine the feasibility of multiple inductively coupled plasma (ICP) and helicon plasma sources for large-area processes. Experiments were performed with one and two coils to measure plasma and electrical parameters, and a circuit simulation was performed to determine the current at each coil in the two-coil experiment. Based on the results, we could establish the feasibility of multiple ICP sources, owing to the direct change of impedance with current and the saturation of impedance caused by the skin-depth effect. However, a helicon plasma source is difficult to adapt to multiple sources, due to the consistent change of real impedance with mode transition and the low uniformity of the B-field confinement. As a result, it is expected that ICP can be adapted to multiple sources for large-area processes.

  20. Manufacturing Process Simulation of Large-Scale Cryotanks

    Science.gov (United States)

    Babai, Majid; Phillips, Steven; Griffin, Brian

    2003-01-01

    NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting between the various manufacturing approaches. By creating 3-D manufacturing process simulations, the varying approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows for cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI. As part of the SLI, The Boeing Company was awarded a basic period contract to research and propose options for both a metallic and a composite cryotank. Boeing then entered into a task agreement with the Marshall Space Flight Center to provide manufacturing

  1. Video Game Players Show More Precise Multisensory Temporal Processing Abilities

    OpenAIRE

    Donohue, Sarah E.; Woldorff, Marty G.; Mitroff, Stephen R.

    2010-01-01

    Recent research has demonstrated enhanced visual attention and visual perception in individuals with extensive experience playing action video games. These benefits manifest in several realms, but much remains unknown about the ways in which video game experience alters perception and cognition. The current study examined whether video game players’ benefits generalize beyond vision to multisensory processing by presenting video game players and non-video game players auditory and visual stim...

  2. Decision process in MCDM with large number of criteria and heterogeneous risk preferences

    Directory of Open Access Journals (Sweden)

    Jian Liu

    Full Text Available A new decision process is proposed to address the challenge of multi-criteria decision making (MCDM) problems with a large number of criteria and decision makers with heterogeneous risk preferences. First, from the perspective of objective data, the effective criteria are extracted based on the similarity relations between criterion values, and the criteria are weighted, respectively. Second, the corresponding types of theoretical models of risk-preference expectations are built, based on the possibility and similarity between criterion values, to solve the problem of different interval numbers with the same expectation. Then, the risk preferences (risk-seeking, risk-neutral and risk-averse) are embedded in the decision process. Later, the optimal decision object is selected according to the risk preferences of decision makers based on the corresponding theoretical model. Finally, a new algorithm of an information aggregation model is proposed based on fairness maximization of decision results for the group decision, considering the coexistence of decision makers with heterogeneous risk preferences. The scientific rationality of this new method is verified through the analysis of a real case. Keywords: Heterogeneous, Risk preferences, Fairness, Decision process, Group decision

  3. TO BE OR NOT TO BE: AN INFORMATIVE NON-SYMBOLIC NUMERICAL MAGNITUDE PROCESSING STUDY ABOUT SMALL VERSUS LARGE NUMBERS IN INFANTS

    Directory of Open Access Journals (Sweden)

    Annelies CEULEMANS

    2014-03-01

    Full Text Available Many studies have tested the association between numerical magnitude processing and mathematical achievement, with conflicting findings reported for individuals with mathematical learning disorders. Some of the inconsistencies might be explained by the number of non-symbolic stimuli or dot collections used in the studies. It has been hypothesized that there is an object-file system for 'small' and an analogue magnitude system for 'large' numbers. This two-system account has been supported by the set-size limit of the object-file system (three items). A boundary was defined accordingly, categorizing numbers below four as 'small' and numbers from four upward as 'large'. However, data on 'small' number processing and on the 'boundary' between small and large numbers are missing. In this contribution we provide data from infants discriminating between the number sets 4 vs. 8 and 1 vs. 4, both containing the number four, combined with a large and a small number respectively. Participants were 25 and 26 full-term 9-month-olds for 4 vs. 8 and 1 vs. 4, respectively. The stimuli (dots) were controlled for continuous variables. Eye-tracking was combined with the habituation paradigm. The results showed that the infants were successful in discriminating 1 from 4, but failed to discriminate 4 from 8 dots. This finding supports the assumption of the number four as a 'small' number and enlarges the object-file system's limit. This study might help to explain inconsistencies across studies. Moreover, the information may be useful in answering parents' questions about challenges that vulnerable children with number-processing problems, such as children with mathematical learning disorders, might encounter. In addition, the study might give some information on the stimuli that can be used to effectively foster children's magnitude processing skills.

  4. Development of Integrated Die Casting Process for Large Thin-Wall Magnesium Applications

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Jon T. [General Motors LLC, Warren, MI (United States); Wang, Gerry [Meridian Lightweight Technologies, Plymouth MI (United States); Luo, Alan [General Motors LLC, Warren, MI (United States)

    2017-11-29

    The purpose of this project was to develop a process and product which would utilize magnesium die casting and result in energy savings when compared to the baseline steel product. The specific product chosen was a side door inner panel for a mid-size car. The scope of the project included: re-design of major structural parts of the door, design and build of the tooling required to make the parts, making of parts, assembly of doors, and testing (both physical and simulation) of doors. Additional work was done on alloy development, vacuum die casting, and overcasting, all in order to improve the performance of the doors and reduce cost. The project achieved the following objectives: 1. Demonstrated the ability to design a large thin-wall magnesium die casting. 2. Demonstrated the ability to manufacture a large thin-wall magnesium die casting in AM60 alloy. 3. Tested, via simulations and/or physical tests, the mechanical and corrosion behavior of magnesium die castings and/or lightweight experimental automotive side doors which incorporate a large, thin-wall, powder-coated magnesium die casting. Under some load cases, the results revealed cracking of the casting, which can be addressed with re-design and better material models for CAE analysis. No corrosion of the magnesium panel was observed. 4. Using life-cycle analysis models, compared the energy consumption and global warming potential of the lightweight door with those of a conventional steel door, both during manufacture and in service. Compared to a steel door, the lightweight door requires more energy to manufacture but less energy during operation (i.e., fuel consumption when driving the vehicle). Similarly, compared to a steel door, the lightweight door has a higher global warming potential (GWP) during manufacture, but a lower GWP during operation. 5. Compared the conventional magnesium die casting process with the "super-vacuum" die casting process. Results achieved with cast tensile bars suggest some

  5. Large Area Sputter Coating on Glass

    Science.gov (United States)

    Katayama, Yoshihito

    Large glass has been used for commercial buildings, housing and vehicles for many years. Glass size for flat displays is getting larger and larger; the glass for the 8th generation is more than 5 m² in area. Demand for large glass is increasing not only in these markets but also in the drastically growing solar cell market. Therefore, large-area coating is in greater demand than ever as a way to add functionality to glass. Sputtering and pyrolysis are the major coating methods for large glass today. The sputtering process is particularly popular because it can deposit a wide variety of materials with good coating uniformity on the glass. This paper describes typical industrial sputtering systems and recent progress in sputtering technology. It also shows typical coated glass products in the architectural, automotive and display fields and comments on their functions, film stacks and so on.

  6. Parallelizing Gene Expression Programming Algorithm in Enabling Large-Scale Classification

    Directory of Open Access Journals (Sweden)

    Lixiong Xu

    2017-01-01

    Full Text Available As one of the most effective function-mining algorithms, the Gene Expression Programming (GEP) algorithm has been widely used in classification, pattern recognition, prediction, and other research fields. Based on self-evolution, GEP is able to mine an optimal function for dealing with further complicated tasks. However, in big data research, GEP encounters a low-efficiency issue due to its long mining process. To improve the efficiency of GEP in big data research, especially for processing large-scale classification tasks, this paper presents a parallelized GEP algorithm using the MapReduce computing model. The experimental results show that the presented algorithm is scalable and efficient for processing large-scale classification tasks.
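
    A minimal sketch of the map/reduce split behind such a parallelization is shown below, using Python's multiprocessing in place of a Hadoop-style cluster; the chromosome encoding and fitness function are placeholder stand-ins, not the paper's GEP implementation:

```python
from multiprocessing import Pool
import random

# Placeholder regression target: samples of y = 2x^2 + 3x + 1.
DATA = [(x, 2 * x * x + 3 * x + 1) for x in range(-50, 51)]

def evaluate(chromosome):
    """Map step: score one candidate, here (a, b, c) of a*x^2 + b*x + c."""
    a, b, c = chromosome
    err = sum((a * x * x + b * x + c - y) ** 2 for x, y in DATA)
    return err, chromosome

if __name__ == "__main__":
    population = [tuple(random.uniform(-5, 5) for _ in range(3))
                  for _ in range(10_000)]
    with Pool() as pool:                 # map: evaluate chunks in parallel
        scored = pool.map(evaluate, population, chunksize=500)
    best_err, best = min(scored)         # reduce: keep the fittest candidate
    print(f"best chromosome {best} with squared error {best_err:.2f}")
```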

  7. High-energy, large-momentum-transfer processes: Ladder diagrams in φ³ theory. Pt. 1

    International Nuclear Information System (INIS)

    Osland, P.; Wu, T.T.; Harvard Univ., Cambridge, MA

    1987-01-01

    Relativistic quantum field theories may give us useful guidance to understanding high-energy, large-momentum-transfer processes, where the center-of-mass energy is much larger than the transverse momentum transfers, which are in turn much larger than the masses of the participating particles. With this possibility in mind, we study the ladder diagrams in φ³ theory. In this paper, some of the necessary techniques are developed and applied to the simplest cases of the fourth- and sixth-order ladder diagrams. (orig.)

  8. ObspyDMT: a Python toolbox for retrieving and processing large seismological data sets

    Directory of Open Access Journals (Sweden)

    K. Hosseini

    2017-10-01

    Full Text Available We present obspyDMT, a free, open-source software toolbox for the query, retrieval, processing and management of seismological data sets, including very large, heterogeneous and/or dynamically growing ones. ObspyDMT simplifies and speeds up user interaction with data centers, in more versatile ways than existing tools. The user is shielded from the complexities of interacting with different data centers and data exchange protocols and is provided with powerful diagnostic and plotting tools to check the retrieved data and metadata. While primarily a productivity tool for research seismologists and observatories, easy-to-use syntax and plotting functionality also make obspyDMT an effective teaching aid. Written in the Python programming language, it can be used as a stand-alone command-line tool (requiring no knowledge of Python) or can be integrated as a module with other Python codes. It facilitates data archiving, preprocessing, instrument correction and quality control – routine but nontrivial tasks that can consume much user time. We describe obspyDMT's functionality, design and technical implementation, accompanied by an overview of its use cases. As an example of a typical problem encountered in seismogram preprocessing, we show how to check for inconsistencies in response files of two example stations. We also demonstrate the fully automated request, remote computation and retrieval of synthetic seismograms from the Synthetics Engine (Syngine) web service of the Data Management Center (DMC) at the Incorporated Research Institutions for Seismology (IRIS).

  9. ObspyDMT: a Python toolbox for retrieving and processing large seismological data sets

    Science.gov (United States)

    Hosseini, Kasra; Sigloch, Karin

    2017-10-01

    We present obspyDMT, a free, open-source software toolbox for the query, retrieval, processing and management of seismological data sets, including very large, heterogeneous and/or dynamically growing ones. ObspyDMT simplifies and speeds up user interaction with data centers, in more versatile ways than existing tools. The user is shielded from the complexities of interacting with different data centers and data exchange protocols and is provided with powerful diagnostic and plotting tools to check the retrieved data and metadata. While primarily a productivity tool for research seismologists and observatories, easy-to-use syntax and plotting functionality also make obspyDMT an effective teaching aid. Written in the Python programming language, it can be used as a stand-alone command-line tool (requiring no knowledge of Python) or can be integrated as a module with other Python codes. It facilitates data archiving, preprocessing, instrument correction and quality control - routine but nontrivial tasks that can consume much user time. We describe obspyDMT's functionality, design and technical implementation, accompanied by an overview of its use cases. As an example of a typical problem encountered in seismogram preprocessing, we show how to check for inconsistencies in response files of two example stations. We also demonstrate the fully automated request, remote computation and retrieval of synthetic seismograms from the Synthetics Engine (Syngine) web service of the Data Management Center (DMC) at the Incorporated Research Institutions for Seismology (IRIS).
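
    obspyDMT itself is driven from the command line, and its exact flags are not reproduced here; the sketch below instead uses the underlying ObsPy FDSN client, on which obspyDMT builds, to show the kind of retrieval and routine preprocessing the toolbox automates. The station, channel, and time window are arbitrary examples:

```python
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

# Request one hour of broadband data from the IRIS data center.
client = Client("IRIS")
t0 = UTCDateTime("2011-03-11T05:46:24")   # Tohoku earthquake origin time
st = client.get_waveforms(network="IU", station="ANMO", location="00",
                          channel="BHZ", starttime=t0, endtime=t0 + 3600)

# Routine preprocessing steps of the kind obspyDMT can apply automatically.
st.detrend("demean")
st.taper(max_percentage=0.05)
print(st)
```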

  10. Multi-format all-optical processing based on a large-scale, hybridly integrated photonic circuit.

    Science.gov (United States)

    Bougioukos, M; Kouloumentas, Ch; Spyropoulou, M; Giannoulis, G; Kalavrouziotis, D; Maziotis, A; Bakopoulos, P; Harmon, R; Rogers, D; Harrison, J; Poustie, A; Maxwell, G; Avramopoulos, H

    2011-06-06

    We investigate, through numerical studies and experiments, the performance of a large-scale, silica-on-silicon photonic integrated circuit for multi-format regeneration and wavelength conversion. The circuit encompasses a monolithically integrated array of four SOAs inside two parallel Mach-Zehnder structures, four delay interferometers and a large number of silica waveguides and couplers. Exploiting phase-incoherent techniques, the circuit is capable of processing OOK signals at variable bit rates, DPSK signals at 22 or 44 Gb/s and DQPSK signals at 44 Gbaud. Simulation studies reveal the wavelength-conversion potential of the circuit, with enhanced regenerative capabilities for the OOK and DPSK modulation formats and acceptable quality degradation for the DQPSK format. Regeneration of 22 Gb/s OOK signals with amplified spontaneous emission (ASE) noise and of DPSK data signals degraded with amplitude, phase and ASE noise is experimentally validated, demonstrating a power penalty improvement of up to 1.5 dB.

  11. Dyslexic Children Show Atypical Cerebellar Activation and Cerebro-Cerebellar Functional Connectivity in Orthographic and Phonological Processing.

    Science.gov (United States)

    Feng, Xiaoxia; Li, Le; Zhang, Manli; Yang, Xiujie; Tian, Mengyu; Xie, Weiyi; Lu, Yao; Liu, Li; Bélanger, Nathalie N; Meng, Xiangzhi; Ding, Guosheng

    2017-04-01

    Previous neuroimaging studies have found atypical cerebellar activation in individuals with dyslexia in either motor-related tasks or language tasks. However, studies investigating atypical cerebellar activation in individuals with dyslexia have mostly used tasks tapping phonological processing. A question that is yet unanswered is whether the cerebellum in individuals with dyslexia functions properly during orthographic processing of words, as growing evidence shows that the cerebellum is also involved in visual and spatial processing. Here, we investigated cerebellar activation and cerebro-cerebellar functional connectivity during word processing in dyslexic readers and typically developing readers using tasks that tap orthographic and phonological codes. In children with dyslexia, we observed an abnormally higher engagement of the bilateral cerebellum for the orthographic task, which was negatively correlated with literacy measures. The greater the reading impairment was for young dyslexic readers, the stronger the cerebellar activation was. This suggests a compensatory role of the cerebellum in reading for children with dyslexia. In addition, a tendency for higher cerebellar activation in dyslexic readers was found in the phonological task. Moreover, the functional connectivity was stronger for dyslexic readers relative to typically developing readers between the lobule VI of the right cerebellum and the left fusiform gyrus during the orthographic task and between the lobule VI of the left cerebellum and the left supramarginal gyrus during the phonological task. This pattern of results suggests that the cerebellum compensates for reading impairment through the connections with specific brain regions responsible for the ongoing reading task. These findings enhance our understanding of the cerebellum's involvement in reading and reading impairment.

  12. Sentence processing in anterior superior temporal cortex shows a social-emotional bias.

    Science.gov (United States)

    Mellem, Monika S; Jasmin, Kyle M; Peng, Cynthia; Martin, Alex

    2016-08-01

    The anterior region of the left superior temporal gyrus/superior temporal sulcus (aSTG/STS) has been implicated in two very different cognitive functions: sentence processing and social-emotional processing. However, the vast majority of the sentence stimuli in previous reports have been of a social or social-emotional nature suggesting that sentence processing may be confounded with semantic content. To evaluate this possibility we had subjects read word lists that differed in phrase/constituent size (single words, 3-word phrases, 6-word sentences) and semantic content (social-emotional, social, and inanimate objects) while scanned in a 7T environment. This allowed us to investigate if the aSTG/STS responded to increasing constituent structure (with increased activity as a function of constituent size) with or without regard to a specific domain of concepts, i.e., social and/or social-emotional content. Activity in the left aSTG/STS was found to increase with constituent size. This region was also modulated by content, however, such that social-emotional concepts were preferred over social and object stimuli. Reading also induced content type effects in domain-specific semantic regions. Those preferring social-emotional content included aSTG/STS, inferior frontal gyrus, posterior STS, lateral fusiform, ventromedial prefrontal cortex, and amygdala, regions included in the "social brain", while those preferring object content included parahippocampal gyrus, retrosplenial cortex, and caudate, regions involved in object processing. These results suggest that semantic content affects higher-level linguistic processing and should be taken into account in future studies. Copyright © 2016. Published by Elsevier Ltd.

  13. Summer Decay Processes in a Large Tabular Iceberg

    Science.gov (United States)

    Wadhams, P.; Wagner, T. M.; Bates, R.

    2012-12-01

    We present observational results from an experiment carried out during July-August 2012 on a giant grounded tabular iceberg off Baffin Island. The iceberg studied was part of the Petermann Ice Island B1 (PIIB1), which calved off the Petermann Glacier in NW Greenland in 2010. Since 2011 it has been aground in 100 m of water on the Baffin Island shelf at 69 deg 06'N, 66 deg 06'W. As part of the project a set of high-resolution GPS sensors and tiltmeters was placed on the ice island to record rigid-body motion as well as flexural responses to wind, waves, current and tidal forces, while a Waverider buoy monitored incident waves and swell. On July 31, 2012 a major breakup event was recorded, with a piece of 25,000 sq m surface area calving off the iceberg. At the time of breakup, GPS sensors were collecting data both on the main berg and on the newly calved piece, while two of us (PW and TJWW) were standing on the broken-out portion, which rose by 0.6 m to achieve a new isostatic equilibrium. Crucially, there was no significant swell at the time of breakup, which suggests a melt-driven decay process rather than wave-driven flexural break-up. The GPS sensors recorded two disturbances during the hour preceding the breakup, indicative of crack growth and propagation. Qualitative observation during the two weeks in which our research ship was moored to, or was close to, the ice island edge indicates that an important mechanism for summer ablation is successive collapses of the overburden from above an unsupported wave cut, which creates a submerged ram fringing the berg. A model of buoyancy stresses induced by

  14. Benchmarking processes for managing large international space programs

    Science.gov (United States)

    Mandell, Humboldt C., Jr.; Duke, Michael B.

    1993-01-01

    The relationship between management style and program costs is analyzed to determine the feasibility of financing large international space missions. The incorporation of management systems is considered to be essential to realizing low-cost spacecraft and planetary surface systems. Several companies, ranging from the large Lockheed 'Skunk Works' to small companies including Space Industries, Inc., Rocket Research Corp., and Orbital Sciences Corp., were studied. It is concluded that to lower the prices, the ways in which spacecraft and hardware are developed must be changed. Benchmarking of successful low-cost space programs has revealed a number of prescriptive rules for low-cost management, including major changes in the relationships between the public and private sectors.

  15. On the use of Cloud Computing and Machine Learning for Large-Scale SAR Science Data Processing and Quality Assessment Analysis

    Science.gov (United States)

    Hua, H.

    2016-12-01

    Geodetic imaging is revolutionizing geophysics, but the scope of discovery has been limited by the labor-intensive technological implementation of the analyses. The Advanced Rapid Imaging and Analysis (ARIA) project has proven capability to automate SAR data processing and analysis. Existing and upcoming SAR missions such as Sentinel-1A/B and NISAR are also expected to generate massive amounts of SAR data. This has brought to the forefront the need for analytical tools for SAR quality assessment (QA) on the large volumes of SAR data, a critical step before higher-level time series and velocity products can be reliably generated. Initially leveraging an advanced hybrid-cloud computing science data system for performing large-scale processing, machine learning approaches were augmented for automated analysis of various quality metrics. Machine-learning-based user training of features, cross-validation, and prediction models were integrated into our cloud-based science data processing flow to enable large-scale and high-throughput QA analytics, enabling improvements to the production quality of geodetic data products.

  16. Large-D gravity and low-D strings.

    Science.gov (United States)

    Emparan, Roberto; Grumiller, Daniel; Tanabe, Kentaro

    2013-06-21

    We show that in the limit of a large number of dimensions a wide class of nonextremal neutral black holes has a universal near-horizon limit. The limiting geometry is the two-dimensional black hole of string theory with a two-dimensional target space. Its conformal symmetry explains the properties of massless scalars found recently in the large-D limit. For black branes with string charges, the near-horizon geometry is that of the three-dimensional black strings of Horne and Horowitz. The analogies between the α' expansion in string theory and the large-D expansion in gravity suggest a possible effective string description of the large-D limit of black holes. We comment on applications to several subjects, in particular to the problem of critical collapse.

  17. Process chain validation in micro and nano replication

    DEFF Research Database (Denmark)

    Calaon, Matteo

    to quantification of replication quality over large areas of surface topography based on areal detection techniques and angular diffraction measurements were developed. A series of injection molding and compression molding experiments aimed at process analysis and optimization showed the possibility to control… features' dimensional accuracy variation through the identification of relevant process parameters. Statistical design of experiments results showed the influence of both process parameters (mold temperature, packing time, packing pressure) and design parameters (channel width and direction with respect… Innovations in nanotechnology propose applications integrating micro and nanometer structures fabricated as master geometries for final replication on polymer substrates. The possibility for polymer materials of being processed with technologies enabling large-volume production introduces solutions…

  18. The large deviation approach to statistical mechanics

    International Nuclear Information System (INIS)

    Touchette, Hugo

    2009-01-01

    The theory of large deviations is concerned with the exponential decay of probabilities of large fluctuations in random systems. These probabilities are important in many fields of study, including statistics, finance, and engineering, as they often yield valuable information about the large fluctuations of a random system around its most probable state or trajectory. In the context of equilibrium statistical mechanics, the theory of large deviations provides exponential-order estimates of probabilities that refine and generalize Einstein's theory of fluctuations. This review explores this and other connections between large deviation theory and statistical mechanics, in an effort to show that the mathematical language of statistical mechanics is the language of large deviation theory. The first part of the review presents the basics of large deviation theory, and works out many of its classical applications related to sums of random variables and Markov processes. The second part goes through many problems and results of statistical mechanics, and shows how these can be formulated and derived within the context of large deviation theory. The problems and results treated cover a wide range of physical systems, including equilibrium many-particle systems, noise-perturbed dynamics, nonequilibrium systems, as well as multifractals, disordered systems, and chaotic systems. This review also covers many fundamental aspects of statistical mechanics, such as the derivation of variational principles characterizing equilibrium and nonequilibrium states, the breaking of the Legendre transform for nonconcave entropies, and the characterization of nonequilibrium fluctuations through fluctuation relations.
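
    The central object referred to here can be stated compactly. The display below is the textbook formulation of the large deviation principle and Cramér's rate function for the empirical mean of i.i.d. variables, given as orientation rather than as an excerpt from the review:

```latex
% Large deviation principle for the empirical mean S_n of n i.i.d. variables X:
%   P(S_n \approx s) \asymp e^{-n I(s)},
% with rate function I given by the Legendre-Fenchel transform of the
% cumulant generating function \lambda:
\[
  I(s) \;=\; \sup_{k \in \mathbb{R}} \bigl\{\, k s - \lambda(k) \,\bigr\},
  \qquad
  \lambda(k) \;=\; \ln \mathbb{E}\bigl[e^{k X}\bigr].
\]
```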

  19. The large deviation approach to statistical mechanics

    Science.gov (United States)

    Touchette, Hugo

    2009-07-01

    The theory of large deviations is concerned with the exponential decay of probabilities of large fluctuations in random systems. These probabilities are important in many fields of study, including statistics, finance, and engineering, as they often yield valuable information about the large fluctuations of a random system around its most probable state or trajectory. In the context of equilibrium statistical mechanics, the theory of large deviations provides exponential-order estimates of probabilities that refine and generalize Einstein’s theory of fluctuations. This review explores this and other connections between large deviation theory and statistical mechanics, in an effort to show that the mathematical language of statistical mechanics is the language of large deviation theory. The first part of the review presents the basics of large deviation theory, and works out many of its classical applications related to sums of random variables and Markov processes. The second part goes through many problems and results of statistical mechanics, and shows how these can be formulated and derived within the context of large deviation theory. The problems and results treated cover a wide range of physical systems, including equilibrium many-particle systems, noise-perturbed dynamics, nonequilibrium systems, as well as multifractals, disordered systems, and chaotic systems. This review also covers many fundamental aspects of statistical mechanics, such as the derivation of variational principles characterizing equilibrium and nonequilibrium states, the breaking of the Legendre transform for nonconcave entropies, and the characterization of nonequilibrium fluctuations through fluctuation relations.

  20. Drell–Yan process at Large Hadron Collider

    Indian Academy of Sciences (India)

    The Drell–Yan process at the LHC, q q̄ → Z/γ* → ℓ⁺ℓ⁻, is one of the benchmarks for confirmation of the Standard Model at the TeV energy scale. Since the theoretical prediction for the rate is precise and the final state is clean as well as relatively easy to measure, the process can be studied at the LHC even at relatively low luminosity.

  1. Innovation Processes in Large-Scale Public Foodservice-Case Findings from the Implementation of Organic Foods in a Danish County

    DEFF Research Database (Denmark)

    Mikkelsen, Bent Egberg; Nielsen, Thorkild; Kristensen, Niels Heine

    2005-01-01

    is the idea that large-scale foodservice such as hospital food service should adopt a buy-organic policy due to its large buying volume. But whereas implementation of organic foods has developed quite unproblematically in smaller institutions such as kindergartens and nurseries, introduction of organic… foods into large-scale foodservice such as that taking place in hospitals and larger homes for the elderly has proven to be quite difficult. The very complex planning, procurement and processing procedures used in such facilities are among the reasons for this. Against this background an evaluation…

  2. Design of an RF Antenna for a Large-Bore, High Power, Steady State Plasma Processing Chamber for Material Separation

    International Nuclear Information System (INIS)

    Rasmussen, D.A.; Freeman, R.L.

    2001-01-01

    The purpose of this Cooperative Research and Development Agreement (CRADA) between UT-Battelle, LLC (Contractor) and Archimedes Technology Group (Participant) is to evaluate the design of an RF antenna for a large-bore, high-power, steady-state plasma processing chamber for material separation. The criterion for optimization is to maximize the power deposition in the plasma while operating at acceptable voltages and currents in the antenna structure

  3. Research on the drawing process with a large total deformation wires of AZ31 alloy

    International Nuclear Information System (INIS)

    Bajor, T; Muskalski, Z; Suliga, M

    2010-01-01

    Magnesium and its alloys have been extensively studied in recent years, not only because of their potential applications as light-weight engineering materials, but also owing to their biodegradability. Due to their hexagonal close-packed crystallographic structure, cold plastic processing of magnesium alloys is difficult. Preliminary research carried out by the authors has indicated that applying the KOBO method, based on the effect of cyclic strain path change, to the deformation of magnesium alloys provides the possibility of obtaining a fine-grained structure material for further cold plastic processing with large total deformation. The main purpose of this work is to present research findings concerning a detailed analysis of the mechanical properties and of the changes occurring in the structure of AZ31 alloy wire during the multistage cold drawing process. The appropriate selection of drawing parameters and the application of multistep heat treatment operations enable the deformation of the AZ31 alloy in the cold drawing process with a total draft of about 90%.

  4. Gluon bremsstrahlung effects in large P⊥ hadron-hadron scattering

    International Nuclear Information System (INIS)

    Fox, G.C.; Kelly, R.L.

    1982-02-01

    We consider effects of parton (primarily gluon) bremsstrahlung in the initial and final states of high-transverse-momentum hadron-hadron scattering. Monte Carlo calculations based on conventional QCD parton branching and scattering processes are presented. The calculations are carried only to the parton level in the final state. We apply the model to the Drell-Yan process and to high-transverse-momentum hadron-hadron scattering triggered with a large-aperture calorimeter. We show that the latter triggers are biased in that they select events with unusually large bremsstrahlung effects. We suggest that this trigger bias explains the large cross section and non-coplanar events observed in the NA5 experiment at the SPS

  5. Quenches in large superconducting magnets

    International Nuclear Information System (INIS)

    Eberhard, P.H.; Alston-Garnjost, M.; Green, M.A.; Lecomte, P.; Smits, R.G.; Taylor, J.D.; Vuillemin, V.

    1977-08-01

    The development of large high-current-density superconducting magnets requires an understanding of the quench process by which the magnet goes normal. A theory which describes the quench process in large superconducting magnets is presented and compared with experimental measurements. The use of a quench theory to improve the design of large high-current-density superconducting magnets is discussed

  6. Different types of nitrogen deposition show variable effects on the soil carbon cycle process of temperate forests.

    Science.gov (United States)

    Du, Yuhan; Guo, Peng; Liu, Jianqiu; Wang, Chunyu; Yang, Ning; Jiao, Zhenxia

    2014-10-01

    Nitrogen (N) deposition significantly affects the soil carbon (C) cycle process of forests. However, the influence of different types of N on it still remains unclear. In this work, ammonium nitrate was selected as an inorganic N (IN) source, while urea and glycine were chosen as organic N (ON) sources. Different ratios of IN to ON (1 : 4, 2 : 3, 3 : 2, 4 : 1, and 5 : 0) were mixed with equal total amounts and then used to fertilize temperate forest soils for 2 years. Results showed that IN deposition inhibited soil C cycle processes, such as soil respiration, soil organic C decomposition, and enzymatic activities, and induced the accumulation of recalcitrant organic C. By contrast, ON deposition promoted these processes. Addition of ON also resulted in accelerated transformation of recalcitrant compounds into labile compounds and increased CO2 efflux. Meanwhile, greater ON deposition may convert C sequestration in forest soils into a C source. These results indicated the importance of the IN to ON ratio in controlling the soil C cycle, which can consequently change the ecological effect of N deposition. © 2014 John Wiley & Sons Ltd.

  7. Statistical process control charts for attribute data involving very large sample sizes: a review of problems and solutions.

    Science.gov (United States)

    Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard

    2013-04-01

    The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
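
    A minimal numerical sketch of the contrast between classical p-chart limits and Laney's p'-chart limits is given below; the formulas are the standard published ones, while the monthly counts and denominators are synthetic:

```python
import numpy as np

# Synthetic attribute data: monthly event counts over very large denominators.
counts = np.array([512, 498, 530, 601, 475, 549, 520, 563])
sizes = np.array([41000, 39500, 42500, 47000, 38000, 44000, 41500, 45500])

p = counts / sizes
pbar = counts.sum() / sizes.sum()
sigma_p = np.sqrt(pbar * (1 - pbar) / sizes)   # within-subgroup sigma only

# Laney's adjustment: z-scores, then between-subgroup variation estimated
# from the average moving range of z (divided by the d2 constant 1.128).
z = (p - pbar) / sigma_p
sigma_z = np.mean(np.abs(np.diff(z))) / 1.128

ucl_p = pbar + 3 * sigma_p                     # classical p-chart upper limits
ucl_laney = pbar + 3 * sigma_p * sigma_z       # inflated p'-chart upper limits
print(f"pbar={pbar:.5f}, sigma_z={sigma_z:.2f}")
print(np.column_stack((p, ucl_p, ucl_laney)))
```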

  8. Consultancy on Large-Scale Submerged Aerobic Cultivation Process Design - Final Technical Report: February 1, 2016 -- June 30, 2016

    Energy Technology Data Exchange (ETDEWEB)

    Crater, Jason [Gemomatica, Inc., San Diego, CA (United States); Galleher, Connor [Gemomatica, Inc., San Diego, CA (United States); Lievense, Jeff [Gemomatica, Inc., San Diego, CA (United States)

    2017-05-12

    NREL is developing an advanced aerobic bubble column model using Aspen Custom Modeler (ACM). The objective of this work is to integrate the new fermentor model with existing techno-economic models in Aspen Plus and Excel to establish a new methodology for guiding process design. To assist this effort, NREL has contracted Genomatica to critique and make recommendations for improving NREL's bioreactor model and large-scale aerobic bioreactor design for biologically producing lipids at commercial scale. Genomatica has highlighted a few areas for improving the functionality and effectiveness of the model. Genomatica recommends using a compartment-model approach with an integrated black-box kinetic model of the production microbe. We also suggest including calculations for stirred-tank reactors to extend the model's functionality and adaptability for future process designs. Genomatica also suggests making several modifications to NREL's large-scale lipid production process design. The recommended process modifications are based on Genomatica's internal techno-economic assessment experience and are focused primarily on minimizing capital and operating costs. These recommendations include selecting/engineering a thermotolerant yeast strain with lipid excretion; using bubble column fermentors; increasing the size of production fermentors; reducing the number of vessels; employing semi-continuous operation; and recycling cell mass.
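
    As an illustration of the kind of arithmetic that drives aerobic fermentor sizing, the sketch below checks oxygen transfer against oxygen uptake; the kLa, saturation, and uptake values are generic assumptions, not numbers from the NREL model or the Genomatica review:

```python
# Back-of-envelope oxygen-transfer check for an aerobic bubble column.
kla = 150.0 / 3600.0      # volumetric mass-transfer coefficient, 1/s (assumed)
c_star = 7.0e-3           # O2 saturation concentration, kg/m^3 (assumed)
c_liquid = 2.0e-3         # dissolved O2 held above the critical level, kg/m^3
otr = kla * (c_star - c_liquid)    # oxygen transfer rate, kg O2/(m^3 s)

our = 2.5e-3 / 60.0       # oxygen uptake rate of the culture, kg/(m^3 s) (assumed)
print(f"OTR = {otr*3600:.2f} kg O2/m^3/h, OUR = {our*3600:.2f} kg O2/m^3/h")
print("transfer-limited" if otr < our else "uptake-limited")
```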

  9. Haptically guided grasping. FMRI shows right-hemisphere parietal stimulus encoding, and bilateral dorso-ventral parietal gradients of object- and action-related processing during grasp execution

    Directory of Open Access Journals (Sweden)

    Mattia Marangon

    2016-01-01

    Full Text Available The neural bases of haptically-guided grasp planning and execution are largely unknown, especially for stimuli having no visual representations. Therefore, we used functional magnetic resonance imaging (fMRI) to monitor brain activity during haptic exploration of novel 3D complex objects, subsequent grasp planning, and the execution of the pre-planned grasps. Haptic object exploration, involving extraction of shape, orientation and length of the to-be-grasped targets, was associated with fronto-parietal, temporo-occipital, and insular cortex activity. Yet only the anterior divisions of the posterior parietal cortex (PPC) of the right hemisphere were significantly more engaged in exploration of complex objects (vs. simple control disks). None of these regions were re-recruited during the planning phase. Even more surprisingly, the left-hemisphere intraparietal, temporal, and occipital areas that were significantly invoked for grasp planning did not show sensitivity to object features. Finally, grasp execution, involving the re-recruitment of the critical right-hemisphere PPC clusters, was also significantly associated with two kinds of bilateral parieto-frontal processes. The first represents transformations of grasp-relevant target features and is linked to the dorso-dorsal (lateral and medial) parieto-frontal networks. The second monitors grasp kinematics and belongs to the ventro-dorsal networks. Indeed, signal modulations associated with these distinct functions follow dorso-ventral gradients, with left aIPS showing significant sensitivity to both target features and the characteristics of the required grasp. Thus, our results from the haptic domain are consistent with the notion that the parietal processing for action guidance reflects primarily transformations from object-related to effector-related coding, and these mechanisms are rather independent of sensory input modality.

  10. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    Science.gov (United States)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest-neighbor searching among a large database of observations can lead to reliable prediction results. However, in the real-time application of Earthquake Early Warning (EEW) systems, the accurate prediction using a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases, reducing the processing time of the nearest-neighbor search used for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Applying the KD-tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward and the results will reduce the overall time of warning delivery for EEW.
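
    A minimal sketch of the KD-tree speedup described here, using SciPy's cKDTree; the nine-dimensional filter-bank feature vectors and database size are placeholders, not the Gutenberg Algorithm's actual feature sets:

```python
import numpy as np
from scipy.spatial import cKDTree

# Placeholder database of archived waveform feature vectors.
rng = np.random.default_rng(1)
database = rng.normal(size=(500_000, 9))
query = rng.normal(size=(1, 9))       # features extracted from an incoming P-wave

tree = cKDTree(database)              # built once, offline
dist, idx = tree.query(query, k=30)   # fast nearest-neighbor search per event
# Ground motion would then be predicted from the peak amplitudes of the
# 30 most similar historical records, looked up by these indices.
print(idx[0][:5], dist[0][:5])
```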

  11. Process evaluation of treatment times in a large radiotherapy department

    International Nuclear Information System (INIS)

    Beech, R.; Burgess, K.; Stratford, J.

    2016-01-01

    Purpose/objective: The Department of Health (DH) recognises access to appropriate and timely radiotherapy (RT) services as crucial in improving cancer patient outcomes, especially when facing a predicted increase in cancer diagnosis. There is a lack of ‘real-time’ data regarding daily demand of a linear accelerator, the impact of increasingly complex techniques on treatment times, and whether current scheduling reflects time needed for RT delivery, which would be valuable in highlighting current RT provision. Material/methods: A systematic quantitative process evaluation was undertaken in a large regional cancer centre, including a satellite centre, between January and April 2014. Data collected included treatment room-occupancy time, RT site, RT and verification technique and patient mobility status. Data was analysed descriptively; average room-occupancy times were calculated for RT techniques and compared to historical standardised treatment times within the department. Results: Room-occupancy was recorded for over 1300 fractions, over 50% of which overran their allotted treatment time. In a focused sample of 16 common techniques, 10 overran their allocated timeslots. Verification increased room-occupancy by six minutes (50%) over non-imaging. Treatments for patients requiring mobility assistance took four minutes (29%) longer. Conclusion: The majority of treatments overran their standardised timeslots. Although technique advancement has reduced RT delivery time, room-occupancy has not necessarily decreased. Verification increases room-occupancy and needs to be considered when moving towards adaptive techniques. Mobility affects room-occupancy and will become increasingly significant in an ageing population. This evaluation assesses validity of current treatment times in this department, and can be modified and repeated as necessary. - Highlights: • A process evaluation examined room-occupancy for various radiotherapy techniques. • Appointment lengths

  12. The effects of large scale processing on caesium leaching from cemented simulant sodium nitrate waste

    International Nuclear Information System (INIS)

    Lee, D.J.; Brown, D.J.

    1982-01-01

    The effects of large scale processing on the properties of cemented simulant sodium nitrate waste have been investigated. Leach tests have been performed on full-size drums, cores and laboratory samples of cement formulations containing Ordinary Portland Cement (OPC), Sulphate Resisting Portland Cement (SRPC) and a blended cement (90% ground granulated blast furnace slag/10% OPC). In addition, development of the cement hydration exotherms with time and the temperature distribution in 220 dm³ samples have been followed. (author)

  13. Enhanced process understanding and multivariate prediction of the relationship between cell culture process and monoclonal antibody quality.

    Science.gov (United States)

    Sokolov, Michael; Ritscher, Jonathan; MacKinnon, Nicola; Souquet, Jonathan; Broly, Hervé; Morbidelli, Massimo; Butté, Alessandro

    2017-09-01

    This work investigates the insights and understanding which can be deduced from predictive process models for the product quality of a monoclonal antibody based on designed high-throughput cell culture experiments performed at milliliter (ambr-15®) scale. The investigated process conditions include various media supplements as well as pH and temperature shifts applied during the process. First, principal component analysis (PCA) is used to show the strong correlation characteristics among the product quality attributes, including aggregates, fragments, charge variants, and glycans. Then, partial least squares regression (PLS1 and PLS2) is applied to predict the product quality variables based on process information (one by one or simultaneously). The comparison of those two modeling techniques shows that a single (PLS2) model is capable of revealing the interrelationship of the process characteristics to the large set of product quality variables. In order to show the dynamic evolution of process predictability, separate models are defined at different time points, showing that several product quality attributes are mainly driven by the media composition and, hence, can be decently predicted from early on in the process, while others are strongly affected by process parameter changes during the process. Finally, by coupling the PLS2 models with a genetic algorithm, the model performance can be further improved and, most importantly, the interpretation of the high-dimensional process-product interrelationship can be significantly simplified. The generally applicable toolset presented in this case study provides a solid basis for decision making and process optimization throughout process development. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1368-1380, 2017. © 2017 American Institute of Chemical Engineers.
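    As a sketch of the modelling step described above, the following scikit-learn snippet fits a single PLS2 model for all quality attributes and, for comparison, one PLS1 model per attribute. This is a generic illustration, not the authors' pipeline; the run counts, variable dimensions, and component numbers are hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.random((60, 8))    # hypothetical: 60 ambr-scale runs x 8 process variables
Y = rng.random((60, 12))   # hypothetical: 12 product quality attributes per run

# PLS2: one model predicts all quality attributes simultaneously.
pls2 = PLSRegression(n_components=4).fit(X, Y)
Y_hat = pls2.predict(X)

# PLS1: one single-response model per quality attribute.
pls1_models = [PLSRegression(n_components=4).fit(X, Y[:, [j]])
               for j in range(Y.shape[1])]
```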

  14. Process Simulation and Characterization of Substrate Engineered Silicon Thin Film Transistor for Display Sensors and Large Area Electronics

    International Nuclear Information System (INIS)

    Hashmi, S M; Ahmed, S

    2013-01-01

    Design, simulation, fabrication and post-process qualification of substrate-engineered Thin Film Transistors (TFTs) are carried out to suggest an alternate manufacturing process step focused on display sensors and large area electronics applications. Damage created by ion implantation of helium and silicon ions into a single-crystalline n-type silicon substrate provides an alternate route to create an amorphized region responsible for the fabrication of TFT structures with controllable and application-specific output parameters. The post-process qualification of the starting material and full-cycle devices using Rutherford Backscattering Spectrometry (RBS) and Proton or Particle Induced X-ray Emission (PIXE) techniques also provides insight for optimizing the process protocols, as well as into their applicability in the manufacturing cycle

  15. Survey of high-voltage pulse technology suitable for large-scale plasma source ion implantation processes

    International Nuclear Information System (INIS)

    Reass, W.A.

    1994-01-01

    Many new plasma process ideas are finding their way from the research lab to the manufacturing plant floor. These require high voltage (HV) pulse power equipment, which must be optimized for application, system efficiency, and reliability. Although no single HV pulse technology is suitable for all plasma processes, various classes of high voltage pulsers may offer greater versatility and economy to the manufacturer. Technology developed for existing radar and particle accelerator modulator power systems can be utilized to develop a modern large scale plasma source ion implantation (PSII) system. The HV pulse networks can be broadly divided into two classes of systems: those that generate the voltage directly, and those that use some type of pulse forming network and step-up transformer. This article examines these HV pulse technologies and discusses their applicability to the specific PSII process. Typical systems reviewed include high power solid state systems, hard tube systems such as crossed-field "hollow beam" switch tubes and planar tetrodes, and "soft" tube systems with crossatrons and thyratrons. Results are tabulated and suggestions provided for a particular PSII process

  16. A progress report for the large block test of the coupled thermal-mechanical-hydrological-chemical processes

    International Nuclear Information System (INIS)

    Lin, W.; Wilder, D.G.; Blink, J.

    1994-10-01

    This is a progress report on the Large Block Test (LBT) project. The purpose of the LBT is to study some of the coupled thermal-mechanical-hydrological-chemical (TMHC) processes in the near field of a nuclear waste repository under controlled boundary conditions. To do so, a large block of Topopah Spring tuff will be heated from within for about 4 to 6 months, then cooled down for about the same duration. Instruments to measure temperature, moisture content, stress, displacement, and chemical changes will be installed in three directions in the block. Meanwhile, laboratory tests will be conducted on small blocks to investigate individual thermal-mechanical, thermal-hydrological, and thermal-chemical processes. The fractures in the large block will be characterized from five exposed surfaces. The minerals on fracture surfaces will be studied before and after the test. The results from the LBT will be useful for testing and building confidence in models that will be used to predict TMHC processes in a repository. The boundary conditions to be controlled on the block include zero moisture flux and zero heat flux on the sides, constant temperature on the top, and constant stress on the outside surfaces of the block. To control these boundary conditions, a load-retaining frame is required. A 3 × 3 × 4.5 m block of Topopah Spring tuff has been isolated on the outcrop at Fran Ridge, Nevada Test Site. Pre-test model calculations indicate that a permeability of at least 10⁻¹⁵ m² is required so that a dryout zone can be created within a practical time frame when the block is heated from within. Neutron logging was conducted in some of the vertical holes to estimate the initial moisture content of the block. It was found that about 60 to 80% of the pore volume of the block is saturated with water. Cores from the vertical holes have been used to map the fractures and to determine the properties of the rock. A current schedule is included in the report

  17. Massive Cloud Computing Processing of P-SBAS Time Series for Displacement Analyses at Large Spatial Scale

    Science.gov (United States)

    Casu, F.; de Luca, C.; Lanari, R.; Manunta, M.; Zinno, I.

    2016-12-01

    A methodology for computing surface deformation time series and mean velocity maps of large areas is presented. Our approach relies on the availability of a multi-temporal set of Synthetic Aperture Radar (SAR) data collected from ascending and descending orbits over an area of interest, and also permits estimation of the vertical and horizontal (East-West) displacement components of the Earth's surface. The adopted methodology is based on an advanced Cloud Computing implementation of the Differential SAR Interferometry (DInSAR) Parallel Small Baseline Subset (P-SBAS) processing chain, which allows the unsupervised processing of large SAR data volumes, from the raw data (level-0) imagery up to the generation of DInSAR time series and maps. The presented solution, which is highly scalable, has been tested on the ascending and descending ENVISAT SAR archives, which have been acquired over a large area of Southern California (US) that extends for about 90,000 km². Such an input dataset has been processed in parallel by exploiting 280 computing nodes of the Amazon Web Services Cloud environment. Moreover, to produce the final mean deformation velocity maps of the vertical and East-West displacement components of the whole investigated area, we also took advantage of the information available from external GPS measurements, which makes it possible to account for regional trends not easily detectable by DInSAR and to refer the P-SBAS measurements to an external geodetic datum. The presented results clearly demonstrate the effectiveness of the proposed approach, which paves the way to the extensive use of the available ERS and ENVISAT SAR data archives. Furthermore, the proposed methodology is particularly suitable for dealing with the very large data flow provided by the Sentinel-1 constellation, thus permitting the extension of DInSAR analyses to a nearly global scale. This work is partially supported by: the DPC-CNR agreement, the EPOS-IP project and the ESA GEP project.

  18. Statistical processing of large image sequences.

    Science.gov (United States)

    Khellah, F; Fieguth, P; Murray, M J; Allen, M

    2005-01-01

    The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.
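    For reference, the recursion the paper emulates is the standard Kalman filter predict/update step; a minimal numpy version for a toy low-dimensional state is sketched below. Propagating the full covariance P is O(n²) in memory and worse in time, which for a 512 x 512 image state (n ≈ 2.6 × 10⁵) is exactly what makes the exact filter impractical and motivates the reduced-complexity approach. The matrices and dimensions here are hypothetical.

```python
import numpy as np

def kalman_step(x, P, z, A, Q, H, R):
    """One predict/update cycle of the standard (exact) Kalman filter.

    x, P : prior state estimate and covariance
    z    : new measurement
    A, Q : state transition matrix and process noise covariance
    H, R : observation matrix and measurement noise covariance
    """
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```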

  19. Large eddy simulation of turbulent and stably-stratified flows

    International Nuclear Information System (INIS)

    Fallon, Benoit

    1994-01-01

    The unsteady turbulent flow over a backward-facing step is studied by means of Large Eddy Simulation with a structure-function subgrid model, in both isothermal and stably-stratified configurations. Without stratification, the flow develops highly-distorted Kelvin-Helmholtz billows, undergoing helical pairing, with Λ-shaped vortices shed downstream. We show that forcing injected by recirculation fluctuations governs the development of these oblique-mode instabilities. The statistical results show good agreement with the experimental measurements. For stably-stratified configurations, the flow remains more two-dimensional. We show how, with increasing stratification, the shear layer growth is frozen by inhibition first of the pairing process and then of the Kelvin-Helmholtz instabilities themselves, and by the development of gravity waves or stable density interfaces. Eddy structures of the flow present striking analogies with the stratified mixing layer. Additional computations show the development of secondary Kelvin-Helmholtz instabilities on the vorticity layers between two primary structures. This important mechanism, based on baroclinic effects (horizontal density gradients), constitutes an additional part of the turbulent mixing process. Finally, the feasibility of Large Eddy Simulation for industrial flows is demonstrated by studying a complex stratified cavity. Temperature fluctuations are compared to experimental measurements. We also develop three-dimensional unsteady animations in order to understand and visualize turbulent interactions. (author) [fr

  20. Automation of Survey Data Processing, Documentation and Dissemination: An Application to Large-Scale Self-Reported Educational Survey.

    Science.gov (United States)

    Shim, Eunjae; Shim, Minsuk K.; Felner, Robert D.

    Automation of the survey process has proved successful in many industries, yet it is still underused in educational research. This is largely due to the facts (1) that number crunching is usually carried out using software that was developed before information technology existed, and (2) that the educational research is to a great extent trapped…

  1. Children with dyslexia show a reduced processing benefit from bimodal speech information compared to their typically developing peers.

    Science.gov (United States)

    Schaadt, Gesa; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Männel, Claudia

    2018-01-17

    During information processing, individuals benefit from bimodally presented input, as has been demonstrated for speech perception (i.e., printed letters and speech sounds) or the perception of emotional expressions (i.e., facial expression and voice tuning). While typically developing individuals show this bimodal benefit, school children with dyslexia do not. Currently, it is unknown whether the bimodal processing deficit in dyslexia also occurs for visual-auditory speech processing that is independent of reading and spelling acquisition (i.e., no letter-sound knowledge is required). Here, we tested school children with and without spelling problems on their bimodal perception of video-recorded mouth movements pronouncing syllables. We analyzed the event-related potential Mismatch Response (MMR) to visual-auditory speech information and compared this response to the MMR to monomodal speech information (i.e., auditory-only, visual-only). We found a reduced MMR with later onset to visual-auditory speech information in children with spelling problems compared to children without spelling problems. Moreover, when comparing bimodal and monomodal speech perception, we found that children without spelling problems showed significantly larger responses in the visual-auditory experiment compared to the visual-only response, whereas children with spelling problems did not. Our results suggest that children with dyslexia exhibit general difficulties in bimodal speech perception independently of letter-speech sound knowledge, as apparent in altered bimodal speech perception and lacking benefit from bimodal information. This general deficit in children with dyslexia may underlie the previously reported reduced bimodal benefit for letter-speech sound combinations and similar findings in emotion perception. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Low-cost synthesis of pure ZnO nanowalls showing three-fold symmetry

    Science.gov (United States)

    Scuderi, Mario; Strano, Vincenzina; Spinella, Corrado; Nicotra, Giuseppe; Mirabella, Salvo

    2018-04-01

    ZnO nanowalls (NWLs) are a non-toxic, Earth-abundant, high surface-to-volume ratio semiconducting nanostructure which has already shown potential applications in biosensing, environmental monitoring and energy. Low-cost synthesis of these nanostructures is extremely appealing for the large-scale scale-up of laboratory results, and its implementation has to be tested at the nanoscale, at least in terms of chemical purity and crystallographic orientation. Here, we have produced pure and texturized ZnO NWLs by using chemical bath deposition (CBD) synthesis followed by a thermal treatment at 300 °C. We examined the NWL formation process and the newly obtained structure at the nanoscale, by means of scanning and transmission electron microscopy in combination with x-ray diffraction and Rutherford backscattering spectrometry. We have shown that only after annealing at 300 °C in nitrogen does the as-grown material, composed of a mixture of Zn-compound NWLs, show its peculiar crystal arrangement. The resulting ZnO sheets are in fact made of ZnO wurtzite domains (4-5 nm) that show a particular kind of texturization; indeed, they are aligned with their c-axis always perpendicular to the sheets forming the wall and rotated (around the c-axis) by multiples of 20° from each other. The presented data show that low-cost CBD, followed by an annealing process, gives pure ZnO with a peculiarly ordered nanostructure that shows three-fold symmetry. Such evidence at the nanoscale will have significant implications for realizing sensing or catalyst devices based on ZnO NWLs.

  3. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    Full Text Available. In this paper, we consider a size-dependent renewal risk model with a stopping-time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method substantially relies on a martingale for the structure of our models.
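    For orientation, precise large-deviation formulas for heavy-tailed aggregate claims are classically stated in the form below (a textbook-style asymptotic; the paper's exact dependence, stopping-time, and uniformity conditions differ):

```latex
% S(t): aggregate claims up to time t, \lambda(t) = \mathbb{E}N(t): mean claim number,
% \mu: mean claim size, \bar F: tail of the claim-size distribution.
P\bigl(S(t) - \mu\lambda(t) > x\bigr) \;\sim\; \lambda(t)\,\bar F(x),
\qquad t \to \infty,
\quad \text{uniformly for } x \ge \gamma\lambda(t),\ \gamma > 0.
```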

  4. Prediction Model of Machining Failure Trend Based on Large Data Analysis

    Science.gov (United States)

    Li, Jirong

    2017-12-01

    Mechanical machining is highly complex and strongly coupled, and involves many control factors, which makes the process prone to failure. To improve the accuracy of fault detection for large mechanical equipment, research on fault trend prediction requires a machining fault trend prediction model based on fault data. Machining data are clustered using a genetic-algorithm-seeded K-means method, and machining features reflecting the correlation dimension of faults are extracted. The spectral characteristics of abnormal vibration during the machining of complex mechanical parts are analyzed, and features are extracted through multi-component spectral decomposition and Hilbert empirical mode decomposition. The extracted features and decomposition results form the knowledge base of an intelligent expert system, which is combined with big data analysis methods to realize fault trend prediction for machining. The simulation results show that this method predicts machining fault trends with good accuracy and judges faults in the machining process reliably; it has good application value for analysis and fault diagnosis in the machining process.
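    Stripped of the genetic-algorithm seeding, the clustering step described above reduces to k-means over fault feature vectors. The sketch below uses plain scikit-learn k-means on hypothetical feature data; the paper's GA-based centroid initialization is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
features = rng.random((5000, 6))  # hypothetical: spectral / correlation-dimension features

# Plain k-means; the paper seeds the centroids with a genetic algorithm instead.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
labels = km.labels_               # cluster id per sample -> candidate fault modes
centers = km.cluster_centers_     # cluster prototypes for the expert system's knowledge base
```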

  5. Reducing Plug and Process Loads for a Large Scale, Low Energy Office Building: NREL's Research Support Facility; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Lobato, C.; Pless, S.; Sheppy, M.; Torcellini, P.

    2011-02-01

    This paper documents the design and operational plug and process load energy efficiency measures needed to allow a large scale office building to reach ultra-high-efficiency building goals. The appendices of this document contain a wealth of documentation pertaining to plug and process load design in the RSF, including a list of the equipment that was selected for use.

  6. Large Spatial Scale Ground Displacement Mapping through the P-SBAS Processing of Sentinel-1 Data on a Cloud Computing Environment

    Science.gov (United States)

    Casu, F.; Bonano, M.; de Luca, C.; Lanari, R.; Manunta, M.; Manzo, M.; Zinno, I.

    2017-12-01

    Since its launch in 2014, the Sentinel-1 (S1) constellation has played a key role in SAR data availability and dissemination all over the world. Indeed, the free and open access data policy adopted by the European Copernicus program, together with the global coverage acquisition strategy, makes the Sentinel constellation a game changer in the Earth Observation scenario. With SAR data having become ubiquitous, the technological and scientific challenge is focused on maximizing the exploitation of this huge data flow. In this direction, the use of innovative processing algorithms and distributed computing infrastructures, such as Cloud Computing platforms, can play a crucial role. In this work we present a Cloud Computing solution for the advanced interferometric (DInSAR) processing chain based on the Parallel SBAS (P-SBAS) approach, aimed at processing S1 Interferometric Wide Swath (IWS) data for the generation of large spatial scale deformation time series in an efficient, automatic and systematic way. This DInSAR chain ingests Sentinel-1 SLC images and carries out several processing steps, to finally compute deformation time series and mean deformation velocity maps. Different parallel strategies have been designed ad hoc for each processing step of the P-SBAS S1 chain, encompassing both multi-core and multi-node programming techniques, in order to maximize the computational efficiency achieved within a Cloud Computing environment and cut down the relevant processing times. The presented P-SBAS S1 processing chain has been implemented on the Amazon Web Services platform and a thorough analysis of the attained parallel performances has been performed to identify and overcome the major bottlenecks to scalability. The presented approach is used to perform national-scale DInSAR analyses over Italy, involving the processing of more than 3000 S1 IWS images acquired from both ascending and descending orbits. Such an experiment confirms the big advantage of

  7. Loss aversion, large deviation preferences and optimal portfolio weights for some classes of return processes

    Science.gov (United States)

    Duffy, Ken; Lobunets, Olena; Suhov, Yuri

    2007-05-01

    We propose a model of a loss averse investor who aims to maximize his expected wealth under certain constraints. The constraints are that he avoids, with high probability, incurring a (suitably defined) unacceptable loss. The methodology employed comes from the theory of large deviations. We explore a number of fundamental properties of the model and illustrate its desirable features. We demonstrate its utility by analyzing assets that follow some commonly used financial return processes: Fractional Brownian Motion, Jump Diffusion, Variance Gamma and Truncated Lévy.

  8. Gravitational segregation of liquid slag in large ladle

    Directory of Open Access Journals (Sweden)

    J. Chen

    2012-04-01

    Full Text Available The process of gravitational segregation causes the components of liquid steel slag to differentiate. The upper slag in the slag ladle contains more CaO, while the lower slag contains more SiO2. The content of MgO in the upper slag (5.48%) is higher than in the lower slag (2.50%); only the Al2O3 content of the upper and lower slag is close. The difference in chemical compositions within the slag ladle shows that gravitational segregation occurs during the slow solidification of liquid steel slag, which has some impact on steel slag processing in large slag ladles.

  9. Torque measurements reveal large process differences between materials during high solid enzymatic hydrolysis of pretreated lignocellulose

    Directory of Open Access Journals (Sweden)

    Palmqvist Benny

    2012-08-01

    Full Text Available Background: A common trend in research on 2nd generation bioethanol is the focus on intensifying the process and increasing the concentration of water insoluble solids (WIS) throughout the process. However, increasing the WIS content is not without problems. For example, the viscosity of pretreated lignocellulosic materials is known to increase drastically with increasing WIS content. Further, at elevated viscosities, problems arise related to poor mixing of the material, such as poor distribution of the enzymes and/or difficulties with temperature and pH control, which results in possible yield reduction. Achieving good mixing is unfortunately not without cost, since the power requirements needed to operate the impeller at high viscosities can be substantial. This highly important scale-up problem can easily be overlooked. Results: In this work, we monitor the impeller torque (and hence power input) in a stirred tank reactor throughout high solid enzymatic hydrolysis of pretreated lignocellulose (Arundo donax and spruce). Two different process modes were evaluated, where either the impeller speed or the impeller power input was kept constant. Results from hydrolysis experiments at a fixed impeller speed of 10 rpm show that a very rapid decrease in impeller torque is experienced during hydrolysis of pretreated arundo (i.e. it loses its fiber network strength), whereas the fiber strength is retained for a longer time within the spruce material. This translates into a relatively low, rather WIS-independent, energy input for arundo, whereas the stirring power demand for spruce is substantially larger and quite WIS-dependent. By operating the impeller at a constant power input (instead of a constant impeller speed) it is shown that power input greatly affects the glucose yield of pretreated spruce whereas the hydrolysis of arundo seems unaffected. Conclusions: The results clearly highlight the large differences between the arundo and spruce materials, both in terms of
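    The link between the monitored impeller torque and the reported power input is the standard stirred-tank relation (implied rather than stated in the abstract):

```latex
P = 2\pi N T
% P: impeller power input [W], N: impeller speed [rev/s], T: measured torque [N m]
```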

  10. Haptically Guided Grasping. fMRI Shows Right-Hemisphere Parietal Stimulus Encoding, and Bilateral Dorso-Ventral Parietal Gradients of Object- and Action-Related Processing during Grasp Execution.

    Science.gov (United States)

    Marangon, Mattia; Kubiak, Agnieszka; Króliczak, Gregory

    2015-01-01

    The neural bases of haptically-guided grasp planning and execution are largely unknown, especially for stimuli having no visual representations. Therefore, we used functional magnetic resonance imaging (fMRI) to monitor brain activity during haptic exploration of novel 3D complex objects, subsequent grasp planning, and the execution of the pre-planned grasps. Haptic object exploration, involving extraction of shape, orientation, and length of the to-be-grasped targets, was associated with the fronto-parietal, temporo-occipital, and insular cortex activity. Yet, only the anterior divisions of the posterior parietal cortex (PPC) of the right hemisphere were significantly more engaged in exploration of complex objects (vs. simple control disks). None of these regions were re-recruited during the planning phase. Even more surprisingly, the left-hemisphere intraparietal, temporal, and occipital areas that were significantly invoked for grasp planning did not show sensitivity to object features. Finally, grasp execution, involving the re-recruitment of the critical right-hemisphere PPC clusters, was also significantly associated with two kinds of bilateral parieto-frontal processes. The first represents transformations of grasp-relevant target features and is linked to the dorso-dorsal (lateral and medial) parieto-frontal networks. The second monitors grasp kinematics and belongs to the ventro-dorsal networks. Indeed, signal modulations associated with these distinct functions follow dorso-ventral gradients, with left aIPS showing significant sensitivity to both target features and the characteristics of the required grasp. Thus, our results from the haptic domain are consistent with the notion that the parietal processing for action guidance reflects primarily transformations from object-related to effector-related coding, and these mechanisms are rather independent of sensory input modality.

  11. DB-XES : enabling process discovery in the large

    NARCIS (Netherlands)

    Syamsiyah, A.; van Dongen, B.F.; van der Aalst, W.M.P.; Ceravolo, P.; Guetl, C.; Rinderle-Ma, S.

    2018-01-01

    Dealing with the abundance of event data is one of the main process discovery challenges. Current process discovery techniques are able to efficiently handle imported event log files that fit in the computer’s memory. Once data files get bigger, scalability quickly drops since the speed required to

  12. Development of a Large Area Advanced Fast RICH Detector for Particle Identification at the Large Hadron Collider Operated with Heavy Ions

    CERN Multimedia

    Piuz, F; Braem, A; Van beelen, J B; Lion, G; Gandi, A

    2002-01-01

    RD26: During the past two years, RD26 groups have focused their activities on the production of CsI-RICH prototypes of large area, up to a square metre, to demonstrate their application in High Energy experiments. Many large CsI photocathodes (up to 40) were produced following the processing techniques further developed within the collaboration. Taking the Quantum Efficiency (QE) measured at 180 nm as a comparative figure of merit of a CsI-PC, Figure 1 shows the increase in performance as improvements were successively implemented in the PC processing sequence. Most efficient were the use of substrates made of nickel, the heat treatment, and handling of the PCs under inert gas. Currently, three large systems based on the CsI-RICH have been approved in the following HEP experiments: HADES at GSI, COMPASS/NA58 at CERN and HMPID/ALICE at LHC, implying up to 14 square metres of CsI-PC. In addition, several CsI-RICH detectors have been successfully operated in the Threshold Imaging Detector at NA44 and ...

  13. The large-scale process of microbial carbonate precipitation for nickel remediation from an industrial soil.

    Science.gov (United States)

    Zhu, Xuejiao; Li, Weila; Zhan, Lu; Huang, Minsheng; Zhang, Qiuzhuo; Achal, Varenyam

    2016-12-01

    Microbial carbonate precipitation is known as an efficient process for the remediation of heavy metals from contaminated soils. In the present study, a urease positive bacterial isolate, identified as Bacillus cereus NS4 through 16S rDNA sequencing, was utilized on a large scale to remove nickel from industrial soil contaminated by the battery industry. The soil was highly contaminated with an initial total nickel concentration of approximately 900 mg kg⁻¹. The soluble-exchangeable fraction was reduced to 38 mg kg⁻¹ after treatment. The primary objective of metal stabilization was achieved by reducing the bioavailability through immobilizing the nickel in the urease-driven carbonate precipitation. The nickel removal in the soils contributed to the transformation of nickel from mobile species into stable biominerals identified as calcite, vaterite, aragonite and nickelous carbonate when analyzed under XRD. It was proven that during precipitation of calcite, Ni²⁺ with an ion radius close to Ca²⁺ was incorporated into the CaCO₃ crystal. The biominerals were also characterized by using SEM-EDS to observe the crystal shape and Raman-FTIR spectroscopy to predict responsible bonding during bioremediation with respect to Ni immobilization. The electronic structure and chemical-state information of the detected elements during the MICP bioremediation process was studied by XPS. This is the first study in which microbial carbonate precipitation was used for the large-scale remediation of metal-contaminated industrial soil. Copyright © 2016 Elsevier Ltd. All rights reserved.
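    The urease-driven route described above follows the standard MICP reaction scheme (a textbook summary, not quoted from the paper): urea hydrolysis supplies carbonate, which co-precipitates nickel with calcium.

```latex
% Urease-catalysed urea hydrolysis, then carbonate precipitation:
\mathrm{CO(NH_2)_2} + 2\,\mathrm{H_2O} \xrightarrow{\text{urease}}
  2\,\mathrm{NH_4^{+}} + \mathrm{CO_3^{2-}}
\qquad
\mathrm{Ca^{2+}}\,(\mathrm{Ni^{2+}}) + \mathrm{CO_3^{2-}} \longrightarrow
  \mathrm{(Ca,Ni)CO_3}\downarrow
```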

  14. MEASUREMENT OF THE HIGH-FIELD Q-DROP IN A LARGE-GRAIN NIOBIUM CAVITY FOR DIFFERENT OXIDATION PROCESSES

    International Nuclear Information System (INIS)

    Gianluigi Ciovati; Peter Kneisel; Alex Gurevich

    2008-01-01

    In this contribution, we present the results from a series of RF tests at 1.7 K and 2.0 K on a single-cell cavity made of high-purity large (with area of the order of a few cm²) grain niobium which underwent various oxidation processes. After initial buffered chemical polishing, anodization, baking in pure oxygen atmosphere and baking in air up to 180 °C was applied with the objective of clearly identifying the role of oxygen and the oxide layer on the Q-drop. During each RF test a temperature mapping system was used, allowing measurement of the local temperature rise of the cavity outer surface due to RF losses, which gives information about the losses' location, their field dependence and space distribution on the RF surface. The results confirmed that the depth affected by baking is about 20-30 nm from the surface and showed that the Q-drop did not re-appear in a previously baked cavity by further baking at 120 °C in pure oxygen atmosphere or in air up to 180 °C. A statistic of the position of the "hot-spots" on the cavity surface showed that grain boundaries are not the preferred location. An interesting correlation was found between the Q-drop onset, the quench field and the low-field energy gap, which supports the hypothesis of thermomagnetic instability governing the Q-drop and the baking effect.

  15. MEASUREMENT OF THE HIGH-FIELD Q-DROP IN A LARGE-GRAIN NIOBIUM CAVITY FOR DIFFERENT OXIDATION PROCESSES

    Energy Technology Data Exchange (ETDEWEB)

    Gianluigi Ciovati; Peter Kneisel; Alex Gurevich

    2008-01-23

    In this contribution, we present the results from a series of RF tests at 1.7 K and 2.0 K on a single-cell cavity made of high-purity large (with area of the order of a few cm²) grain niobium which underwent various oxidation processes. After initial buffered chemical polishing, anodization, baking in pure oxygen atmosphere and baking in air up to 180 °C was applied with the objective of clearly identifying the role of oxygen and the oxide layer on the Q-drop. During each RF test a temperature mapping system was used, allowing measurement of the local temperature rise of the cavity outer surface due to RF losses, which gives information about the losses' location, their field dependence and space distribution on the RF surface. The results confirmed that the depth affected by baking is about 20 – 30 nm from the surface and showed that the Q-drop did not re-appear in a previously baked cavity by further baking at 120 °C in pure oxygen atmosphere or in air up to 180 °C. A statistic of the position of the “hot-spots” on the cavity surface showed that grain boundaries are not the preferred location. An interesting correlation was found between the Q-drop onset, the quench field and the low-field energy gap, which supports the hypothesis of thermo-magnetic instability governing the Q-drop and the baking effect.

  16. Future-oriented maintenance strategy based on automated processes is finding its way into large astronomical facilities at remote observing sites

    Science.gov (United States)

    Silber, Armin; Gonzalez, Christian; Pino, Francisco; Escarate, Patricio; Gairing, Stefan

    2014-08-01

    With expanding sizes and increasing complexity of large astronomical observatories at remote observing sites, the call for an efficient and resource-saving maintenance concept becomes louder. The increasing number of subsystems on telescopes and instruments forces large observatories, as in industry, to rethink conventional maintenance strategies for reaching this demanding goal. The implementation of fully- or semi-automatic processes for standard service activities can help to keep the number of operating staff at an efficient level and to significantly reduce the consumption of valuable consumables or equipment. In this contribution we demonstrate, using the example of the 80 cryogenic subsystems of the ALMA Front End instrument, how an implemented automatic service process increases the availability of spare parts and Line Replaceable Units, and how valuable staff resources can be freed from continuous repetitive maintenance activities to allow focusing more on system diagnostic tasks, troubleshooting and the interchange of line replaceable units. The required service activities are decoupled from the day-to-day work, eliminating dependencies on workload peaks or logistic constraints. The automatic refurbishing processes run in parallel to the operational tasks with constant quality and without compromising the performance of the serviced system components. Consequently, this results in an efficiency increase and less downtime, and keeps the observing schedule on track. Automatic service processes in combination with proactive maintenance concepts provide the necessary flexibility for the complex operational work structures of large observatories. The gained planning flexibility allows an optimization of operational procedures and sequences by considering the required cost efficiency.

  17. The PREP Pipeline: Standardized preprocessing for large-scale EEG analysis

    Directory of Open Access Journals (Sweden)

    Nima Bigdely-Shamlo

    2015-06-01

    Full Text Available The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode/.
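    A minimal numpy illustration of the referencing issue discussed above: ordinary average referencing uses all channels, while excluding channels already flagged as noisy keeps them from contaminating the reference. This is a sketch of the idea only, not the PREP implementation (which estimates the robust reference iteratively); the data and bad-channel list are hypothetical.

```python
import numpy as np

def average_reference(eeg, bad_channels=()):
    """Re-reference EEG data (channels x samples) to the mean of the good channels."""
    bad = set(bad_channels)
    good = [ch for ch in range(eeg.shape[0]) if ch not in bad]
    reference = eeg[good].mean(axis=0)   # noisy channels excluded from the reference
    return eeg - reference               # subtract the reference from every channel

rng = np.random.default_rng(3)
data = rng.standard_normal((64, 10_000))          # 64 channels, hypothetical recording
rereferenced = average_reference(data, bad_channels=[7, 23])
```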

  18. The PREP pipeline: standardized preprocessing for large-scale EEG analysis.

    Science.gov (United States)

    Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A

    2015-01-01

    The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode.

  19. Large-Scale Processes Associated with Inter-Decadal and Inter-Annual Early Spring Rainfall Variability in Taiwan

    Directory of Open Access Journals (Sweden)

    Jau-Ming Chen

    2016-02-01

    Full Text Available Early spring (March-April) rainfall in Taiwan exhibits evident and distinct inter-annual and inter-decadal variability. The inter-annual variability has a positive correlation with the El Niño/Southern Oscillation, while the inter-decadal variability features a phase change beginning in the late 1970s, coherent with the major phase change in the Pacific decadal oscillation. Rainfall variability on both timescales is regulated by large-scale processes showing consistent dynamic features. Rainfall increases are associated with positive sea surface temperature (SST) anomalies in the tropical eastern Pacific and negative SST anomalies in the tropical central Pacific. An anomalous lower-level divergent center appears in the tropical central Pacific. Via a Rossby-wave-like response, an anomalous lower-level anticyclone appears to the southeast of Taiwan over the Philippine Sea-tropical western Pacific region, which is accompanied by an anomalous cyclone to the north-northeast of Taiwan. Both circulation anomalies induce anomalous southwesterly flows that enhance moisture flux from the South China Sea onto Taiwan, resulting in significant moisture convergence near Taiwan. With enhanced moisture supplied by anomalous southwesterly flows, significant increases occur on both inter-annual and inter-decadal timescales in early spring rainfall over Taiwan.

  20. Process Improvement to Enhance Quality in a Large Volume Labor and Birth Unit.

    Science.gov (United States)

    Bell, Ashley M; Bohannon, Jessica; Porthouse, Lisa; Thompson, Heather; Vago, Tony

    Using the Lean process, frontline clinicians identified areas that needed improvement, developed and implemented successful strategies that addressed each gap, and enhanced the quality and safety of care for a large volume perinatal service.

  1. Measuring the In-Process Figure, Final Prescription, and System Alignment of Large Optics and Segmented Mirrors Using Lidar Metrology

    Science.gov (United States)

    Ohl, Raymond; Slotwinski, Anthony; Eegholm, Bente; Saif, Babak

    2011-01-01

    The fabrication of large optics is traditionally a slow process, and fabrication capability is often limited by measurement capability. While techniques exist to measure mirror figure with nanometer precision, measurements of large-mirror prescription are typically limited to submillimeter accuracy. Using a lidar instrument enables one to measure the optical surface rough figure and prescription in virtually all phases of fabrication without moving the mirror from its polishing setup. This technology improves the uncertainty of mirror prescription measurement to the micron regime.

  2. Inclusive spectra of mesons with large transverse momenta in proton-nuclear collisions at high energies

    International Nuclear Information System (INIS)

    Lykasov, G.I.; Sherkhonov, B.Kh.

    1982-01-01

    Based on the earlier proposed quark model of hadron-nucleus processes with large transverse momenta psub(perpendicular), the spectra of π± and K± meson production with large psub(perpendicular) in proton-nucleus collisions at high energies are calculated. A comparison of their dependence on the atomic number A of the target nucleus with experimental data shows good agreement. Theoretical and experimental ratios of the inclusive spectra of K± and π± mesons are compared. The results of the calculations give a rather good description of experimental data on large psub(perpendicular) meson production at high energies

  3. Process parameter impact on properties of sputtered large-area Mo bilayers for CIGS thin film solar cell applications

    Energy Technology Data Exchange (ETDEWEB)

    Badgujar, Amol C.; Dhage, Sanjay R., E-mail: dhage@arci.res.in; Joshi, Shrikant V.

    2015-08-31

    Copper indium gallium selenide (CIGS) has emerged as a promising candidate for thin film solar cells, with efficiencies approaching those of silicon-based solar cells. To achieve optimum performance in CIGS solar cells, uniform, conductive, stress-free, well-adherent, reflective, crystalline molybdenum (Mo) thin films with preferred orientation (110) are desirable as a back contact on large area glass substrates. The present study focuses on cylindrical rotating DC magnetron sputtered bilayer Mo thin films on 300 mm × 300 mm soda lime glass (SLG) substrates. Key sputtering variables, namely power and Ar gas flow rates, were optimized to achieve the best structural, electrical and optical properties. The Mo films were comprehensively characterized and found to possess a high degree of thickness uniformity over the large area. The best crystallinity, reflectance and sheet resistance were obtained at high sputtering powers and low argon gas flow rates, while mechanical properties like adhesion and residual stress were found to be best at low sputtering power and high argon gas flow rate, thereby indicating the need for a suitable trade-off during processing. - Highlights: • Sputtering of bilayer molybdenum thin films on soda lime glass • Large area deposition using a rotating cylindrical direct current magnetron • Trade-off between the sputter process parameters power and pressure • High uniformity of thickness and best electrical properties obtained • Suitable mechanical and optical properties of molybdenum achieved for CIGS application.

  4. Methods for Prediction of Steel Temperature Curve in the Whole Process of a Localized Fire in Large Spaces

    Directory of Open Access Journals (Sweden)

    Zhang Guowei

    2014-01-01

    Full Text Available Based on a full-scale bookcase fire experiment, a fire development model is proposed for the whole process of localized fires in large-space buildings. We found that for localized fires in large-space buildings full of wooden combustible materials, the fire growth phase can be simplified into a t² fire with a fire growth coefficient of 0.0346 kW/s². FDS technology is applied to study the smoke temperature curve for a 2 MW to 25 MW fire occurring within a large space with a height of 6 m to 12 m and a building area of 1,500 m² to 10,000 m², based on the proposed fire development model. Through the analysis of smoke temperature in various fire scenarios, a new approach is proposed to predict the smoke temperature curve. Meanwhile, a modified model of steel temperature development in a localized fire is built. In the modified model, the localized fire source is treated as a point fire source to evaluate the net heat flux from the flame to the steel. The steel temperature curve in the whole process of a localized fire can then be accurately predicted from the above findings. These conclusions could provide a valuable reference for fire simulation, hazard assessment, and fire protection design.
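    The simplified growth model referred to above is the conventional t-squared design fire, in which the heat release rate grows as

```latex
\dot{Q}(t) = \alpha\, t^{2}, \qquad \alpha = 0.0346\ \mathrm{kW/s^{2}},
```

    so that, for example, the 2 MW fire considered above is reached after t = sqrt(2000/0.0346) ≈ 240 s.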

  5. Semantic processing in deaf and hard-of-hearing children: Large N400 mismatch effects in brain responses, despite poor semantic ability

    Directory of Open Access Journals (Sweden)

    Petter Kallioinen

    2016-08-01

    Full Text Available Difficulties in auditory and phonological processing affect semantic processing in the speech comprehension of deaf and hard-of-hearing (DHH) children. However, little is known about brain responses related to semantic processing in this group. We investigated event-related potentials (ERPs) in DHH children with cochlear implants (CI) and/or hearing aids (HA), and in normally hearing controls (NH). We used a semantic priming task with spoken word primes followed by picture targets. In both DHH children and controls, response differences between matching and mismatching targets revealed a typical N400 effect associated with semantic processing. Children with CI had the largest mismatch response despite poor semantic abilities overall; children with CI also had the largest ERP differentiation between mismatch types, with small effects for within-category mismatches (target from the same category as the prime) and large effects for between-category mismatches (target from a different category than the prime). NH and HA children had similar responses to both mismatch types. While the large and differentiated ERP responses in the CI group were unexpected and should be interpreted with caution, the results could reflect less precision in semantic processing among children with CI, or a stronger reliance on predictive processing.

  6. Mizan: A system for dynamic load balancing in large-scale graph processing

    KAUST Repository

    Khayyat, Zuhair

    2013-01-01

    Pregel [23] was recently introduced as a scalable graph mining system that can provide significant performance improvements over traditional MapReduce implementations. Existing implementations focus primarily on graph partitioning as a preprocessing step to balance computation across compute nodes. In this paper, we examine the runtime characteristics of a Pregel system. We show that graph partitioning alone is insufficient for minimizing end-to-end computation. Especially where data is very large or the runtime behavior of the algorithm is unknown, an adaptive approach is needed. To this end, we introduce Mizan, a Pregel system that achieves efficient load balancing to better adapt to changes in computing needs. Unlike known implementations of Pregel, Mizan does not assume any a priori knowledge of the structure of the graph or behavior of the algorithm. Instead, it monitors the runtime characteristics of the system. Mizan then performs efficient fine-grained vertex migration to balance computation and communication. We have fully implemented Mizan; using extensive evaluation we show that - especially for highly-dynamic workloads - Mizan provides up to 84% improvement over techniques leveraging static graph pre-partitioning. © 2013 ACM.
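    As a toy illustration of the kind of runtime monitoring and migration planning described above (a deliberately simplified sketch; Mizan's actual cost model, migration protocol, and BSP integration are more involved, and all names here are hypothetical):

```python
from collections import defaultdict

def plan_migrations(vertex_load, vertex_worker, n_workers, tolerance=1.10):
    """Plan moves of heavy vertices from overloaded to underloaded workers.

    vertex_load   : dict vertex -> measured load (e.g. messages + compute time)
    vertex_worker : dict vertex -> current worker id
    Returns a list of (vertex, src_worker, dst_worker) moves.
    """
    load = defaultdict(float)
    for v, w in vertex_worker.items():
        load[w] += vertex_load[v]
    avg = sum(load.values()) / n_workers

    moves = []
    for w in range(n_workers):
        if load[w] <= tolerance * avg:
            continue  # worker w is within tolerance of the average load
        # Migrate the heaviest vertices of w until it is near the average.
        for v in sorted((v for v, wk in vertex_worker.items() if wk == w),
                        key=lambda v: vertex_load[v], reverse=True):
            if load[w] <= avg:
                break
            dst = min(range(n_workers), key=lambda k: load[k])
            load[w] -= vertex_load[v]
            load[dst] += vertex_load[v]
            moves.append((v, w, dst))
    return moves
```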

  7. Validation of the process control system of an automated large scale manufacturing plant.

    Science.gov (United States)

    Neuhaus, H; Kremers, H; Karrer, T; Traut, R H

    1998-02-01

    The validation procedure for the process control system of a plant for the large scale production of human albumin from plasma fractions is described. A validation master plan is developed, defining the system and elements to be validated, the interfaces with other systems with the validation limits, a general validation concept and supporting documentation. Based on this master plan, the validation protocols are developed. For the validation, the system is subdivided into a field level, which is the equipment part, and an automation level. The automation level is further subdivided into sections according to the different software modules. Based on a risk categorization of the modules, the qualification activities are defined. The test scripts for the different qualification levels (installation, operational and performance qualification) are developed according to a previously performed risk analysis.

  8. Optical methods to study the gas exchange processes in large diesel engines

    Energy Technology Data Exchange (ETDEWEB)

    Gros, S.; Hattar, C. [Wartsila Diesel International Oy, Vaasa (Finland); Hernberg, R.; Vattulainen, J. [Tampere Univ. of Technology, Tampere (Finland). Plasma Technology Lab.

    1996-12-01

    To study the gas exchange processes under realistic conditions in a single cylinder of a large production-line-type diesel engine, a fast optical absorption spectroscopic method was developed. With this method, line-of-sight UV absorption of SO₂ contained in the exhaust gas was measured as a function of time in the exhaust port area of a continuously fired medium speed diesel engine, type Waertsilae 6L20. SO₂ formed during combustion from the sulphur contained in the fuel was used as a tracer to study the gas exchange as a function of time in the exhaust channel. In this case of a 4-stroke diesel engine, by assuming a known concentration of SO₂ in the exhaust gas after exhaust valve opening and before the inlet and exhaust valve overlap period, the measured optical absorption was used to determine the gas density and, further, the instantaneous exhaust gas temperature during the exhaust cycle. (author)
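    The measurement rests on the Beer-Lambert law, which links the transmitted intensity to the line-of-sight gas density (standard form, stated here for orientation; the symbols are the usual ones and not taken from the paper):

```latex
\frac{I(\lambda)}{I_0(\lambda)} \;=\; \exp\bigl(-\,\sigma(\lambda, T)\, n\, L\bigr)
% I/I_0: measured transmittance, \sigma: SO2 absorption cross-section,
% n: SO2 number density along the path, L: optical path length
```

    With the SO₂ mole fraction assumed known, the measured transmittance yields the gas density, from which the instantaneous temperature can be inferred, consistent with the procedure described in the abstract.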

  9. On the self-organizing process of large scale shear flows

    Energy Technology Data Exchange (ETDEWEB)

    Newton, Andrew P. L. [Department of Applied Maths, University of Sheffield, Sheffield, Yorkshire S3 7RH (United Kingdom); Kim, Eun-jin [School of Mathematics and Statistics, University of Sheffield, Sheffield, Yorkshire S3 7RH (United Kingdom); Liu, Han-Li [High Altitude Observatory, National Centre for Atmospheric Research, P. O. BOX 3000, Boulder, Colorado 80303-3000 (United States)

    2013-09-15

    Self-organization is invoked as a paradigm to explore the processes governing the evolution of shear flows. By examining the probability density function (PDF) of the local flow gradient (shear), we show that shear flows reach a quasi-equilibrium state as the growth of shear is balanced by shear relaxation. Specifically, the PDFs of the local shear are calculated numerically and analytically in reduced 1D and 0D models, where the PDFs are shown to converge to a bimodal distribution in the case of temporally correlated forcing of finite correlation time. This bimodal PDF is then shown to be reproduced in nonlinear simulations of 2D hydrodynamic turbulence. Furthermore, the bimodal PDF is demonstrated to result from a self-organizing shear flow with a linear profile. A similar bimodal structure and linear profile of the shear flow are observed in the Gulf Stream, suggesting self-organization.

  10. Signal formation processes in Micromegas detectors and quality control for large size detector construction for the ATLAS new small wheel

    Energy Technology Data Exchange (ETDEWEB)

    Kuger, Fabian

    2017-07-31

    The Micromegas technology is one of the most successful modern gaseous detector concepts and is widely utilized in nuclear and particle physics experiments. Twenty years of R and D rendered the technology sufficiently mature to be selected as precision tracking detector for the New Small Wheel (NSW) upgrade of the ATLAS Muon spectrometer. This will be the first large scale application of Micromegas in one of the major LHC experiments. However, many of the fundamental microscopic processes in these gaseous detectors are still not fully understood, and studies on several detector aspects, like the micromesh geometry, have never been addressed systematically. The studies on signal formation in Micromegas, presented in the first part of this thesis, focus on the microscopic signal electron loss mechanisms and the amplification processes in electron-gas interaction. Based on a detailed model of detector parameter dependencies, these processes are scrutinized in an iterating comparison between experimental results, theory prediction of the macroscopic observables and process simulation on the microscopic level. Utilizing the specialized detectors developed in the scope of this thesis as well as refined simulation algorithms, an unprecedented level of accuracy in the description of the microscopic processes is reached, deepening the understanding of the fundamental processes in gaseous detectors. The second part is dedicated to the challenges arising with the large scale Micromegas production for the ATLAS NSW. A selection of technological choices, partially influenced or determined by the herein presented studies, are discussed alongside a final report on two production related tasks addressing the detectors' core components: For the industrial production of resistive anode PCBs a detailed quality control (QC) and quality assurance (QA) scheme, as well as the testing tools required for it, have been developed. In parallel the study on micromesh parameter optimization

  11. Signal formation processes in Micromegas detectors and quality control for large size detector construction for the ATLAS new small wheel

    International Nuclear Information System (INIS)

    Kuger, Fabian

    2017-01-01

    The Micromegas technology is one of the most successful modern gaseous detector concepts and is widely utilized in nuclear and particle physics experiments. Twenty years of R and D rendered the technology sufficiently mature to be selected as precision tracking detector for the New Small Wheel (NSW) upgrade of the ATLAS Muon spectrometer. This will be the first large scale application of Micromegas in one of the major LHC experiments. However, many of the fundamental microscopic processes in these gaseous detectors are still not fully understood, and studies on several detector aspects, like the micromesh geometry, have never been addressed systematically. The studies on signal formation in Micromegas, presented in the first part of this thesis, focus on the microscopic signal electron loss mechanisms and the amplification processes in electron-gas interaction. Based on a detailed model of detector parameter dependencies, these processes are scrutinized in an iterating comparison between experimental results, theory prediction of the macroscopic observables and process simulation on the microscopic level. Utilizing the specialized detectors developed in the scope of this thesis as well as refined simulation algorithms, an unprecedented level of accuracy in the description of the microscopic processes is reached, deepening the understanding of the fundamental processes in gaseous detectors. The second part is dedicated to the challenges arising with the large scale Micromegas production for the ATLAS NSW. A selection of technological choices, partially influenced or determined by the herein presented studies, are discussed alongside a final report on two production related tasks addressing the detectors' core components: For the industrial production of resistive anode PCBs a detailed quality control (QC) and quality assurance (QA) scheme, as well as the testing tools required for it, have been developed. In parallel the study on micromesh parameter optimization

  12. Towards large-scale production of solution-processed organic tandem modules based on ternary composites: Design of the intermediate layer, device optimization and laser based module processing

    DEFF Research Database (Denmark)

    Li, Ning; Kubis, Peter; Forberich, Karen

    2014-01-01

    on commercially available materials, which enhances the absorption of poly(3-hexylthiophene) (P3HT) and as a result increase the PCE of the P3HT-based large-scale OPV devices; 3. laser-based module processing, which provides an excellent processing resolution and as a result can bring the power conversion...... efficiency (PCE) of mass-produced organic photovoltaic (OPV) devices close to the highest PCE values achieved for lab-scale solar cells through a significant increase in the geometrical fill factor. We believe that the combination of the above mentioned concepts provides a clear roadmap to push OPV towards...

  13. On Assumptions in Development of a Mathematical Model of Thermo-gravitational Convection in the Large Volume Process Tanks Taking into Account Fermentation

    Directory of Open Access Journals (Sweden)

    P. M. Shkapov

    2015-01-01

    Full Text Available The paper provides a mathematical model of thermo-gravity convection in a large volume vertical cylinder. The heat is removed from the product via the cooling jacket at the top of the cylinder. We suppose that a laminar fluid motion takes place. The model is based on the Navier-Stokes equation, the equation of heat transfer through the wall, and the heat transfer equation. A peculiarity of the process in large volume tanks is the spatial distribution of the physical parameters, which is taken into account when constructing the model. The model corresponds to the process of beer wort fermentation in cylindrical-conical tanks (CCT). The CCT volume is divided into three zones, and model equations were obtained for each zone. The first zone has an annular cross-section and is bounded in height by the cooling jacket. In this zone the heat flow from the cooling jacket to the product is predominant. The model equation of the first zone describes the process of heat transfer through the wall and is represented by a linear inhomogeneous partial differential equation that is solved analytically. A number of engineering assumptions were made to describe the second and third zones. The fluid was considered Newtonian, viscous and incompressible. Convective motion is considered in the Boussinesq approximation. The effect of viscous dissipation is not considered. The topology of the fluid motion is similar to cylindrical Poiseuille flow. The second zone model consists of the Navier-Stokes equations in cylindrical coordinates in a simplified form, together with the heat equation in the liquid layer. The third zone comprises the volume occupied by the upward convective flow. The convective flows do not mix and do not exchange heat. At the start of the process the medium has a uniform temperature and zero velocity in the whole volume, which allows us to specify the initial conditions for the process. The paper shows the
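
    For reference, a minimal statement of the governing equations under the Boussinesq approximation used in such models (a generic textbook form, not the authors' exact zone equations):

```latex
% Incompressible Navier-Stokes with Boussinesq buoyancy and heat transport
\begin{aligned}
\nabla \cdot \mathbf{u} &= 0, \\
\rho_0 \left( \partial_t \mathbf{u} + (\mathbf{u} \cdot \nabla)\mathbf{u} \right)
  &= -\nabla p + \mu \nabla^2 \mathbf{u} - \rho_0 \beta (T - T_0)\,\mathbf{g}, \\
\partial_t T + (\mathbf{u} \cdot \nabla) T &= \alpha \nabla^2 T,
\end{aligned}
```

    where β is the thermal expansion coefficient and α the thermal diffusivity; the buoyancy term couples the temperature field to the momentum equation and drives the convective motion between the zones.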

  14. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    Science.gov (United States)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is made on what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology of forecasting large earthquakes, the entire process of one cycle for a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II), and the phase of rupture nucleation at the critical stage where an adequate amount of the elastic strain energy has been stored (phase III). Phase II plays a critical role in physical modeling of intermediate-term forecasting, and phase III in physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling, in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step for establishing the methodology for forecasting large earthquakes.
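
    As a concrete illustration of a slip-dependent constitutive law, the classical linear slip-weakening form is written below (a generic example from the rupture-mechanics literature, not Ohnaka's specific formulation):

```latex
% Linear slip-weakening: shear stress decays from peak to residual over D_c
\tau(u) =
\begin{cases}
  \tau_p - \left( \tau_p - \tau_r \right) \dfrac{u}{D_c}, & u \le D_c, \\[6pt]
  \tau_r, & u > D_c,
\end{cases}
```

    where τ_p is the peak strength, τ_r the residual strength, u the slip, and D_c the critical slip distance; as the abstract notes, such parameters may additionally depend on slip rate or time.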

  15. Research Update: Large-area deposition, coating, printing, and processing techniques for the upscaling of perovskite solar cell technology

    Directory of Open Access Journals (Sweden)

    Stefano Razza

    2016-09-01

    Full Text Available To bring perovskite solar cells to the industrial world, performance must be maintained at the photovoltaic module scale. Here we present large-area manufacturing and processing options applicable to large-area cells and modules. Printing and coating techniques, such as blade coating, slot-die coating, spray coating, screen printing, inkjet printing, and gravure printing (as alternatives to spin coating), as well as vacuum or vapor based deposition and laser patterning techniques, are being developed for an effective scale-up of the technology. The latter also enables the manufacture of solar modules on flexible substrates, an option beneficial for many applications and for roll-to-roll production.

  16. Sentinel-1 data massive processing for large scale DInSAR analyses within Cloud Computing environments through the P-SBAS approach

    Science.gov (United States)

    Lanari, Riccardo; Bonano, Manuela; Buonanno, Sabatino; Casu, Francesco; De Luca, Claudio; Fusco, Adele; Manunta, Michele; Manzo, Mariarosaria; Pepe, Antonio; Zinno, Ivana

    2017-04-01

    -core programming techniques. Currently, Cloud Computing environments make available large collections of computing resources and storage that can be effectively exploited through the presented S1 P-SBAS processing chain to carry out interferometric analyses at a very large scale, in reduced time. This also allows us to deal with the problems connected to the use of the S1 P-SBAS chain in operational contexts, related to hazard monitoring and risk prevention and mitigation, where handling large amounts of data represents a challenging task. As a significant experimental result we performed a large spatial scale SBAS analysis relevant to Central and Southern Italy by exploiting the Amazon Web Services Cloud Computing platform. In particular, we processed in parallel 300 S1 acquisitions covering the Italian peninsula from Lazio to Sicily through the presented S1 P-SBAS processing chain, generating 710 interferograms, thus finally obtaining the displacement time series of the whole processed area. This work has been partially supported by the CNR-DPC agreement, the H2020 EPOS-IP project (GA 676564) and the ESA GEP project.

  17. The peculiarities of large intron splicing in animals.

    Directory of Open Access Journals (Sweden)

    Samuel Shepard

    Full Text Available In mammals a considerable 92% of genes contain introns, with hundreds of these introns reaching sizes of over 50,000 nucleotides. These "large introns" must be spliced out of the pre-mRNA in a timely fashion, which involves bringing together distant 5' donor and 3' acceptor splice sites. In invertebrates, especially Drosophila, it has been shown that larger introns can be spliced efficiently through a process known as recursive splicing - a consecutive splicing from the 5'-end at a series of combined donor-acceptor splice sites called RP-sites. Using a computational analysis of the genomic sequences, we show that vertebrates lack the proper enrichment of RP-sites in their large introns and, therefore, require some other method to aid splicing. We analyzed over 15,000 non-redundant large introns from six mammals, 1,600 from chicken and zebrafish, and 560 non-redundant large introns from five invertebrates. Our bioinformatic investigation demonstrates that, unlike the studied invertebrates, the studied vertebrate genomes contain consistently abundant amounts of direct and complementary strand interspersed repetitive elements (mainly SINEs and LINEs) that may form stems with each other in large introns. This examination showed that the predicted stems are indeed abundant and stable in the large introns of mammals. We hypothesize that such stems with long loops within large introns allow intron splice sites to find each other more quickly by folding the intronic RNA upon itself at smaller intervals, thus reducing the distance between donor and acceptor sites.

  18. Large size space construction for space exploitation

    Science.gov (United States)

    Kondyurin, Alexey

    2016-07-01

    Space exploitation is impossible without large space structures. We need to make sufficiently large volumes of pressurized protective frames for crew, passengers, space processing equipment, etc. We should not be limited in space. At present the size and mass of space constructions are limited by the capacity of launch vehicles, which limits our future exploitation of space by humans and the development of space industry. Large-size space constructions can be made using the curing technology of fiber-filled composites with a reactionable matrix applied directly in free space. For curing, a fabric impregnated with a liquid matrix (prepreg) is prepared in terrestrial conditions and shipped in a container to orbit. In due time the prepreg is unfolded by inflating. After the polymerization reaction, the durable construction can be fitted out with air, apparatus and life support systems. Our experimental studies of the curing processes in a simulated free space environment showed that the curing of composites in free space is possible, and large-size space constructions can be developed. Projects for a space station, Moon base, Mars base, mining station, interplanetary spaceship, telecommunication station, space observatory, space factory, antenna dish, radiation shield and solar sail are proposed and overviewed. The study was supported by the Humboldt Foundation, ESA (contract 17083/03/NL/SFe), the NASA program of stratospheric balloons and RFBR grants (05-08-18277, 12-08-00970 and 14-08-96011).

  19. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    Science.gov (United States)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we

  20. Large-group psychodynamics and massive violence

    Directory of Open Access Journals (Sweden)

    Vamik D. Volkan

    2006-06-01

    Full Text Available Beginning with Freud, psychoanalytic theories concerning large groups have mainly focused on individuals' perceptions of what their large groups psychologically mean to them. This chapter examines some aspects of large-group psychology in its own right and studies the psychodynamics of ethnic, national, religious or ideological groups, the membership of which originates in childhood. I will compare the mourning process in individuals with the mourning process in large groups to illustrate why we need to study large-group psychology as a subject in itself. As part of this discussion I will also describe signs and symptoms of large-group regression. When there is a threat against a large group's identity, massive violence may be initiated, and this violence in turn has an obvious impact on public health.

  1. Are reading and face processing related?

    DEFF Research Database (Denmark)

    Starrfelt, Randi; Klargaard, Solja; Petersen, Anders

    2015-01-01

    Traditionally, perceptual processing of faces and words is considered highly specialized, strongly lateralized, and largely independent. This has, however, recently been challenged by studies showing that learning to read may affect the perceptual and neural processes involved in face recognition......, a lower perceptual threshold, and higher processing speed for words compared to letters. In sum, we find no evidence that reading skills are abnormal in developmental prosopagnosia, a finding that may challenge the recently proposed hypothesis that reading development and face processing abilities...

  2. Innovation Processes in Large-Scale Public Foodservice-Case Findings from the Implementation of Organic Foods in a Danish County

    DEFF Research Database (Denmark)

    Mikkelsen, Bent Egberg; Nielsen, Thorkild; Kristensen, Niels Heine

    2005-01-01

    was carried out of the change process related to the implementation of organic foods in large-scale foodservice facilities in Greater Copenhagen county, in order to study the effects of such a change. Based on the findings, a set of guidelines has been developed for the successful implementation of organic foods...

  3. Chunking of Large Multidimensional Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Rotem, Doron; Otoo, Ekow J.; Seshadri, Sridhar

    2007-02-28

    Data intensive scientific computations as well as on-line analytical processing applications are done on very large datasets that are modeled as k-dimensional arrays. The storage organization of such arrays on disks is done by partitioning the large global array into fixed size hyper-rectangular sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic workloads on real life data sets, show that our chunking is much more efficient than the existing approximate solutions.
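
    To make the optimization target concrete, here is a minimal sketch of the metric being minimized, under simplifying assumptions of my own (not necessarily the paper's two models): axis-aligned range queries of known side lengths placed uniformly at random over a large array.

```python
import itertools
import math

def expected_chunks(chunk_shape, query_shape):
    """Expected number of chunks an axis-aligned range query touches.

    Assumes the query's start position is uniform over a grid that is
    large relative to the chunk, so dimensions are independent. For a
    query of length q over chunks of length c, the query overlaps
    1 + (q - 1)/c chunks on average over all alignments.
    """
    exp = 1.0
    for c, q in zip(chunk_shape, query_shape):
        exp *= 1.0 + (q - 1.0) / c
    return exp

def best_chunk_shape(chunk_volume, query_shape, step=16):
    """Brute-force search over chunk shapes of a fixed total volume.

    The paper solves this analytically via steepest descent and
    geometric programming; this exhaustive search is just an
    illustrative check of the same objective.
    """
    best = None
    sides = range(step, chunk_volume + 1, step)
    for shape in itertools.product(sides, repeat=len(query_shape)):
        if math.prod(shape) != chunk_volume:
            continue
        e = expected_chunks(shape, query_shape)
        if best is None or e < best[1]:
            best = (shape, e)
    return best

# Example: chunks of 4096 elements, elongated 1000 x 10 queries.
# The optimum elongates the chunks the same way as the queries.
print(best_chunk_shape(4096, (1000, 10)))
```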

  4. Nonhomogeneous fractional Poisson processes

    International Nuclear Information System (INIS)

    Wang Xiaotian; Zhang Shiying; Fan Shen

    2007-01-01

    In this paper, we propose a class of non-Gaussian stationary increment processes, named nonhomogeneous fractional Poisson processes W_H^{(j)}(t), which permit the study of the effects of long-range dependence in a large number of fields including quantum physics and finance. The processes W_H^{(j)}(t) are self-similar in a wide sense, exhibit fatter tails than Gaussian processes, and converge to Gaussian processes in distribution in some cases. In addition, we also show that the intensity function λ(t) strongly influences the existence of the highest finite moment of W_H^{(j)}(t) and the behaviour of its tail probability

  5. INVESTIGATION OF LAUNCHING PROCESS FOR STEEL REINFORCED CONCRETE FRAMEWORK OF LARGE BRIDGES

    Directory of Open Access Journals (Sweden)

    V. A. Grechukhin

    2017-01-01

    Full Text Available Bridges are considered the most complicated, labour-consuming and expensive components in the roadway network of the Republic of Belarus, so their construction and operation are to be carried out at a high technological level. One of the modern industrial methods is cyclic longitudinal launching of large frameworks, which makes it possible to avoid expensive auxiliary facilities and to reduce the construction period. There are several variants of longitudinal launching according to shipping conditions and span length: without a launching girder, with a launching girder, with a top strut-framed beam in the form of a cable-stayed system, and with a strut-framed beam located under the span. With the cyclic longitudinal launching method, the manufacturing process of the span is concentrated on the shore. The main task of the investigations is to select an economical, fast and technologically simple type of cyclic longitudinal launching with minimum resource and labour inputs. Span launching with temporary supports specially constructed within the span has been comparatively analyzed against launching over the permanent supports with the help of a launching girder. Conclusions drawn from calculations of the constructive elements of the span, according to the bearing ability of element sections during launching, during grouting of the reinforced concrete plate, and at the operation stage, have shown that span assembly with temporary supports does not reduce steel consumption in comparison with the variant excluding them. Results of the conducted investigations have been approbated in cooperation with the state enterprise “Belgiprodor” while designing a bridge across the river Sozh.

  6. Importance of regional species pools and functional traits in colonization processes: predicting re-colonization after large-scale destruction of ecosystems

    NARCIS (Netherlands)

    Kirmer, A.; Tischew, S.; Ozinga, W.A.; Lampe, von M.; Baasch, A.; Groenendael, van J.M.

    2008-01-01

    Large-scale destruction of ecosystems caused by surface mining provides an opportunity for the study of colonization processes starting with primary succession. Surprisingly, over several decades and without any restoration measures, most of these sites spontaneously developed into valuable biotope

  7. Cogeneration in large processing power stations; Cogeneracion en grandes centrales de proceso

    Energy Technology Data Exchange (ETDEWEB)

    Munoz, Jose Manuel [Observatorio Ciudadano de la Energia A. C., (Mexico)

    2004-06-15

    This communication discusses cogeneration in large processing power stations, with or without electricity surplus, and the characteristics of combined cycle power plants, with a comparative analysis shown in a graph entitled "Sale price of electricity in combined cycle and cogeneration power plants". Industrial plants, such as refineries, petrochemical plants, breweries, paper mills and cellulose plants, among others, with steam requirements for their processes, have the technical and economic conditions to cogenerate, that is, to produce steam and electricity simultaneously. In fact, many such facilities existing in any country count on cogeneration equipment that allows them to obtain their electricity at a very low cost, taking advantage of the existing steam generators that are in any case indispensable to satisfy their demand. In Mexico, given the existing legal frame, the public service of electricity as well as the oil industry are activities of obligatory character for the State. For these reasons, the subject should be part of the planning agenda of this power sector. The opportunities to which we are referring are valid for small industries, but from the point of view of the national interest they are more important for large facilities, and in that rank the most numerous are indeed in PEMEX, whereas large energy surpluses and capacity would result from cogeneration in refineries and petrochemical facilities, and they would be of high value precisely for the electricity public service, that is, for the Comision Federal de Electricidad (CFE).

  8. Solution processed large area fabrication of Ag patterns as electrodes for flexible heaters, electrochromics and organic solar cells

    DEFF Research Database (Denmark)

    Gupta, Ritu; Walia, Sunil; Hösel, Markus

    2014-01-01

    , the process takes only a few minutes without any expensive instrumentation. The electrodes exhibited excellent adhesion and mechanical properties, important for flexible device application. Using Ag patterned electrodes, heaters operating at low voltages, pixelated electrochromic displays as well as organic...... solar cells have been demonstrated. The method is extendable to produce defect-free patterns over large areas as demonstrated by roll coating....

  9. Large-Scale Production of Nanographite by Tube-Shear Exfoliation in Water.

    Directory of Open Access Journals (Sweden)

    Nicklas Blomquist

    Full Text Available The number of applications based on graphene, few-layer graphene, and nanographite is rapidly increasing. A large-scale process for production of these materials is critically needed to achieve cost-effective commercial products. Here, we present a novel process to mechanically exfoliate industrial quantities of nanographite from graphite in an aqueous environment with low energy consumption and at controlled shear conditions. This process, based on hydrodynamic tube shearing, produced nanometer-thick and micrometer-wide flakes of nanographite with a production rate exceeding 500 g/h at an energy consumption of about 10 Wh/g. In addition, to facilitate large-area coating, we show that the nanographite can be mixed with nanofibrillated cellulose in the process to form highly conductive, robust and environmentally friendly composites. This composite has a sheet resistance below 1.75 Ω/sq and an electrical resistivity of 1.39×10⁻⁴ Ω·m and may find use in several applications, from supercapacitors and batteries to printed electronics and solar cells. A batch of 100 liters was processed in less than 4 hours. The design of the process allows scaling to even larger volumes, and the low energy consumption indicates a low-cost process.
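
    A quick sanity check on the reported figures (my arithmetic, not a number stated by the authors): the quoted specific energy and production rate together imply a modest continuous power draw,

```latex
P = 10\ \mathrm{Wh/g} \times 500\ \mathrm{g/h} = 5000\ \mathrm{W} = 5\ \mathrm{kW},
```

    which is consistent with the claim that the process is low-cost and scalable.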

  10. Microwave Readout Techniques for Very Large Arrays of Nuclear Sensors

    Energy Technology Data Exchange (ETDEWEB)

    Ullom, Joel [Univ. of Colorado, Boulder, CO (United States). Dept. of Physics

    2017-05-17

    During this project, we transformed the use of microwave readout techniques for nuclear sensors from a speculative idea to reality. The core of the project consisted of the development of a set of microwave electronics able to generate and process large numbers of microwave tones. The tones can be used to probe a circuit containing a series of electrical resonances whose frequency locations and widths depend on the state of a network of sensors, with one sensor per resonance. The amplitude and phase of the tones emerging from the circuit are processed by the same electronics and are reduced to the sensor signals after two demodulation steps. This approach allows a large number of sensors to be interrogated using a single pair of coaxial cables. We successfully developed hardware, firmware, and software to complete a scalable implementation of these microwave control electronics and demonstrated their use in two areas. First, we showed that the electronics can be used at room temperature to read out a network of diverse sensor types relevant to safeguards or process monitoring. Second, we showed that the electronics can be used to measure large numbers of ultrasensitive cryogenic sensors such as gamma-ray microcalorimeters. In particular, we demonstrated the undegraded readout of up to 128 channels and established a path to even higher multiplexing factors. These results have transformed the prospects for gamma-ray spectrometers based on cryogenic microcalorimeter arrays by enabling spectrometers whose collecting areas and count rates can be competitive with high purity germanium but with 10x better spectral resolution.
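
    To illustrate the tone-generation-and-demodulation idea at the heart of such readout electronics, here is a toy software model of the first demodulation step (digital down-conversion of one probe tone). All frequencies, rates and names are illustrative assumptions, not the project's actual firmware:

```python
import numpy as np

fs = 1e6                         # sample rate of the digitized comb (Hz)
tones = [50e3, 120e3, 210e3]     # probe-tone frequencies, one per resonance
t = np.arange(100_000) / fs

# Synthesize a "measured" comb: each tone's amplitude is slowly modulated
# by a stand-in sensor signal (low-frequency sinusoids here).
sensors = [0.1 * np.sin(2 * np.pi * f_s * t) for f_s in (37.0, 59.0, 83.0)]
signal = sum((1.0 + s) * np.cos(2 * np.pi * f * t)
             for f, s in zip(tones, sensors))

def demodulate(sig, f_tone, fs, decim=1000):
    """Digital down-conversion of one tone.

    Mix the signal against a complex local oscillator at the tone
    frequency, then low-pass by block-averaging (a crude decimating
    filter). The magnitude tracks the tone's amplitude envelope,
    i.e. the sensor signal riding on that tone.
    """
    lo = np.exp(-2j * np.pi * f_tone * np.arange(len(sig)) / fs)
    baseband = sig * lo
    trimmed = baseband[: len(baseband) // decim * decim]
    return np.abs(trimmed.reshape(-1, decim).mean(axis=1))

recovered = [demodulate(signal, f, fs) for f in tones]
print([r.std() for r in recovered])  # nonzero: sensor modulation recovered
```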

  11. Solid-state supercapacitors with rationally designed heterogeneous electrodes fabricated by large area spray processing for wearable energy storage applications

    Science.gov (United States)

    Huang, Chun; Zhang, Jin; Young, Neil P.; Snaith, Henry J.; Grant, Patrick S.

    2016-01-01

    Supercapacitors are in demand for short-term electrical charge and discharge applications. Unlike conventional supercapacitors, solid-state versions have no liquid electrolyte and do not require robust, rigid packaging for containment. Consequently they can be thinner, lighter and more flexible. However, solid-state supercapacitors suffer from lower power density, and where new materials have been developed to improve performance, there remains a gap between promising laboratory results, which usually require nano-structured materials and fine-scale processing approaches, and current manufacturing technology, which operates at large scale. We demonstrate a new, scalable capability to produce discrete, multi-layered electrodes with a different material and/or morphology in each layer, where each layer plays a different, critical role in enhancing the dynamics of charge/discharge. This layered structure allows efficient utilisation of each material and enables conservative use of hard-to-obtain materials. The layered electrode shows amongst the highest combinations of energy and power densities for solid-state supercapacitors. Our functional design and spray manufacturing approach to heterogeneous electrodes provide a new way forward for improved energy storage devices. PMID:27161379

  12. Solid-state supercapacitors with rationally designed heterogeneous electrodes fabricated by large area spray processing for wearable energy storage applications.

    Science.gov (United States)

    Huang, Chun; Zhang, Jin; Young, Neil P; Snaith, Henry J; Grant, Patrick S

    2016-05-10

    Supercapacitors are in demand for short-term electrical charge and discharge applications. Unlike conventional supercapacitors, solid-state versions have no liquid electrolyte and do not require robust, rigid packaging for containment. Consequently they can be thinner, lighter and more flexible. However, solid-state supercapacitors suffer from lower power density, and where new materials have been developed to improve performance, there remains a gap between promising laboratory results, which usually require nano-structured materials and fine-scale processing approaches, and current manufacturing technology, which operates at large scale. We demonstrate a new, scalable capability to produce discrete, multi-layered electrodes with a different material and/or morphology in each layer, where each layer plays a different, critical role in enhancing the dynamics of charge/discharge. This layered structure allows efficient utilisation of each material and enables conservative use of hard-to-obtain materials. The layered electrode shows amongst the highest combinations of energy and power densities for solid-state supercapacitors. Our functional design and spray manufacturing approach to heterogeneous electrodes provide a new way forward for improved energy storage devices.

  13. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    Science.gov (United States)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that our method can capture major features of large earthquake rupture processes and provide information for more detailed rupture history analysis.
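
    A minimal sketch of the Bayesian sampling idea follows: a toy Metropolis sampler recovering the onset times and amplitudes of two sub-events from a synthetic 1-D signal. The Gaussian-pulse forward model and all names are my illustrative assumptions; real MHS sub-events are Haskell rupture models constrained by seismograms:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 600)        # time axis, seconds

def forward(params):
    """Toy forward model: each sub-event contributes a Gaussian pulse
    to a 1-D source time function."""
    out = np.zeros_like(t)
    for t0, amp in params:
        out += amp * np.exp(-0.5 * ((t - t0) / 2.0) ** 2)
    return out

true = [(15.0, 1.0), (32.0, 0.6)]   # two sub-events: (onset, amplitude)
data = forward(true) + 0.05 * rng.normal(size=t.size)

def log_post(params, sigma=0.05):
    # Gaussian likelihood with flat priors assumed
    resid = data - forward(params)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Metropolis random walk over (t0, amp) of both sub-events
params = [(10.0, 0.5), (40.0, 0.5)]
lp = log_post(params)
samples = []
for _ in range(20000):
    prop = [(t0 + rng.normal(0, 0.5), amp + rng.normal(0, 0.05))
            for t0, amp in params]
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        params, lp = prop, lp_prop
    samples.append(params)

burn = np.array(samples[5000:])     # shape: (n, sub-events, parameters)
print(burn.mean(axis=0))            # should approach (15, 1.0), (32, 0.6)
```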

  14. The evaluation of the introduction of a quality management system. A process-oriented case study in a large rehabilitation hospital

    NARCIS (Netherlands)

    van Harten, Willem H.; Casparie, Ton F.; Fisscher, O.A.M.

    2002-01-01

    Objectives: So far, there is limited proof concerning the effects of the introduction of quality management systems (QMS) at the organisational level. This study concerns the introduction of a QMS in a large rehabilitation hospital. Methods: Using an observational framework, a process-analysis is

  15. Risk Management and Uncertainty in Large Complex Public Projects

    DEFF Research Database (Denmark)

    Neerup Themsen, Tim; Harty, Chris; Tryggestad, Kjell

    Governmental actors worldwide are promoting risk management as a rational approach to manage uncertainty and improve the abilities to deliver large complex projects according to budget, time plans, and pre-set project specifications. But what do we know about the effects of risk management...... on the abilities to meet such objectives? Using Callon’s (1998) twin notions of framing and overflowing we examine the implementation of risk management within the Danish public sector and the effects this generated for the management of two large complex projects. We show how the rational framing of risk...... management have generated unexpected costly outcomes such as: the undermining of the longer-term value and societal relevance of the built asset, the negligence of the wider range of uncertainties emerging during project processes, and constraining forms of knowledge. We also show how expert accountants play

  16. A semantic approach for business process model abstraction

    NARCIS (Netherlands)

    Smirnov, S.; Reijers, H.A.; Weske, M.H.; Mouratidis, H.; Rolland, C.

    2011-01-01

    Models of business processes can easily become large and difficult to understand. Abstraction has proven to be an effective means to present a readable, high-level view of a business process model, by showing aggregated activities and leaving out irrelevant details. Yet, it is an open question how

  17. Really big data: Processing and analysis of large datasets

    Science.gov (United States)

    Modern animal breeding datasets are large and getting larger, due in part to the recent availability of DNA data for many animals. Computational methods for efficiently storing and analyzing those data are under development. The amount of storage space required for such datasets is increasing rapidl...

  18. Large-area homogeneous periodic surface structures generated on the surface of sputtered boron carbide thin films by femtosecond laser processing

    Energy Technology Data Exchange (ETDEWEB)

    Serra, R., E-mail: ricardo.serra@dem.uc.pt [SEG-CEMUC, Mechanical Engineering Department, University of Coimbra, Rua Luís Reis Santos, 3030-788 Coimbra (Portugal); Oliveira, V. [ICEMS-Instituto de Ciência e Engenharia de Materiais e Superfícies, Avenida Rovisco Pais no 1, 1049-001 Lisbon (Portugal); Instituto Superior de Engenharia de Lisboa, Avenida Conselheiro Emídio Navarro no 1, 1959-007 Lisbon (Portugal); Oliveira, J.C. [SEG-CEMUC, Mechanical Engineering Department, University of Coimbra, Rua Luís Reis Santos, 3030-788 Coimbra (Portugal); Kubart, T. [The Ångström Laboratory, Solid State Electronics, P.O. Box 534, SE-751 21 Uppsala (Sweden); Vilar, R. [Instituto Superior de Engenharia de Lisboa, Avenida Conselheiro Emídio Navarro no 1, 1959-007 Lisbon (Portugal); Instituto Superior Técnico, Avenida Rovisco Pais no 1, 1049-001 Lisbon (Portugal); Cavaleiro, A. [SEG-CEMUC, Mechanical Engineering Department, University of Coimbra, Rua Luís Reis Santos, 3030-788 Coimbra (Portugal)

    2015-03-15

    Highlights: • Large-area LIPSS were formed by femtosecond laser processing B-C films surface. • The LIPSS spatial period increases with laser fluence (140–200 nm). • Stress-related sinusoidal-like undulations were formed on the B-C films surface. • The undulations amplitude (down to a few nanometres) increases with laser fluence. • Laser radiation absorption increases with surface roughness. - Abstract: Amorphous and crystalline sputtered boron carbide thin films have a very high hardness even surpassing that of bulk crystalline boron carbide (≈41 GPa). However, magnetron sputtered B-C films have high friction coefficients (C.o.F) which limit their industrial application. Nanopatterning of materials surfaces has been proposed as a solution to decrease the C.o.F. The contact area of the nanopatterned surfaces is decreased due to the nanometre size of the asperities which results in a significant reduction of adhesion and friction. In the present work, the surface of amorphous and polycrystalline B-C thin films deposited by magnetron sputtering was nanopatterned using infrared femtosecond laser radiation. Successive parallel laser tracks 10 μm apart were overlapped in order to obtain a processed area of about 3 mm². Sinusoidal-like undulations with the same spatial period as the laser tracks were formed on the surface of the amorphous boron carbide films after laser processing. The undulations amplitude increases with increasing laser fluence. The formation of undulations with a 10 μm period was also observed on the surface of the crystalline boron carbide film processed with a pulse energy of 72 μJ. The amplitude of the undulations is about 10 times higher than in the amorphous films processed at the same pulse energy due to the higher roughness of the films and consequent increase in laser radiation absorption. LIPSS formation on the surface of the films was achieved for the three B-C films under study. However, LIPSS are formed under

  19. Large-area homogeneous periodic surface structures generated on the surface of sputtered boron carbide thin films by femtosecond laser processing

    International Nuclear Information System (INIS)

    Serra, R.; Oliveira, V.; Oliveira, J.C.; Kubart, T.; Vilar, R.; Cavaleiro, A.

    2015-01-01

    Highlights: • Large-area LIPSS were formed by femtosecond laser processing B-C films surface. • The LIPSS spatial period increases with laser fluence (140–200 nm). • Stress-related sinusoidal-like undulations were formed on the B-C films surface. • The undulations amplitude (down to a few nanometres) increases with laser fluence. • Laser radiation absorption increases with surface roughness. - Abstract: Amorphous and crystalline sputtered boron carbide thin films have a very high hardness even surpassing that of bulk crystalline boron carbide (≈41 GPa). However, magnetron sputtered B-C films have high friction coefficients (C.o.F) which limit their industrial application. Nanopatterning of materials surfaces has been proposed as a solution to decrease the C.o.F. The contact area of the nanopatterned surfaces is decreased due to the nanometre size of the asperities which results in a significant reduction of adhesion and friction. In the present work, the surface of amorphous and polycrystalline B-C thin films deposited by magnetron sputtering was nanopatterned using infrared femtosecond laser radiation. Successive parallel laser tracks 10 μm apart were overlapped in order to obtain a processed area of about 3 mm². Sinusoidal-like undulations with the same spatial period as the laser tracks were formed on the surface of the amorphous boron carbide films after laser processing. The undulations amplitude increases with increasing laser fluence. The formation of undulations with a 10 μm period was also observed on the surface of the crystalline boron carbide film processed with a pulse energy of 72 μJ. The amplitude of the undulations is about 10 times higher than in the amorphous films processed at the same pulse energy due to the higher roughness of the films and consequent increase in laser radiation absorption. LIPSS formation on the surface of the films was achieved for the three B-C films under study. However, LIPSS are formed under different

  20. Thermally Dried Ink-Jet Process for 6,13-Bis(triisopropylsilylethynyl)-Pentacene for High Mobility and High Uniformity on a Large Area Substrate

    Science.gov (United States)

    Ryu, Gi Seong; Lee, Myung Won; Jeong, Seung Hyeon; Song, Chung Kun

    2012-05-01

    In this study we developed a simple ink-jet process for 6,13-bis(triisopropylsilylethynyl)-pentacene (TIPS-pentacene), which is known as a high-mobility soluble organic semiconductor, to achieve relatively high-mobility and high-uniformity performance for large-area applications. We analyzed the behavior of fluorescent particles in droplets and applied the results to determining a method of controlling the behavior of TIPS-pentacene molecules. The grain morphology of TIPS-pentacene varied depending on the temperature applied to the droplets during drying. We were able to obtain large and uniform grains at 46 °C without any “coffee stain”. The process was applied to a large-size organic thin-film transistor (OTFT) backplane for an electrophoretic display panel containing 192×150 pixels on a 6-in.-sized substrate. The average of mobilities of 36 OTFTs, which were taken from different locations of the backplane, was 0.44±0.08 cm²·V⁻¹·s⁻¹, with a small deviation of 20%, over a 6-in.-size area comprising 28,800 OTFTs. This process providing high mobility and high uniformity can be achieved by simply maintaining the whole area of the substrate at a specific temperature (46 °C in this case) during drying of the droplets.

  1. Thermally dried ink-jet process for 6,13-bis(triisopropylsilylethynyl)-pentacene for high mobility and high uniformity on a large area substrate

    Science.gov (United States)

    Ryu, Gi Seong; Lee, Myung Won; Jeong, Seung Hyeon; Song, Chung Kun

    2012-01-01

    In this study we developed a simple ink-jet process for 6,13-bis(triisopropylsilylethynyl)-pentacene (TIPS-pentacene), which is known as a high-mobility soluble organic semiconductor, to achieve relatively high-mobility and high-uniformity performance for large-area applications. We analyzed the behavior of fluorescent particles in droplets and applied the results to determining a method of controlling the behavior of TIPS-pentacene molecules. The grain morphology of TIPS-pentacene varied depending on the temperature applied to the droplets during drying. We were able to obtain large and uniform grains at 46 °C without any "coffee stain". The process was applied to a large-size organic thin-film transistor (OTFT) backplane for an electrophoretic display panel containing 192×150 pixels on a 6-in.-sized substrate. The average of mobilities of 36 OTFTs, which were taken from different locations of the backplane, was 0.44±0.08 cm²·V⁻¹·s⁻¹, with a small deviation of 20%, over a 6-in.-size area comprising 28,800 OTFTs. This process providing high mobility and high uniformity can be achieved by simply maintaining the whole area of the substrate at a specific temperature (46 °C in this case) during drying of the droplets.

  2. Age distribution of human gene families shows significant roles of both large- and small-scale duplications in vertebrate evolution.

    Science.gov (United States)

    Gu, Xun; Wang, Yufeng; Gu, Jianying

    2002-06-01

    The classical (two-round) hypothesis of vertebrate genome duplication proposes two successive whole-genome duplication(s) (polyploidizations) predating the origin of fishes, a view now being seriously challenged. As the debate largely concerns the relative merits of the 'big-bang mode' theory (large-scale duplication) and the 'continuous mode' theory (constant creation by small-scale duplications), we tested whether a significant proportion of paralogous genes in the contemporary human genome was indeed generated in the early stage of vertebrate evolution. After an extensive search of major databases, we dated 1,739 gene duplication events from the phylogenetic analysis of 749 vertebrate gene families. We found a pattern characterized by two waves (I, II) and an ancient component. Wave I represents a recent gene family expansion by tandem or segmental duplications, whereas wave II, a rapid paralogous gene increase in the early stage of vertebrate evolution, supports the idea of genome duplication(s) (the big-bang mode). Further analysis indicated that large- and small-scale gene duplications both make a significant contribution during the early stage of vertebrate evolution to build the current hierarchy of the human proteome.

  3. Combining Vertex-centric Graph Processing with SPARQL for Large-scale RDF Data Analytics

    KAUST Repository

    Abdelaziz, Ibrahim

    2017-06-27

    Modern applications, such as drug repositioning, require sophisticated analytics on RDF graphs that combine structural queries with generic graph computations. Existing systems support either declarative SPARQL queries or generic graph processing, but not both. We bridge the gap by introducing Spartex, a versatile framework for complex RDF analytics. Spartex extends SPARQL to support programs that seamlessly combine generic graph algorithms (e.g., PageRank, shortest paths) with SPARQL queries. Spartex builds on existing vertex-centric graph processing frameworks, such as GraphLab or Pregel. It implements a generic SPARQL operator as a vertex-centric program that interprets SPARQL queries and executes them efficiently using a built-in optimizer. In addition, any graph algorithm implemented in the underlying vertex-centric framework can be executed in Spartex. We present various scenarios where our framework significantly simplifies the implementation of complex RDF data analytics programs. We demonstrate that Spartex scales to datasets with billions of edges, and show that our core SPARQL engine is at least as fast as the state-of-the-art specialized RDF engines. For complex analytical tasks that combine generic graph processing with SPARQL, Spartex is at least an order of magnitude faster than existing alternatives.
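
    For readers unfamiliar with the vertex-centric ("think like a vertex") model that Spartex builds on, here is a minimal Pregel-style PageRank in plain Python. This is a generic illustration of the paradigm, not Spartex's actual API; a real engine runs all vertices in parallel per superstep, which this loop only simulates:

```python
from collections import defaultdict

def pregel_pagerank(edges, num_supersteps=20, damping=0.85):
    """Pregel-style PageRank: in each superstep every vertex scatters
    rank/out_degree to its neighbors, then folds the messages it
    received into a new rank."""
    out = defaultdict(list)
    vertices = set()
    for src, dst in edges:
        out[src].append(dst)
        vertices.update((src, dst))
    n = len(vertices)

    rank = {v: 1.0 / n for v in vertices}
    for _ in range(num_supersteps):
        # "message" phase: scatter current rank along outgoing edges
        inbox = {v: [] for v in vertices}
        for v in vertices:
            for dst in out[v]:
                inbox[dst].append(rank[v] / len(out[v]))
        # "compute" phase: each vertex updates from its incoming messages
        rank = {v: (1 - damping) / n + damping * sum(inbox[v])
                for v in vertices}
    return rank

print(pregel_pagerank([("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]))
```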

  4. A Large-Scale Analysis of Variance in Written Language.

    Science.gov (United States)

    Johns, Brendan T; Jamieson, Randall K

    2018-01-22

    The collection of very large text sources has revolutionized the study of natural language, leading to the development of several models of language learning and distributional semantics that extract sophisticated semantic representations of words based on the statistical redundancies contained within natural language (e.g., Griffiths, Steyvers, & Tenenbaum; Jones & Mewhort; Landauer & Dumais; Mikolov, Sutskever, Chen, Corrado, & Dean). The models treat knowledge as an interaction of processing mechanisms and the structure of language experience. But language experience is often treated agnostically. We report a distributional semantic analysis that shows written language in fiction books varies appreciably between books from different genres, books from the same genre, and even books written by the same author. Given that current theories assume that word knowledge reflects an interaction between processing mechanisms and the language environment, the analysis shows the need for the field to engage in a more deliberate consideration and curation of the corpora used in computational studies of natural language processing. Copyright © 2018 Cognitive Science Society, Inc.

  5. The large deviation principle and steady-state fluctuation theorem for the entropy production rate of a stochastic process in magnetic fields

    International Nuclear Information System (INIS)

    Chen, Yong; Ge, Hao; Xiong, Jie; Xu, Lihu

    2016-01-01

    Fluctuation theorem is one of the major achievements in the field of nonequilibrium statistical mechanics during the past two decades. Owing to technical difficulties, very few results exist on the steady-state fluctuation theorem for the sample entropy production rate, formulated as a large deviation principle, for diffusion processes. Here we give a proof of the steady-state fluctuation theorem for a diffusion process in magnetic fields, with explicit expressions for the free energy function and the rate function. The proof is based on the Karhunen-Loève expansion of a complex-valued Ornstein-Uhlenbeck process.
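
    For context, the symmetry such a theorem establishes takes the standard Gallavotti-Cohen form (quoted here from the general large-deviation literature, not from this paper's specific expressions): if the time-averaged entropy production S_t/t satisfies a large deviation principle with rate function I(s), then

```latex
I(-s) = I(s) + s
\quad\Longleftrightarrow\quad
\lim_{t \to \infty} \frac{1}{t}
\ln \frac{\mathbb{P}\left( S_t/t \approx s \right)}
         {\mathbb{P}\left( S_t/t \approx -s \right)} = s,
```

    equivalently, the free energy function obeys e(λ) = e(1-λ); positive entropy production is exponentially more probable than its negative counterpart.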

  6. ParaText : scalable solutions for processing and searching very large document collections : final LDRD report.

    Energy Technology Data Exchange (ETDEWEB)

    Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-09-01

    This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.

  7. Large transverse momentum processes in a non-scaling parton model

    International Nuclear Information System (INIS)

    Stirling, W.J.

    1977-01-01

    The production of large transverse momentum mesons in hadronic collisions by the quark fusion mechanism is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the large transverse momentum structure function exhibit a simple scale breaking behaviour similar to the behaviour of the Drell-Yan and deep inelastic structure functions of the model. An estimate of corresponding experimental consequences is made and the extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. A simple set of rules is presented for incorporating the logarithmic corrections to scaling into all covariant parton model calculations. (Auth.)
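
    The "simple scale breaking behaviour" of moments referred to here is, in models of this kind, logarithmic in Q²; schematically (a generic form assumed from the deep-inelastic scattering literature, not an equation quoted from the paper):

```latex
M_n(Q^2) \equiv \int_0^1 dx\, x^{\,n-2}\, F(x, Q^2)
\;\propto\; \left[ \ln\!\left( Q^2 / \Lambda^2 \right) \right]^{-d_n},
```

    with exponents d_n that grow with the moment index n, so that higher moments, which probe the large-x region, fall faster as Q² increases.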

  8. Nonhomogeneous fractional Poisson processes

    Energy Technology Data Exchange (ETDEWEB)

    Wang Xiaotian [School of Management, Tianjin University, Tianjin 300072 (China)]. E-mail: swa001@126.com; Zhang Shiying [School of Management, Tianjin University, Tianjin 300072 (China); Fan Shen [Computer and Information School, Zhejiang Wanli University, Ningbo 315100 (China)

    2007-01-15

    In this paper, we propose a class of non-Gaussian stationary increment processes, named nonhomogeneous fractional Poisson processes W_H^{(j)}(t), which permit the study of the effects of long-range dependence in a large number of fields including quantum physics and finance. The processes W_H^{(j)}(t) are self-similar in a wide sense, exhibit fatter tails than Gaussian processes, and converge to Gaussian processes in distribution in some cases. In addition, we also show that the intensity function λ(t) strongly influences the existence of the highest finite moment of W_H^{(j)}(t) and the behaviour of its tail probability

  9. Numerical Prediction of the Influence of Process Parameters on Large Area Diamond Deposition by DC Arcjet with ARC Roots Rotating and Operating at Gas Recycling Mode

    Science.gov (United States)

    Lu, F. X.; Huang, T. B.; Tang, W. Z.; Song, J. H.; Tong, Y. M.

    A computer model has been set up for simulation of the flow and temperature fields, and of the radial distribution of atomic hydrogen and active carbonaceous species over a large area substrate surface, for a new type of dc arc plasma torch with rotating arc roots operating in gas recycling mode. A gas recycling ratio of 90% was assumed. In the numerical calculation of the plasma chemistry, the Thermal-Calc program and a powerful thermodynamic database were employed. Numerical calculations with the computer model were performed using boundary conditions close to the experimental setup for large area diamond film deposition. The results showed that the flow and temperature fields over a substrate surface of Φ60-100 mm were smooth and uniform. Calculations were also made for a plasma of the same geometry but without arc root rotation. It was clearly demonstrated that the design with rotating arc roots is advantageous for high quality uniform deposition of large area diamond films. Theoretical predictions of growth rate and film quality, as well as of their radial uniformity, and the influence of process parameters on large area diamond deposition are discussed in detail, based on the spatial distribution of atomic hydrogen and carbonaceous species in the plasma over the substrate surface obtained from thermodynamic calculations of the plasma chemistry, and are compared with experimental observations.

  10. Analysis of cyclic variations of liquid fuel-air mixing processes in a realistic DISI IC-engine using Large Eddy Simulation

    International Nuclear Information System (INIS)

    Goryntsev, D.; Sadiki, A.; Klein, M.; Janicka, J.

    2010-01-01

    Direct injection spark ignition (DISI) engines have a large potential to reduce emissions and specific fuel consumption. One of the most important problems in the design of DISI engines is the cycle-to-cycle variation of the flow, mixing and combustion processes. The Large Eddy Simulation (LES) based analysis is used to characterize the cycle-to-cycle fluctuations of the flow field as well as the mixture preparation in a realistic four-stroke internal combustion engine with a variable charge motion system. Based on the analysis of cycle-to-cycle velocity fluctuations of the in-cylinder flow, the impact of various fuel spray boundary conditions on injection processes and mixture preparation is pointed out. The joint effect of both cycle-to-cycle velocity fluctuations and variable spray boundary conditions is discussed in terms of the mean and standard deviation of the relative air-fuel ratio, velocity and mass fraction. Finally a qualitative analysis of the intensity of cyclic fluctuations below the spark plug is provided.

  11. A new trapped-ion instability with large frequency and radial wavenumber

    International Nuclear Information System (INIS)

    Tagger, M.

    1979-01-01

    The need for theoretical predictions concerning anomalous transport in large Tokamaks, as well as the recent results of PLT, raise the question of the process responsible for the non-linear saturation of trapped-ion instabilities. This in turn necessitates knowledge of the linear behaviour of these waves at large frequencies and large radial wavenumbers. We study the linear dispersion relation of these modes, in the radially local approximation, but including a term due to a new physical effect combining finite banana-width and bounce resonances. Limiting ourselves presently to the first harmonic expansion of the bounce motion of trapped ions, we show that the effect of finite banana-width on the usual trapped-ion mode is complex and quite different from what is generally expected. In addition we show, analytically and numerically, the appearance of a new branch of this instability. Essentially due to this new effect, it involves large frequencies (ω ≈ ω_b) and is destabilized by large radial wavelengths (k_x Λ ≈ 1, where Λ is the typical banana-width). We discuss the nature of this new mode and its potential relevance to the experiments

  12. PART 2: LARGE PARTICLE MODELLING Simulation of particle filtration processes in deformable media

    Directory of Open Access Journals (Sweden)

    Gernot Boiger

    2008-06-01

    Full Text Available In filtration processes it is necessary to consider both the interaction of the fluid with the solid parts as well as the effect of particles carried in the fluid and accumulated on the solid. While part 1 of this paper deals with the modelling of fluid structure interaction effects, the accumulation of dirt particles will be addressed in this paper. A closer look is taken at the implementation of a spherical, LAGRANGIAN particle model suitable for small and large particles. As dirt accumulates in the fluid stream, it interacts with the surrounding filter fibre structure and over time causes modifications of the filter characteristics. The calculation of particle force interaction effects is necessary for an adequate simulation of this situation. A detailed Discrete Phase Lagrange Model was developed to take into account the two-way coupling of the fluid and accumulated particles. The simulation of large particles and the fluid-structure interaction is realised in a single finite volume flow solver on the basis of the OpenSource software OpenFoam.
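
    A minimal sketch of the Lagrangian particle update such a model performs each time step follows (Stokes drag plus gravity for a small spherical particle; all symbols and values are illustrative, not the paper's solver settings):

```python
import numpy as np

# Illustrative constants for a small spherical particle in a liquid
rho_p = 2500.0      # particle density, kg/m^3
d_p = 50e-6         # particle diameter, m
mu = 1e-3           # fluid dynamic viscosity, Pa.s
g = np.array([0.0, -9.81, 0.0])
m_p = rho_p * np.pi * d_p**3 / 6.0

def step(x, v, u_fluid, dt):
    """Advance one Lagrangian particle by explicit Euler.

    Stokes drag F = 3*pi*mu*d_p*(u_fluid - v) is valid for small
    particle Reynolds numbers; large particles need a Re-dependent
    drag law, and two-way coupling would feed -F back into the
    fluid momentum equation, as the paper's model does.
    """
    f_drag = 3.0 * np.pi * mu * d_p * (u_fluid - v)
    a = f_drag / m_p + g
    return x + dt * v, v + dt * a

# Example: particle released at rest in a uniform horizontal flow
x = np.zeros(3)
v = np.zeros(3)
u = np.array([0.1, 0.0, 0.0])   # fluid velocity, m/s
for _ in range(1000):
    x, v = step(x, v, u, dt=1e-4)
print(x, v)   # particle accelerates toward fluid velocity while settling
```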

  13. Hot seeding using large Y-123 seeds

    International Nuclear Information System (INIS)

    Scruggs, S J; Putman, P T; Zhou, Y X; Fang, H; Salama, K

    2006-01-01

    There are several motivations for increasing the diameter of melt textured single domain discs. The maximum magnetic field produced by a trapped field magnet is proportional to the radius of the sample. Furthermore, the availability of trapped field magnets with large diameter could enable their use in applications that have traditionally been considered to require wound electromagnets, such as beam bending magnets for particle accelerators and electric propulsion. We have investigated the possibility of using large area epitaxial growth instead of the conventional point nucleation growth mechanism. This process involves the use of large Y-123 seeds for the purpose of increasing the maximum achievable Y-123 single domain size. The hot seeding technique using large Y-123 seeds was employed to seed Y-123 samples. Trapped field measurements indicate that single domain samples were indeed grown by this technique. Microstructural evaluation indicates that growth can be characterized by a rapid nucleation followed by the usual peritectic grain growth which occurs when large seeds are used. Critical temperature measurements show that no local T_c suppression occurs in the vicinity of the seed. This work supports the suggestion of using an iterative method for increasing the size of Y-123 single domains that can be grown

  14. Performance evaluation of the DCMD desalination process under bench scale and large scale module operating conditions

    KAUST Repository

    Francis, Lijo; Ghaffour, NorEddine; Alsaadi, Ahmad Salem; Nunes, Suzana Pereira; Amy, Gary L.

    2014-04-01

    The flux performance of different hydrophobic microporous flat sheet commercial membranes made of poly tetrafluoroethylene (PTFE) and poly propylene (PP) was tested for Red Sea water desalination using the direct contact membrane distillation (DCMD) process, under bench scale (high δT) and large scale module (low δT) operating conditions. Membranes were characterized for their surface morphology, water contact angle, thickness, porosity, pore size and pore size distribution. The DCMD process performance was optimized using a locally designed and fabricated module aiming to maximize the flux at different levels of operating parameters, mainly feed water and coolant inlet temperatures at different temperature differences across the membrane (δT). Water vapor flux of 88.8 kg/m2h was obtained using a PTFE membrane at high δT (60°C). In addition, the flux performance was compared to the first generation of a new locally synthesized and fabricated membrane made of a different class of polymer under the same conditions. A total salt rejection of 99.99% and boron rejection of 99.41% were achieved under extreme operating conditions. On the other hand, a detailed water characterization revealed that low molecular weight non-ionic molecules (ppb level) were transported with the water vapor molecules through the membrane structure. The membrane which provided the highest flux was then tested under large scale module operating conditions. The average flux of the latter study (low δT) was found to be eight times lower than that of the bench scale (high δT) operating conditions.
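
    The strong dependence of flux on δT reported here follows from the vapor-pressure difference that drives DCMD. The sketch below is a back-of-the-envelope estimate assuming Antoine-equation water vapor pressures and an illustrative membrane mass-transfer coefficient B_m; neither value is taken from the study.

```python
def p_sat_water(T_c):
    """Saturation vapor pressure of water [Pa], Antoine equation.
    Constants are the common 1-100 degC set (mmHg/degC units)."""
    A, B, C = 8.07131, 1730.63, 233.426
    return 10 ** (A - B / (C + T_c)) * 133.322   # mmHg -> Pa

def dcmd_flux(T_feed, T_perm, B_m=3e-7):
    """Idealized DCMD flux [kg/(m2 s)]: J = B_m * (p_sat(T_f) - p_sat(T_p)).
    B_m is an assumed membrane mass-transfer coefficient, not a measured one."""
    return B_m * (p_sat_water(T_feed) - p_sat_water(T_perm))

# Bench scale (large delta-T) vs. module scale (small delta-T), same feed:
print(dcmd_flux(80, 20) * 3600)   # kg/(m2 h), strong driving force
print(dcmd_flux(80, 72) * 3600)   # far lower flux despite the hot feed
```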

  16. The seismic cycles of large Romanian earthquake: The physical foundation, and the next large earthquake in Vrancea

    International Nuclear Information System (INIS)

    Purcaru, G.

    2002-01-01

    The occurrence patterns of large/great earthquakes at subduction zone interfaces and in-slab are complex in their space-time dynamics, and make even long-term forecasts very difficult. For some favourable cases where a predictive (empirical) law was found, successful predictions were possible (e.g. Aleutians, Kuriles, etc.). For the large Romanian events (M > 6.7), occurring in the Vrancea seismic slab below 60 km, Purcaru (1974) first found the law of the occurrence time and magnitude: the law of 'quasicycles' and 'supercycles', for large and largest events (M > 7.25), respectively. The quantitative model of Purcaru with these seismic cycles has three time-bands (periods of large earthquakes) per century, discovered using the earthquake history (1100-1973) (however incomplete) of large Vrancea earthquakes for which M was initially estimated (Purcaru, 1974, 1979). Our long-term prediction model is essentially quasideterministic; it predicts uniquely the time and magnitude, but since it is not strictly deterministic the forecasting is interval-valued. It predicted the next large earthquake for 1980 in the 3rd time-band (1970-1990); the event occurred in 1977 (M7.1, Mw 7.5). The prediction was successful in the long-term sense. We discuss the unpredicted events in 1986 and 1990. Since the laws are phenomenological, we give their physical foundation based on the large scale of the rupture zone (RZ) and the subscale of the rupture process (RP). First results show that: (1) the 1940 event (h=122 km) ruptured the lower part of the oceanic slab entirely along strike and down dip, and similarly for 1977 but its upper part, (2) the RZ of the 1977 and 1990 events overlap and the first asperity of the 1977 event was rebroken in 1990. This shows the size of the events strongly depends on RZ, asperity size/strength and, thus, on the failure stress level (FSL), but not on depth, (3) when the FSL of high strength (HS) larger zones is critical, the largest events (e.g. 1802, 1940) occur, thus explaining the supercycles (the 1940

  17. Jet Substructure as a New Higgs-Search Channel at the Large Hadron Collider

    International Nuclear Information System (INIS)

    Butterworth, Jonathan M.; Davison, Adam R.; Rubin, Mathieu; Salam, Gavin P.

    2008-01-01

    It is widely considered that, for Higgs boson searches at the CERN Large Hadron Collider, WH and ZH production, where the Higgs boson decays to bb, are poor search channels due to large backgrounds. We show that at high transverse momenta, employing state-of-the-art jet reconstruction and decomposition techniques, these processes can be recovered as promising search channels for the standard model Higgs boson around 120 GeV in mass.

  18. Large eddy simulation of the low temperature ignition and combustion processes on spray flame with the linear eddy model

    Science.gov (United States)

    Wei, Haiqiao; Zhao, Wanhui; Zhou, Lei; Chen, Ceyuan; Shu, Gequn

    2018-03-01

    Large eddy simulation coupled with the linear eddy model (LEM) is employed for the simulation of n-heptane spray flames to investigate the low temperature ignition and combustion process in a constant-volume combustion vessel under diesel-engine relevant conditions. Parametric studies are performed to give a comprehensive understanding of the ignition processes. The non-reacting case is first carried out to validate the present model by comparing the predicted results with the experimental data from the Engine Combustion Network (ECN). Good agreement is observed in terms of liquid and vapour penetration length, as well as the mixture fraction distributions at different times and different axial locations. For the reacting cases, a flame index is introduced to distinguish between premixed and non-premixed combustion. A reaction region (RR) parameter is used to investigate the ignition and combustion characteristics, and to distinguish the different combustion stages. Results show that the two-stage combustion process can be identified in spray flames, and different ignition positions in the mixture fraction versus RR space are well described at low and high initial ambient temperatures. At an initial condition of 850 K, the first-stage ignition is initiated in the fuel-lean region, followed by reactions in fuel-rich regions. High-temperature reaction then occurs mainly at locations with mixture concentrations around the stoichiometric mixture fraction. At an initial temperature of 1000 K, by contrast, the first-stage ignition occurs in the fuel-rich region first and then moves towards richer regions. Afterwards, the high-temperature reactions move back to the stoichiometric mixture fraction region. For all of the initial temperatures considered, high-temperature ignition kernels are initiated at regions richer than the stoichiometric mixture fraction. By increasing the initial ambient temperature, the high-temperature ignition kernels move towards richer

  19. Model reduction for the dynamics and control of large structural systems via neural network processing direct numerical optimization

    Science.gov (United States)

    Becus, Georges A.; Chan, Alistair K.

    1993-01-01

    Three neural network processing approaches in a direct numerical optimization model reduction scheme are proposed and investigated. Large structural systems, such as large space structures, offer new challenges to both structural dynamicists and control engineers. One such challenge is that of dimensionality. Indeed these distributed parameter systems can be modeled either by infinite dimensional mathematical models (typically partial differential equations) or by high dimensional discrete models (typically finite element models) often exhibiting thousands of vibrational modes usually closely spaced and with little, if any, damping. Clearly, some form of model reduction is in order, especially for the control engineer who can actively control but a few of the modes using system identification based on a limited number of sensors. Inasmuch as the amount of 'control spillover' (in which the control inputs excite the neglected dynamics) and/or 'observation spillover' (where neglected dynamics affect system identification) is to a large extent determined by the choice of particular reduced model (RM), the way in which this model reduction is carried out is often critical.

  20. Large-scale data analysis of power grid resilience across multiple US service regions

    Science.gov (United States)

    Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert

    2016-05-01

    Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionally large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.
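
    The "top 20% of failures interrupted 84% of services" finding is a Lorenz-curve style concentration statistic. A minimal sketch of how such a figure is computed, using synthetic heavy-tailed failure sizes rather than the utilities' data:

```python
import numpy as np

def impact_share(failure_sizes, top_fraction=0.2):
    """Share of total customer interruptions caused by the largest
    `top_fraction` of failures (a Lorenz-curve style statistic)."""
    sizes = np.sort(np.asarray(failure_sizes))[::-1]     # largest first
    k = max(1, int(len(sizes) * top_fraction))
    return sizes[:k].sum() / sizes.sum()

# Synthetic heavy-tailed failure sizes (Pareto-like); illustration only.
rng = np.random.default_rng(0)
sizes = rng.pareto(1.2, 10_000) + 1
print(f"top 20% of failures -> {impact_share(sizes):.0%} of interruptions")
```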

  1. Identifying temporal bottlenecks for the conservation of large-bodied fishes: Lake Sturgeon (Acipenser fulvescens) show highly restricted movement and habitat use over-winter

    Directory of Open Access Journals (Sweden)

    Donnette Thayer

    2017-04-01

    Full Text Available The relationship between species’ size and home range size has been well studied. In practice, home range may provide a good surrogate of the broad spatial coverage needed for species conservation; however, many species can show restricted movement during critical life stages, such as breeding and over-wintering. This suggests the existence of either a behavioral or habitat mediated ‘temporal bottleneck,’ where restricted or sedentary movement can make populations more susceptible to harm during specific life stages. Here, we study over-winter movement and habitat use of Lake Sturgeon (Acipenser fulvescens), the largest freshwater fish in North America. We monitored over-winter movement of 86 fish using a hydro-acoustic receiver array in the South Saskatchewan River, Canada. Overall, 20 fish remained within our study system throughout the winter. Lake Sturgeon showed strong aggregation and sedentary movement over-winter, demonstrating a temporal bottleneck. Movement was highly restricted during ice-on periods (ranging from 0.9 km/day in November and April to 0.2 km/day in mid-November to mid-March), with Lake Sturgeon seeking deeper, slower pools. We also show that Lake Sturgeon have strong aggregation behavior, with distance to conspecifics decreasing (from 575 to 313 m) in preparation for and during ice-on periods. Although the Lake Sturgeon we studied had access to 1100 kilometers of unfragmented riverine habitat, we show that during the over-winter period Lake Sturgeon utilized a single, deep pool (<0.1% of available habitat). The temporal discrepancy between mobile and sedentary behaviors in Lake Sturgeon suggests adaptive management is needed, with more localized focus during periods of temporal bottlenecks, even for large-bodied species.

  2. Supernova Remnants with Fermi Large Area Telescope

    Directory of Open Access Journals (Sweden)

    Caragiulo M.

    2017-01-01

    Full Text Available The Large Area Telescope (LAT), on-board the Fermi satellite, proved to be, after 8 years of data taking, an excellent instrument to detect and observe Supernova Remnants (SNRs) in a range of energies running from a few hundred MeV up to a few hundred GeV. It provides essential information on physical processes that occur at the source, involving both accelerated leptons and hadrons, in order to understand the mechanisms responsible for primary Cosmic Ray (CR) acceleration. We show the latest results in the observation of Galactic SNRs by Fermi-LAT.

  3. Young Adults with Autism Spectrum Disorder Show Early Atypical Neural Activity during Emotional Face Processing

    Directory of Open Access Journals (Sweden)

    Rachel C. Leung

    2018-02-01

    Full Text Available Social cognition is impaired in autism spectrum disorder (ASD). The ability to perceive and interpret affect is integral to successful social functioning and has an extended developmental course. However, the neural mechanisms underlying emotional face processing in ASD are unclear. Using magnetoencephalography (MEG), the present study explored neural activation during implicit emotional face processing in young adults with and without ASD. Twenty-six young adults with ASD and 26 healthy controls were recruited. Participants indicated the location of a scrambled pattern (target) that was presented alongside a happy or angry face. Emotion-related activation sources for each emotion were estimated using the Empirical Bayes Beamformer (pcorr ≤ 0.001) in Statistical Parametric Mapping 12 (SPM12). Emotional faces elicited elevated fusiform, amygdala and anterior insula activity and reduced anterior cingulate cortex (ACC) activity in adults with ASD relative to controls. Within-group comparisons revealed that angry vs. happy faces elicited distinct neural activity in typically developing adults; there was no such distinction in young adults with ASD. Our data suggest difficulties in affect processing in ASD reflect atypical recruitment of traditional emotional processing areas. These early differences may contribute to difficulties in deriving social reward from faces, ascribing salience to faces, and an immature threat processing system, which collectively could result in deficits in emotional face processing.

  4. Adding large EM stack support

    KAUST Repository

    Holst, Glendon

    2016-12-01

    Serial section electron microscopy (SSEM) image stacks generated using high throughput microscopy techniques are an integral tool for investigating brain connectivity and cell morphology. FIB or 3View scanning electron microscopes easily generate gigabytes of data. In order to produce analyzable 3D dataset from the imaged volumes, efficient and reliable image segmentation is crucial. Classical manual approaches to segmentation are time consuming and labour intensive. Semiautomatic seeded watershed segmentation algorithms, such as those implemented by ilastik image processing software, are a very powerful alternative, substantially speeding up segmentation times. We have used ilastik effectively for small EM stacks – on a laptop, no less; however, ilastik was unable to carve the large EM stacks we needed to segment because its memory requirements grew too large – even for the biggest workstations we had available. For this reason, we refactored the carving module of ilastik to scale it up to large EM stacks on large workstations, and tested its efficiency. We modified the carving module, building on existing blockwise processing functionality to process data in manageable chunks that can fit within RAM (main memory). We review this refactoring work, highlighting the software architecture, design choices, modifications, and issues encountered.

  5. EBSD-based techniques for characterization of microstructural restoration processes during annealing of metals deformed to large plastic strains

    DEFF Research Database (Denmark)

    Godfrey, A.; Mishin, Oleg; Yu, Tianbo

    2012-01-01

    Some methods for quantitative characterization of microstructures deformed to large plastic strains, both before and after annealing, are discussed and illustrated using examples of samples after equal channel angular extrusion and cold-rolling. It is emphasized that the microstructures in such deformed samples exhibit a heterogeneity in the microstructural refinement by high angle boundaries. Based on this, a new parameter describing the fraction of regions containing predominantly low angle boundaries is introduced. This parameter has some advantages over the simpler high angle boundary … on mode of the distribution of dislocation cell sizes is outlined, and it is demonstrated how this parameter can be used to investigate the uniformity, or otherwise, of the restoration processes occurring during annealing of metals deformed to large plastic strains.

  6. Large scale chromatographic separations using continuous displacement chromatography (CDC)

    International Nuclear Information System (INIS)

    Taniguchi, V.T.; Doty, A.W.; Byers, C.H.

    1988-01-01

    A process for large scale chromatographic separations using a continuous chromatography technique is described. The process combines the advantages of large scale batch fixed column displacement chromatography with conventional analytical or elution continuous annular chromatography (CAC) to enable large scale displacement chromatography to be performed on a continuous basis (CDC). Such large scale, continuous displacement chromatography separations have not been reported in the literature. The process is demonstrated with the ion exchange separation of a binary lanthanide (Nd/Pr) mixture. The process is, however, applicable to any displacement chromatography separation that can be performed using conventional batch, fixed column chromatography.

  7. Uncertainty of measurement for large product verification: evaluation of large aero gas turbine engine datums

    International Nuclear Information System (INIS)

    Muelaner, J E; Wang, Z; Keogh, P S; Brownell, J; Fisher, D

    2016-01-01

    Understanding the uncertainty of dimensional measurements for large products such as aircraft, spacecraft and wind turbines is fundamental to improving efficiency in these products. Much work has been done to ascertain the uncertainty associated with the main types of instruments used, based on laser tracking and photogrammetry, and the propagation of this uncertainty through networked measurements. Unfortunately this is not sufficient to understand the combined uncertainty of industrial measurements, which include secondary tooling and datum structures used to locate the coordinate frame. This paper presents for the first time a complete evaluation of the uncertainty of large scale industrial measurement processes. Generic analysis and design rules are proven through uncertainty evaluation and optimization for the measurement of a large aero gas turbine engine. This shows how the instrument uncertainty can be considered to be negligible. Before optimization the dominant source of uncertainty was the tooling design, after optimization the dominant source was thermal expansion of the engine; meaning that no further improvement can be made without measurement in a temperature controlled environment. These results will have a significant impact on the ability of aircraft and wind turbines to improve efficiency and therefore reduce carbon emissions, as well as the improved reliability of these products. (paper)

  8. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    ]; Peach et al., 1998; DeSante et al., 2001 are generally co-ordinated by ringing centres such as those that make up the membership of EURING. In some countries volunteer census work (often called Breeding Bird Surveys) is undertaken by the same organizations, while in others different bodies may co-ordinate this aspect of the work. This session was concerned with the analysis of such extensive data sets and the approaches that are being developed to address the key theoretical and applied issues outlined above. The papers reflect the development of more spatially explicit approaches to analyses of data gathered at large spatial scales. They show that while the statistical tools that have been developed in recent years can be used to derive useful biological conclusions from such data, there is additional need for further developments. Future work should also consider how best to implement such analytical developments within future study designs. In his plenary paper Andy Royle (Royle, 2004) addresses this theme directly by describing a general framework for modelling spatially replicated abundance data. The approach is based on the idea that a set of spatially referenced local populations constitutes a metapopulation, within which local abundance is determined as a random process. This provides an elegant and general approach in which the metapopulation model as described above is combined with a data-generating model specific to the type of data being analysed to define a simple hierarchical model that can be analysed using conventional methods. It should be noted, however, that further software development will be needed if the approach is to be made readily available to biologists. The approach is well suited to dealing with sparse data and avoids the need for data aggregation prior to analysis. Spatial synchrony has received most attention in studies of species whose populations show cyclic fluctuations, particularly certain game birds and small mammals. However
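
    Royle's framework treats site abundances as latent draws from a metapopulation-level distribution, with observed counts generated conditionally. A minimal sketch of that hierarchical (N-mixture) structure, with simulated data and a crude grid-search fit; site counts, visit numbers, and parameter values are illustrative choices, not values from any ringing scheme:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate Royle-style replicated counts: latent abundance N_i ~ Poisson(lam)
# at R sites, observed counts y_ij ~ Binomial(N_i, p) over T repeat visits.
lam_true, p_true, R, T = 5.0, 0.4, 100, 4
N = rng.poisson(lam_true, R)
y = rng.binomial(N[:, None], p_true, (R, T))

def neg_log_lik(lam, p, y, n_max=60):
    """Marginal likelihood, summing over the latent abundance at each site."""
    n = np.arange(n_max + 1)
    prior = stats.poisson.pmf(n, lam)                              # P(N_i = n)
    site_like = stats.binom.pmf(y[:, :, None], n, p).prod(axis=1)  # (R, n)
    return -np.log(site_like @ prior + 1e-300).sum()

# Crude grid search; real analyses use proper optimizers or MCMC.
grid = [(l, q) for l in np.linspace(2, 9, 15) for q in np.linspace(0.1, 0.9, 17)]
print(min(grid, key=lambda g: neg_log_lik(*g, y)))   # close to (5.0, 0.4)
```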

  9. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
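
    A generic mixed subgrid-scale model of the kind described, combining a dissipative Smagorinsky eddy-viscosity part with a nondissipative nonlinear part built from the velocity gradient, can be sketched pointwise as below. The constants and the gradient-type form of the nonlinear term are illustrative placeholders, not the calibrated formulation of the model proposed in the abstract.

```python
import numpy as np

def mixed_sgs_stress(G, delta, c_s=0.17, c_nl=0.08):
    """Subgrid stress from the resolved velocity-gradient tensor G = du_i/dx_j.

    tau = -2 * nu_e * S + c_nl * delta**2 * dev(G @ G.T):
    a dissipative Smagorinsky eddy-viscosity part plus a nondissipative
    nonlinear (gradient-type) part intended to capture transport.
    """
    S = 0.5 * (G + G.T)                                       # strain rate
    nu_e = (c_s * delta) ** 2 * np.sqrt(2.0 * np.sum(S * S))  # eddy viscosity
    nonlin = c_nl * delta ** 2 * (G @ G.T)
    nonlin -= np.eye(3) * np.trace(nonlin) / 3.0              # deviatoric part
    return -2.0 * nu_e * S + nonlin

# Example: plane shear du/dy = 1 s^-1 on a grid of spacing delta = 0.01 m.
G = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
print(mixed_sgs_stress(G, delta=0.01))
```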

  10. Working memory processes show different degrees of lateralization : Evidence from event-related potentials

    NARCIS (Netherlands)

    Talsma, D; Wijers, A.A.; Klaver, P; Mulder, G.

    This study aimed to identify different processes in working memory, using event-related potentials (ERPs) and response times. Abstract polygons were presented for memorization and subsequent recall in a delayed matching-to-sample paradigm. Two polygons were presented bilaterally for memorization and

  11. Statistical measurement of power spectrum density of large aperture optical component

    International Nuclear Information System (INIS)

    Xu Jiancheng; Xu Qiao; Chai Liqun

    2010-01-01

    According to the requirements of ICF, a method based on statistical theory has been proposed to measure the power spectrum density (PSD) of large aperture optical components. The method breaks the large-aperture wavefront into small regions, and obtains the PSD of the large-aperture wavefront by weighted averaging of the PSDs of the regions, where the weight factor is each region's area. Simulation and experiment demonstrate the effectiveness of the proposed method. They also show that the PSDs of the large-aperture wavefront obtained by the statistical method and by the sub-aperture stitching method agree well when the number of small regions is no less than 8 x 8. The statistical method is not sensitive to translation-stage errors or environmental instabilities, so it is appropriate for PSD measurement during the process of optical fabrication. (authors)
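
    The region-splitting idea can be sketched directly: tile the wavefront map, estimate a periodogram per tile, and average with area weights (for uniform tiles this reduces to a plain mean). Tile size, sampling, and the synthetic wavefront below are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def regional_psd(wavefront, tile=64, dx=1.0):
    """Area-weighted average of per-region 2D periodograms of a large
    wavefront map (uniform square tiles, so the weights reduce to a mean)."""
    ny, nx = wavefront.shape
    psds, weights = [], []
    for i in range(0, ny - tile + 1, tile):
        for j in range(0, nx - tile + 1, tile):
            w = wavefront[i:i + tile, j:j + tile]
            w = w - w.mean()                        # remove piston per region
            F = np.fft.fft2(w) * dx * dx
            psds.append(np.abs(F) ** 2 / (tile * dx) ** 2)
            weights.append(tile * tile)             # region area as weight
    return np.average(psds, axis=0, weights=weights)

# Synthetic 512 x 512 "wavefront": mid-frequency ripple plus noise.
rng = np.random.default_rng(2)
y, x = np.mgrid[0:512, 0:512]
wf = 1e-3 * np.sin(2 * np.pi * x / 16) + 1e-4 * rng.standard_normal((512, 512))
print(regional_psd(wf).shape)        # averaged 2D periodogram, (64, 64)
```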

  12. State of the Art in Large-Scale Soil Moisture Monitoring

    Science.gov (United States)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.

    2013-01-01

    Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  13. On-demand Overlay Networks for Large Scientific Data Transfers

    Energy Technology Data Exchange (ETDEWEB)

    Ramakrishnan, Lavanya [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Guok, Chin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jackson, Keith [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kissel, Ezra [Univ. of Delaware, Newark, DE (United States); Swany, D. Martin [Univ. of Delaware, Newark, DE (United States); Agarwal, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2009-10-12

    Large scale scientific data transfers are central to scientific processes. Data from large experimental facilities have to be moved to local institutions for analysis, or data often needs to be moved between local clusters and large supercomputing centers. In this paper, we propose and evaluate a network overlay architecture to enable high-throughput, on-demand, coordinated data transfers over wide-area networks. Our work leverages Phoebus and the On-demand Secure Circuits and Advance Reservation System (OSCARS) to provide high performance wide-area network connections. OSCARS enables dynamic provisioning of network paths with guaranteed bandwidth and Phoebus enables the coordination and effective utilization of the OSCARS network paths. Our evaluation shows that this approach leads to improved end-to-end data transfer throughput with minimal overheads. The achieved throughput using our overlay was limited only by the ability of the end hosts to sink the data.

  14. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    Science.gov (United States)

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research: GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  15. Are reading and face processing related?

    DEFF Research Database (Denmark)

    Starrfelt, Randi; Klargaard, Solja K.; Petersen, Anders

    Traditionally, perceptual processing of faces and words is considered highly specialized, strongly lateralized, and largely independent. This has, however, recently been challenged by studies showing that learning to read may affect the perceptual and neural processes involved in face recognition …, reflected in better overall accuracy, a lower perceptual threshold, and higher processing speed for words compared to letters. In sum, we find no evidence that reading skills are abnormal in developmental prosopagnosia, a finding that may challenge the recently proposed hypothesis that reading development

  16. Precision Optical Coatings for Large Space Telescope Mirrors

    Science.gov (United States)

    Sheikh, David

    This proposal, “Precision Optical Coatings for Large Space Telescope Mirrors”, addresses the need to develop and advance the state-of-the-art in optical coating technology. NASA is considering large monolithic mirrors 1 to 8 meters in diameter for future telescopes such as HabEx and LUVOIR. Improved large-area coating processes are needed to meet the future requirements of large astronomical mirrors. In this project, we will demonstrate a broadband reflective coating process for achieving high reflectivity from 90 nm to 2500 nm over a 2.3-meter diameter coating area. The coating process is scalable to larger mirrors, 6+ meters in diameter. We will use a battery-driven coating process to make an aluminum reflector, and a motion-controlled coating technology for depositing protective layers. We will advance the state-of-the-art for coating technology and manufacturing infrastructure to meet the reflectance and wavefront requirements of both HabEx and LUVOIR. Specifically, we will combine the broadband reflective coating designs and processes developed at GSFC and JPL with large area manufacturing technologies developed at ZeCoat Corporation. Our primary objectives are to: (1) demonstrate an aluminum coating process to create uniform coatings over large areas with near-theoretical aluminum reflectance; (2) demonstrate a motion-controlled coating process to apply very precise 2-nm to 5-nm thick protective/interference layers to large areas; and (3) demonstrate a broadband coating system (90 nm to 2500 nm) over a 2.3-meter coating area and test it against the current coating specifications for LUVOIR/HabEx. We will perform simulated space-environment testing, and we expect to advance the TRL from 3 to >5 in 3 years.

  17. Dutch-Cantonese Bilinguals Show Segmental Processing during Sinitic Language Production

    Directory of Open Access Journals (Sweden)

    Kalinka Timmer

    2017-07-01

    Full Text Available This study addressed the debate on the primacy of the syllable vs. the segment (i.e., phoneme) as a functional unit of phonological encoding in syllabic languages by investigating both behavioral and neural responses of Dutch-Cantonese (DC) bilinguals in a color-object picture naming task. Specifically, we investigated whether DC bilinguals exhibit the phonemic processing strategy, evident in monolingual Dutch speakers, during planning of their Cantonese speech production. Participants named the color of colored line-drawings in Cantonese faster when color and object matched in the first segment than when they were mismatched (e.g., 藍駱駝, /laam4/ /lok3to4/, “blue camel”; 紅駱駝, /hung4/ /lok3to4/, “red camel”). This is in contrast to previous studies in Sinitic languages that did not reveal such phoneme-only facilitation. Phonemic overlap also modulated the event-related potentials (ERPs) in the 125–175, 200–300, and 300–400 ms time windows, suggesting earlier ERP modulations than in previous studies with monolingual Sinitic speakers or unbalanced Sinitic-Germanic bilinguals. Conjointly, our results suggest that, while the syllable may be considered the primary unit of phonological encoding in Sinitic languages, the phoneme can serve as the primary unit of phonological encoding, both behaviorally and neurally, for DC bilinguals. The presence/absence of a segment onset effect in Sinitic languages may be related to the proficiency in the Germanic language of bilinguals.

  18. Challenges and opportunities : One stop processing of automatic large-scale base map production using airborne lidar data within gis environment case study: Makassar City, Indonesia

    NARCIS (Netherlands)

    Widyaningrum, E.; Gorte, B.G.H.

    2017-01-01

    LiDAR data acquisition is recognized as one of the fastest solutions for providing base data for large-scale topographical base maps worldwide. Automatic LiDAR processing is believed to be one possible scheme for accelerating large-scale topographic base map provision by the Geospatial Information

  19. Initial crystallization and growth in melt processing of large-domain YBa2Cu3Ox for magnetic levitation

    International Nuclear Information System (INIS)

    Shi, D.

    1994-10-01

    Crystallization temperature in YBa2Cu3Ox (123) during the peritectic reaction has been studied by differential thermal analysis (DTA) and optical microscopy. It has been found that YBa2Cu3Ox experiences partial melting near 1010 °C during heating, while crystallization takes place at a much lower temperature range upon cooling, indicating a delayed nucleation process. A series of experiments has been conducted to search for the initial crystallization temperature in the Y2BaCuOx + liquid phase field. The authors have found that the slow-cool period (1 °C/h) for 123 grain texturing can start at as low as 960 °C. This novel processing has resulted in high-quality, large-domain, strongly pinned 123 magnetic levitators.

  20. In-Storage Embedded Accelerator for Sparse Pattern Processing

    OpenAIRE

    Jun, Sang-Woo; Nguyen, Huy T.; Gadepally, Vijay N.; Arvind

    2016-01-01

    We present a novel architecture for sparse pattern processing, using flash storage with embedded accelerators. Sparse pattern processing on large data sets is the essence of applications such as document search, natural language processing, bioinformatics, subgraph matching, machine learning, and graph processing. One slice of our prototype accelerator is capable of handling up to 1TB of data, and experiments show that it can outperform C/C++ software solutions on a 16-core system at a fracti...
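
    Sparse pattern processing of the document-search kind reduces to sparse dot products against an inverted index, which is the core operation the accelerator slices evaluate near storage. A software sketch of that operation (the documents, features, and weights below are made up for illustration):

```python
from collections import defaultdict

def build_index(documents):
    """Inverted index: feature -> [(doc_id, weight)], the natural layout
    for sparse pattern processing in document search."""
    index = defaultdict(list)
    for doc_id, features in documents.items():
        for feat, w in features.items():
            index[feat].append((doc_id, w))
    return index

def score(query, index):
    """Sparse dot product of the query against every indexed document,
    touching only the features the query actually contains."""
    scores = defaultdict(float)
    for feat, qw in query.items():
        for doc_id, w in index.get(feat, ()):
            scores[doc_id] += qw * w
    return sorted(scores.items(), key=lambda kv: -kv[1])

docs = {"d1": {"gene": 2.0, "graph": 1.0}, "d2": {"graph": 3.0, "flash": 1.0}}
print(score({"graph": 1.0, "flash": 0.5}, build_index(docs)))
# [('d2', 3.5), ('d1', 1.0)]
```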

  1. On the universal character of the large scale structure of the universe

    International Nuclear Information System (INIS)

    Demianski, M.; International Center for Relativistic Astrophysics; Rome Univ.; Doroshkevich, A.G.

    1991-01-01

    We review different theories of formation of the large scale structure of the Universe. Special emphasis is put on the theory of inertial instability. We show that for a large class of initial spectra the resulting two point correlation functions are similar. We discuss also the adhesion theory which uses the Burgers equation, Navier-Stokes equation or coagulation process. We review the Zeldovich theory of gravitational instability and discuss the internal structure of pancakes. Finally we discuss the role of the velocity potential in determining the global characteristics of large scale structures (distribution of caustics, scale of voids, etc.). In the last chapter we list the main unsolved problems and main successes of the theory of formation of large scale structure. (orig.)

  2. Co-digestion of sewage sludge from external small WWTP's in a large plant

    Science.gov (United States)

    Miodoński, Stanisław

    2017-11-01

    Improving the energy efficiency of WWTPs (Waste Water Treatment Plants) is a crucial concern of modern wastewater treatment technology. Optimization of the treatment process is important, but the main goal cannot be achieved without increasing the production of renewable energy from sewage sludge in the anaerobic digestion process, the most frequently used sludge stabilization method at large WWTPs. Anaerobic digestion reactors are usually designed with a reserve, and most of them are oversized; in many cases that reserve is unused. On the other hand, smaller WWTPs have problems with sewage sludge management due to the lack of adequately developed infrastructure for sludge stabilization. This paper presents an analysis of using the technological reserve of anaerobic digestion reactors at a large WWTP (1 million P.E.) for the stabilization of sludge collected from smaller WWTPs in a co-digestion process. Over 30 small WWTPs from the same region as the large WWTP were considered in this study. The analysis also included an evaluation of potential sludge disintegration pre-treatment for improving co-digestion efficiency.

  3. Optimized method for manufacturing large aspheric surfaces

    Science.gov (United States)

    Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui

    2007-12-01

    Aspheric optics are being used more and more widely in modern optical systems, due to their ability to correct aberrations, enhance image quality, enlarge the field of view and extend the range of effect, while reducing the weight and volume of the system. With the development of optical technology, there is an increasingly pressing requirement for large-aperture, high-precision aspheric surfaces. The original computer controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency, a problem that has received considerable attention from researchers. Addressing the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward. Subsurface damage (SSD), full-aperture errors and the full band of frequency errors are all controlled by this method. Smaller SSD depth can be obtained by using low-hardness tools and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be controlled by using smaller tools and an amended model of the material removal function. For control of the full band of frequency errors, low-frequency errors can be corrected with the optimized material removal function, while medium-high frequency errors are treated using a uniform-removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror can reach rms 0.055 waves (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can guide large aspheric surface manufacturing effectively.
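
    CCOS-style correction is, at its core, a convolution problem: predicted removal is the tool influence function convolved with the dwell-time map, per Preston's law dz/dt = k·p·v. A forward-model sketch with an assumed Gaussian influence function and illustrative constants, not the optimized removal function of this paper:

```python
import numpy as np
from scipy.signal import fftconvolve

def predicted_removal(dwell, tif):
    """CCOS forward model: removal map = dwell-time map [s] convolved with
    the tool influence function [m/s] (Preston's law folded into tif)."""
    return fftconvolve(dwell, tif, mode="same")

n = 128
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
tif = 1e-9 * np.exp(-(xx ** 2 + yy ** 2) / (2 * 4.0 ** 2))  # Gaussian footprint
dwell = np.zeros((n, n))
dwell[40:90, 40:90] = 2.0        # dwell longer where more removal is wanted
removal = predicted_removal(dwell, tif)
print(removal.max())             # peak removal depth [m] in the long-dwell zone
```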

  4. Large scale expression changes of genes related to neuronal signaling and developmental processes found in lateral septum of postpartum outbred mice.

    Directory of Open Access Journals (Sweden)

    Brian E Eisinger

    Full Text Available Coordinated gene expression changes across the CNS are required to produce the mammalian maternal phenotype. Lateral septum (LS) is a brain region critically involved with aspects of maternal care, and we recently examined gene expression of whole septum (LS and medial septum) in selectively bred maternal mice. Here, we expand on the prior study by 1) conducting microarray analysis solely on LS in virgin and postpartum mice, 2) using outbred mice, and 3) evaluating the role of sensory input on gene expression changes. Large scale changes in genes related to neuronal signaling were identified, including four GABAA receptor subunits. Subunits α4 and δ were downregulated in maternal LS, likely reflecting a reduction in the extrasynaptic, neurosteroid-sensitive α4/δ containing receptor subtype. Conversely, subunits ε and θ were increased in maternal LS. Fifteen K+ channel related genes showed altered expression, as did dopamine receptors Drd1a and Drd2 (both downregulated), hypocretin receptor 1 (Hcrtr1), kappa opioid receptor 1 (Oprk1), and transient receptor potential channel 4 (Trpc4). Expression of a large number of genes linked to developmental processes or cell differentiation was also altered in postpartum LS, including chemokine (C-X-C motif) ligand 12 (Cxcl12), fatty acid binding protein 7 (Fabp7), plasma membrane proteolipid (Pllp), and suppressor of cytokine signaling 2 (Socs2). Additional genes that are linked to anxiety, such as glutathione reductase (Gsr), exhibited altered expression. Pathway analysis also identified changes in genes related to cyclic nucleotide metabolism, chromatin structure, and the Ras gene family. The sensory presence of pups was found to contribute to the altered expression of a subset of genes across all categories. This study suggests that both large changes in neuronal signaling and the possible terminal differentiation of neuronal and/or glial cells play important roles in producing the maternal state.

  5. Mining by-products show potential

    International Nuclear Information System (INIS)

    Douglas, Grant

    2013-01-01

    Full text: Many mining and industrial processes generate wastewater that contains a variety of contaminants, such as metals and metalloids. These must be removed to ensure the wastewater is suitable for reuse or safe discharge to the environment. However, mining wastewater treatment processes have traditionally been difficult due to the large range of different contaminants present, requiring a number of complex steps. In current processes, the mining industry generally adds lime to the wastewater to purify it. While often effective, the key issue with this method has been the volume of sludge that forms and the subsequent problems with dealing with this sludge - either to extract the contained water, which often requires additional treatment, or to find enough space for long-term disposal. This complex practice could end soon thanks to a new treatment solution that utilises hydrotalcites. Developed by CSIRO, the treatment overcomes the complexities of lime-based methods and offers a simpler and more water-smart process. The CSIRO team found that hydrotalcites, which are layered minerals consisting of aluminium- and magnesium-rich layers, can simultaneously remove a variety of contaminants in wastewater in a single step. Scientists noticed that hydrotalcites begin to form when aluminium and magnesium are present at an ideal ratio under the conditions arising during neutralisation of acidic waters. As hydrotalcites form, the contaminants become trapped and are easily removed from the wastewater as a solid. Mining wastewater often contains substantial magnesium and aluminium concentrations. This means that we can create hydrotalcites utilising common contaminants that are already present in the wastewater, by simply adjusting their concentrations and adding alkaline compounds to rapidly increase pH. Initial applications have focussed on treating wastewater generated from the mining and extraction of uranium. A range of contaminants including uranium, rare earth elements

  6. Large-scale genomic analysis shows association between homoplastic genetic variation in Mycobacterium tuberculosis genes and meningeal or pulmonary tuberculosis.

    NARCIS (Netherlands)

    Ruesen, Carolien; Chaidir, Lidya; van Laarhoven, Arjan; Dian, Sofiati; Ganiem, Ahmad Rizal; Nebenzahl-Guimaraes, Hanna; Huynen, Martijn A; Alisjahbana, Bachti; Dutilh, Bas E; van Crevel, Reinout

    2018-01-01

    Meningitis is the most severe manifestation of tuberculosis. It is largely unknown why some people develop pulmonary TB (PTB) and others TB meningitis (TBM); we examined whether the genetic background of infecting M. tuberculosis strains may be relevant.

  7. Process for producing curved surface of membrane rings for large containers, particulary for prestressed concrete pressure vessels of nuclear reactors

    International Nuclear Information System (INIS)

    Kumpf, H.

    1977-01-01

    Membrane rings for large pressure vessels, particularly for prestressed-concrete pressure vessels, often have curved surfaces. The invention describes a process for producing these on site, which is particularly advantageous as the forming and installation of the vessel component coincide. According to the invention, the originally flat membrane ring is set in a predetermined position, is then pressed in sections by a forming tool (with a preformed support ring as the opposite tool), and shaped. After this, the shaped parts are welded to the ring-shaped wall parts of the large vessel. The manufacture of single and double membrane ring arrangements is described. (HP) [de]

  8. The synthesis of alternatives for the bioconversion of waste-monoethanolamine from large-scale CO{sub 2}-removal processes

    Energy Technology Data Exchange (ETDEWEB)

    Ohtaguchi, Kazuhisa; Yokoyama, Takahisa [Tokyo Inst. of Tech. (Japan). Dept. of Chemical Engineering

    1998-12-31

    Alternatives for the bioconversion of monoethanolamine (MEA), which would appear in large quantities in the industrial effluent of the CO{sub 2}-removal processes of power companies, have been proposed by investigating the ability of some microorganisms to deaminate MEA. An evaluation of the biotechnology, which includes the production from MEA of acetic acid and acetaldehyde with Escherichia coli, and of formic and acetic acids with Clostridium formicoaceticum, confirms and extends our earlier remarks on the availability of ecotechnology for solving the above problem. (Author)

  9. Nano/biosensors based on large-area graphene

    Science.gov (United States)

    Ducos, Pedro Jose

    Two-dimensional materials have properties that make them ideal for applications in chemical and biomolecular sensing. Their high surface/volume ratio implies that all atoms are exposed to the environment, in contrast to three-dimensional materials, in which most atoms are shielded from interactions inside the bulk. Graphene additionally has an extremely high carrier mobility, even at ambient temperature and pressure, which makes it ideal as a transduction device. The work presented in this thesis describes large-scale fabrication of Graphene Field Effect Transistors (GFETs), their physical and chemical characterization, and their application as biomolecular sensors. Initially, work was focused on developing an easily scalable fabrication process. A large-area graphene growth, transfer and photolithography process was developed that allowed scaling production from a few devices per transfer on a chip to over a thousand devices per transfer on a full fabrication wafer. Two approaches to biomolecule sensing were then investigated: through nanoparticles and through chemical linkers. Gold and platinum nanoparticles were used as intermediary agents to immobilize a biomolecule. First, gold nanoparticles were monodispersed and functionalized with thiolated probe DNA to yield DNA biosensors with a detection limit of 1 nM and high specificity against noncomplementary DNA. Second, devices were modified with platinum nanoparticles and functionalized with thiolated genetically engineered scFv HER3 antibodies to realize a HER3 biosensor. Sensors retain the high affinity of the scFv fragment and show a detection limit of 300 pM. We then show covalent and non-covalent chemical linkers between graphene and antibodies. The chemical linker 1-pyrenebutanoic acid succinimidyl ester (pyrene) stacks on graphene via van der Waals interactions, a completely non-covalent attachment. The linker 4-Azide-2,3,5,6-tetrafluorobenzoic acid, succinimidyl ester (azide

  10. Extreme value statistics and thermodynamics of earthquakes. Large earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Lavenda, B. [Camerino Univ., Camerino, MC (Italy); Cipollone, E. [ENEA, Centro Ricerche Casaccia, S. Maria di Galeria, RM (Italy). National Centre for Research on Thermodynamics

    2000-06-01

    A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Frechet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into non-scaling exponential distributions, so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Frechet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same catalogue of Chinese earthquakes. An analogy is drawn between large earthquakes and high energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Frechet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.
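
    The Pareto-to-Frechet link used above can be checked numerically: maxima of n Pareto(α) samples, rescaled by n^(1/α), converge to the Frechet law P(M ≤ x) = exp(−x^(−α)). A simulation sketch, where α, n, and the sample counts are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, n, trials = 1.5, 500, 10_000

# Classical Pareto(alpha) samples with x_m = 1 (numpy's pareto() is the
# Lomax form, hence the +1). Rescaled maxima of n samples should follow
# the Frechet law P(M <= x) = exp(-x**(-alpha)).
samples = rng.pareto(alpha, (trials, n)) + 1
maxima = samples.max(axis=1) / n ** (1 / alpha)

x = 2.0
print("empirical:", (maxima <= x).mean())      # fraction of maxima <= x
print("Frechet  :", np.exp(-x ** (-alpha)))    # ~ 0.702 for alpha = 1.5
```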

  11. A Study on the regulation improvement through the analysis of domestic and international categorization and licensing process for large particle accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Gwon, Da-Yeong; Jeon, Yeo-Ryeong; Kim, Yong-Min [Catholic University of Daegu, Gyeongsan (Korea, Republic of); Jung, Nam-Suk; Lee, Hee-Seock [POSTECH, Pohang (Korea, Republic of)

    2016-10-15

    Many foreign countries use separate criteria and regulation procedures according to the categorization of accelerators. In Korea, nuclear and radiation related facilities are divided into 4 groups: 1) Nuclear reactor and related facilities, 2) Nuclear fuel cycle and nuclear material facilities, 3) Disposal and transport, 4) Radioisotope and radiation generating devices related facilities. All accelerator facilities are categorized as group 4 regardless of their size and type. For facilities that belong to groups 1 and 2, a Radiation Environmental Impact Assessment Report (REIR) and a Preliminary Decommissioning Plan Report (PDPR) should be submitted at the construction licensing stage, but there are no rules about the above documents for large particle accelerator facilities. For facilities that belong to group 4 (RI and RG), only two documents, the Radiation Safety Report (RSR) and the Safety Control Regulation (SCR), are submitted at the licensing stage. Because there are no detailed guidelines according to facility type, the properties of each facility are not considered in the preparation and licensing process. If we set up the categorization of accelerator facilities, we can expect the effective and safe construction and operation of large accelerator facilities in the licensing and operation process. Similarly to other countries' criteria, 50 MeV of particle energy could be used as the energy band of large particle accelerators. According to categorization, it is necessary to adopt graded licensing stages and separate safety documents. In the case of large particle accelerators, it is appropriate to divide the licensing stages into construction and operation. We currently submit a PDPR in the case of reactor and related facilities, nuclear fuel cycle, and nuclear material facilities. Depending on the energy of particle accelerators, it is necessary to prepare for decontamination and decommissioning to decrease the current and future burden from radioactive waste. From the arrangement of separated guidelines on

  13. Assessment of present and future large-scale semiconductor detector systems

    International Nuclear Information System (INIS)

    Spieler, H.G.; Haller, E.E.

    1984-11-01

    The performance of large-scale semiconductor detector systems is assessed with respect to their theoretical potential and to the practical limitations imposed by processing techniques, readout electronics, and radiation damage. In addition to devices which detect reaction products directly, the analysis includes photodetectors for scintillator arrays. Beyond present technology, we also examine currently evolving structures and techniques which show potential for producing practical devices in the foreseeable future.

  14. A Hybrid Approach to Processing Big Data Graphs on Memory-Restricted Systems

    KAUST Repository

    Harshvardhan,

    2015-05-01

    With the advent of big data, processing large graphs quickly has become increasingly important. Most existing approaches either utilize in-memory processing techniques that can only process graphs that fit completely in RAM, or disk-based techniques that sacrifice performance. In this work, we propose a novel RAM-Disk hybrid approach to graph processing that can scale well from a single shared-memory node to large distributed-memory systems. It works by partitioning the graph into subgraphs that fit in RAM and uses a paging-like technique to load subgraphs. We show that without modifying the algorithms, this approach can scale from small memory-constrained systems (such as tablets) to large-scale distributed machines with 16,000+ cores.
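
    The core idea of the abstract, paging subgraph partitions in and out of RAM, can be sketched in a few lines. The following Python fragment is only an illustration of that pattern, not the authors' implementation; the partition files, the owner map, and the BFS workload are all assumed for the example.

    import os
    import pickle

    def page_in(part_dir, pid):
        """Load one subgraph partition (an adjacency dict) from disk."""
        with open(os.path.join(part_dir, "part%d.pkl" % pid), "rb") as f:
            return pickle.load(f)

    def bfs_out_of_core(part_dir, owner, source):
        """BFS where only one subgraph partition is resident in RAM at a time.

        owner maps each vertex to the id of the partition holding its edges;
        the map itself is assumed small enough to stay in memory.
        """
        dist = {source: 0}
        frontier = {source}
        while frontier:
            next_frontier = set()
            by_part = {}
            for v in frontier:                      # group frontier by partition
                by_part.setdefault(owner[v], set()).add(v)
            for pid, verts in by_part.items():
                adj = page_in(part_dir, pid)        # page one subgraph into RAM
                for v in verts:
                    for w in adj.get(v, ()):
                        if w not in dist:
                            dist[w] = dist[v] + 1
                            next_frontier.add(w)
            frontier = next_frontier
        return dist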

  15. Large deviations and portfolio optimization

    Science.gov (United States)

    Sornette, Didier

    Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends with a general functional integral formulation. A major point is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return, the role of large deviations in multiplicative processes, and the different optimal strategies for investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
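
    The gap between average and typical return in a multiplicative process, central to the abstract's argument, is easy to reproduce numerically. A minimal sketch with invented return parameters (not taken from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 250, 100_000                      # horizon in days, number of paths
    r = rng.normal(0.0005, 0.02, (N, T))     # illustrative daily returns
    wealth = np.prod(1.0 + r, axis=1)        # multiplicative growth per path

    print("average final wealth:", wealth.mean())         # inflated by rare lucky paths
    print("typical (median) wealth:", np.median(wealth))  # what most investors realize
    # The mean exceeds the median because the average is dominated by large
    # deviations, while the typical outcome is governed by E[log(1 + r)].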

  16. Five-Axis Milling of Large Spiral Bevel Gears: Toolpath Definition, Finishing, and Shape Errors

    Directory of Open Access Journals (Sweden)

    Álvaro Álvarez

    2018-05-01

    In this paper, a five-axis machining process is analyzed for large spiral bevel gears, an interesting process for one-of-a-kind manufacturing. The work is focused on manufacturing large spiral bevel gears using universal multitasking machines or five-axis milling centers. Different machining strategies, toolpath patterns, and parameters are tested for both gear roughing and finishing operations. Machining time, tool wear, and gear surface are analyzed in order to determine which are the best strategies and parameters for large-modulus gear manufacturing on universal machines. The case study results are discussed in the last section, showing the capability of a universal five-axis milling center for this niche. Special attention was paid to possible damage to the metal surfaces, since gear durability is very sensitive to thermo-mechanical damage, affected layers, and the flank gear surface state.

  17. Solid-state-processing of δ-PVDF

    OpenAIRE

    Martín, Jaime; Zhao, Dong; Lenz, Thomas; Katsouras, Ilias; de Leeuw, Dago M.; Stingelin, Natalie

    2017-01-01

    Poly(vinylidene fluoride) (PVDF) has long been regarded as an ideal piezoelectric plastic because it exhibits a large piezoelectric response and a high thermal stability. However, the realization of piezoelectric PVDF elements has proven to be problematic, amongst other reasons due to the lack of industrially-scalable methods to process PVDF into the appropriate polar crystalline forms. Here, we show that fully piezoelectric PVDF films can be produced via a single-step process that exploits the fac...

  18. Information paths within the new product development process

    DEFF Research Database (Denmark)

    Jespersen, Kristina Risom

    2007-01-01

    collection platform to obtain measurements from within the NPD process. 42 large, international companies participated in the data collecting simulation. Results revealed five different information paths that were not connecting all stages of the NPD process. Moreover, results show that the front......-end is not driving the information acquisition through the stages of the NPD process, and that environmental turbulence disconnects stages from the information paths in the NPD process. This implies that information is at the same time a key to success and a key to entrapment in the NPD process....

  19. Process component inventory in a large commercial reprocessing facility

    International Nuclear Information System (INIS)

    Canty, M.J.; Berliner, A.; Spannagel, G.

    1983-01-01

    Using a computer simulation program, the equilibrium operation of the Pu-extraction and purification processes of a reference commercial reprocessing facility was investigated. Particular attention was given to the long-term net fluctuations of Pu inventories in hard-to-measure components such as the solvent extraction contactors. Comparing the variance of these inventories with the measurement variance for Pu contained in feed, analysis, and buffer tanks, it was concluded that direct or indirect periodic estimation of contactor inventories would not contribute significantly to improving the quality of closed material balances over the process MBA.

  20. Measurement of the high-field Q-drop in a high-purity large-grain niobium cavity for different oxidation processes

    Energy Technology Data Exchange (ETDEWEB)

    Ciovati, Gianluigi; Kneisel, Peter; Gurevich, Alex

    2007-06-01

    The most challenging issue for understanding the performance of superconducting radio-frequency (rf) cavities made of high-purity (residual resistivity ratio > 200) niobium is due to a sharp degradation (“Q-drop”) of the cavity quality factor Q0(Bp) as the peak surface magnetic field (Bp) exceeds about 90 mT, in the absence of field emission. In addition, a low-temperature (100–140 °C) “in situ” baking of the cavity was found to be beneficial in reducing the Q-drop. In this contribution, we present the results from a series of rf tests at 1.7 K and 2.0 K on a single-cell cavity made of high-purity, large-grain (with grain area of the order of a few cm2) niobium which underwent various oxidation processes, after initial buffered chemical polishing, such as anodization, baking in pure oxygen atmosphere and baking in air up to 180 °C, with the objective of clearly identifying the role of oxygen and the oxide layer in the Q-drop. During each rf test a temperature mapping system allows measuring the local temperature rise of the cavity outer surface due to rf losses, which gives information about the location of the losses, their field dependence and space distribution. The results confirmed that the depth affected by baking is about 20–30 nm from the surface and showed that the Q-drop did not re-appear in a previously baked cavity by further baking at 120 °C in pure oxygen atmosphere or in air up to 180 °C. These treatments increased the oxide thickness and oxygen concentration, measured on niobium samples which were processed with the cavity and were analyzed with Transmission Electron Microscopy (TEM) and Secondary Ion Mass Spectroscopy (SIMS). Nevertheless, the performance of the cavity after air baking at 180 °C degraded significantly and the temperature maps showed high losses, uniformly distributed on the surface, which could be completely recovered only by a post-purification treatment at 1250 °C. A statistic of the position of the “hot-spots” on the

  1. Measurement of the high-field Q drop in a high-purity large-grain niobium cavity for different oxidation processes

    Directory of Open Access Journals (Sweden)

    G. Ciovati

    2007-06-01

    The most challenging issue for understanding the performance of superconducting radio-frequency (rf) cavities made of high-purity (residual resistivity ratio >200) niobium is due to a sharp degradation (“Q-drop”) of the cavity quality factor Q_{0}(B_{p}) as the peak surface magnetic field (B_{p}) exceeds about 90 mT, in the absence of field emission. In addition, a low-temperature (100–140°C) in situ baking of the cavity was found to be beneficial in reducing the Q-drop. In this contribution, we present the results from a series of rf tests at 1.7 and 2.0 K on a single-cell cavity made of high-purity, large-grain (with grain area of the order of a few cm^{2}) niobium which underwent various oxidation processes, after initial buffered chemical polishing, such as anodization, baking in pure oxygen atmosphere, and baking in air up to 180°C, with the objective of clearly identifying the role of oxygen and the oxide layer in the Q-drop. During each rf test a temperature mapping system allows measuring the local temperature rise of the cavity outer surface due to rf losses, which gives information about the location of the losses, their field dependence, and space distribution. The results confirmed that the depth affected by baking is about 20–30 nm from the surface and showed that the Q-drop did not reappear in a previously baked cavity by further baking at 120°C in pure oxygen atmosphere or in air up to 180°C. These treatments increased the oxide thickness and oxygen concentration, measured on niobium samples which were processed with the cavity and were analyzed with transmission electron microscopy and secondary ion mass spectroscopy. Nevertheless, the performance of the cavity after air baking at 180°C degraded significantly and the temperature maps showed high losses, uniformly distributed on the surface, which could be completely recovered only by a postpurification treatment at 1250°C. A statistic of the position of the “hot spots” on the

  2. Metallic nanomaterials formed by exerting large plastic strains

    International Nuclear Information System (INIS)

    Richert, M; Richert, J.; Zasadzinski, J.; Hawrylkiewicz, S.

    2002-01-01

    Investigations of pure Al and Cu single crystals, an AlMg5 alloy, and an AlCuZr alloy are presented. The materials were deformed by the cyclic extrusion compression (CEC) method within the range of true strains φ = 0.4-59.8 (1 to 67 deformation cycles of the CEC method). In all examined materials a strong tendency to form a banded microstructure was observed. Within the range of very large plastic strains, intensive rebuilding of the banded microstructure into subgrains was observed, at first of rhombic shape and next into equiaxial subgrains. A characteristic feature of the newly formed subgrains, not encountered in the range of conventional deformations, was the occurrence of large misorientation angles between the newly formed subgrains. The proportion of large misorientation angles in the microstructure varied, and it increased with increasing deformation. Suppression of the recovery process in the AlMg5 and AlCuZr alloys restrained the growth of the newly formed nanograins, favoring the retention of nanometric dimensions. These results show that there is an effective possibility of producing metallic nanomaterials by exerting very large non-conventional plastic strains. (author)

  3. Control system for technological processes in tritium processing plants with process analysis

    International Nuclear Information System (INIS)

    Retevoi, Carmen Maria; Stefan, Iuliana; Balteanu, Ovidiu; Stefan, Liviu; Bucur, Ciprian

    2005-01-01

    Integration of a large variety of installations and equipment into a unitary system for controlling the technological process in tritium processing nuclear facilities is a rather complex task, particularly when experimental or new technologies are developed. Ensuring a high degree of versatility that allows easy modification of configurations and process parameters is a major requirement imposed on experimental installations. The large amount of data which must be processed, stored, and easily accessed for subsequent analyses requires the development of a large information network based on a highly integrated system containing the acquisition, control, and technological process analysis data as well as a database system. On such a basis, integrated systems of computation and control able to conduct the technological process can be developed, as well as protection systems for cases of failure or breakdown. The integrated system responds to the control and security requirements in case of emergency and of the technological processes specific to an industry that processes radioactive or toxic substances with severe consequences in case of technological failure, as in the case of a tritium processing nuclear plant. To lower the risk of technological failure of these processes, an integrated software, database, and process analysis system is developed which, based on an identification algorithm for the parameters important to the protection and security systems, will display the process evolution trend. The system was checked on an existing plant that includes a tritium removal unit, finally used in a nuclear power plant, by simulating failure events as well as the process. The system will also include a complete database monitoring all the parameters and process analysis software for the main modules of the tritium processing plant, namely isotope separation, catalytic purification, and cryogenic distillation.

  4. Large, but not small, antigens require time- and temperature-dependent processing in accessory cells before they can be recognized by T cells

    DEFF Research Database (Denmark)

    Buus, S; Werdelin, O

    1986-01-01

    We have studied if antigens of different size and structure all require processing in antigen-presenting cells of guinea-pigs before they can be recognized by T cells. The method of mild paraformaldehyde fixation was used to stop antigen-processing in the antigen-presenting cells. As a measure...... of antigen presentation we used the proliferative response of appropriately primed T cells during a co-culture with the paraformaldehyde-fixed and antigen-exposed presenting cells. We demonstrate that the large synthetic polypeptide antigen, dinitrophenyl-poly-L-lysine, requires processing. After an initial......-dependent and consequently energy-requiring. Processing is strongly inhibited by the lysosomotrophic drug, chloroquine, suggesting a lysosomal involvement in antigen processing. The existence of a minor, non-lysosomal pathway is suggested, since small amounts of antigen were processed even at 10 degrees C, at which...

  5. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  6. Illumina MiSeq Phylogenetic Amplicon Sequencing Shows a Large Reduction of an Uncharacterised Succinivibrionaceae and an Increase of the Methanobrevibacter gottschalkii Clade in Feed Restricted Cattle.

    Directory of Open Access Journals (Sweden)

    Matthew Sean McCabe

    Periodic feed restriction is used in cattle production to reduce feed costs. When normal feed levels are resumed, cattle catch up to a normal weight by an acceleration of the normal growth rate, known as compensatory growth, which is not yet fully understood. Illumina MiSeq phylogenetic marker amplicon sequencing of DNA extracted from the rumen contents of 55 bulls showed that restriction of feed (70% concentrate, 30% grass silage) for 125 days, to levels that caused a 60% reduction of growth rate, resulted in a large increase in the relative abundance of the Methanobrevibacter gottschalkii clade (designated as OTU-M7), and a large reduction of an uncharacterised Succinivibrionaceae species (designated as OTU-S3004). There was a strong negative Spearman correlation (ρ = -0.72, P < 1x10^-20) between the relative abundances of OTU-S3004 and OTU-M7 in the liquid rumen fraction. There was also a significant increase in the acetate:propionate ratio (A:P) in feed-restricted animals that showed a negative Spearman correlation (ρ = -0.69, P < 1x10^-20) with the relative abundance of OTU-S3004 in the rumen liquid fraction but not the solid fraction, and a strong positive Spearman correlation with OTU-M7 in the rumen liquid (ρ = 0.74, P < 1x10^-20) and solid (ρ = 0.69, P < 1x10^-20) fractions. Reduced A:P ratios in the rumen are associated with increased feed efficiency and reduced production of methane, which has a global warming potential (GWP, 100 years) of 28. Succinivibrionaceae growth in the rumen was previously suggested to reduce methane emissions, as some members of this family utilise hydrogen, which is also utilised by methanogens for methanogenesis, to generate succinate, which is converted to propionate. The relative abundance of OTU-S3004 showed a positive Spearman correlation with propionate (ρ = 0.41, P < 0.01) but not acetate in the liquid rumen fraction.

  7. Performance of the front-end signal processing electronics for the drift chambers of the Stanford Large Detector

    International Nuclear Information System (INIS)

    Honma, A.; Haller, G.M.; Usher, T.; Shypit, R.

    1990-10-01

    This paper reports on the performance of the front-end analog and digital signal processing electronics for the drift chambers of the Stanford Large Detector (SLD) detector at the Stanford Linear Collider. The electronics mounted on printed circuit boards include up to 64 channels of transimpedance amplification, analog sampling, A/D conversion, and associated control circuitry. Measurements of the time resolution, gain, noise, linearity, crosstalk, and stability of the readout electronics are described and presented. The expected contribution of the electronics to the relevant drift chamber measurement resolutions (i.e., timing and charge division) is given

  8. How engineering data management and system support the main process[-oriented] functions of a large-scale project

    CERN Document Server

    Hameri, A P

    1999-01-01

    By dividing the development process into successive functional operations, this paper studies the benefits of establishing configuration management procedures and of using an engineering data management system (EDMS) to execute the tasks. The underlying environment is that of CERN and the ongoing, decade-long Large Hadron Collider (LHC) project. By identifying the main functional groups who will use the EDMS, the paper outlines the basic motivations for and services provided by such a system for each process function. The implications of strict configuration management for the daily operation of each functional user group are also discussed. The main argument of the paper is that each and every user of the EDMS must act in compliance with the configuration management procedures to guarantee the overall benefits of the system. The pilot EDMS being developed at CERN, which serves as a test-bed to discover the organisation's real functional needs for an EDMS, supports the conclusions. The preliminary ...

  9. Calculating Soil Wetness, Evapotranspiration and Carbon Cycle Processes Over Large Grid Areas Using a New Scaling Technique

    Science.gov (United States)

    Sellers, Piers

    2012-01-01

    Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid-area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
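
    The binned-pdf integration described above can be illustrated in a few lines. The stress function and bin probabilities below are invented for the example; the point is that integrating a nonlinear function over the wetness pdf differs from evaluating it at the mean wetness.

    import numpy as np

    def stress(w):
        """Illustrative nonlinear wetness-stress function on [0, 1]."""
        return np.clip((w - 0.1) / 0.4, 0.0, 1.0)

    # 10 wetness bins spanning [0, 1] with an assumed within-grid-area pdf
    edges = np.linspace(0.0, 1.0, 11)
    centers = 0.5 * (edges[:-1] + edges[1:])
    pdf = np.array([0.02, 0.05, 0.10, 0.18, 0.20, 0.18, 0.12, 0.08, 0.05, 0.02])
    pdf /= pdf.sum()                              # normalize bin probabilities

    area_stress = np.sum(pdf * stress(centers))   # integrate stress over the pdf
    naive_stress = stress(np.sum(pdf * centers))  # stress at the mean wetness

    # The two estimates differ because stress() is non-linear in wetness.
    print(area_stress, naive_stress)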

  10. Distributed processing and network of data acquisition and diagnostics control for Large Helical Device (LHD)

    International Nuclear Information System (INIS)

    Nakanishi, H.; Kojima, M.; Hidekuma, S.

    1997-11-01

    The LHD (Large Helical Device) data processing system has been designed to deal with the huge amount of diagnostics data, 600-900 MB per 10-second short-pulse experiment, in preparation for the first plasma experiment in March 1998. The recent increase in data volume forced the adoption of a fully distributed system structure which uses multiple data transfer paths in parallel and separates all of the computer functions into clients and servers. The fundamental element installed for every diagnostic device consists of two kinds of server computers: the data acquisition PC/Windows NT and the real-time diagnostics control VME/VxWorks. To cope with diversified kinds of both device control channels and diagnostics data, object-oriented methods are utilized throughout the development of this system. This not only reduces the development burden, but also widens the software portability and flexibility. 100 Mbps FDDI-based fast networks will re-integrate the distributed server computers so that they can behave as one virtual macro-machine for users. The network methods applied to the LHD data processing system are based entirely on TCP/IP internet technology, providing remote collaborators with the same access as local participants. (author)

  11. Parallel processing based decomposition technique for efficient collaborative optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon

    2001-01-01

    In practical design studies, most designers solve multidisciplinary problems within large and complex design systems. These multidisciplinary problems involve hundreds of analyses and thousands of variables. The sequence in which these problems are processed affects the speed of the total design cycle. Thus it is very important for designers to reorder the original design processes to minimize the total computational cost. This is accomplished by decomposing the large multidisciplinary problem into several multidisciplinary analysis subsystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems, to raise design efficiency by using a genetic algorithm, and shows the relationship between decomposition and multidisciplinary design optimization (MDO) methodology.
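
    As a rough illustration of genetic-algorithm-based reordering, the sketch below searches for a task order of a design structure matrix (DSM) that minimizes feedback couplings. The DSM, fitness definition, and GA settings are all invented for the example and are not taken from the paper.

    import random

    random.seed(0)
    N = 8
    # dsm[i][j] == True means task i needs output from task j
    dsm = [[random.random() < 0.3 for _ in range(N)] for _ in range(N)]

    def feedbacks(order):
        """Count dependencies that point backwards under a given task order."""
        pos = {task: i for i, task in enumerate(order)}
        return sum(dsm[i][j] and pos[j] > pos[i]
                   for i in range(N) for j in range(N) if i != j)

    def crossover(p1, p2):
        """Order crossover: keep a slice of p1, fill the rest in p2's order."""
        a, b = sorted(random.sample(range(N), 2))
        hole = set(p1[a:b])
        rest = [t for t in p2 if t not in hole]
        return rest[:a] + p1[a:b] + rest[a:]

    pop = [random.sample(range(N), N) for _ in range(40)]
    for generation in range(200):
        pop.sort(key=feedbacks)
        parents = pop[:10]                       # elitist truncation selection
        children = []
        while len(children) < 30:
            child = crossover(*random.sample(parents, 2))
            if random.random() < 0.3:            # swap mutation
                i, j = random.sample(range(N), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = parents + children

    best = min(pop, key=feedbacks)
    print("best order:", best, "feedback couplings:", feedbacks(best))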

  12. Efficient Partitioning of Large Databases without Query Statistics

    Directory of Open Access Journals (Sweden)

    Shahidul Islam KHAN

    2016-11-01

    An efficient way of improving the performance of a database management system is distributed processing. Distribution of data involves the fragmentation (partitioning), replication, and allocation processes. Previous research works provided partitioning based on empirical data about the type and frequency of the queries. These solutions are not suitable at the initial stage of a distributed database, as query statistics are not available then. In this paper, I have presented a fragmentation technique, Matrix based Fragmentation (MMF), which can be applied at the initial stage as well as at later stages of distributed databases. Instead of using empirical data, I have developed a matrix, Modified Create, Read, Update and Delete (MCRUD), to partition a large database properly. Allocation of fragments is done simultaneously in my proposed technique. So using MMF, no additional complexity is added for allocating the fragments to the sites of a distributed database, as fragmentation is synchronized with allocation. The performance of a DDBMS can be improved significantly by avoiding frequent remote access and high data transfer among the sites. Results show that the proposed technique can solve the initial partitioning problem of large distributed databases.
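
    The MCRUD idea, scoring each (predicate, site) pair by the actions applications perform and allocating each fragment to its highest-scoring site, can be illustrated with a toy example. The action weights, sites, and predicates below are invented, not taken from the paper.

    # Assumed cost weights per CRUD action (illustrative only)
    WEIGHTS = {"C": 1, "R": 1, "U": 2, "D": 3}

    # MCRUD-like matrix: actions performed at each site for each predicate
    mcrud = {
        ("age < 30",  "site1"): "CRU",
        ("age < 30",  "site2"): "R",
        ("age >= 30", "site1"): "R",
        ("age >= 30", "site2"): "CRUD",
    }

    def allocate(matrix):
        """Assign each horizontal fragment to the site with the highest weighted use."""
        scores = {}
        for (predicate, site), actions in matrix.items():
            scores.setdefault(predicate, {})[site] = sum(WEIGHTS[a] for a in actions)
        return {predicate: max(by_site, key=by_site.get)
                for predicate, by_site in scores.items()}

    # Fragmentation and allocation happen in one pass, as in the abstract
    print(allocate(mcrud))   # {'age < 30': 'site1', 'age >= 30': 'site2'}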

  13. New implementation of OGC Web Processing Service in Python programming language. PyWPS-4 and issues we are facing with processing of large raster data using OGC WPS

    Directory of Open Access Journals (Sweden)

    J. Čepický

    2016-06-01

    The OGC® Web Processing Service (WPS) Interface Standard provides rules for standardizing the inputs and outputs (requests and responses) of geospatial processing services, such as polygon overlay. The standard also defines how a client can request the execution of a process, and how the output from the process is handled. It defines an interface that facilitates the publishing of geospatial processes, client discovery of processes, and binding to those processes in workflows. Data required by a WPS can be delivered across a network or can already be available at the server. PyWPS was one of the first implementations of OGC WPS on the server side. It is written in the Python programming language and it tries to connect to all existing tools for geospatial data analysis available on the Python platform. During the last two years, the PyWPS development team has written a new version (called PyWPS-4) completely from scratch. The analysis of large raster datasets poses several technical issues in implementing the WPS standard. The data format has to be defined and validated on the server side, and binary data have to be encoded using some numeric representation. Pulling raster data from remote servers introduces security risks; in addition, running several processes in parallel has to be possible, so that system resources are used efficiently while preserving security. Here we discuss these topics and illustrate some of the solutions adopted within the PyWPS implementation.
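
    For orientation, a server-side process in PyWPS-4 follows the pattern sketched below. This is a minimal example in the style of the documented PyWPS API; the identifier and input/output names are illustrative, and details may differ between PyWPS releases.

    from pywps import Process, LiteralInput, LiteralOutput

    class SayHello(Process):
        """Minimal WPS process: returns a greeting for the supplied name."""

        def __init__(self):
            super(SayHello, self).__init__(
                self._handler,
                identifier="say_hello",
                title="Say Hello",
                inputs=[LiteralInput("name", "Input name", data_type="string")],
                outputs=[LiteralOutput("response", "Greeting", data_type="string")],
            )

        def _handler(self, request, response):
            # Inputs arrive as lists, since WPS inputs may occur multiple times
            response.outputs["response"].data = (
                "Hello " + request.inputs["name"][0].data
            )
            return response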

  14. A Unique Autothermal Thermophilic Aerobic Digestion Process Showing a Dynamic Transition of Physicochemical and Bacterial Characteristics from the Mesophilic to the Thermophilic Phase.

    Science.gov (United States)

    Tashiro, Yukihiro; Kanda, Kosuke; Asakura, Yuya; Kii, Toshihiko; Cheng, Huijun; Poudel, Pramod; Okugawa, Yuki; Tashiro, Kosuke; Sakai, Kenji

    2018-03-15

    A unique autothermal thermophilic aerobic digestion (ATAD) process has been used to convert human excreta to liquid fertilizer in Japan. This study investigated the changes in physicochemical and bacterial community characteristics during the full-scale ATAD process operated for approximately 3 weeks in 2 different years. After initiating simultaneous aeration and mixing using an air-inducing circulator (aerator), the temperature autothermally increased rapidly in the first 1 to 2 days with exhaustive oxygen consumption, leading to a drastic decrease and gradual increase in oxidation-reduction potential in the first 2 days, reached >50°C in the middle 4 to 6 days, and remained steady in the final phase. Volatile fatty acids were rapidly consumed and diminished in the first 2 days, whereas the ammonia nitrogen concentration was relatively stable during the process, despite a gradual pH increase to 9.3. Principal-coordinate analysis of 16S rRNA gene amplicons using next-generation sequencing divided the bacterial community structures into distinct clusters corresponding to three phases, and they were similar in the final phase in both years despite different transitions in the middle phase. The predominant phyla (closest species, dominance) in the initial, middle, and final phases were Proteobacteria (Arcobacter trophiarum, 19 to 43%; Acinetobacter towneri, 6.3 to 30%), Bacteroidetes (Moheibacter sediminis, 43 to 54%), and Firmicutes (Thermaerobacter composti, 11 to 28%; Heliorestis baculata, 2.1 to 16%), respectively. Two predominant operational taxonomic units (OTUs) in the final phase showed very low similarities to the closest species, indicating that the process is unique compared with previously published ones. This unique process with three distinctive phases would be caused by the aerator with complete aeration. IMPORTANCE Although the autothermal thermophilic aerobic digestion (ATAD) process has several advantages, such as a high degradation

  15. Monitoring and controlling the biogas process

    Energy Technology Data Exchange (ETDEWEB)

    Ahring, B K; Angelidaki, I [The Technical Univ. of Denmark, Dept. of Environmental Science and Engineering, Lyngby (Denmark)

    1997-08-01

    Many modern large-scale biogas plants have been constructed recently, increasing the demand for proper monitoring and control of these large reactor systems. For monitoring the biogas process, an easy-to-measure and reliable indicator is required which reflects the metabolic state and the activity of the bacterial populations in the reactor. In this paper, we discuss existing indicators as well as indicators under development which can potentially be used to monitor the state of the biogas process in a reactor. Furthermore, data are presented from two large-scale thermophilic biogas plants which were subjected to temperature changes and where the concentration of volatile fatty acids (VFA) was monitored. The results clearly demonstrated that significant changes in the concentration of the individual VFAs occurred even though the biogas production was not significantly changed. Especially the concentrations of butyrate, isobutyrate and isovalerate showed significant changes. Future improvements of process control could therefore be based on monitoring of the concentrations of specific VFAs together with information about the bacterial populations in the reactor. The latter information could be supplied by the use of modern molecular techniques. (au) 51 refs.

  16. Inducing a health-promoting change process within an organization: the effectiveness of a large-scale intervention on social capital, openness, and autonomous motivation toward health.

    Science.gov (United States)

    van Scheppingen, Arjella R; de Vroome, Ernest M M; Ten Have, Kristin C J M; Bos, Ellen H; Zwetsloot, Gerard I J M; van Mechelen, W

    2014-11-01

    To examine the effectiveness of an organizational large-scale intervention applied to induce a health-promoting organizational change process. A quasi-experimental, "as-treated" design was used. Regression analyses on data of employees of a Dutch dairy company (n = 324) were used to examine the effects on bonding social capital, openness, and autonomous motivation toward health and on employees' lifestyle, health, vitality, and sustainable employability. Also, the sensitivity of the intervention components was examined. Intervention effects were found for bonding social capital, openness toward health, smoking, healthy eating, and sustainable employability. The effects were primarily attributable to the intervention's dialogue component. The change process initiated by the large-scale intervention contributed to a social climate in the workplace that promoted health and ownership toward health. The study confirms the relevance of collective change processes for health promotion.

  17. Bilinguals Show Weaker Lexical Access during Spoken Sentence Comprehension

    Science.gov (United States)

    Shook, Anthony; Goldrick, Matthew; Engstler, Caroline; Marian, Viorica

    2015-01-01

    When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German…

  18. Differential gene expression of two extreme honey bee (Apis mellifera) colonies showing varroa tolerance and susceptibility.

    Science.gov (United States)

    Jiang, S; Robertson, T; Mostajeran, M; Robertson, A J; Qiu, X

    2016-06-01

    Varroa destructor, an ectoparasitic mite of honey bees (Apis mellifera), is the most serious pest threatening the apiculture industry. In our honey bee breeding programme, two honey bee colonies showing extreme phenotypes for varroa tolerance/resistance (S88) and susceptibility (G4) were identified by natural selection from a large gene pool over a 6-year period. To investigate potential defence mechanisms underlying honey bee tolerance to varroa infestation, we employed DNA microarray and real-time quantitative PCR analyses to identify differentially expressed genes in the tolerant and susceptible colonies at the pupa and adult stages. Our results showed that more differentially expressed genes were identified in the tolerant bees than in bees from the susceptible colony, indicating that the tolerant colony showed an increased genetic capacity to respond to varroa mite infestation. In both colonies, more differentially expressed genes were identified at the pupa stage than at the adult stage, indicating that pupa bees are more responsive to varroa infestation than adult bees. Genes showing differential expression in the colony phenotypes were categorized into several groups based on their molecular functions, such as olfactory signalling, detoxification processes, exoskeleton formation, protein degradation and long-chain fatty acid metabolism, suggesting that these biological processes play roles in conferring varroa tolerance to naturally selected colonies. Identification of differentially expressed genes between the two colony phenotypes provides potential molecular markers for selecting and breeding varroa-tolerant honey bees. © 2016 The Royal Entomological Society.

  19. Computational Modelling of Large Scale Phage Production Using a Two-Stage Batch Process

    Directory of Open Access Journals (Sweden)

    Konrad Krysiak-Baltyn

    2018-04-01

    Cost-effective and scalable methods for phage production are required to meet an increasing demand for phage as an alternative to antibiotics. Computational models can assist the optimization of such production processes. A model is developed here that can simulate the dynamics of phage population growth and production in a two-stage, self-cycling process. The model incorporates variable infection parameters as a function of bacterial growth rate and employs ordinary differential equations, allowing application to a setup with multiple reactors. The model provides simple cost estimates as a function of key operational parameters including substrate concentration, feed volume and cycling times. For the phage and bacteria pairing examined, costs and productivity varied by three orders of magnitude, with the lowest cost found to be most sensitive to the influent substrate concentration and the low-level setting in the first vessel. An example case study of phage production is also presented, showing how parameter values affect the production costs and estimating production times. The approach presented is flexible and can be used to optimize phage production at laboratory or factory scale by minimizing costs or maximizing productivity.
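
    The kind of ODE model described above can be sketched compactly. The following fragment is an illustration only: it uses generic Monod growth and mass-action infection kinetics with invented parameter values, not the paper's calibrated model or its variable infection parameters.

    from scipy.integrate import solve_ivp

    mu, Ks = 0.7, 0.1          # max host growth rate (1/h), Monod constant (g/L)
    Y = 5e8                    # bacterial yield (cells per g substrate, assumed)
    k, burst = 1e-9, 100       # adsorption rate constant, phage burst size

    def rhs(t, y):
        S, X, P = y                      # substrate (g/L), bacteria and phage (1/mL)
        growth = mu * S / (Ks + S) * X   # Monod growth of the host
        infect = k * X * P               # mass-action adsorption events
        return [-growth / Y,             # substrate consumption
                growth - infect,         # net change in bacteria
                burst * infect - infect] # each event frees `burst`, consumes 1 phage

    sol = solve_ivp(rhs, (0.0, 12.0), [2.0, 1e8, 1e6], method="LSODA")
    print("final phage titre (per mL):", sol.y[2, -1])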

  20. Modern algorithms for large sparse eigenvalue problems

    International Nuclear Information System (INIS)

    Meyer, A.

    1987-01-01

    The volume is written for mathematicians interested in (numerical) linear algebra and in the solution of large sparse eigenvalue problems, as well as for specialists in engineering who use the considered algorithms in the investigation of eigenoscillations of structures, in reactor physics, etc. Some variants of the algorithms, based on the idea of a gradient-type direction of movement, are presented and their convergence properties are discussed. From this, a general strategy for the direct use of preconditioning for the eigenvalue problem is derived. In this new approach the necessity of solving large linear systems is entirely avoided. Hence, these methods represent a new alternative to other modern eigenvalue algorithms: they show slightly slower convergence on the one hand, but pose essentially lower numerical and data-processing demands on the other. A brief description and comparison of some well-known methods (i.e. simultaneous iteration, the Lanczos algorithm) completes this volume. (author)
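
    Preconditioned gradient-type eigensolvers of this family are available off the shelf today. As a small illustration (a modern representative of the class, not an algorithm from the volume itself), SciPy's LOBPCG solver computes the smallest eigenpairs of a sparse matrix using only a preconditioner, never solving a large linear system:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import lobpcg

    # 1-D Laplacian as a large sparse test matrix
    n = 2000
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

    # Diagonal (Jacobi) preconditioner: cheap, no linear solves required
    M = sp.diags(1.0 / A.diagonal())

    rng = np.random.default_rng(0)
    X = rng.standard_normal((n, 4))      # block of 4 starting vectors

    vals, vecs = lobpcg(A, X, M=M, tol=1e-8, maxiter=500, largest=False)
    print("smallest eigenvalues:", np.sort(vals))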

  1. New insight in the template decomposition process of large zeolite ZSM-5 crystals: an in situ UV-Vis/fluorescence micro-spectroscopy study

    NARCIS (Netherlands)

    Karwacki, L.|info:eu-repo/dai/nl/304824283; Weckhuysen, B.M.|info:eu-repo/dai/nl/285484397

    2011-01-01

    A combination of in situ UV-Vis and confocal fluorescence micro-spectroscopy was used to study the template decomposition process in large zeolite ZSM-5 crystals. Correlation of polarized light dependent UV-Vis absorption spectra with confocal fluorescence emission spectra in the 400–750 nm region

  2. Karhunen-Loève (PCA) based detection of multiple oscillations in multiple measurement signals from large-scale process plants

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Wickerhauser, M.V.

    2007-01-01

    In the perspective of optimizing the control and operation of large-scale process plants, it is important to detect and to locate oscillations in the plants. This paper presents a scheme for detecting and localizing multiple oscillations in multiple measurements from such a large-scale power plant....... The scheme is based on a Karhunen-Loève analysis of the data from the plant. The proposed scheme is subsequently tested on two sets of data: a set of synthetic data and a set of data from a coal-fired power plant. In both cases the scheme detects the beginning of the oscillation within only a few samples....... In addition the oscillation localization has also shown its potential by localizing the oscillations in both data sets....
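
    A toy version of such a Karhunen-Loève (PCA) detection scheme can be written in a few lines; the synthetic signals, window length, and threshold below are invented for illustration and are not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(4000) / 100.0                     # 40 s sampled at 100 Hz
    osc = np.sin(2 * np.pi * 1.3 * t) * (t > 20)    # oscillation starts at t = 20 s

    # 8 measurement channels: noise plus the oscillation at channel-specific gains
    gains = rng.uniform(0.2, 1.0, 8)
    X = 0.5 * rng.standard_normal((4000, 8)) + np.outer(osc, gains)

    X -= X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # Karhunen-Loève / PCA
    score = U[:, 0] * s[0]                            # dominant component in time

    # Detection: windowed energy of the first principal component
    win = 100
    energy = np.convolve(score**2, np.ones(win) / win, mode="same")
    baseline = energy[:1500].mean()                   # pre-onset reference period
    onset = t[np.argmax(energy > 5 * baseline)]
    print("oscillation detected near t = %.1f s" % onset)
    # The loadings Vt[0] indicate which channels carry the oscillation.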

  3. Extreme value statistics and thermodynamics of earthquakes: large earthquakes

    Directory of Open Access Journals (Sweden)

    B. H. Lavenda

    2000-06-01

    A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Fréchet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into nonscaling exponential distributions, so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Fréchet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same Catalogue of Chinese Earthquakes. An analogy is drawn between large earthquakes and high-energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Fréchet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.
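
    The two fits mentioned in the abstract, a Pareto tail for exceedances and a Fréchet law for block maxima, can be reproduced on synthetic data. A minimal sketch (synthetic sample with illustrative parameters; SciPy's invweibull is the Fréchet distribution):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    energies = stats.pareto.rvs(b=1.5, size=20000, random_state=rng)

    # Tail fit: recover the Pareto shape (tail exponent) from the sample
    b_hat, loc, scale = stats.pareto.fit(energies, floc=0, fscale=1)
    print("fitted tail exponent:", round(b_hat, 2))    # close to 1.5

    # Extremes: block maxima of a power-law sample follow a Fréchet law
    blocks = energies.reshape(200, 100).max(axis=1)
    c_hat, loc, scale = stats.invweibull.fit(blocks, floc=0)
    print("fitted Fréchet shape:", round(c_hat, 2))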

  4. Reactor materials program process water component failure probability

    International Nuclear Information System (INIS)

    Daugherty, W. L.

    1988-01-01

    The maximum-rate loss-of-coolant accident for the Savannah River Production Reactors is presently specified as the abrupt double-ended guillotine break (DEGB) of a large process water pipe. This accident is not considered credible in light of the low applied stresses and the inherent ductility of the piping materials. The Reactor Materials Program was initiated to provide the technical basis for an alternate, credible maximum-rate LOCA. The major thrust of this program is to develop an alternate worst-case accident scenario by deterministic means. In addition, the probability of a DEGB is also being determined, to show that in addition to being mechanistically incredible, it is also highly improbable. The probability of a DEGB of the process water piping is evaluated in two parts: failure by direct means, and indirectly-induced failure. These two areas have been discussed in other reports. In addition, the frequency of a large break (equivalent to a DEGB) in other process water system components is assessed. This report reviews the large break frequency for each component as well as the overall large break frequency for the reactor system.

  5. Technical basis and programmatic requirements for large block testing of coupled thermal-mechanical-hydrological-chemical processes

    International Nuclear Information System (INIS)

    Lin, Wunan.

    1993-09-01

    This document contains the technical basis and programmatic requirements for a scientific investigation plan that governs tests on a large block of tuff for understanding the coupled thermal-mechanical-hydrological-chemical (TMHC) processes. This study is part of the field testing described in Section 8.3.4.2.4.4.1 of the Site Characterization Plan (SCP) for the Yucca Mountain Project. The first, and most important, objective is to understand the coupled TMHC processes in order to develop models that will predict the performance of a nuclear waste repository. The block and fracture properties (including hydrology and geochemistry) can be well characterized from at least five exposed surfaces, and the block can be dismantled for post-test examinations. The second objective is to provide preliminary data for the development of models that will predict the quality and quantity of water in the near-field environment of a repository over the current 10,000-year regulatory period of radioactive decay. The third objective is to develop and evaluate the various measurement systems and techniques that will later be employed in the Engineered Barrier System Field Tests (EBSFT).

  6. Superconducting properties of single-crystal Nb sphere formed by large-undercooling solidification process

    Energy Technology Data Exchange (ETDEWEB)

    Takeya, H.; Sung, Y.S.; Hirata, K.; Togano, K

    2003-10-15

    An electrostatic levitation (ESL) system has been used for investigating undercooling effects in superconducting materials. In this report, preliminary experiments on Nb (melting temperature Tm = 2477 °C) have been performed by melting Nb in levitation using 150 and 250 W Nd-YAG lasers. Since the molten Nb solidifies without any contact under high-vacuum conditions, a significantly undercooled state, up to 400 °C below the melting point, is maintained before recalescence followed by solidification. Spherical single crystals of Nb are formed by the ESL process due to the suppression of heterogeneous nucleation. The field dependence of the magnetization of Nb shows a reversible behavior as an ideal type-II superconductor, implying that it contains almost no flux-pinning centers.

  7. Primary central nervous system diffuse large B-cell lymphoma shows an activated B-cell-like phenotype with co-expression of C-MYC, BCL-2, and BCL-6.

    Science.gov (United States)

    Li, Xiaomei; Huang, Ying; Bi, Chengfeng; Yuan, Ji; He, Hong; Zhang, Hong; Yu, QiuBo; Fu, Kai; Li, Dan

    2017-06-01

    Diffuse large B-cell lymphoma (DLBCL) is the most common non-Hodgkin lymphoma, and its main prognostic factor is whether it is of the germinal center B-cell-like subtype (GCB-DLBCL) or the activated B-cell-like subtype (non-GCB-DLBCL). The most common type of primary central nervous system lymphoma is the diffuse large B-cell type, which carries a poor prognosis for reasons that remain unclear. This study aims to stratify primary central nervous system diffuse large B-cell lymphoma (PCNS-DLBCL) according to the cell of origin (COO) and to investigate the expression of multiple proteins, C-MYC, BCL-6, BCL-2, and TP53, to further elucidate why PCNS-DLBCL has such a poor clinical outcome. Nineteen cases of primary central nervous system DLBCL were stratified according to the immunostaining algorithms of Hans, Choi, and Meyer (Tally), and we investigated the expression of C-MYC, BCL-6, BCL-2, and TP53. Epstein-Barr virus (EBV) and Borna disease virus (BDV) infection were also assessed. Most of the nineteen cases (15-17, depending on the algorithm) were assigned to the activated B-cell-like subtype, with high expression of C-MYC (15 cases, 78.9%), BCL-2 (10 cases, 52.6%), and BCL-6 (15 cases, 78.9%). Two cases were positive for PD-L1, while PD-L2 was not expressed in any case. Two cases were infected with BDV, but none with EBV. In conclusion, most primary central nervous system DLBCLs show an activated B-cell-like subtype and multiple expression of the C-MYC, BCL-2, and BCL-6 proteins; these features might be significant factors for predicting the outcome and guiding treatment of PCNS-DLBCLs. Copyright © 2017 Elsevier GmbH. All rights reserved.

  8. TMS Affects Moral Judgment, Showing the Role of DLPFC and TPJ in Cognitive and Emotional Processing

    Directory of Open Access Journals (Sweden)

    Danique Jeurissen

    2014-02-01

    Decision-making involves a complex interplay of emotional responses and reasoning processes. In this study, we use TMS to explore the neurobiological substrates of moral decisions in humans. To examine the effects of TMS on the outcome of a moral decision, we compare the decision outcomes of moral-personal and moral-impersonal dilemmas to each other and examine the differential effects of applying TMS over the right DLPFC or right TPJ. In this comparison, we find that TMS-induced disruption of the DLPFC during the decision process affects the outcome of the moral-personal judgment, while TMS-induced disruption of the TPJ affects only moral-impersonal conditions. In other words, we find a double dissociation between DLPFC and TPJ in the outcome of a moral decision. Furthermore, we find that TMS-induced disruption of the DLPFC during non-moral, moral-impersonal, and moral-personal decisions leads to lower ratings of regret about the decision. Our results are in line with dual-process theory and suggest a role for both the emotional response and the cognitive reasoning process in moral judgment. Both the emotional and cognitive processes were shown to be involved in the decision outcome.

  9. TMS affects moral judgment, showing the role of DLPFC and TPJ in cognitive and emotional processing.

    Science.gov (United States)

    Jeurissen, Danique; Sack, Alexander T; Roebroeck, Alard; Russ, Brian E; Pascual-Leone, Alvaro

    2014-01-01

    Decision-making involves a complex interplay of emotional responses and reasoning processes. In this study, we use TMS to explore the neurobiological substrates of moral decisions in humans. To examine the effects of TMS on the outcome of a moral decision, we compare the decision outcomes of moral-personal and moral-impersonal dilemmas to each other and examine the differential effects of applying TMS over the right DLPFC or right TPJ. In this comparison, we find that TMS-induced disruption of the DLPFC during the decision process affects the outcome of the moral-personal judgment, while TMS-induced disruption of the TPJ affects only moral-impersonal conditions. In other words, we find a double dissociation between DLPFC and TPJ in the outcome of a moral decision. Furthermore, we find that TMS-induced disruption of the DLPFC during non-moral, moral-impersonal, and moral-personal decisions leads to lower ratings of regret about the decision. Our results are in line with dual-process theory and suggest a role for both the emotional response and the cognitive reasoning process in moral judgment. Both the emotional and cognitive processes were shown to be involved in the decision outcome.

  10. Invisible Axions and Large-Radius Compactifications

    CERN Document Server

    Dienes, Keith R.; Dudas, Emilian; Gherghetta, Tony

    2000-01-01

    We study some of the novel effects that arise when the QCD axion is placed in the "bulk" of large extra spacetime dimensions. First, we find that the mass of the axion can become independent of the energy scale associated with the breaking of the Peccei-Quinn symmetry. This implies that the mass of the axion can be adjusted independently of its couplings to ordinary matter, thereby providing a new method of rendering the axion invisible. Second, we discuss the new phenomenon of laboratory axion oscillations (analogous to neutrino oscillations), and show that these oscillations cause laboratory axions to "decohere" extremely rapidly as a result of Kaluza-Klein mixing. This decoherence may also be a contributing factor to axion invisibility. Third, we discuss the role of Kaluza-Klein axions in axion-mediated processes and decays, and propose several experimental tests of the higher-dimensional nature of the axion. Finally, we show that under certain circumstances, the presence of an infinite tower of Kaluza...

  11. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (broadband, multi-utility, ...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows that, so far, the approaches to large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  12. Unraveling The Connectome: Visualizing and Abstracting Large-Scale Connectomics Data

    KAUST Repository

    Al-Awami, Ali K.

    2017-04-30

    We explore visualization and abstraction approaches to represent neuronal data. Neuroscientists acquire electron microscopy volumes to reconstruct a complete wiring diagram of the neurons in the brain, called the connectome. This will be crucial to understanding brains and their development. However, the resulting data is complex and large, posing a big challenge to existing visualization techniques in terms of clarity and scalability. We describe solutions to tackle the problems of scalability and cluttered presentation. We first show how a query-guided interactive approach to visual exploration can reduce the clutter and help neuroscientists explore their data dynamically. We use a knowledge-based query algebra that facilitates the interactive creation of queries. This allows neuroscientists to pose domain-specific questions related to their research. Simple queries can be combined to form complex queries to answer more sophisticated questions. We then show how visual abstractions from 3D to 2D can significantly reduce the visual clutter and add clarity to the visualization so that scientists can focus more on the analysis. We abstract the topology of 3D neurons into a multi-scale, relative distance-preserving subway map visualization that allows scientists to interactively explore the morphological and connectivity features of neuronal cells. We then focus on the process of acquisition, where neuroscientists segment electron microscopy images to reconstruct neurons. The segmentation process of such data is tedious, time-intensive, and usually performed using a diverse set of tools. We present a novel web-based visualization system for tracking the state, progress, and evolution of segmentation data in neuroscience. Our multi-user system seamlessly integrates a diverse set of tools. Our system provides support for the management, provenance, accountability, and auditing of large-scale segmentations. Finally, we present a novel architecture to render very large

  13. Rheo-processing of an alloy specifically designed for semi-solid metal processing based on the Al-Mg-Si system

    International Nuclear Information System (INIS)

    Patel, J.B.; Liu, Y.Q.; Shao, G.; Fan, Z.

    2008-01-01

    Semi-solid metal (SSM) processing is a promising technology for forming alloys and composites into near-net-shaped products. Alloys currently used for SSM processing are mainly conventional aluminium casting alloys. This is an obstacle to the realisation of the full potential of SSM processing, since these alloys were originally designed for liquid-state processing and not for semi-solid-state processing. Therefore, there is a significant need to design new alloys specifically for semi-solid-state processing. In this study, thermodynamic calculations have been carried out to design alloys based on the Al-Mg-Si system for SSM processing via the 'rheo-route'. The suitability of a selected alloy composition has been assessed in terms of the criteria considered by the thermodynamic design process, mechanical properties and heat treatability. The newly designed alloy showed good processability in rheo-processing, in terms of good control of the solid fraction during processing and a reasonably large processing window. The mechanical property variation was very small and the alloy showed good potential for age hardening by a T5 temper heat treatment after rheo-processing.

  14. USING THE BUSINESS ENGINEERING APPROACH IN THE DEVELOPMENT OF A STRATEGIC MANAGEMENT PROCESS FOR A LARGE CORPORATION: A CASE STUDY

    Directory of Open Access Journals (Sweden)

    C.M. Moll

    2012-01-01

    Full Text Available Most South African organisations were historically part of a closed competitive system with little global competition and a relatively stable economy (Manning: 18; Sunter: 32). Since the political transformation, the globalisation of the world economy, the decline of world economic fundamentals and specific challenges in the South African scenario such as GEAR and employment equity, the whole playing field has changed. With these changes, new challenges appear. A significant challenge for organisations within this scenario is to think, plan and manage strategically. In order to do so, the organisation must understand its relationship with its environment and establish innovative new strategies to manipulate, interact with, and ultimately survive in the environment. The legacy of the past has, in many organisations, implanted an operational short-term focus because the planning horizon was stable. It was sufficient to construct annual plans rather than strategies. These plans were typically internally focused rather than driven by the external environment. Strategic planning in this environment tended to be a form of team building through which the various members of the organisation's management team discussed and documented the problems of the day. A case study is presented of the development of a strategic management process for a large South African mining company. The authors believe that the approach is a new and different way of addressing a problem that exists in many organisations - the establishment of a process of strategic thinking, whilst at the same time ensuring that a formal process of strategic planning is followed in order to prompt the management of the organisation for strategic action. The lessons drawn from this process are applicable to a wider audience due to the homogeneous nature of the management style of a large number of South African organisations.

  15. An algebraic sub-structuring method for large-scale eigenvalue calculation

    International Nuclear Information System (INIS)

    Yang, C.; Gao, W.; Bai, Z.; Li, X.; Lee, L.; Husbands, P.; Ng, E.

    2004-01-01

    We examine sub-structuring methods for solving large-scale generalized eigenvalue problems from a purely algebraic point of view. We use the term 'algebraic sub-structuring' to refer to the process of applying matrix reordering and partitioning algorithms to divide a large sparse matrix into smaller submatrices, from which a subset of spectral components are extracted and combined to provide approximate solutions to the original problem. We are interested in the question of which spectral components one should extract from each sub-structure in order to produce an approximate solution to the original problem with a desired level of accuracy. An error estimate for the approximation to the smallest eigenpair is developed; the estimate leads to a simple heuristic for choosing spectral components (modes) from each sub-structure. The effectiveness of this heuristic is demonstrated with numerical examples. We show that algebraic sub-structuring can be used effectively to solve a generalized eigenvalue problem arising from the simulation of an accelerator structure. One interesting characteristic of this application is that the stiffness matrix produced by a hierarchical vector finite-element scheme contains a null space of large dimension. We present an efficient scheme to deflate this null space in the algebraic sub-structuring process.
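
    The core of the approach can be sketched in its simplest Rayleigh-Ritz form: extract a few of the lowest modes from each diagonal block and project the full generalized eigenproblem onto the combined mode basis. The sketch below omits the interface (coupling) modes that a production algebraic sub-structuring code would also retain, and the matrices are random stand-ins rather than finite-element matrices.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    n = 8                                   # size of each "substructure"
    rng = np.random.default_rng(0)
    A = rng.standard_normal((2 * n, 2 * n))
    K = A @ A.T + 2 * n * np.eye(2 * n)     # SPD stand-in "stiffness" matrix
    M = np.eye(2 * n)                       # "mass" matrix (identity for simplicity)

    blocks = [slice(0, n), slice(n, 2 * n)] # two substructures after partitioning
    k_modes = 3                             # spectral components kept per block

    # Extract the lowest modes of each diagonal block, embedded in full space.
    basis = []
    for b in blocks:
        w, V = eigh(K[b, b], M[b, b])
        Q = np.zeros((2 * n, k_modes))
        Q[b, :] = V[:, :k_modes]
        basis.append(Q)
    S = np.hstack(basis)

    # Rayleigh-Ritz projection onto the combined substructure modes.
    Kr, Mr = S.T @ K @ S, S.T @ M @ S
    w_approx = eigh(Kr, Mr, eigvals_only=True)
    w_exact = eigh(K, M, eigvals_only=True)
    print(w_approx[0], w_exact[0])          # approximate vs exact smallest eigenvalue
    ```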

  16. Production of High Quality Die Steels from Large ESR Slab Ingots

    Science.gov (United States)

    Geng, Xin; Jiang, Zhou-hua; Li, Hua-bing; Liu, Fu-bin; Li, Xing

    With the rapid development of the manufacturing industry in China, there is great demand for high-quality, large-tonnage die-steel slab ingots such as P20 and WSM718R. The solidification structure and size of large slab ingots produced by conventional methods are not satisfactory, whereas large slab ingots manufactured by the ESR process have a good solidification structure and sufficient section size. In the present research, the new slab ESR process was used to produce large die-steel slab ingots with a maximum size of 980 × 2000 × 3200 mm. Compact and sound ingots can be manufactured by the slab ESR process, and ultra-heavy plates with a maximum thickness of 410 mm can be obtained after rolling the 49-ton ingots. Because the cogging and forging steps are eliminated, the ESR process for large slab ingots greatly increases yield and production efficiency and noticeably reduces product costs.

  17. Autonomous sensor particle for parameter tracking in large vessels

    International Nuclear Information System (INIS)

    Thiele, Sebastian; Da Silva, Marco Jose; Hampel, Uwe

    2010-01-01

    A self-powered and neutrally buoyant sensor particle has been developed for the long-term measurement of spatially distributed process parameters in the chemically harsh environments of large vessels. One intended application is the measurement of flow parameters in stirred fermentation biogas reactors. The prototype sensor particle is a robust and neutrally buoyant capsule, which allows free movement with the flow. It contains measurement devices that log the temperature, absolute pressure (immersion depth) and 3D-acceleration data. A careful calibration including an uncertainty analysis has been performed. Furthermore, autonomous operation of the developed prototype was successfully proven in a flow experiment in a stirred reactor model. It showed that the sensor particle is feasible for future application in fermentation reactors and other industrial processes

  18. Large Eddy Simulation of Transient Flow, Solidification, and Particle Transport Processes in Continuous-Casting Mold

    Science.gov (United States)

    Liu, Zhongqiu; Li, Linmin; Li, Baokuan; Jiang, Maofa

    2014-07-01

    The current study developed a coupled computational model to simulate the transient fluid flow, solidification, and particle transport processes in a slab continuous-casting mold. Transient flow of molten steel in the mold is calculated using large eddy simulation. An enthalpy-porosity approach is used for the analysis of solidification processes. The transport of bubbles and non-metallic inclusions inside the liquid pool is calculated using a Lagrangian approach based on the transient flow field. A criterion for particle entrapment in the solidified shell is developed using the user-defined functions of FLUENT software (ANSYS, Inc., Canonsburg, PA). The predicted results of this model are compared with measurements from ultrasonic testing of the rolled steel plates and with water model experiments. The transient asymmetrical flow pattern inside the liquid pool shows quite satisfactory agreement with the corresponding measurements. The predicted complex instantaneous velocity field is composed of various small recirculation zones and multiple vortices. The transport of particles inside the liquid pool and the entrapment of particles in the solidified shell are not symmetric. The Magnus force can reduce the entrapment ratio of particles in the solidified shell, especially for smaller particles, but the effect is weak. The Marangoni force can play an important role in controlling the motion of particles, noticeably increasing the entrapment ratio of particles in the solidified shell.

  19. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Full Text Available Background Large-scale molecular evolutionary analyses of protein-coding sequences require a number of inter-related preparatory steps, from finding gene families, to generating alignments and phylogenetic trees, to assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein-coding sequences in a genome) from a large number of species. Methods We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  20. A comparison of parallel dust and fibre measurements of airborne chrysotile asbestos in a large mine and processing factories in the Russian Federation

    NARCIS (Netherlands)

    Feletto, Eleonora; Schonfeld, Sara J; Kovalevskiy, Evgeny V; Bukhtiyarov, Igor V; Kashanskiy, Sergey V; Moissonnier, Monika; Straif, Kurt; Kromhout, Hans

    2017-01-01

    INTRODUCTION: Historic dust concentrations are available in a large-scale cohort study of workers in a chrysotile mine and processing factories in Asbest, Russian Federation. Parallel dust (gravimetric) and fibre (phase-contrast optical microscopy) concentrations collected in 1995, 2007 and 2013/14

  1. Signal Formation Processes in Micromegas Detectors and Quality Control for large size Detector Construction for the ATLAS New Small Wheel

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00387450; Rembser, Christoph

    2017-08-04

    The Micromegas technology is one of the most successful modern gaseous detector concepts and is widely utilized in nuclear and particle physics experiments. Twenty years of R & D rendered the technology sufficiently mature for it to be selected as the precision tracking detector for the New Small Wheel (NSW) upgrade of the ATLAS muon spectrometer. This will be the first large-scale application of Micromegas in one of the major LHC experiments. However, many of the fundamental microscopic processes in these gaseous detectors are still not fully understood, and several detector aspects, like the micromesh geometry, have never been addressed systematically. The studies on signal formation in Micromegas, presented in the first part of this thesis, focus on the microscopic signal-electron loss mechanisms and the amplification processes in electron-gas interactions. Based on a detailed model of detector parameter dependencies, these processes are scrutinized in an iterating comparison between experimental result...

  2. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    The recent trend towards large-scale simulations of fusion plasmas and processing plasmas is briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to analyses of processing plasmas. (author)

  3. A High Density Low Cost Digital Signal Processing Module for Large Scale Radiation Detectors

    International Nuclear Information System (INIS)

    Tan, Hui; Hennig, Wolfgang; Walby, Mark D.; Breus, Dimitry; Harris, Jackson T.; Grudberg, Peter M.; Warburton, William K.

    2013-06-01

    A 32-channel digital spectrometer, PIXIE-32, is being developed for nuclear physics and other radiation detection applications requiring digital signal processing with a large number of channels at relatively low cost. A single PIXIE-32 provides spectrometry and waveform acquisition for 32 input signals per module, and multiple modules can be combined into larger systems. It is based on the PCI Express standard, which allows data transfer rates to the host computer of up to 800 MB/s. Each of the 32 channels in a PIXIE-32 module accepts signals directly from a detector preamplifier or photomultiplier. Digitally controlled offsets can be individually adjusted for each channel. Signals are digitized in 12-bit, 50 MHz multi-channel ADCs. Triggering, pile-up inspection and filtering of the data stream are performed in real time, and pulse heights and other event data are calculated on an event-by-event basis. The hardware architecture, internal and external triggering features, and the spectrometry and waveform acquisition capability of the PIXIE-32, as well as its capability to distribute clocks and triggers among multiple modules, are presented. (authors)

  4. Accelerating solidification process simulation for large-sized system of liquid metal atoms using GPU with CUDA

    Energy Technology Data Exchange (ETDEWEB)

    Jie, Liang [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China); Li, KenLi, E-mail: lkl@hnu.edu.cn [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China); National Supercomputing Center in Changsha, 410082 (China); Shi, Lin [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China); Liu, RangSu [School of Physics and Micro Electronic, Hunan University, Changshang, 410082 (China); Mei, Jing [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China)

    2014-01-15

    Molecular dynamics simulation is a powerful tool to simulate and analyze complex physical processes and phenomena at the atomic level, predicting the natural time-evolution of a system of atoms. Precise simulation of physical processes places strong requirements on both simulation size and computing timescale, so finding adequate computing resources is crucial to accelerating the computation. General-purpose graphics processing units (GPGPUs) have recently been widely adopted for general-purpose computing due to their high floating-point performance, wide memory bandwidth and enhanced programmability. Targeting the most time-consuming components of MD simulations of liquid-metal solidification, this paper presents a fine-grained spatial decomposition method to accelerate the updating of neighbor lists and the calculation of interaction forces on modern graphics processing units (GPUs), enlarging the simulated system to 10,000,000 atoms. In addition, a number of evaluations and tests are discussed, ranging from executions with different precision settings under CUDA, over various types of GPU (NVIDIA 480GTX, 580GTX and M2050), to CPU clusters with different numbers of CPU cores. The experimental results demonstrate that GPU-based calculations are typically 9∼11 times faster than the corresponding sequential execution and approximately 1.5∼2 times faster than implementations on 16-core CPU clusters. On the basis of the simulated results, comparisons between theory and experiment show good agreement, and more complete and larger cluster structures, as found in actual macroscopic materials, are observed. Moreover, different nucleation and evolution mechanisms of nano-clusters and nano-crystals formed in the process of metal solidification are observed with large
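
    A minimal CPU sketch of the cell-list spatial decomposition that underlies fast neighbor-list construction is given below; on a GPU, the per-particle search is what gets parallelized (typically one thread per particle). The cubic box, cutoff, and particle count are illustrative assumptions.

    ```python
    import numpy as np
    from collections import defaultdict

    def neighbor_pairs(pos, box, rcut):
        """Find all pairs within rcut using a cell list (periodic cubic box)."""
        ncell = max(1, int(box / rcut))       # cells at least rcut wide
        size = box / ncell
        idx = np.floor(pos / size).astype(int) % ncell
        cells = defaultdict(list)             # cell coordinate -> particle ids
        for i, c in enumerate(map(tuple, idx)):
            cells[c].append(i)
        pairs = set()
        for i, c in enumerate(map(tuple, idx)):
            # Only the 27 neighboring cells need to be searched, not all N particles.
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        nb = ((c[0] + dx) % ncell, (c[1] + dy) % ncell, (c[2] + dz) % ncell)
                        for j in cells.get(nb, ()):
                            if i < j:
                                d = pos[i] - pos[j]
                                d -= box * np.round(d / box)   # minimum-image convention
                                if np.dot(d, d) < rcut * rcut:
                                    pairs.add((i, j))
        return pairs

    rng = np.random.default_rng(1)
    pos = rng.uniform(0.0, 10.0, (200, 3))
    print(len(neighbor_pairs(pos, box=10.0, rcut=2.0)))
    ```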

  5. Modeling of a Large-Scale High Temperature Regenerative Sulfur Removal Process

    DEFF Research Database (Denmark)

    Konttinen, Jukka T.; Johnsson, Jan Erik

    1999-01-01

    Regenerable mixed metal oxide sorbents are prime candidates for the removal of hydrogen sulfide from hot gasifier gas in the simplified integrated gasification combined cycle (IGCC) process. As part of the regenerative sulfur removal process development, reactor models are needed for scale-up. Steady-state kinetic reactor models are needed for reactor sizing, and dynamic models can be used for process control design and operator training. The regenerative sulfur removal process studied in this paper consists of two side-by-side fluidized bed reactors operating at temperatures of 400... The pilot-scale test run results, obtained in test runs of the sulfur removal process with real coal gasifier gas, have been used for parameter estimation in a model that does not account for bed hydrodynamics. The validity of the reactor model for commercial-scale design applications is discussed.

  6. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo...

  7. Printing Outside the Box: Additive Manufacturing Processes for Fabrication of Large Aerospace Structures

    Science.gov (United States)

    Babai, Majid; Peters, Warren

    2015-01-01

    To achieve NASA's mission of space exploration, innovative manufacturing processes are being applied to the fabrication of propulsion elements. Liquid rocket engines (LREs) comprise a thrust chamber and nozzle extension, as illustrated in figure 1 for the J2X upper-stage engine. Development of the J2X engine, designed for the Ares I launch vehicle, is currently being incorporated on the Space Launch System. A nozzle extension is attached to the combustion chamber to obtain the expansion ratio needed to increase specific impulse. If the nozzle extension could be printed as one piece using free-form additive manufacturing (AM) processes, rather than the current method of forming welded parts, a considerable time saving could be realized. Not only would this provide a more homogeneous microstructure than a welded structure, but it could also greatly shorten the overall fabrication time. The main objective of this study is to fabricate test specimens using a pulsed arc source and solid wire, as shown in figure 2. The mechanical properties of these specimens will be compared with those fabricated using the powder-bed, selective laser melting technology at NASA Marshall Space Flight Center. As printed components become larger, maintaining a constant temperature during the build process becomes critical. This predictive capability will require modeling of the moving heat source, as illustrated in figure 3. Predictive understanding of the heat profile will allow a constant temperature to be maintained as a function of height from the substrate while printing complex shapes. In addition, to avoid slumping, this will also allow better control of the microstructural development and hence the properties. Figure 4 shows a preliminary comparison of the mechanical properties obtained.

  8. Utilization of Workflow Process Maps to Analyze Gaps in Critical Event Notification at a Large, Urban Hospital.

    Science.gov (United States)

    Bowen, Meredith; Prater, Adam; Safdar, Nabile M; Dehkharghani, Seena; Fountain, Jack A

    2016-08-01

    Stroke care is a time-sensitive workflow involving multiple specialties acting in unison, often relying on one-way paging systems to alert care providers. The goal of this study was to map and quantitatively evaluate such a system and address communication gaps with system improvements. A workflow process map of the stroke notification system at a large, urban hospital was created via observation and interviews with hospital staff. We recorded pager communication regarding 45 patients in the emergency department (ED), neuroradiology reading room (NRR), and a clinician residence (CR), categorizing transmissions as successful or unsuccessful (dropped or unintelligible). Data analysis and consultation with information technology staff and the vendor informed a quality intervention: replacing one paging antenna and adding another. Data from a 1-month post-intervention period were collected. Error rates before and after were compared using a chi-squared test. Seventy-five pages regarding 45 patients were recorded pre-intervention; 88 pages regarding 86 patients were recorded post-intervention. Initial transmission error rates in the ED, NRR, and CR were 40.0, 22.7, and 12.0 %. Post-intervention, error rates were 5.1, 18.8, and 1.1 %, a statistically significant improvement in the ED. The workflow process map effectively defined communication failure parameters, allowing for systematic testing and intervention to improve communication in essential clinical locations.
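
    The before/after comparison can be sketched as follows. The transmission counts are hypothetical (the record reports only error rates and page totals); scipy's chi2_contingency tests whether the error proportion changed between the two periods.

    ```python
    from scipy.stats import chi2_contingency

    # Hypothetical ED transmission counts, chosen to roughly match the
    # reported error rates (~40% pre-intervention, ~5% post-intervention).
    pre  = {"errors": 30, "ok": 45}
    post = {"errors": 4,  "ok": 84}

    table = [[pre["errors"], pre["ok"]],
             [post["errors"], post["ok"]]]
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p={p:.4f}")   # small p: error rate changed
    ```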

  9. Reflections on Teaching a Large Class.

    Science.gov (United States)

    Miner, Rick

    1992-01-01

    Uses an analysis of small- and large-class differences as a framework for planning for and teaching a large class. Analyzes the process of developing and offering an organizational behavior class to 141 college students. Suggests ways to improve teaching effectiveness by minimizing psychological and physical distances, redistributing resources,…

  10. Discovering Reference Process Models by Mining Process Variants

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas

    Recently, a new generation of adaptive Process-Aware Information Systems (PAIS) has emerged, which allows for dynamic process and service changes (e.g., to insert, delete, and move activities and service executions in a running process). This, in turn, has led to a large number of process variants

  11. A Novel Method of Fabricating Flexible Transparent Conductive Large Area Graphene Film

    International Nuclear Information System (INIS)

    Fan Tian-Ju; Yuan Chun-Qiu; Tang Wei; Tong Song-Zhao; Huang Wei; Min Yong-Gang; Liu Yi-Dong; Epstein, Arthur J.

    2015-01-01

    We fabricate flexible, conductive and transparent graphene films on polyethylene terephthalate (PET) substrates and prepare large-area graphene films from graphite oxide sheets by a new technical process. The multi-layer graphene oxide sheets can be chemically reduced by HNO3 and HI to form a highly conductive graphene film on a substrate at low temperature. The reduced graphene oxide films show a sheet resistance of 476 Ω/sq with a transmittance of 76% at 550 nm (6 layers). The technique used to produce the transparent conductive graphene thin film is facile and inexpensive, and can be tuned for large-area production for electronics or touch screens. (paper)

  12. Lepton number violation in theories with a large number of standard model copies

    International Nuclear Information System (INIS)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-01-01

    We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.

  13. A digital gigapixel large-format tile-scan camera.

    Science.gov (United States)

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size, and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
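
    The focal-stack merging step can be sketched as a per-pixel sharpness selection: for each pixel, keep the stack slice with the strongest local Laplacian response. The synthetic stack below is an illustrative stand-in; the camera's actual algorithm also compensates for the magnification variations between slices.

    ```python
    import numpy as np
    from scipy.ndimage import laplace, gaussian_filter

    rng = np.random.default_rng(0)
    scene = rng.uniform(0.0, 1.0, (64, 64))            # stand-in "scene"
    # Synthetic focal stack: one sharp slice and two defocused ones.
    stack = np.stack([gaussian_filter(scene, s) for s in (0.0, 1.0, 3.0)])

    # Per-pixel sharpness score: absolute Laplacian response per slice.
    score = np.stack([np.abs(laplace(s)) for s in stack])
    best = np.argmax(score, axis=0)                    # index of sharpest slice
    fused = np.take_along_axis(stack, best[None], axis=0)[0]

    # In this toy example slice 0 is sharp everywhere, so `best` is mostly 0;
    # in a real stack each slice wins where its focal plane intersects the scene.
    print(best.mean(), fused.shape)
    ```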

  14. Large-scale inference of gene function through phylogenetic annotation of Gene Ontology terms: case study of the apoptosis and autophagy cellular processes.

    Science.gov (United States)

    Feuermann, Marc; Gaudet, Pascale; Mi, Huaiyu; Lewis, Suzanna E; Thomas, Paul D

    2016-01-01

    We previously reported a paradigm for large-scale phylogenomic analysis of gene families that takes advantage of the large corpus of experimentally supported Gene Ontology (GO) annotations. This 'GO Phylogenetic Annotation' approach integrates GO annotations from evolutionarily related genes across ∼100 different organisms in the context of a gene family tree, in which curators build an explicit model of the evolution of gene functions. GO Phylogenetic Annotation models the gain and loss of functions in a gene family tree, which is used to infer the functions of uncharacterized (or incompletely characterized) gene products, even for human proteins that are relatively well studied. Here, we report our results from applying this paradigm to two well-characterized cellular processes, apoptosis and autophagy. This revealed several important observations with respect to GO annotations and how they can be used for function inference. Notably, we applied only a small fraction of the experimentally supported GO annotations to infer function in other family members. The majority of other annotations describe indirect effects, phenotypes or results from high-throughput experiments. In addition, we show here how feedback from phylogenetic annotation leads to significant improvements in the PANTHER trees, the GO annotations and GO itself. Thus GO phylogenetic annotation both increases the quantity and improves the accuracy of the GO annotations provided to the research community. We expect these phylogenetically based annotations to be of broad use in gene enrichment analysis as well as other applications of GO annotations. Database URL: http://amigo.geneontology.org/amigo
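
    The gain/loss propagation idea can be sketched as a simple tree traversal: a term gained at an ancestor is inherited by all descendants unless explicitly lost on a branch. The tree and events below are invented for illustration (GO:0006915 is the real GO term for apoptotic process); the actual curation workflow is manual and considerably richer.

    ```python
    # Toy gene-family tree: internal nodes map to children; leaves are extant genes.
    tree = {"root": ["anc1", "geneC"], "anc1": ["geneA", "geneB"]}
    # Explicit gain/loss events placed on branches by a (hypothetical) curator.
    events = {"anc1": {"gain": {"GO:0006915"}}, "geneB": {"loss": {"GO:0006915"}}}

    def annotations(node, inherited=frozenset()):
        """Propagate GO terms down the tree, applying gains and losses."""
        ev = events.get(node, {})
        current = (set(inherited) | ev.get("gain", set())) - ev.get("loss", set())
        if node not in tree:                  # leaf: an extant gene product
            return {node: current}
        out = {}
        for child in tree[node]:
            out.update(annotations(child, current))
        return out

    # geneA inherits the apoptosis term gained at anc1; geneB lost it; geneC never had it.
    print(annotations("root"))
    ```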

  15. Dew point vs bubble point : a misunderstood constraint on gravity drainage processes

    Energy Technology Data Exchange (ETDEWEB)

    Nenninger, J. [N-Solv Corp., Calgary, AB (Canada); Gunnewiek, L. [Hatch Ltd., Mississauga, ON (Canada)

    2009-07-01

    This study demonstrated that gravity drainage processes that use blended fluids such as solvents have an inherently unstable material balance due to differences between dew point and bubble point compositions. The instability can lead to the accumulation of volatile components within the chamber and impair mass and heat transfer processes. Case studies were used to demonstrate the large temperature gradients within the vapour chamber caused by temperature differences between the bubble point and dew point of blended fluids. A review of published data showed that many experiments on in-situ processes do not account for unstable material balances caused by a lack of steam trap control. Temperature profiles measured during steam-assisted gravity drainage (SAGD) showed significant temperature depressions caused by methane accumulation at the outside perimeter of the steam chamber. It was demonstrated that the condensation of large volumes of purified solvents provided an efficient mechanism for the removal of methane from the chamber. It was concluded that gravity drainage processes can be optimized by using pure propane during the injection process. 22 refs., 1 tab., 18 figs.

  16. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    Science.gov (United States)

    Maly, K.

    1998-01-01

    Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event-filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated
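
    The subscription-plus-filtering idea can be sketched in a few lines: consumers register predicates, and only matching events are forwarded, which is what reduces the monitoring traffic. The event schema and API names are hypothetical illustrations, not the architecture's actual interfaces.

    ```python
    # Minimal publish/subscribe monitor with event filtering at the source.
    class Monitor:
        def __init__(self):
            self.subs = []                    # (predicate, callback) pairs

        def subscribe(self, pred, callback):
            self.subs.append((pred, callback))

        def publish(self, event):
            # Forward an event only to subscribers whose filter matches,
            # so uninteresting events never leave the monitoring layer.
            for pred, cb in self.subs:
                if pred(event):
                    cb(event)

    mon = Monitor()
    mon.subscribe(lambda e: e["severity"] >= 3,
                  lambda e: print("ALERT:", e))
    mon.publish({"src": "node7", "severity": 1})   # filtered out
    mon.publish({"src": "node3", "severity": 4})   # forwarded to subscriber
    ```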

  17. Computer-Controlled Cylindrical Polishing Process for Large X-Ray Mirror Mandrels

    Science.gov (United States)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    We are developing high-energy grazing-incidence shell optics for hard-x-ray telescopes. The resolution of a mirror shell depends on the quality of the cylindrical mandrel from which it is replicated. Mid-spatial-frequency axial figure error is a dominant contributor in the error budget of the mandrel. This paper presents our efforts to develop a deterministic cylindrical polishing process that keeps mid-spatial-frequency axial figure errors to a minimum. Simulation software was developed to model the residual surface figure errors of a mandrel due to the polishing process parameters and the tools used, as well as to compute the optical performance of the optics. The study carried out using this software focused on establishing a relationship between the polishing process parameters and the generation of mid-spatial-frequency errors. The process parameters modeled are the speeds of the lap and the mandrel, the tool's influence function, the contour path (dwell) of the tools, their shape, and the distribution of the tools on the polishing lap. Using inputs from the mathematical model, a mandrel having a conically approximated Wolter-1 geometry has been polished on a newly developed computer-controlled cylindrical polishing machine. The preliminary results of a series of polishing experiments demonstrate qualitative agreement with the developed model. We report our first experimental results and discuss plans for further improvements in the polishing process. The ability to simulate the polishing process is critical to optimizing it, improving mandrel quality and significantly reducing the cost of mandrel production.
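
    The removal model behind such deterministic polishing can be sketched in one dimension: material removal is the convolution of the tool influence function with the dwell time along the path, and the dwell map is iterated until predicted removal matches the target. The Gaussian influence function, target profile, and multiplicative update below are illustrative assumptions, not the authors' actual simulation.

    ```python
    import numpy as np

    x = np.linspace(-1.0, 1.0, 201)               # axial position along mandrel
    influence = np.exp(-(x / 0.1) ** 2)           # tool removal footprint (assumed)
    influence /= influence.sum()                  # normalize: unit removal per unit dwell

    target = 1.0 + 0.2 * np.cos(4 * np.pi * x)    # desired removal depth profile
    dwell = target.copy()                         # naive first guess for dwell time
    for _ in range(50):                           # simple iterative dwell correction
        removal = np.convolve(dwell, influence, mode="same")
        dwell *= target / np.maximum(removal, 1e-9)

    removal = np.convolve(dwell, influence, mode="same")
    print(float(np.abs(removal - target).max()))  # residual figure error
    ```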

  18. Quality Function Deployment for Large Systems

    Science.gov (United States)

    Dean, Edwin B.

    1992-01-01

    Quality Function Deployment (QFD) is typically applied to small subsystems. This paper describes efforts to extend QFD to large scale systems. It links QFD to the system engineering process, the concurrent engineering process, the robust design process, and the costing process. The effect is to generate a tightly linked project management process of high dimensionality which flushes out issues early to provide a high quality, low cost, and, hence, competitive product. A pre-QFD matrix linking customers to customer desires is described.

  19. In situ vitrification large-scale operational acceptance test analysis

    International Nuclear Information System (INIS)

    Buelt, J.L.; Carter, J.G.

    1986-05-01

    A thermal treatment process is currently under study to provide possible enhancement of in-place stabilization of transuranic and chemically contaminated soil sites. The process is known as in situ vitrification (ISV). In situ vitrification is a remedial action process that destroys solid and liquid organic contaminants and incorporates radionuclides into a glass-like material that renders contaminants substantially less mobile and less likely to impact the environment. A large-scale operational acceptance test (LSOAT) was recently completed in which more than 180 t of vitrified soil were produced in each of three adjacent settings. The LSOAT demonstrated that the process conforms to the functional design criteria necessary for the large-scale radioactive test (LSRT) to be conducted following verification of the performance capabilities of the process. The energy requirements and vitrified block size, shape, and mass are sufficiently equivalent to those predicted by the ISV mathematical model to confirm its usefulness as a predictive tool. The LSOAT demonstrated an electrode replacement technique, which can be used if an electrode fails, and techniques have been identified to minimize air oxidation, thereby extending electrode life. A statistical analysis was employed during the LSOAT to identify graphite collars and an insulative surface as successful cold cap subsidence techniques. The LSOAT also showed that even under worst-case conditions, the off-gas system exceeds the flow requirements necessary to maintain a negative pressure on the hood covering the area being vitrified. The retention of simulated radionuclides and chemicals in the soil and off-gas system exceeds requirements so that projected emissions are one to two orders of magnitude below the maximum permissible concentrations of contaminants at the stack

  20. Cosmic Ecstasy and Process Theology

    Directory of Open Access Journals (Sweden)

    Blair Reynolds

    2006-01-01

    Full Text Available The notion that God and the world are mutually interdependent is generally taken to be unique to twentieth-century process theology. Largely, process thinkers have focused on classical theists, rather than the mystics. My thesis, however, is that, centuries before process came along, there were Western mystical concepts stressing that God needed the universe in order to become conscious and complete. In support of my thesis, I will provide a synopsis of the doctrines of God as found in mystics such as Boehme, Dionysius, Eckhart, and then show how Whitehead’s aesthetic provides a coherent philosophical psychology of ecstasy. Key words: aesthetic experience, causal efficacy, consequent nature of God, ecstasy, feeling, German Romanticism, primordial nature of God, reformed subjectivist principle, Nicht, unconscious experience.

  1. Electron impact ionization of large krypton clusters

    Institute of Scientific and Technical Information of China (English)

    Li Shao-Hui; Li Ru-Xin; Ni Guo-Quan; Xu Zhi-Zhan

    2004-01-01

    We show that the detection of ionization of very large van der Waals clusters in a pulsed jet or a beam can be realized by using a fast ion gauge. Rapid positive-feedback electron impact ionization and fragmentation processes, initially ignited by electron impact ionization of the krypton clusters by the electron current of the ion gauge, result in the appearance of a progressional oscillation-like ion spectrum, or of just a single fast event under critical conditions. Each line in the spectrum represents a correlated explosion or avalanche ionization of the clusters. The phenomena have been analysed qualitatively along with a Rayleigh scattering experiment on the corresponding cluster jet.

  2. Large deviations for noninteracting infinite-particle systems

    International Nuclear Information System (INIS)

    Donsker, M.D.; Varadhan, S.R.S.

    1987-01-01

    A large deviation property is established for noninteracting infinite particle systems. Previous large deviation results obtained by the authors involved a single I-function because the cases treated always involved a unique invariant measure for the process. In the context of this paper there is an infinite family of invariant measures and a corresponding infinite family of I-functions governing the large deviations

  3. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. Large-scale simulations involve such a variety of scales and such physical complexity that corresponding experiments and/or theories do not exist. In the nuclear field, estimating real phenomena from simulations is indispensable for improving the safety and security of nuclear power plants. Here, analysis of the uncertainty included in simulations is needed to reveal the sensitivity of uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge, and to establish a degree of certainty through verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)', a set of computing processes with deductive and inductive approaches modelled on human reasoning. Our idea is to execute deductive and inductive simulations contrasted with deductive and inductive approaches. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective in reducing the uncertainty and obtaining a degree of certainty. (author)

  4. Study of a large scale neutron measurement channel

    International Nuclear Information System (INIS)

    Amarouayache, Anissa; Ben Hadid, Hayet.

    1982-12-01

    A large-scale measurement channel allows the processing of the signal coming from a single neutronic sensor in three different running modes: pulse, fluctuation and current. The study described in this note includes three parts: - A theoretical study of the large-scale channel and a brief description of it are given, together with the results obtained so far in this domain. - The fluctuation mode is studied thoroughly and the improvements to be made are defined. The study of a linear fluctuation channel with automatic scale commutation is described and the test results are given. In this large-scale channel, the data processing method is analogue. - To overcome the problems generated by analogue processing of the fluctuation signal, a digital data processing method is tested and its validity is proved. The results obtained on a test system built according to this method are given and a preliminary plan for further research is defined [fr

  5. Large Data Visualization with Open-Source Tools

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Visualization and post-processing of large data have become increasingly challenging and require more and more tools to support the diversity of data to process. In this seminar, we will present a suite of open-source tools supported and developed by Kitware to perform large-scale data visualization and analysis. In particular, we will present ParaView, an open-source tool for parallel visualization of massive datasets, the Visualization Toolkit (VTK), an open-source toolkit for scientific visualization, and Tangelohub, a suite of tools for large data analytics. About the speaker Julien Jomier is directing Kitware's European subsidiary in Lyon, France, where he focuses on European business development. Julien works on a variety of projects in the areas of parallel and distributed computing, mobile computing, image processing, and visualization. He is one of the developers of the Insight Toolkit (ITK), the Visualization Toolkit (VTK), and ParaView. Julien is also leading the CDash project, an open-source co...

  6. Hybrid Laser Welding of Large Steel Structures

    DEFF Research Database (Denmark)

    Farrokhi, Farhang

    Manufacturing of large steel structures requires the processing of thick-section steels. Welding is one of the main processes in the manufacturing of such structures and accounts for a significant part of the production costs. One way to reduce production costs is to use hybrid laser welding technology instead of conventional arc welding methods. However, hybrid laser welding is a complicated process that involves several complex, highly coupled physical phenomena. Understanding the process is very important for obtaining quality welds in an efficient way. This thesis investigates two different challenges related to the hybrid laser welding of thick-section steel plates. Employing empirical and analytical approaches, it attempts to provide further knowledge towards obtaining quality welds in the manufacturing of large steel structures.

  7. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  8. Manufacturing process to reduce large grain growth in zirconium alloys

    International Nuclear Information System (INIS)

    Rosecrans, P.M.

    1987-01-01

    A method is described of treating cold-worked zirconium alloys to reduce large grain growth during thermal treatment above the recrystallization temperature. The method comprises heating the zirconium alloy at a temperature of about 1300°F to 1350°F for about 1 to 3 hours subsequent to cold working and prior to the thermal treatment at a temperature of between 1450°F and 1550°F, the thermal treatment temperature being above the recrystallization temperature

  9. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    Science.gov (United States)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which can aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model intercomparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary-layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary-layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  10. Integrated process development-a robust, rapid method for inclusion body harvesting and processing at the microscale level.

    Science.gov (United States)

    Walther, Cornelia; Kellner, Martin; Berkemeyer, Matthias; Brocard, Cécile; Dürauer, Astrid

    2017-10-21

    Escherichia coli stores large amounts of highly pure product within inclusion bodies (IBs). To take advantage of this beneficial feature, after cell disintegration, the first step to optimal product recovery is efficient IB preparation. This step is also important in evaluating upstream optimization and process development, due to the potential impact of bioprocessing conditions on product quality and on the nanoscale properties of IBs. Proper IB preparation is often neglected, because laboratory-scale methods require large amounts of material and labor. Miniaturization and parallelization can accelerate analyses of individual processing steps and provide a deeper understanding of upstream and downstream processing interdependencies. Consequently, reproducible, predictive microscale methods are in demand. In the present study, we complemented a recently established high-throughput cell disruption method with a microscale method for preparing purified IBs. This preparation provided results comparable to laboratory-scale IB processing regarding impurity depletion and product loss. Furthermore, with this method, we performed a "design of experiments" study to demonstrate the influence of fermentation conditions on the performance of subsequent downstream steps and product quality. We showed that this approach provided a 300-fold reduction in material consumption for each fermentation condition and a 24-fold reduction in processing time for 24 samples.

  11. Show-Bix &

    DEFF Research Database (Denmark)

    2014-01-01

    The anti-reenactment 'Show-Bix &' consists of 5 slide projectors, a dial phone, quintophonic sound, and interactive elements. A responsive interface will enable the slide projectors to show copies of original slides from the Show-Bix piece ”March på Stedet”, 265 images in total. The copies are...

  12. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    Science.gov (United States)

    Goldman, H.; Wolf, M.

    1979-01-01

    Analyses of slicing processes and junction formation processes are presented. A simple method is described for evaluating the relative economic merits of competing process options with respect to the cost of energy produced by the system. An energy consumption analysis was developed and applied to determine the energy consumption in the solar module fabrication process sequence, from the mining of the SiO2 to shipping. The analysis shows that current technology practice involves inordinate energy use in the purification step and large wastage of the invested energy through losses, particularly poor conversion in slicing, as well as inadequate yields throughout. Cell-process energy expenditures already show a downward trend based on increased throughput rates. The large improvement, however, depends on the introduction of a more efficient purification process and of acceptable ribbon-growing techniques.
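
    The compounding effect of yield losses on invested energy can be made concrete with a short back-of-envelope sketch: each step's losses waste all the energy already embodied in the discarded material, so energy per good unit grows faster than the per-step energies alone suggest. The step names, energies, and yields below are illustrative assumptions, not the study's data.

    ```python
    # (name, energy added at this step per unit, step yield) - all assumed values
    steps = [
        ("purification",   100.0, 0.95),
        ("crystal growth",  40.0, 0.90),
        ("slicing",         10.0, 0.60),   # kerf loss dominates here
        ("cell + module",   30.0, 0.92),
    ]

    embodied = 0.0
    for name, energy, yld in steps:
        # Dividing by the yield charges the energy of discarded units
        # (including all energy invested upstream) to the surviving ones.
        embodied = (embodied + energy) / yld

    ideal = sum(e for _, e, _ in steps)
    print(f"energy per good unit: {embodied:.1f} (vs {ideal:.1f} at 100% yield)")
    ```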

  13. Rhesus monkeys (Macaca mulatta) show robust primacy and recency in memory for lists from small, but not large, image sets.

    Science.gov (United States)

    Basile, Benjamin M; Hampton, Robert R

    2010-02-01

    The combination of primacy and recency produces a U-shaped serial position curve typical of memory for lists. In humans, primacy is often thought to result from rehearsal, but there is little evidence for rehearsal in nonhumans. To further evaluate the possibility that rehearsal contributes to primacy in monkeys, we compared memory for lists of familiar stimuli (which may be easier to rehearse) to memory for unfamiliar stimuli (which are likely difficult to rehearse). Six rhesus monkeys saw lists of five images drawn from either large, medium, or small image sets. After presentation of each list, memory for one item was assessed using a serial probe recognition test. Across four experiments, we found robust primacy and recency with lists drawn from small and medium, but not large, image sets. This finding is consistent with the idea that familiar items are easier to rehearse and that rehearsal contributes to primacy, warranting further study of the possibility of rehearsal in monkeys. However, alternative interpretations are also viable and are discussed.

  14. Hidden supersymmetry and large N

    International Nuclear Information System (INIS)

    Alfaro, J.

    1988-01-01

    In this paper we present a new method to deal with the leading order in the large-N expansion of a quantum field theory. The method uses explicitly the hidden supersymmetry that is present in the path-integral formulation of a stochastic process. In addition to this we derive a new relation that is valid in the leading order of the large-N expansion of the hermitian-matrix model for any spacetime dimension. (orig.)

  15. Large-deviation principles, stochastic effective actions, path entropies, and the structure and meaning of thermodynamic descriptions

    International Nuclear Information System (INIS)

    Smith, Eric

    2011-01-01

    The meaning of thermodynamic descriptions is found in large-deviations scaling (Ellis 1985 Entropy, Large Deviations, and Statistical Mechanics (New York: Springer); Touchette 2009 Phys. Rep. 478 1-69) of the probabilities for fluctuations of averaged quantities. The central function expressing large-deviations scaling is the entropy, which is the basis both for fluctuation theorems and for characterizing the thermodynamic interactions of systems. Freidlin-Wentzell theory (Freidlin and Wentzell 1998 Random Perturbations in Dynamical Systems 2nd edn (New York: Springer)) provides a quite general formulation of large-deviations scaling for non-equilibrium stochastic processes, through a remarkable representation in terms of a Hamiltonian dynamical system. A number of related methods now exist to construct the Freidlin-Wentzell Hamiltonian for many kinds of stochastic processes; one method due to Doi (1976 J. Phys. A: Math. Gen. 9 1465-78; 1976 J. Phys. A: Math. Gen. 9 1479) and Peliti (1985 J. Physique 46 1469; 1986 J. Phys. A: Math. Gen. 19 L365), appropriate to integer counting statistics, is widely used in reaction-diffusion theory. Using these tools together with a path-entropy method due to Jaynes (1980 Annu. Rev. Phys. Chem. 31 579-601), this review shows how to construct entropy functions that both express large-deviations scaling of fluctuations and describe system-environment interactions, for discrete stochastic processes either at or away from equilibrium. A collection of variational methods familiar within quantum field theory, but less commonly applied to the Doi-Peliti construction, is used to define a 'stochastic effective action', which is the large-deviations rate function for arbitrary non-equilibrium paths. We show how common principles of entropy maximization, applied to different ensembles of states or of histories, lead to different entropy functions and different sets of thermodynamic state variables. Yet the relations among all these levels of
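
    As standard background (not the review's own notation), the large-deviations scaling referred to here can be written compactly for the empirical average of n i.i.d. samples, with the rate function given by a Legendre transform of the cumulant generating function (Cramér's theorem):

    ```latex
    % Large-deviations scaling of fluctuations of an empirical average \bar{X}_n:
    P\!\left(\bar{X}_n \approx x\right) \;\asymp\; e^{-n\, I(x)},
    \qquad
    I(x) \;=\; \sup_{\lambda}\Bigl[\lambda x \;-\; \log \mathbb{E}\, e^{\lambda X_1}\Bigr].
    ```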

  16. The Role of Attention in Somatosensory Processing: A Multi-Trait, Multi-Method Analysis

    Science.gov (United States)

    Wodka, Ericka L.; Puts, Nicolaas A. J.; Mahone, E. Mark; Edden, Richard A. E.; Tommerdahl, Mark; Mostofsky, Stewart H.

    2016-01-01

    Sensory processing abnormalities in autism have largely been described by parent report. This study used a multi-method (parent-report and measurement), multi-trait (tactile sensitivity and attention) design to evaluate somatosensory processing in ASD. Results showed multiple significant within-method (e.g., parent report of different…

  17. IAEA Conference on Large Radiation Sources in Industry (Warsaw 1959): Which technologies of radiation processing survived and why?

    International Nuclear Information System (INIS)

    Zagorski, Z.P.

    1999-01-01

    The IAEA organized an International Conference on Large Radiation Sources in Industry in Warsaw from 8 to 12 September 1959. The proceedings of the conference were published in two volumes totalling 925 pages. This report analyses which technologies presented at the conference have survived, and why. The analysis is interesting because practically the full range of possibilities of radiation processing was already explored, and partially implemented, in the fifties; not many new technologies were presented at subsequent IAEA conferences on the same theme. Already at the time of the Warsaw conference, the important role of the economics of the technology was recognized. The present report divides the achievements of the conference into two groups: the first concerns technologies which were not implemented in the following decades, and the second those which form the basis of highly profitable, unsubsidized commercial production. The criterion for assigning a technology to the second group is the value of the quotient of the cost of the ready, saleable product, diminished by the cost of the raw material before processing, to the expense of radiation processing, being the sum of the irradiation cost and such operations as transportation of the object to and from the irradiation facility. A low value of this quotient, compared to successful technologies, bodes badly for the future of a commercial proposal. A special position among objects of radiation processing is occupied by technologies directed towards protecting or improving the environment; market economics does not apply here, and implementation has to be subsidized. (author)

  18. Strategic alliances between SMEs and large firms: An exploration of the dynamic process

    OpenAIRE

    Rothkegel, Senad; Erakovic, Ljiljana; Shepherd, Deborah

    2006-01-01

    This paper explores the dynamics in strategic alliances between small and medium sized enterprises (SMEs) and large organisations (corporates). Despite the volumes written on this subject, few studies take into account this context of interorganisational relationships. The dynamics in strategic partnerships between small and large organisations are potentially multifaceted and fraught with complexities and contradictions. The partner organisations bring diverse interests and resources to the ...

  19. Foam decontamination of large nuclear components before dismantling

    International Nuclear Information System (INIS)

    Costes, J.R.; Sahut, C.

    1998-01-01

    Following some simple theoretical considerations, the authors show that foam compositions can advantageously be circulated for a few hours in components requiring decontamination before dismantling. The technique is illustrated on six large ferritic steel valves, then on austenitic steel heat exchangers for which the Ce(III)/Ce(IV) redox pair was used to dissolve the chromium; Ce(III) was reoxidized by ozone injection into the foam vector gas. Biodegradable surfactants are used in the process; tests have shown that the foaming power disappears after a few days, provided the final radioactive liquid waste is adjusted to neutral pH, allowing subsequent coprecipitation or concentration treatment. (author)

  20. A Microwave Holographic Procedure for Large Symmetric Reflector Antennas Using a Fresnel-Zone Field Data Processing

    Directory of Open Access Journals (Sweden)

    Giuseppe Mazzarella

    2012-01-01

    In this paper we propose a new holographic procedure for the diagnostics of large reflector antennas, based on the direct use of the Fresnel-field pattern. The relation leading from the Fresnel field to the current on the reflector surface is formulated in the least-squares sense as a discrete-data inverse problem and then regularized by using a singular value decomposition approach. A detailed theoretical analysis of the problem and a full assessment of the presented technique are provided. Simulations are carried out by using the radiative near-field pattern generated with commercial software. Results show good accuracy and robustness to noise for the retrieval of the panel-to-panel misalignment of a reflector antenna.
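
    A minimal numerical sketch of the regularization step (synthetic sizes; the matrix G standing in for the discretized Fresnel-field-to-surface-current operator is our assumption for illustration):

        # Truncated-SVD solution of the least-squares inverse problem G j = e
        import numpy as np

        rng = np.random.default_rng(1)
        G = rng.normal(size=(400, 100))     # forward operator: currents -> Fresnel field
        j_true = rng.normal(size=100)       # "true" surface-current coefficients
        e = G @ j_true + 0.01 * rng.normal(size=400)   # noisy Fresnel-field data

        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        k = int(np.sum(s > 0.01 * s[0]))    # truncate at a noise-dictated threshold
        j_est = Vt[:k].T @ ((U[:, :k].T @ e) / s[:k])  # regularized pseudo-inverse solution

    Discarding the small singular values is what keeps measurement noise from being amplified into spurious panel misalignments in the retrieved current distribution.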

  1. Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation

    Directory of Open Access Journals (Sweden)

    Shunli Wang

    2016-01-01

    We introduce an automatic optimization approach for the simulation of large-scale coastal water. To solve the singularity problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is estimated by using an automatic coupling algorithm; it can handle arbitrary water depth and different underwater terrain. As a characteristic feature of coastal terrain, the coastline is detected with collision detection technology. Then, unnecessary water grid cells are simplified by the automatic simplification algorithm according to the depth. Finally, the model is calculated on the Central Processing Unit (CPU) and the simulation is implemented on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results which achieve real-time rendering on a consumer-level computer.

  2. Reliable solution processed planar perovskite hybrid solar cells with large-area uniformity by chloroform soaking and spin rinsing induced surface precipitation

    Directory of Open Access Journals (Sweden)

    Yann-Cherng Chern

    2015-08-01

    A solvent soaking and rinsing method, in which the solvent is allowed to soak all over the surface and is then drained by spinning, was found to produce perovskite layers with high uniformity on a centimeter scale and with much improved reliability. Besides the enhanced crystallinity and surface morphology due to the rinsing-induced surface precipitation that constrains the grain growth underneath in the precursor films, large-area uniformity with film thickness determined exclusively by the rotational speed of the rinsing spin was observed. With chloroform as the rinsing solvent, highly uniform and mirror-like perovskite layers of area as large as 8 cm × 8 cm were produced, and highly uniform planar perovskite solar cells with power conversion efficiency of 10.6 ± 0.2% as well as much prolonged lifetime were obtained. The high uniformity and reliability observed with this solvent soaking and rinsing method were ascribed to the low viscosity of chloroform as well as its miscibility with the solvent used in the precursor solution. Moreover, since the surface precipitation forms before the solvent draining, this solvent soaking and rinsing method may be adapted to a spinless process and be compatible with large-area and continuous production. With the large-area uniformity and reliability of the resultant perovskite layers, this chloroform soaking and rinsing approach may thus be promising for the mass production and commercialization of large-area perovskite solar cells.

  3. Estimating carbon and showing impacts of drought using satellite data in regression-tree models

    Science.gov (United States)

    Boyte, Stephen; Wylie, Bruce K.; Howard, Danny; Dahal, Devendra; Gilmanov, Tagir G.

    2018-01-01

    Integrating spatially explicit biogeophysical and remotely sensed data into regression-tree models enables the spatial extrapolation of training data over large geographic spaces, allowing a better understanding of broad-scale ecosystem processes. The current study presents annual gross primary production (GPP) and annual ecosystem respiration (RE) for 2000–2013 in several short-statured vegetation types using carbon flux data from towers that are located strategically across the conterminous United States (CONUS). We calculate carbon fluxes (annual net ecosystem production [NEP]) for each year in our study period, which includes 2012, when drought and higher-than-normal temperatures influenced vegetation productivity in large parts of the study area. We present and analyse carbon flux dynamics in the CONUS to better understand how drought affects GPP, RE, and NEP. Model accuracy metrics show strong correlation coefficients (r ≥ 94%) between training and estimated data for both GPP and RE. Overall, average annual GPP, RE, and NEP are relatively constant throughout the study period except during 2012, when almost 60% less carbon is sequestered than normal. These results allow us to conclude that this modelling method effectively estimates carbon dynamics through time and allows the exploration of impacts of meteorological anomalies and vegetation types on carbon dynamics.
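
    A minimal sketch of the extrapolation idea (synthetic data; the predictor set and the use of a generic regression tree are our assumptions, not the authors' production pipeline):

        # Train a regression tree on flux-tower samples, then extrapolate
        # annual GPP over a gridded predictor stack (illustrative only).
        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        # Hypothetical training table: rows = tower-site years; columns =
        # remotely sensed / biogeophysical predictors (e.g. NDVI, precipitation).
        X_train = rng.random((200, 3))
        y_gpp = 800 * X_train[:, 0] + 200 * X_train[:, 1] + rng.normal(0, 20, 200)

        tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=5)
        tree.fit(X_train, y_gpp)

        # Wall-to-wall extrapolation: apply the fitted tree to every grid cell.
        X_grid = rng.random((10_000, 3))    # predictor rasters flattened to rows
        gpp_map = tree.predict(X_grid)      # annual GPP estimate per cell
        # RE is mapped the same way; NEP then follows cell-by-cell as GPP - RE.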

  4. Aftershocks and triggering processes in rock fracture

    Science.gov (United States)

    Davidsen, J.; Kwiatek, G.; Goebel, T.; Stanchits, S. A.; Dresen, G.

    2017-12-01

    One of the hallmarks of our understanding of seismicity in nature is the importance of triggering processes, which makes the forecasting of seismic activity feasible. These triggering processes, by which one earthquake induces (dynamic or static) stress changes leading to potentially multiple other earthquakes, are at their core relaxation processes. A specific example of triggering is the aftershocks following a large earthquake, which have been observed to follow certain empirical relationships such as the Omori-Utsu relation. Such an empirical relation should arise from the underlying microscopic dynamics of the involved physical processes, but the exact connection remains to be established. Simple explanations have been proposed but their general applicability is unclear. Many explanations involve the picture of an earthquake as a purely frictional sliding event. Here, we present experimental evidence that these empirical relationships are not limited to frictional processes but also arise in fracture zone formation and are mostly related to compaction-type events. Our analysis is based on tri-axial compression experiments under constant displacement rate on sandstone and granite samples, using spatially located acoustic emission events and their focal mechanisms. More importantly, we show that event-event triggering plays an important role in the presence of large-scale or macroscopic imperfections, while such triggering is basically absent if no significant imperfections are present. We also show that spatial localization and an increase in activity rates close to failure do not necessarily imply triggering behavior associated with aftershocks. Only if a macroscopic crack is formed and its propagation remains subcritical do we observe significant triggering.

  5. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural, catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff, therefore, provides a foundation to approach European hydrology with respect to observed patterns on large scales, and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also make it possible to detect shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large…

  6. The role of large-scale, extratropical dynamics in climate change

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, T.G. [ed.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  7. The role of large-scale, extratropical dynamics in climate change

    International Nuclear Information System (INIS)

    Shepherd, T.G.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database

  8. Thermoelectric properties of P-type Sb2Te3 thick film processed by a screen-printing technique and a subsequent annealing process

    International Nuclear Information System (INIS)

    Kim, Sun Jin; We, Ju Hyung; Kim, Jin Sang; Kim, Gyung Soo; Cho, Byung Jin

    2014-01-01

    Highlights: • We report on thermoelectric properties of screen-printed Sb2Te3 thick film. • Subsequent annealing process determines thermoelectric properties of Sb2Te3 film. • Annealing in tellurium powder ambient contributes to tellurium-rich Sb2Te3 film. • Annealing in tellurium powder ambient enhances carrier mobility of Sb2Te3 film. -- Abstract: We herein report the thermoelectric properties of Sb2Te3 thick film fabricated by a screen-printing technique and a subsequent annealing process. Each step of the screen-printing fabrication process of Sb2Te3 thick film is described in detail. It was found that the subsequent annealing process must be carefully designed to achieve good thermoelectric properties of the screen-printed film. The results show that the annealing of the screen-printed Sb2Te3 thick film together with tellurium powder in the same process chamber significantly improves the carrier mobility by increasing the average scattering time of the carrier in the film, resulting in a large improvement of the power factor. By optimizing the annealing process, we achieved a maximum thermoelectric figure-of-merit, ZT, of 0.32 at room temperature, which is slightly higher than that of bulk Sb2Te3. Because screen-printing is a simple and low-cost process and given that it is easy to scale up to large sizes, this result will be useful for the realization of large, film-type thermoelectric devices.

  9. Auditory temporal-order thresholds show no gender differences

    NARCIS (Netherlands)

    van Kesteren, Marlieke T. R.; Wiersinga-Post, J. Esther C.

    2007-01-01

    Purpose: Several studies on auditory temporal-order processing showed gender differences. Women needed longer inter-stimulus intervals than men when indicating the temporal order of two clicks presented to the left and right ear. In this study, we examined whether we could reproduce these results in

  10. Auditory temporal-order thresholds show no gender differences

    NARCIS (Netherlands)

    van Kesteren, Marlieke T R; Wiersinga-Post, J Esther C

    2007-01-01

    PURPOSE: Several studies on auditory temporal-order processing showed gender differences. Women needed longer inter-stimulus intervals than men when indicating the temporal order of two clicks presented to the left and right ear. In this study, we examined whether we could reproduce these results in

  11. Large aperture deformable mirror with a transferred single-crystal silicon membrane actuated using large-stroke PZT Unimorph Actuators

    Science.gov (United States)

    Hishinuma, Yoshikazu; Yang, Eui-Hyeok (EH)

    2005-01-01

    We have demonstrated a large aperture (50 mm x 50 mm) continuous membrane deformable mirror (DM) with a large-stroke piezoelectric unimorph actuator array. The DM consists of a continuous, large aperture, silicon membrane 'transferred' in its entirety onto a 20 x 20 piezoelectric unimorph actuator array. A PZT unimorph actuator, 2.5 mm in diameter with optimized PZT/Si thickness and design, showed a deflection of 5.7 μm at 20 V. An assembled DM showed an operating frequency bandwidth of 30 kHz and an influence function of approximately 30%.

  12. Destruction and Reallocation of Skills Following Large Company Exit

    DEFF Research Database (Denmark)

    Holm, Jacob Rubæk; Østergaard, Christian Richter; Olesen, Thomas Roslyng

    What happens to redundant skills and workers when a large company closes down in a region? The knowledge embodied in firms is lost when firms exit. However, the skills, competences and knowledge embodied in the displaced employees are suddenly released and can become channels of knowledge transfer for other firms that hire them. This process can be very disruptive. For instance, when a large, old and well-renowned company closes down, displacing thousands of workers over a short period of time, it may be a shock to the regional economy and lead to unemployment and skill destruction. This study is based on the closure of four shipyards in Denmark from 1987-2000. The analysis is based on detailed longitudinal micro data from a matched employer-employee dataset that allows us to follow the mobility of the laid-off employees in great detail. The analysis shows…

  13. The large hadron computer

    CERN Multimedia

    Hirstius, Andreas

    2008-01-01

    Plans for dealing with the torrent of data from the Large Hadron Collider's detectors have made the CERN particle-physics lab, yet again, a pioneer in computing as well as physics. The author describes the challenges of processing and storing data in the age of petabyte science. (4 pages)

  14. Different healing process of esophageal large mucosal defects by endoscopic mucosal dissection between with and without steroid injection in an animal model.

    Science.gov (United States)

    Nonaka, Kouichi; Miyazawa, Mitsuo; Ban, Shinichi; Aikawa, Masayasu; Akimoto, Naoe; Koyama, Isamu; Kita, Hiroto

    2013-04-25

    Stricture formation is one of the major complications after endoscopic removal of large superficial squamous cell neoplasms of the esophagus, and local steroid injections have been adopted to prevent it. However, the fundamental pathological alterations related to them have not been well analyzed so far. The aim of this study was to analyze the time course of the healing process of esophageal large mucosal defects resulting in stricture formation, and its modification by local steroid injection, using an animal model. Esophageal circumferential mucosal defects were created by endoscopic mucosal dissection (ESD) in four pigs. One pig was sacrificed five minutes after the ESD, and two other pigs were followed up on endoscopy and sacrificed one week and three weeks after the ESD, respectively. The remaining pig was followed up on endoscopy with five local steroid injections and sacrificed eight weeks after the ESD. The esophageal tissues of all pigs were subjected to pathological analyses. For the pigs without steroid injection, the esophageal stricture was completed around three weeks after the ESD on both endoscopy and esophagography. Histopathological examination of the esophageal tissues revealed that spindle-shaped α-smooth muscle actin (SMA)-positive myofibroblasts arranged in a parallel fashion and extending horizontally were identified at the ulcer bed one week after the ESD, and increased, contributing to the formation of the stenotic luminal ridge covered with the regenerated epithelium three weeks after the ESD. The proper muscle layer of the stricture site was thinned, with some myocytes seemingly showing transition to the myofibroblast layer. By contrast, for the pig with steroid injection, esophageal stricture formation was not evident, with limited appearance of the spindle-shaped myofibroblasts; instead, stellate or polygonal SMA-positive stromal cells arranged haphazardly in the persistent granulation…

  15. Large wood recruitment processes and transported volumes in Swiss mountain streams during the extreme flood of August 2005

    Science.gov (United States)

    Steeb, Nicolas; Rickenmann, Dieter; Badoux, Alexandre; Rickli, Christian; Waldner, Peter

    2017-02-01

    The extreme flood event that occurred in August 2005 was the most costly (documented) natural hazard event in the history of Switzerland. The flood was accompanied by the mobilization of >69,000 m³ of large wood (LW) throughout the affected area. As recognized afterward, wood played an important role in exacerbating the damages, mainly because of log jams at bridges and weirs. The present study aimed at assessing the risk posed by wood in various catchments by investigating the amount and spatial variability of recruited and transported LW. Data regarding LW quantities were obtained by field surveys, remote sensing techniques (LiDAR), and GIS analysis and were subsequently translated into a conceptual model of wood transport mass balance. Detailed wood budgets and transport diagrams were established for four study catchments of Swiss mountain streams, showing the spatial variability of LW recruitment and deposition. Despite some uncertainties with regard to parameter assumptions, the sum of reconstructed wood input and observed deposition volumes agree reasonably well. Mass wasting such as landslides and debris flows were the dominant recruitment processes in headwater streams. In contrast, LW recruitment from lateral bank erosion became significant in the lower part of mountain streams, where the catchment reached a size of about 100 km². According to our analysis, 88% of the reconstructed total wood input was fresh, i.e., coming from living trees that were recruited from adjacent areas during the event. This implies an average deadwood contribution of 12%, most of which was estimated to have been in-channel deadwood entrained during the flood event.

  16. Large-scale patterns in Rayleigh-Benard convection

    International Nuclear Information System (INIS)

    Hardenberg, J. von; Parodi, A.; Passoni, G.; Provenzale, A.; Spiegel, E.A.

    2008-01-01

    Rayleigh-Benard convection at large Rayleigh number is characterized by the presence of intense, vertically moving plumes. Both laboratory and numerical experiments reveal that the rising and descending plumes aggregate into separate clusters so as to produce large-scale updrafts and downdrafts. The horizontal scales of the aggregates reported so far have been comparable to the horizontal extent of the containers, but it has not been clear whether that represents a limitation imposed by domain size. In this work, we present numerical simulations of convection at sufficiently large aspect ratio to ascertain whether there is an intrinsic saturation scale for the clustering process when that ratio is large enough. From a series of simulations of Rayleigh-Benard convection with Rayleigh numbers between 10⁵ and 10⁸ and with aspect ratios up to 12π, we conclude that the clustering process has a finite horizontal saturation scale with at most a weak dependence on Rayleigh number in the range studied.

  17. PROTOTYPE VIDEO EDITOR USING DIRECTX AND DIRECTSHOW

    Directory of Open Access Journals (Sweden)

    Djoni Haryadi Setiabudi

    2004-01-01

    Technology development has given people the chance to capture their memorable moments in video format. A high-quality digital video is the result of a good editing process, which in turn raises the need for an editor application. Accordingly, this work describes the process of making a simple application for video editing needs. The application development uses programming techniques often applied in multimedia applications, especially video. The first part of the application deals with video file compression and decompression; the editing of the digital video file then follows. Furthermore, the application is equipped with the facilities needed for the editing process. The application is made with Microsoft Visual C++ and DirectX technology, particularly DirectShow, and provides basic facilities that help the editing of a digital video file, producing an AVI format file once the editing process is finished. Testing shows the ability of the application to 'cut' and 'insert' video files in AVI, MPEG, MPG and DAT formats; the 'cut' and 'insert' operations can only be done in static order. The application also provides transition effects between clips, and finally saves the newly edited video file in AVI format.

  18. Large Area Active Brazing of Multi-tile Ceramic-Metal Structures

    Science.gov (United States)

    2012-05-01

    Active braze alloys bond ceramics to metals (such as titanium alloys and stainless steels) and form strong, metallurgical bonds. The major disadvantage of using active brazing for metals and ceramics, however, is the high processing temperature required, which results in large strain (stress) build-up from the inherent…

  19. The Planform Mobility of Large River Channel Confluences

    Science.gov (United States)

    Sambrook Smith, Greg; Dixon, Simon; Nicholas, Andrew; Bull, Jon; Vardy, Mark; Best, James; Goodbred, Steven; Sarker, Maminul

    2017-04-01

    Large river confluences are widely acknowledged as exerting a controlling influence upon both upstream and downstream morphology and thus channel planform evolution. Despite their importance, little is known concerning their longer-term evolution and planform morphodynamics, with much of the literature focusing on confluences as representing fixed, nodal points in the fluvial network. In contrast, some studies of large sand bed rivers in India and Bangladesh have shown large river confluences can be highly mobile, although the extent to which this is representative of large confluences around the world is unknown. Confluences have also been shown to generate substantial bed scours, and if the confluence location is mobile these scours could 'comb' across wide areas. This paper presents field data on large confluence morphologies in the Ganges-Brahmaputra-Meghna river basin, illustrating the spatial extent of large river bed scours and showing scour depth can extend below base level, enhancing long-term preservation potential. Based on a global review of the planform of large river confluences using Landsat imagery from 1972 to 2014, this study demonstrates such scour features can be highly mobile and there is an array of confluence morphodynamic types: from freely migrating confluences, through confluences migrating on decadal timescales, to fixed confluences. Based on this analysis, a conceptual model of large river confluence types is proposed, which shows large river confluences can be sites of extensive bank erosion and avulsion, creating substantial management challenges. We quantify the abundance of mobile confluence types by classifying all large confluences in both the Amazon and Ganges-Brahmaputra-Meghna basins, showing these two large rivers have contrasting confluence morphodynamics. We show large river confluences have multiple scales of planform adjustment with important implications for river management, infrastructure and interpretation of the rock record.

  20. Direct large-scale synthesis of perovskite barium strontium titanate nano-particles from solutions

    International Nuclear Information System (INIS)

    Qi Jianquan; Wang Yu; Wan Pingchen; Long Tuli; Chan, Helen Lai Wah

    2005-01-01

    This paper reports a wet chemical synthesis technique for large-scale fabrication of perovskite barium strontium titanate nano-particles near room temperature and under ambient pressure. The process employs titanium alkoxide and alkali earth hydroxides as starting materials and involves very simple operation steps. Particle size and crystallinity of the particles are controllable by changing the processing parameters. Observations by X-ray diffraction, scanning electron microscopy and transmission electron microscopy (TEM) indicate that the particles are well-crystallized, chemically stoichiometric and ~50 nm in diameter. The nanoparticles can be sintered into ceramics at 1150 °C and show typical ferroelectric hysteresis loops.

  1. Pediatric Intubation by Paramedics in a Large Emergency Medical Services System: Process, Challenges, and Outcomes.

    Science.gov (United States)

    Prekker, Matthew E; Delgado, Fernanda; Shin, Jenny; Kwok, Heemun; Johnson, Nicholas J; Carlbom, David; Grabinsky, Andreas; Brogan, Thomas V; King, Mary A; Rea, Thomas D

    2016-01-01

    Pediatric intubation is a core paramedic skill in some emergency medical services (EMS) systems. The literature lacks a detailed examination of the challenges and subsequent adjustments made by paramedics when intubating children in the out-of-hospital setting. We undertake a descriptive evaluation of the process of out-of-hospital pediatric intubation, focusing on challenges, adjustments, and outcomes. We performed a retrospective analysis of EMS responses between 2006 and 2012 that involved attempted intubation of children younger than 13 years by paramedics in a large, metropolitan EMS system. We calculated the incidence rate of attempted pediatric intubation with EMS and county census data. To summarize the intubation process, we linked a detailed out-of-hospital airway registry with clinical records from EMS, hospital, or autopsy encounters for each child. The main outcome measures were procedural challenges, procedural success, complications, and patient disposition. Paramedics attempted intubation in 299 cases during 6.3 years, with an incidence of 1 pediatric intubation per 2,198 EMS responses. Less than half of intubations (44%) were for patients in cardiac arrest. Two thirds of patients were intubated on the first attempt (66%), and overall success was 97%. The most prevalent challenge was body fluids obscuring the laryngeal view (33%). After a failed first intubation attempt, corrective actions taken by paramedics included changing equipment (33%), suctioning (32%), and repositioning the patient (27%). Six patients (2%) experienced peri-intubation cardiac arrest and 1 patient had an iatrogenic tracheal injury. No esophageal intubations were observed. Of patients transported to the hospital, 86% were admitted to intensive care and hospital mortality was 27%. Pediatric intubation by paramedics was performed infrequently in this EMS system. Although overall intubation success was high, a detailed evaluation of the process of intubation revealed specific

  2. A new decomposition method for parallel processing multi-level optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Min Soo; Choi, Dong Hoon

    2002-01-01

    In practical designs, most multidisciplinary problems have a large and complicated design system. Since multidisciplinary problems involve hundreds of analyses and thousands of variables, the grouping of the analyses and their order within a group affect the speed of the total design cycle. Therefore, it is very important to reorder and regroup the original design processes in order to minimize the total computational cost, by decomposing large multidisciplinary problems into several MultiDisciplinary Analysis SubSystems (MDASS) and by processing them in parallel. In this study, a new decomposition method is proposed for parallel processing of multidisciplinary design optimization, such as Collaborative Optimization (CO) and the Individual Discipline Feasible (IDF) method. Numerical results for two example problems are presented to show the feasibility of the proposed method.

  3. THE IMPORTANCE OF INFORMATION SYSTEMS IN THE MANAGEMENT AND PROCESSING OF LARGE DATA VOLUMES IN PUBLIC INSTITUTIONS

    Directory of Open Access Journals (Sweden)

    CARINA-ELENA STEGĂROIU

    2016-12-01

    In a computerized society, technological resources become a source of identification for any community, institution or country. Globalization of information becomes a reality, with all resources subordinated to the World Wide Web, the information highways and the Internet. "Information technology - with its most important branch, data management computer science - enters a new era, in which the computer opens up a navigable and transparent communication space, focusing on information". Therefore, in an information-based economy, information systems have been established which, built on management systems using algebraic methods with applications in economic engineering, have come to manage and process large volumes of data, especially in public institutions. Consequently, the Ministry of Public Affairs has implemented the "Increasing the public administration's responsibility by modernising the information systems for generating the reports of the financial situations of public institutions" project (FOREXEBUG, code SMIS 34952), for which it received in 2012 non-refundable financing from the European Social Fund through the Operational Program for Developing the Administrative Capacity 2007-2013, and based on which this paper analyses the usefulness of implementing such a program in public institutions. Such a system aims to achieve a new form of reporting of budget execution and financial statements (including information related to legal commitments) submitted monthly by each public institution in electronic, standardized, secure form, increasing the reliability of the data collected by cross-checking data from the treasury and providing reliable information for use by the Ministry of Finance, public institutions, other relevant institutions and the public, both at the level of detail and in the consolidation possibilities at various levels, in parallel with their use for…

  4. Large-scale climatic anomalies affect marine predator foraging behaviour and demography

    Science.gov (United States)

    Bost, Charles A.; Cotté, Cedric; Terray, Pascal; Barbraud, Christophe; Bon, Cécile; Delord, Karine; Gimenez, Olivier; Handrich, Yves; Naito, Yasuhiko; Guinet, Christophe; Weimerskirch, Henri

    2015-10-01

    Determining the links between the behavioural and population responses of wild species to environmental variations is critical for understanding the impact of climate variability on ecosystems. Using long-term data sets, we show how large-scale climatic anomalies in the Southern Hemisphere affect the foraging behaviour and population dynamics of a key marine predator, the king penguin. When large-scale subtropical dipole events occur simultaneously in both subtropical Southern Indian and Atlantic Oceans, they generate tropical anomalies that shift the foraging zone southward. Consequently the distances that penguins foraged from the colony and their feeding depths increased and the population size decreased. This represents an example of a robust and fast impact of large-scale climatic anomalies affecting a marine predator through changes in its at-sea behaviour and demography, despite lack of information on prey availability. Our results highlight a possible behavioural mechanism through which climate variability may affect population processes.

  5. Radiation processing and sterilization

    International Nuclear Information System (INIS)

    Takehisa, M.; Machi, S.

    1987-01-01

    The growth of commercial radiation processing has been largely dependent on the achievement in production of reliable and less expensive radiation facilities, as well as on the research and development effort for new applications. Although world statistics of the growth are not available, Figure 20-1 shows steady growth in the number of EBAs installed in Japan for various purposes. The growth rate of Co-60 sources supplied by AECL (Atomic Energy of Canada Limited), which supplies approximately 80% of the world market, is approximately 10% per year, including future growth estimates. Potential applications of radiation processing under development are in environmental conservation (e.g., treatment of sewage sludge, waste water, and exhaust gases) and bioengineering (e.g., immobilization of bioactive materials). The authors introduce here the characteristics of radiation processing, examples of its industrial applications, the status of its research and development activities, and an economic analysis.

  6. Poisson processes

    NARCIS (Netherlands)

    Boxma, O.J.; Yechiali, U.; Ruggeri, F.; Kenett, R.S.; Faltin, F.W.

    2007-01-01

    The Poisson process is a stochastic counting process that arises naturally in a large variety of daily life situations. We present a few definitions of the Poisson process and discuss several properties as well as relations to some well-known probability distributions. We further briefly discuss the
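
    For reference, the standard characterization (textbook material, not specific to this record): a counting process N(t) with N(0) = 0 and independent, stationary increments is a Poisson process of rate \lambda when

        P\{N(t+s) - N(s) = k\} = e^{-\lambda t}\,\frac{(\lambda t)^k}{k!}, \qquad k = 0, 1, 2, \dots,

    equivalently, the times between successive events are i.i.d. exponential with mean 1/\lambda, which is why the process arises so naturally in everyday counting situations.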

  7. Detecting Difference between Process Models Based on the Refined Process Structure Tree

    Directory of Open Access Journals (Sweden)

    Jing Fan

    2017-01-01

    The development of mobile workflow management systems (mWfMS) leads to a large number of business process models. In the meantime, the location restriction embedded in mWfMS may result in different process models for a single business process. In order to help users quickly locate the difference and rebuild the process model, detecting the difference between different process models is needed. Existing detection methods either provide a dissimilarity value to represent the difference or use a predefined difference template to generate the result, neither of which can reflect the entire composition of the difference. Hence, in this paper, we present a new approach to solve this problem. Firstly, we parse the process models into their corresponding refined process structure trees (PSTs), that is, we decompose a process model into a hierarchy of subprocess models. Then we design a method to convert the PST to its corresponding task-based process structure tree (TPST). As a consequence, the problem of detecting the difference between two process models is transformed into detecting the difference between their corresponding TPSTs. Finally, we obtain the difference between two TPSTs based on the divide-and-conquer strategy, where the difference is described by an edit script and we keep the cost of the edit script close to minimal. The extensive experimental evaluation shows that our method can meet real requirements in terms of precision and efficiency.
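
    A toy illustration of a divide-and-conquer edit script on node-labelled trees (our simplification, not the paper's algorithm; subtree insertions and deletions are recorded here at their root only):

        def diff(a, b, path="root"):
            """a, b: (label, [children]) tuples or None; returns an edit script."""
            if a is None:
                return [("insert", path, b[0])]
            if b is None:
                return [("delete", path, a[0])]
            script = [("relabel", path, a[0], b[0])] if a[0] != b[0] else []
            ca, cb = a[1], b[1]
            # conquer: recurse over aligned children, padding the shorter list
            for i in range(max(len(ca), len(cb))):
                script += diff(ca[i] if i < len(ca) else None,
                               cb[i] if i < len(cb) else None,
                               f"{path}.{i}")
            return script

        t1 = ("seq", [("task A", []), ("task B", [])])
        t2 = ("seq", [("task A", []), ("task C", []), ("task D", [])])
        print(diff(t1, t2))
        # [('relabel', 'root.1', 'task B', 'task C'), ('insert', 'root.2', 'task D')]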

  8. The defense-responsive genes showing enhanced and repressed expression after pathogen infection in rice (Oryza sativa L.)

    Institute of Scientific and Technical Information of China (English)

    ZHOU; Bin(周斌); PENG; Kaiman(彭开蔓); CHU; Zhaohui(储昭晖); WANG; Shiping(王石平); ZHANG; Qifa(张启发)

    2002-01-01

    Despite large numbers of studies about defense response, processes involved in the resistance of plants to incompatible pathogens are still largely uncharacterized. The objective of this study was to identify genes involved in defense response by cDNA array analysis and to gain knowledge about the functions of the genes involved in defense response. Approximately 20000 rice cDNA clones were arrayed on nylon filters. RNA samples isolated from different rice lines after infection with incompatible strains or isolates of Xanthomonas oryzae pv. oryzae or Pyricularia grisea, respectively, were used to synthesize cDNA as probes for screening the cDNA arrays. A total of 100 differentially expressed unique sequences were identified from 5 pathogen-host combinations. Fifty-three sequences were detected as showing enhanced expression and 47 sequences were detected as showing repressed expression after pathogen infection. Sequence analysis revealed that most of the 100 sequences had various degrees of homology with genes in databases which encode or putatively encode transcription regulating proteins, translation regulating proteins, transport proteins, kinases, metabolic enzymes, and proteins involved in other functions. Most of the genes have not been previously reported as being involved in the disease resistance response in rice. The results from cDNA arrays, reverse transcription-polymerase chain reaction, and RNA gel blot analysis suggest that activation or repression of most of these genes might occur commonly in the defense response.

  9. Quantum information processing beyond ten ion-qubits

    International Nuclear Information System (INIS)

    Monz, T.

    2011-01-01

    Successful processing of quantum information is, to a large degree, based on two aspects: a) the implementation of high-fidelity quantum gates, as well as b) avoiding or suppressing decoherence processes that destroy quantum information. The presented work shows our progress in the field of experimental quantum information processing over the last years: the implementation and characterisation of several quantum operations, amongst others the first realisation of the quantum Toffoli gate in an ion-trap based quantum computer. The creation of entangled states with up to 14 qubits serves as basis for investigations of decoherence processes. Based on the realised quantum operations as well as the knowledge about dominant noise processes in the employed apparatus, entanglement swapping as well as quantum operations within a decoherence-free subspace are demonstrated. (author)

  10. The Arabidopsis gene DIG6 encodes a large 60S subunit nuclear export GTPase 1 that is involved in ribosome biogenesis and affects multiple auxin-regulated development processes

    KAUST Repository

    Zhao, Huayan

    2015-08-13

    The circularly permuted GTPase large subunit GTPase 1 (LSG1) is involved in the maturation step of the 60S ribosome and is essential for cell viability in yeast. Here, an Arabidopsis mutant dig6 (drought inhibited growth of lateral roots) was isolated. The mutant exhibited multiple auxin-related phenotypes, which included reduced lateral root number, altered leaf veins, and shorter roots. Genetic mapping combined with next-generation DNA sequencing identified that the mutation occurred in AtLSG1-2. This gene was highly expressed in regions of auxin accumulation. Ribosome profiling revealed that a loss of function of AtLSG1-2 led to decreased levels of monosomes, further demonstrating its role in ribosome biogenesis. Quantitative proteomics showed that the expression of certain proteins involved in ribosome biogenesis was differentially regulated, indicating that ribosome biogenesis processes were impaired in the mutant. Further investigations showed that an AtLSG1-2 deficiency caused the alteration of auxin distribution, response, and transport in plants. It is concluded that AtLSG1-2 is integral to ribosome biogenesis, consequently affecting auxin homeostasis and plant development.

  11. The Arabidopsis gene DIG6 encodes a large 60S subunit nuclear export GTPase 1 that is involved in ribosome biogenesis and affects multiple auxin-regulated development processes

    KAUST Repository

    Zhao, Huayan; Lü, Shiyou; Li, Ruixi; Chen, Tao; Zhang, Huoming; Cui, Peng; Ding, Feng; Liu, Pei; Wang, Guangchao; Xia, Yiji; Running, Mark P.; Xiong, Liming

    2015-01-01

    The circularly permuted GTPase large subunit GTPase 1 (LSG1) is involved in the maturation step of the 60S ribosome and is essential for cell viability in yeast. Here, an Arabidopsis mutant dig6 (drought inhibited growth of lateral roots) was isolated. The mutant exhibited multiple auxin-related phenotypes, which included reduced lateral root number, altered leaf veins, and shorter roots. Genetic mapping combined with next-generation DNA sequencing identified that the mutation occurred in AtLSG1-2. This gene was highly expressed in regions of auxin accumulation. Ribosome profiling revealed that a loss of function of AtLSG1-2 led to decreased levels of monosomes, further demonstrating its role in ribosome biogenesis. Quantitative proteomics showed that the expression of certain proteins involved in ribosome biogenesis was differentially regulated, indicating that ribosome biogenesis processes were impaired in the mutant. Further investigations showed that an AtLSG1-2 deficiency caused the alteration of auxin distribution, response, and transport in plants. It is concluded that AtLSG1-2 is integral to ribosome biogenesis, consequently affecting auxin homeostasis and plant development.

  12. [Electrophysiological bases of semantic processing of objects].

    Science.gov (United States)

    Kahlaoui, Karima; Baccino, Thierry; Joanette, Yves; Magnié, Marie-Noële

    2007-02-01

    How pictures and words are stored and processed in the human brain constitutes a long-standing question in cognitive psychology. Behavioral studies have yielded a large amount of data addressing this issue. Generally speaking, these data show that there are some interactions between the semantic processing of pictures and words. However, behavioral methods can provide only limited insight into certain findings. Fortunately, event-related potentials (ERPs) provide on-line cues about the temporal nature of cognitive processes and contribute to the exploration of their neural substrates. ERPs have been used in order to better understand the semantic processing of words and pictures. The main objective of this article is to offer an overview of the electrophysiological bases of semantic processing of words and pictures. Studies presented in this article showed that the processing of words is associated with an N400 component, whereas pictures elicited both N300 and N400 components. Topographical analysis of the N400 distribution over the scalp is compatible with the idea that both image-mediated concrete words and pictures access an amodal semantic system. However, given the distinctive N300 patterns, observed only during picture processing, it appears that picture and word processing rely upon distinct neuronal networks, even if they end up activating more or less similar semantic representations.

  13. PIV study of the effect of piston position on the in-cylinder swirling flow during the scavenging process in large two-stroke marine diesel engines

    DEFF Research Database (Denmark)

    Haider, Sajjad; Schnipper, Teis; Obeidat, Anas

    2013-01-01

    A simplified model of a low-speed large two-stroke marine diesel engine cylinder is developed. The effect of piston position on the in-cylinder swirling flow during the scavenging process is studied using the stereoscopic particle image velocimetry technique. The measurements are conducted…

  14. How physiological and physical processes contribute to the phenology of cyanobacterial blooms in large shallow lakes: A new Euler-Lagrangian coupled model.

    Science.gov (United States)

    Feng, Tao; Wang, Chao; Wang, Peifang; Qian, Jin; Wang, Xun

    2018-09-01

    Cyanobacterial blooms have emerged as one of the most severe ecological problems affecting large and shallow freshwater lakes. To improve our understanding of the factors that influence, and could be used to predict, surface blooms, this study developed a novel Euler-Lagrangian coupled approach combining the Eulerian model with agent-based modelling (ABM). The approach was subsequently verified based on monitoring datasets and MODIS data in a large shallow lake (Lake Taihu, China). The Eulerian model solves the Eulerian variables and physiological parameters, whereas ABM generates the complete life cycle and transport processes of cyanobacterial colonies. This model ensemble performed well in fitting historical data and predicting the dynamics of cyanobacterial biomass, bloom distribution, and area. Based on the calculated physical and physiological characteristics of surface blooms, principal component analysis (PCA) captured the major processes influencing surface bloom formation at different stages (two bloom clusters). Early bloom outbreaks were influenced by physical processes (horizontal transport and vertical turbulence-induced mixing), whereas buoyancy-controlling strategies were essential for mature bloom outbreaks. Canonical correlation analysis (CCA) revealed the combined actions of multiple environment variables on different bloom clusters. The effects of buoyancy-controlling strategies (ISP), vertical turbulence-induced mixing velocity of colony (VMT) and horizontal drift velocity of colony (HDT) were quantitatively compared using scenario simulations in the coupled model. VMT accounted for 52.9% of bloom formations and maintained blooms over long periods, thus demonstrating the importance of wind-induced turbulence in shallow lakes. In comparison, HDT and buoyancy controlling strategies influenced blooms at different stages. In conclusion, the approach developed here presents a promising tool for understanding the processes of onshore/offshore algal
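
    A minimal Lagrangian colony update conveying the flavour of the agent side of such a coupled model (parameter values and the buoyancy rule are our assumptions, not the calibrated model of the paper):

        import numpy as np

        rng = np.random.default_rng(3)
        n, dt = 1000, 60.0                    # colonies, time step [s]
        x = rng.random(n) * 1e4               # horizontal position [m]
        z = rng.random(n) * 2.0               # depth [m] in a shallow lake
        v_buoy = rng.normal(1e-4, 5e-5, n)    # buoyancy-regulated rise speed [m/s]

        for _ in range(1440):                 # one day of 60 s steps
            u = 0.05 * np.sin(x / 1e3)        # horizontal drift from the flow field (HDT)
            w_mix = rng.normal(0.0, 2e-4, n)  # wind-induced turbulent mixing (VMT)
            x += u * dt
            z += (w_mix - v_buoy) * dt        # net vertical motion; negative = rising
            z = np.clip(z, 0.0, 2.0)          # stay within the water column

        surface_bloom = (z < 0.1).mean()      # fraction of colonies near the surface

    In the full coupled model the Eulerian solver would supply u and the turbulence statistics at each colony position, and colony physiology would update v_buoy in response to light and nutrient history.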

  15. A full picture of large lepton number asymmetries of the Universe

    Energy Technology Data Exchange (ETDEWEB)

    Barenboim, Gabriela [Departament de Física Teòrica and IFIC, Universitat de València-CSIC, C/ Dr. Moliner, 50, Burjassot, E-46100 Spain (Spain); Park, Wan-Il, E-mail: Gabriela.Barenboim@uv.es, E-mail: wipark@jbnu.ac.kr [Department of Science Education (Physics), Chonbuk National University, 567 Baekje-daero, Jeonju, 561-756 (Korea, Republic of)

    2017-04-01

    A large lepton number asymmetry of O(0.1-1) in the present Universe might not only be allowed but also necessary for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high-scale lepton number asymmetry to be larger than about 30. Therefore a mild entropy release causing O(10-100) suppression of the pre-existing particle density should take place, when the background temperature of the Universe is around T = O(10⁻²-10²) GeV, for a large but experimentally consistent asymmetry to be present today. We also show that such a mild entropy production can be obtained by the late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector such as the mass and the vacuum expectation value of the saxion field to be m_φ ≳ O(10) TeV and φ₀ ≳ O(10¹⁴) GeV, respectively.
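
    The arithmetic linking these numbers, in our paraphrase: writing the dilution from the entropy release as \Delta \equiv S_{\text{after}}/S_{\text{before}}, a pre-existing asymmetry is suppressed as

        L_{\text{today}} = \frac{L_{\text{high scale}}}{\Delta}, \qquad \Delta \sim \mathcal{O}(10\text{--}100),

    so a high-scale asymmetry L \gtrsim 30 is brought down to roughly 0.3--3, i.e. the \mathcal{O}(0.1\text{--}1) value quoted for the present Universe.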

  16. The clinical application of percutaneous large core needle biopsy on large breast mass

    International Nuclear Information System (INIS)

    Peng Songhong; Ma Jie; Wang Guohong; Sun Guoping; Fu Jianmin; Zhou Dongxian

    2005-01-01

    Objective: To evaluate the clinical application of percutaneous large core needle biopsy of large breast masses. Methods: Mammography and percutaneous large core needle biopsy were performed in 31 cases of large breast mass. Results: Apart from 5 cases showing characteristic calcification of malignancy, the remaining cases lacked diagnostic manifestations. Needle biopsy and pathological examination identified breast cancer in 11 cases, suppurative mastitis in 9 cases, fibrocystic mammary disorder in 7 cases, tuberculosis in 1 case, and fibroadenoma in 3 cases. Fibrocystic mammary disease was initially identified by biopsy in one case, while the subsequent pathological diagnosis was fibrocystic mammary disorder with carcinoma in situ. The specificity rate of biopsy was 96.8% and no false positives were observed. Vagotonia occurred in one case during the biopsy and hematoma in another. Conclusion: Percutaneous large core needle biopsy is a less invasive, simple, safe and reliable method for the diagnosis of large breast masses, and it may be recommended as a complementary procedure to routine imaging modalities or surgical resection. (authors)

  17. Large-scale transportation network congestion evolution prediction using deep learning theory.

    Science.gov (United States)

    Ma, Xiaolei; Yu, Haiyang; Wang, Yunpeng; Wang, Yinhai

    2015-01-01

    Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data become more and more ubiquitous. This has triggered a series of data-driven studies investigating transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques to tackle massive high-dimensional data. This study attempts to extend deep learning theory to large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% within less than 6 minutes when the model is implemented in a Graphics Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify the vulnerable links for proactive congestion mitigation.
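
    A much-simplified stand-in for the paper's architecture (the deep RBM/RNN is replaced here by a plain multilayer perceptron on synthetic data, purely to show the input/output shape of the prediction task):

        # Predict every link's congestion state one interval ahead from the
        # whole network's recent history (synthetic data for illustration).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)
        T, links, lag = 500, 30, 4
        speed = rng.random((T, links))              # GPS-derived link speeds
        congested = (speed < 0.3).astype(float)     # binary congestion indicator

        X = np.stack([congested[t - lag:t].ravel() for t in range(lag, T - 1)])
        y = congested[lag:T - 1]                    # state in the next interval

        model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500)
        model.fit(X[:400], y[:400])

        pred = (model.predict(X[400:]) > 0.5).astype(float)
        accuracy = 1.0 - np.abs(pred - y[400:]).mean()   # per-link hit rate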

  18. A complete process for production of flexible large area polymer solar cells entirely using screen printing-First public demonstration

    DEFF Research Database (Denmark)

    Krebs, Frederik C; Jørgensen, Mikkel; Norrman, Kion

    2009-01-01

    A complete polymer solar cell module prepared in the ambient atmosphere under industrial conditions is presented. The versatility of the polymer solar cell technology is demonstrated through the use of abstract forms for the active area, a flexible substrate, processing entirely from solution, complete processing in air using commonly available screen printing, and finally, simple mechanical encapsulation using a flexible packaging material and electrical contacting post-production using crimped contacts. We detail the production of more than 2000 modules in one production run and show...

  19. CMOS compatible route for GaAs based large scale flexible and transparent electronics

    KAUST Repository

    Nour, Maha A.; Ghoneim, Mohamed T.; Droopad, Ravi; Hussain, Muhammad Mustafa

    2014-01-01

    Flexible electronics using gallium arsenide (GaAs) for nano-electronics with high electron mobility and optoelectronics with a direct band gap are attractive for many applications. Here we describe a state-of-the-art CMOS compatible batch fabrication process for transforming traditional electronic circuitry into a large-area flexible, semitransparent platform. We show a simple release process for peeling off 200 nm of GaAs from a 200 nm GaAs/300 nm AlAs stack on a GaAs substrate using diluted hydrofluoric acid (HF). This process enables releasing a single top layer, compared to peeling off all layers simultaneously in small pieces. This is done utilizing a network of release holes, which contributes to the improved transparency observed (45% at a 724 nm wavelength).

  1. Large-area perovskite nanowire arrays fabricated by large-scale roll-to-roll micro-gravure printing and doctor blading

    Science.gov (United States)

    Hu, Qiao; Wu, Han; Sun, Jia; Yan, Donghang; Gao, Yongli; Yang, Junliang

    2016-02-01

    Organic-inorganic hybrid halide perovskite nanowires (PNWs) show great potential for applications in electronic and optoelectronic devices such as solar cells, field-effect transistors and photodetectors. Fabricating ordered, large-area PNW arrays is therefore an important step toward accelerating their application and commercialization in such devices. Herein, highly oriented and ultra-long methylammonium lead iodide (CH3NH3PbI3) PNW array thin films were fabricated by large-scale roll-to-roll (R2R) micro-gravure printing and doctor blading in ambient environments (humidity ~45%, temperature ~28 °C), which produced PNW lengths as long as 15 mm. Furthermore, photodetectors based on these PNWs were successfully fabricated on both silicon oxide (SiO2) and flexible polyethylene terephthalate (PET) substrates and showed moderate performance. This study provides low-cost, large-scale techniques to fabricate large-area PNW arrays with great potential for applications in flexible electronic and optoelectronic devices.

  2. Large momentum transfer phenomena

    International Nuclear Information System (INIS)

    Imachi, Masahiro; Otsuki, Shoichiro; Matsuoka, Takeo; Sawada, Shoji.

    1978-01-01

    Large momentum transfer phenomena in hadron reactions differ drastically from small momentum transfer phenomena, and are described in this paper. A brief review of the features of large transverse momentum reactions is given, covering two-body reactions, single particle production, particle ratios, two-jet structure, two-particle correlations, the jet production cross section, and the component of momentum perpendicular to the plane defined by the incident protons and the triggered pions as well as the transverse momentum relative to the jet axis. In the case of two-body processes, the exponent N of the power law of the differential cross section takes a value between 10 and 11.5 in the large momentum transfer region. Breaks from exponential to power-law behavior are observed in the large momentum transfer region; such a break makes it possible to estimate the order of a critical length. The large momentum transfer phenomena strongly suggest an important role of the constituents of hadrons in the hard region. Hard rearrangement of constituents from different initial hadrons induces large momentum transfer reactions. Several rules for counting constituents in the hard region have been proposed to explain the power behavior. Scale-invariant quark interactions and hard reactions are explained, and a summary of the possible types of hard subprocess is presented. (Kato, T.)

  3. Processing large-diameter poly(L-lactic acid) microfiber mesh/mesenchymal stromal cell constructs via resin embedding: an efficient histologic method

    International Nuclear Information System (INIS)

    D’Alessandro, Delfo; Danti, Serena; Pertici, Gianni; Moscato, Stefania; Metelli, Maria Rita; Petrini, Mario; Danti, Sabrina; Berrettini, Stefano; Nesti, Claudia

    2014-01-01

    In this study, we performed a complete histologic analysis of constructs based on large-diameter (>100 μm) poly-L-lactic acid (PLLA) microfibers obtained via dry-wet spinning and rat Mesenchymal Stromal Cells (rMSCs) differentiated towards the osteogenic lineage, using acrylic resin embedding. In many synthetic polymer-based microfiber meshes, ex post processability of fiber/cell constructs for histologic analysis can face daunting difficulties, leading to an incomplete investigation of the potential of these scaffolds. Indeed, while polymeric nanofiber (fiber diameter of tens of nanometers)/cell constructs can usually be embedded in common histologic media and easily sectioned, preserving the material structure and the antigenic reactivity, histologic analysis of large polymeric microfiber/cell constructs is scant in the literature. This affects microfiber scaffolds based on FDA-approved and widely used polymers such as PLLA and its copolymers. Indeed, for such constructs, especially those with fiber diameter and fiber interspace much larger than cell size, standard histologic processing is usually inefficient due to inhomogeneous hardness and lack of cohesion between the synthetic and biological phases under sectioning. In this study, the microfiber/MSC constructs were embedded in acrylic resin and the staining/reaction procedures were calibrated to demonstrate the possibility of successfully employing histologic methods in tissue engineering studies even in such difficult cases. We histologically investigated the main osteogenic markers and extracellular matrix molecules, such as alkaline phosphatase, osteopontin, osteocalcin, TGF-β1, Runx2, and Collagen type I, and the presence of amorphous, fibrillar and mineralized matrix. Biochemical tests were employed to confirm our findings. This protocol permitted efficient sectioning of the treated constructs and good penetration of the histologic reagents, thus allowing distribution and expression of

  4. Physical processes in thin-film electroluminescent structures based on ZnS:Mn showing self-organized patterns

    International Nuclear Information System (INIS)

    Zuccaro, S.; Raker, Th.; Niedernostheide, F.-J.; Kuhn, T.; Purwins, H.-G.

    2003-01-01

    Physical processes in thin ZnS:Mn films and their relation to the formation of dynamical patterns in the electroluminescence of AC-driven films are investigated. The technique of photo-depolarization spectroscopy is used to investigate defect states in these films, and it is shown that specific features in the spectra correlate with the observed self-organized patterns. Furthermore, the time dependence of the dissipative current is measured on the same samples and compared with current waveforms obtained from numerical simulations of a drift-diffusion model. The results are used to discuss the origin of the self-organized processes in ZnS:Mn films.

  5. ICLIC : interactive categorization of large image collections

    NARCIS (Netherlands)

    Van Der Corput, Paul; van Wijk, Jarke J.

    2016-01-01

    We present a new approach for the analysis of large image collections. We argue that categorization plays an important role in this process, not only to label images as an end result, but also during exploration. Furthermore, to increase the effectiveness and efficiency of the categorization process we

  6. Inducing a health-promoting change process within an organization the Effectiveness of a Large-Scale Intervention on Social Capital, Openness, and Autonomous Motivation Toward Health

    NARCIS (Netherlands)

    Scheppingen, A.R. van; Vroome, E.M.M. de; Have, K.C.J.M. ten; Bos, E.H.; Zwetsloot, G.I.J.M.; Mechelen, W. van

    2014-01-01

    Objective: To examine the effectiveness of an organizational large-scale intervention applied to induce a health-promoting organizational change process. Design and Methods: A quasi-experimental, "as-treated" design was used. Regression analyses on data of employees of a Dutch dairy company (n = 324)

  7. Research on pre-processing of QR Code

    Science.gov (United States)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR codes encode many kinds of information thanks to their advantages: large storage capacity, high reliability, ultra-high-speed reading from any direction, small printing size, and highly efficient representation of Chinese characters. In order to obtain a cleaner binarized image from a complex background and improve the recognition rate of QR codes, this paper researches pre-processing methods for QR codes (Quick Response Codes) and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive binarization method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
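
    A minimal sketch of Sauvola-style adaptive binarization as a QR pre-processing step, using scikit-image's threshold_sauvola; the window size and k constant here are illustrative defaults, not values tuned in the paper:

        import numpy as np
        from skimage import data, filters

        # Sample grayscale document image standing in for a camera capture of a QR code.
        image = data.page()

        # Sauvola's adaptive threshold: a per-pixel threshold computed from the
        # local mean and standard deviation inside a sliding window.
        thresh = filters.threshold_sauvola(image, window_size=25, k=0.2)
        binary = image > thresh

        print("foreground (dark) fraction:", np.mean(~binary))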

  8. Large-deviation theory for diluted Wishart random matrices

    Science.gov (United States)

    Castillo, Isaac Pérez; Metz, Fernando L.

    2018-03-01

    Wishart random matrices with a sparse or diluted structure are ubiquitous in the processing of large datasets, with applications in physics, biology, and economics. In this work, we develop a theory for the eigenvalue fluctuations of diluted Wishart random matrices based on the replica approach of disordered systems. We derive an analytical expression for the cumulant generating function of the number of eigenvalues I_N(x) smaller than x ∈ R+, from which all cumulants of I_N(x) and the rate function Ψ_x(k) controlling its large-deviation probability Prob[I_N(x) = kN] ≍ e^{-N Ψ_x(k)} follow. Explicit results for the mean value and the variance of I_N(x), its rate function, and its third cumulant are discussed and thoroughly compared to numerical diagonalization, showing very good agreement. The present work establishes the theoretical framework put forward in a recent letter [Phys. Rev. Lett. 117, 104101 (2016), 10.1103/PhysRevLett.117.104101] as an exact and compelling approach to deal with eigenvalue fluctuations of sparse random matrices.
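
    The numerical-diagonalization comparison mentioned above can be mimicked in a few lines; a minimal sketch assuming a diluted Wishart ensemble W = X Xᵀ/M built from a sparse Gaussian X, with illustrative sizes, dilution, and threshold:

        import numpy as np

        rng = np.random.default_rng(1)
        N, M, c, x = 100, 200, 0.1, 0.5   # matrix sizes, dilution, threshold
        samples = 400

        counts = []
        for _ in range(samples):
            # Sparse Gaussian data matrix: each entry kept with probability c.
            X = rng.standard_normal((N, M)) * (rng.random((N, M)) < c)
            W = X @ X.T / M                    # diluted Wishart matrix
            eigvals = np.linalg.eigvalsh(W)
            counts.append(np.sum(eigvals < x))  # I_N(x) for this sample

        counts = np.array(counts)
        # First cumulants of I_N(x); the large-deviation tails would need
        # rare-event sampling rather than this plain Monte Carlo.
        print("mean:", counts.mean(), "variance:", counts.var())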

  9. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    Science.gov (United States)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models used in this paper rely on computational algorithms and procedure implementations developed in Matlab to simulate agent-based models, using computing clusters as a high-performance platform to run the programs in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  10. Testing of Large-Scale ICV Glasses with Hanford LAW Simulant

    Energy Technology Data Exchange (ETDEWEB)

    Hrma, Pavel R.; Kim, Dong-Sang; Vienna, John D.; Matyas, Josef; Smith, Donald E.; Schweiger, Michael J.; Yeager, John D.

    2005-03-01

    Preliminary glass compositions for immobilizing Hanford low-activity waste (LAW) by the in-container vitrification (ICV) process were initially fabricated at crucible- and engineering-scale, including simulants and actual (radioactive) LAW. Glasses were characterized for vapor hydration test (VHT) and product consistency test (PCT) responses and crystallinity (both quenched and slow-cooled samples). Selected glasses were tested for toxicity characteristic leach procedure (TCLP) responses, viscosity, and electrical conductivity. This testing showed that glasses with a LAW loading of 20 mass% can be made readily and meet all product constraints by a wide margin. Glasses with over 22 mass% Na2O can be made to meet all other product quality and process constraints. Large-scale testing was performed at the AMEC, Geomelt Division facility in Richland. Three tests were conducted using simulated LAW with increasing loadings of 12, 17, and 20 mass% Na2O. Glass samples were taken from the test products in a manner to represent the full expected range of product performance. These samples were characterized for composition, density, crystalline and non-crystalline phase assemblage, and durability using the VHT, PCT, and TCLP tests. The results, presented in this report, show that the AMEC ICV product meets all waste form requirements with a large margin. These results provide strong evidence that the Hanford LAW can be successfully vitrified by the ICV technology and can meet all the constraints related to product quality. The economic feasibility of the ICV technology can be further enhanced by subsequent optimization.

  11. A Robust Photogrammetric Processing Method of Low-Altitude UAV Images

    Directory of Open Access Journals (Sweden)

    Mingyao Ai

    2015-02-01

    Full Text Available Low-altitude Unmanned Aerial Vehicle (UAV) images, which include distortion, illumination variance, and large rotation angles, pose multiple challenges for image orientation and image processing. In this paper, a robust and convenient photogrammetric approach is proposed for processing low-altitude UAV images, involving a strip management method to automatically build a standardized regional aerial triangulation (AT) network, a parallel inner orientation algorithm, a ground control point (GCP) prediction method, and an improved Scale Invariant Feature Transform (SIFT) method to produce a large number of evenly distributed, reliable tie points for bundle adjustment (BA). A multi-view matching approach is improved to produce Digital Surface Models (DSM) and Digital Orthophoto Maps (DOM) for 3D visualization. Experimental results show that the proposed approach is robust and feasible for photogrammetric processing of low-altitude UAV images and 3D visualization of products.
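
    A minimal sketch of the tie-point idea using OpenCV's stock SIFT with Lowe's ratio test between two overlapping frames; the file names are placeholders, the 0.75 ratio is a conventional choice, and the paper's improved SIFT variant and bundle adjustment are not reproduced:

        import cv2

        # Load two overlapping UAV frames (placeholder file names).
        img1 = cv2.imread("frame_a.jpg", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("frame_b.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Lowe's ratio test keeps only distinctive correspondences,
        # which become candidate tie points for bundle adjustment.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des1, des2, k=2)
        tie_points = [m for m, n in matches if m.distance < 0.75 * n.distance]
        print(len(tie_points), "candidate tie points")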

  12. Fabrication of a metallic roll stamp with low internal stress and high hardness for large area display applications by a pulse reverse current electroforming process

    International Nuclear Information System (INIS)

    Kim, Joongeok; Han, Jungjin; Kim, Taekyung; Kang, Shinill

    2014-01-01

    With the increasing demand for large-scale micro/nano components in the fields of displays, energy, and electrical devices, the establishment of a roll imprinting process has become a priority. The fabrication of a roll stamp with high dimensional accuracy and uniformity is one of the key issues in the roll imprinting process, because the roll stamp determines the properties of the replicated micro/nano structures. In this study, a method to fabricate a metallic roll stamp with low internal stress, high flatness, and high hardness using a pulse reverse current (PRC) electroforming process was proposed. The effects of the PRC electroforming process on the internal stress, hardness, and grain size of the electroformed stamp were examined, and optimum process conditions were suggested. As a practical example of the proposed method, various micro-patterns for electronic circuits were fabricated via the roll imprinting process using a PRC electroformed stamp. (paper)

  13. Carbon Nanotube Integration with a CMOS Process

    Science.gov (United States)

    Perez, Maximiliano S.; Lerner, Betiana; Resasco, Daniel E.; Pareja Obregon, Pablo D.; Julian, Pedro M.; Mandolesi, Pablo S.; Buffa, Fabian A.; Boselli, Alfredo; Lamagna, Alberto

    2010-01-01

    This work shows the integration of a sensor based on carbon nanotubes using CMOS technology. A chip sensor (CS) was designed and manufactured using a 0.30 μm CMOS process, leaving a free window on the passivation layer that allowed the deposition of SWCNTs over the electrodes. Using the CS, we successfully investigated the effect of humidity and temperature on the electrical transport properties of SWCNTs. The possibility of large-scale integration of SWCNTs with a CMOS process opens a new route to the design of more efficient, low-cost sensors with high reproducibility in their manufacture. PMID:22319330

  14. Workflow management in large distributed systems

    International Nuclear Information System (INIS)

    Legrand, I; Newman, H; Voicu, R; Dobre, C; Grigoras, C

    2011-01-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services (the components that provide decision support and some degree of automated decisions) and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.

  15. Workflow management in large distributed systems

    Science.gov (United States)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services (the components that provide decision support and some degree of automated decisions) and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.

  16. Levels of Processing and the Cue-Dependent Nature of Recollection

    Science.gov (United States)

    Mulligan, Neil W.; Picklesimer, Milton

    2012-01-01

    Dual-process models differentiate between two bases of memory, recollection and familiarity. It is routinely claimed that deeper, semantic encoding enhances recollection relative to shallow, non-semantic encoding, and that recollection is largely a product of semantic, elaborative rehearsal. The present experiments show that this is not always the…

  17. VisualRank: applying PageRank to large-scale image search.

    Science.gov (United States)

    Jing, Yushi; Baluja, Shumeet

    2008-11-01

    Because of the relative ease of understanding and processing text, commercial image-search systems often rely on techniques that are largely indistinguishable from text search. Recently, academic studies have demonstrated the effectiveness of employing image-based features to provide alternative or additional signals. However, it remains uncertain whether such techniques will generalize to a large number of popular web queries, and whether the potential improvement to search quality warrants the additional computational cost. In this work, we cast the image-ranking problem as the task of identifying "authority" nodes on an inferred visual similarity graph and propose VisualRank to analyze the visual link structures among images. The images found to be "authorities" are chosen as those that answer the image queries well. To understand the performance of such an approach in a real system, we conducted a series of large-scale experiments based on the task of retrieving images for 2000 of the most popular product queries. Our experimental results show significant improvement, in terms of user satisfaction and relevancy, in comparison to the most recent Google Image Search results. Maintaining modest computational cost is vital to ensuring that this procedure can be used in practice; we describe the techniques required to make this system practical for large-scale deployment in commercial search engines.
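
    At its core this is a PageRank-style power iteration over a visual-similarity matrix; a minimal sketch, with a random symmetric matrix standing in for real image similarities:

        import numpy as np

        rng = np.random.default_rng(2)
        n = 8                                      # number of images
        S = rng.random((n, n))                     # toy similarity scores
        S = (S + S.T) / 2                          # similarity is symmetric
        np.fill_diagonal(S, 0)                     # no self-similarity

        # Column-normalize similarities so each column sums to 1.
        P = S / S.sum(axis=0, keepdims=True)

        d = 0.85                                   # damping factor, as in classic PageRank
        rank = np.full(n, 1.0 / n)
        for _ in range(100):                       # power iteration to the fixed point
            rank = d * P @ rank + (1 - d) / n

        print("authority image:", int(rank.argmax()), rank.round(3))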

  18. Large exchange-dominated domain wall velocities in antiferromagnetically coupled nanowires

    Science.gov (United States)

    Kuteifan, Majd; Lubarda, M. V.; Fu, S.; Chang, R.; Escobar, M. A.; Mangin, S.; Fullerton, E. E.; Lomakin, V.

    2016-04-01

    Magnetic nanowires supporting field- and current-driven domain wall motion are envisioned for information storage and processing. A major obstacle to their practical use is the domain-wall velocity, which is traditionally limited to low fields and currents by the Walker breakdown that occurs when the driving component reaches a critical threshold value. We show through numerical and analytical modeling that the Walker breakdown limit can be extended or completely eliminated in antiferromagnetically coupled magnetic nanowires. These coupled nanowires allow for large domain-wall velocities driven by field and/or current as compared to conventional nanowires.

  19. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  20. Mizan: Optimizing Graph Mining in Large Parallel Systems

    KAUST Repository

    Kalnis, Panos

    2012-03-01

    Extracting information from graphs, from finding shortest paths to complex graph mining, is essential for many applications. Due to the sheer size of modern graphs (e.g., social networks), processing must be done on large parallel computing infrastructures (e.g., the cloud). Earlier approaches relied on the MapReduce framework, which was proved inadequate for graph algorithms. More recently, the message passing model (e.g., Pregel) has emerged. Although the Pregel model has many advantages, it is agnostic to the graph properties and the architecture of the underlying computing infrastructure, leading to suboptimal performance. In this paper, we propose Mizan, a layer between the users' code and the computing infrastructure. Mizan considers the structure of the input graph and the architecture of the infrastructure in order to: (i) decide whether it is beneficial to generate a near-optimal partitioning of the graph in a pre-processing step, and (ii) choose between typical point-to-point message passing and a novel approach that puts computing nodes in a virtual overlay ring. We deployed Mizan on a small local Linux cluster, on the cloud (256 virtual machines in Amazon EC2), and on an IBM Blue Gene/P supercomputer (1024 CPUs). We show that Mizan executes common algorithms on very large graphs 1-2 orders of magnitude faster than MapReduce-based implementations and up to one order of magnitude faster than implementations relying on Pregel-like hash-based graph partitioning.
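
    To make the vertex-centric message-passing model concrete, here is a minimal single-machine sketch of Pregel-style single-source shortest paths, where vertices exchange messages in supersteps; Mizan's partitioning and overlay-ring optimizations are not modeled:

        # Vertex-centric SSSP in the Pregel style: node -> [(neighbor, weight)].
        graph = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
        INF = float("inf")
        dist = {v: INF for v in graph}
        messages = {0: [0]}                      # superstep 0: source receives distance 0

        while messages:                          # run until no vertex is active
            next_messages = {}
            for v, incoming in messages.items():
                best = min(incoming)
                if best < dist[v]:               # vertex updates state and notifies neighbors
                    dist[v] = best
                    for u, w in graph[v]:
                        next_messages.setdefault(u, []).append(best + w)
            messages = next_messages

        print(dist)   # {0: 0, 1: 3, 2: 1, 3: 4}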

  1. Large area and flexible electronics

    CERN Document Server

    Caironi, Mario

    2015-01-01

    From materials to applications, this ready reference covers the entire value chain from fundamentals via processing right up to devices, presenting different approaches to large-area electronics, thus enabling readers to compare materials, properties and performance.Divided into two parts, the first focuses on the materials used for the electronic functionality, covering organic and inorganic semiconductors, including vacuum and solution-processed metal-oxide semiconductors, nanomembranes and nanocrystals, as well as conductors and insulators. The second part reviews the devices and applicatio

  2. rRNA maturation in yeast cells depleted of large ribosomal subunit proteins.

    Directory of Open Access Journals (Sweden)

    Gisela Pöll

    Full Text Available The structural constituents of the large eukaryotic ribosomal subunit are 3 ribosomal RNAs, namely the 25S, 5.8S and 5S rRNA, and about 46 ribosomal proteins (r-proteins). They assemble and mature in a highly dynamic process that involves more than 150 proteins and 70 small RNAs. Ribosome biogenesis starts in the nucleolus, continues in the nucleoplasm and is completed after nucleo-cytoplasmic translocation of the subunits in the cytoplasm. In this work we created 26 yeast strains, each of which conditionally expresses one of the large ribosomal subunit (LSU) proteins. In vivo depletion of the analysed LSU r-proteins was lethal and led to destabilisation and degradation of the LSU and/or its precursors. Detailed steady-state and metabolic pulse labelling analyses of rRNA precursors in these mutant strains showed that LSU r-proteins can be grouped according to their requirement for efficient progression of different steps of large ribosomal subunit maturation. Comparative analyses of the observed phenotypes and the nature of r-protein-rRNA interactions as predicted by current atomic LSU structure models led us to discuss working hypotheses on (i) how individual r-proteins control the productive processing of the major 5' end of 5.8S rRNA precursors by the exonucleases Rat1p and Xrn1p, and (ii) the nature of structural characteristics of nascent LSUs that are required for cytoplasmic accumulation of nascent subunits but are nonessential for most of the nuclear LSU pre-rRNA processing events.

  3. Friction conditions in the bearing area of an aluminium extrusion process

    NARCIS (Netherlands)

    Ma, X.; de Rooij, Matthias B.; Schipper, Dirk J.

    2012-01-01

    In aluminium extrusion processes, friction inside the bearing channel is important for controlling the surface quality of the extrusion products. The contact materials show a large hardness difference, one being hot aluminium, and the other being hardened tool steel. Further, the contact pressure is

  4. State of the art in the process design of large sea water desalination plants; Estado del arte en el diseno del proceso de plantas desaladoras de agua de mar de gran capacidad

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez Sanchez, J. M.; Sanchez Castillo, N.; Sanchez Castillo, R.

    2008-07-01

    The desalination of seawater is used in commercial operations worldwide in order to obtain large quantities of water suitable for population supply, irrigation or industrial uses. The designs of the processes involved in desalination are constantly changing. In this paper the evolution of the processes of seawater desalination plants is discussed, with a focus on large Reverse Osmosis desalination plants and the reasons behind this evolution. (Author) 8 refs.

  5. Hexagonal Boron Nitride assisted transfer and encapsulation of large area CVD graphene

    Science.gov (United States)

    Shautsova, Viktoryia; Gilbertson, Adam M.; Black, Nicola C. G.; Maier, Stefan A.; Cohen, Lesley F.

    2016-07-01

    We report a CVD hexagonal boron nitride (hBN)-assisted transfer method that enables a polymer-impurity-free transfer process and subsequent top encapsulation of large-area CVD-grown graphene. We demonstrate that the CVD hBN layer utilized in this transfer technique acts as a buffer layer between the graphene film and the supporting polymer layer. We show that the resulting graphene layers possess lower doping concentrations and improved carrier mobilities compared to graphene films produced by conventional transfer methods onto untreated SiO2/Si, SAM-modified and hBN-covered SiO2/Si substrates. Moreover, we show that the top hBN layer used in the transfer process acts as an effective top encapsulation, resulting in improved stability under ambient exposure. The transfer method is applicable to other CVD-grown 2D materials on copper foils, thereby facilitating the preparation of van der Waals heterostructures with controlled doping.

  6. PREFACE PASREG: The 7th International Workshop on the Processing and Applications of Superconducting (RE)BCO Large Grain Materials (Washington DC, 29-31 July 2010) PASREG: The 7th International Workshop on the Processing and Applications of Superconducting (RE)BCO Large Grain Materials (Washington DC, 29-31 July 2010)

    Science.gov (United States)

    Freyhardt, Herbert; Cardwell, David; Strasik, Mike

    2010-12-01

    Large grain, (RE)BCO bulk superconductors fabricated by top seeded melt growth (TSMG) are able to generate large magnetic fields compared to conventional, iron-based permanent magnets. Following 20 years of development, these materials are now beginning to realize their considerable potential for a variety of engineering applications such as magnetic separators, flywheel energy storage and magnetic bearings. MgB2 has also continued to emerge as a potentially important bulk superconducting material for engineering applications below 20 K due to its lack of granularity and the ease with which complex shapes of this material can be fabricated. This issue of Superconductor Science and Technology contains a selection of papers presented at the 7th International Workshop on the Processing and Applications of Superconducting (RE)BCO Large Grain Materials, including MgB2, held 29th-31st July 2010 at the Omni Shoreham Hotel, Washington DC, USA, to report progress made in this field in the previous three-year period. The workshop followed those held previously in Cambridge, UK (1997), Morioka, Japan (1999), Seattle, USA (2001), Jena, Germany (2003), Tokyo, Japan (2005) and again in Cambridge, UK (2007). The scope of the seventh PASREG workshop was extended to include processing and characterization aspects of the broader spectrum of bulk high temperature superconducting (HTS) materials, including melt-cast Bi-HTS and bulk MgB2, recent developments in the field and innovative applications of bulk HTS. A total of 38 papers were presented at this workshop, of which 30 were presented in oral form and 8 were presented as posters. The organizers wish to acknowledge the efforts of Sue Butler of the University of Houston for her local organization of the workshop. The eighth PASREG workshop will be held in Taiwan in the summer of 2012.

  7. Physical processes in spin polarized plasmas

    International Nuclear Information System (INIS)

    Kulsrud, R.M.; Valeo, E.J.; Cowley, S.

    1984-05-01

    If the plasma in a nuclear fusion reactor is polarized, the nuclear reactions are modified in such a way as to enhance the reactor performance. We calculate in detail the modification of these nuclear reactions by different modes of polarization of the nuclear fuel. We also consider in detail the various physical processes that can lead to depolarization and show that they are by and large slow enough that a high degree of polarization can be maintained

  8. Manufacturing and mechanical property test of the large-scale oxide dispersion strengthened martensitic mother tube by hot isostatic pressing and hot extrusion process

    International Nuclear Information System (INIS)

    Narita, Takeshi; Ukai, Shigeharu; Kaito, Takeji; Ohtsuka, Satoshi; Fujiwara, Masayuki

    2003-09-01

    The mass production capability of Oxide Dispersion Strengthened (ODS) ferritic steel cladding (9Cr) is evaluated in Phase II of the Feasibility Studies on Commercialized Fast Reactor Cycle System. The cost of manufacturing the mother tube is a dominant factor in the total cost of manufacturing ODS ferritic cladding. In this study, a large-scale 9Cr-ODS martensitic mother tube was produced by an overseas supplier with mass-production equipment for commercialized ODS steels. The process of manufacturing the ODS mother tube consists of raw material powder production, mechanical alloying by high-energy ball mill, hot isostatic pressing (HIP), and hot extrusion. The following results were obtained in this study. (1) The microstructure of the ODS steels is equivalent to that of domestic products, and fine oxides are uniformly distributed. Mechanical alloying with a large-capacity (1 ton) ball mill can be satisfactorily carried out. (2) A large-scale mother tube (65 mm OD x 48 mm ID x 10,000 mm L), which can yield about 60 pieces of 3 m length ODS ferritic cladding after four cold-rolling passes, has been successfully manufactured through the HIP and hot extrusion process. (3) The rough surface of the mother tubes produced in this study can be improved by selecting a suitable hot extrusion condition. (4) The hardness and tensile strength of the manufactured ODS steels are lower than those of domestic products with the same chemical composition. This is owing to the high aluminum content in the product, and those properties could be improved by decreasing the aluminum content in the raw material powder. (author)

  9. Large-scale grain growth in the solid-state process: From "Abnormal" to "Normal"

    Science.gov (United States)

    Jiang, Minhong; Han, Shengnan; Zhang, Jingwei; Song, Jiageng; Hao, Chongyan; Deng, Manjiao; Ge, Lingjing; Gu, Zhengfei; Liu, Xinyu

    2018-02-01

    Abnormal grain growth (AGG) has been a common phenomenon during ceramic or metallurgical processing since prehistoric times. However, it has usually been very difficult to grow big single crystals (centimeter scale or over) using the AGG method due to its so-called occasionality. Based on AGG, a solid-state crystal growth (SSCG) method was developed. The greatest advantages of the SSCG technology are the simplicity and cost-effectiveness of the technique. But the traditional SSCG technology is still uncontrollable. This article first summarizes the history and current status of AGG, then reports recent technical developments from AGG to SSCG, and further introduces a new seed-free, solid-state crystal growth (SFSSCG) technology. This SFSSCG method allows us to repeatably and controllably fabricate large-scale single crystals of appreciably high quality and relatively stable chemical composition at a relatively low temperature, at least in (K0.5Na0.5)NbO3 (KNN) and Cu-Al-Mn systems. In this sense, exaggerated grain growth is no longer 'Abnormal' but 'Normal', since it can now be artificially controlled and repeated. This article also provides a crystal growth model to qualitatively explain the mechanism of SFSSCG for the KNN system. Compared with the traditional melt and high-temperature solution growth methods, the SFSSCG method has the advantages of low energy consumption, low investment, and a simple technique, with composition homogeneity overcoming the issues of incongruent melting and high volatility. SFSSCG could be helpful for improving the mechanical and physical properties of single crystals, which should be promising for industrial applications.

  10. Shortest triplet clustering: reconstructing large phylogenies using representative sets

    Directory of Open Access Journals (Sweden)

    Sy Vinh Le

    2005-04-01

    Full Text Available Abstract Background Understanding the evolutionary relationships among species based on their genetic information is one of the primary objectives in phylogenetic analysis. Reconstructing phylogenies for large data sets is still a challenging task in Bioinformatics. Results We propose a new distance-based clustering method, the shortest triplet clustering algorithm (STC), to reconstruct phylogenies. The main idea is the introduction of a natural definition of so-called k-representative sets. Based on k-representative sets, shortest triplets are reconstructed and serve as building blocks for the STC algorithm to agglomerate sequences for tree reconstruction in O(n²) time for n sequences. Simulations show that STC gives better topological accuracy than other tested methods that also build a first starting tree. STC appears to be a very good method to start the tree reconstruction. However, all tested methods give similar results if balanced nearest neighbor interchange (BNNI) is applied as a post-processing step. BNNI leads to an improvement in all instances. The program is available at http://www.bi.uni-duesseldorf.de/software/stc/. Conclusion The results demonstrate that the new approach efficiently reconstructs phylogenies for large data sets. We found that BNNI boosts the topological accuracy of all methods including STC; therefore, one should use BNNI as a post-processing step to get better topological accuracy.
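
    As a toy illustration of the 'shortest triplet' notion only (not the paper's k-representative-set construction, which avoids this brute-force scan), the following finds the triplet with minimal total pairwise distance in a small distance matrix:

        import itertools
        import numpy as np

        # Toy pairwise evolutionary distances between 6 sequences.
        D = np.array([
            [0.0, 0.2, 0.7, 0.8, 0.9, 0.8],
            [0.2, 0.0, 0.6, 0.7, 0.8, 0.9],
            [0.7, 0.6, 0.0, 0.3, 0.5, 0.6],
            [0.8, 0.7, 0.3, 0.0, 0.4, 0.5],
            [0.9, 0.8, 0.5, 0.4, 0.0, 0.2],
            [0.8, 0.9, 0.6, 0.5, 0.2, 0.0],
        ])

        # The "shortest" triplet: minimal sum of pairwise distances over all triples.
        best = min(itertools.combinations(range(len(D)), 3),
                   key=lambda t: D[t[0], t[1]] + D[t[0], t[2]] + D[t[1], t[2]])
        print("shortest triplet:", best)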

  11. Large explosive basaltic eruptions at Katla volcano, Iceland: Fragmentation, grain size and eruption dynamics

    Science.gov (United States)

    Schmith, Johanne; Höskuldsson, Ármann; Holm, Paul Martin; Larsen, Guðrún

    2018-04-01

    Katla volcano in Iceland produces hazardous large explosive basaltic eruptions on a regular basis, but very little quantitative data for future hazard assessments exist. Here details on fragmentation mechanism and eruption dynamics are derived from a study of deposit stratigraphy with detailed granulometry and grain morphology analysis, granulometric modeling, componentry and the new quantitative regularity index model of fragmentation mechanism. We show that magma/water interaction is important in the ash generation process, but to a variable extent. By investigating the large explosive basaltic eruptions from 1755 and 1625, we document that eruptions of similar size and magma geochemistry can have very different fragmentation dynamics. Our models show that fragmentation in the 1755 eruption was a combination of magmatic degassing and magma/water-interaction with the most magma/water-interaction at the beginning of the eruption. The fragmentation of the 1625 eruption was initially also a combination of both magmatic and phreatomagmatic processes, but magma/water-interaction diminished progressively during the later stages of the eruption. However, intense magma/water interaction was reintroduced during the final stages of the eruption dominating the fine fragmentation at the end. This detailed study of fragmentation changes documents that subglacial eruptions have highly variable interaction with the melt water showing that the amount and access to melt water changes significantly during eruptions. While it is often difficult to reconstruct the progression of eruptions that have no quantitative observational record, this study shows that integrating field observations and granulometry with the new regularity index can form a coherent model of eruption evolution.

  12. Processing and Application of ICESat Large Footprint Full Waveform Laser Range Data

    NARCIS (Netherlands)

    Duong, V.H.

    2010-01-01

    In the last two decades, laser scanning systems made the transition from scientific research to the commercial market. Laser scanning has a large variety of applications such as digital elevation models, forest inventory and man-made object reconstruction, and became the most required input data for

  13. Accelerating Best Care in Pennsylvania: adapting a large academic system's quality improvement process to rural community hospitals.

    Science.gov (United States)

    Haydar, Ziad; Gunderson, Julie; Ballard, David J; Skoufalos, Alexis; Berman, Bettina; Nash, David B

    2008-01-01

    Industrial quality improvement (QI) methods such as continuous quality improvement (CQI) may help bridge the gap between evidence-based "best care" and the quality of care provided. In 2006, Baylor Health Care System collaborated with Jefferson Medical College of Thomas Jefferson University to conduct a QI demonstration project in select Pennsylvania hospitals using CQI techniques developed by Baylor. The training was provided over a 6-month period and focused on methods for rapid-cycle improvement; data system design; data management; tools to improve patient outcomes, processes of care, and cost-effectiveness; use of clinical guidelines and protocols; leadership skills; and customer service skills. Participants successfully implemented a variety of QI projects. QI education programs developed and pioneered within large health care systems can be adapted and applied successfully to other settings, providing needed tools to smaller rural and community hospitals that lack the necessary resources to establish such programs independently.

  14. Large variation in lipid content, ΣPCB and δ13C within individual Atlantic salmon (Salmo salar)

    International Nuclear Information System (INIS)

    Persson, Maria E.; Larsson, Per; Holmqvist, Niklas; Stenroth, Patrik

    2007-01-01

    Many studies that investigate pollutant levels, or use stable isotope ratios to define trophic level or animal origin, use different standard ways of sampling (dorsal, whole filet or whole body samples). This study shows that lipid content, ΣPCB and δ13C display large differences within muscle samples taken from a single Atlantic salmon. Lipid and PCB content was lowest in tail muscles, intermediate in anterior-dorsal muscles and highest in the stomach (abdominal) muscle area. Stable isotopes of carbon (δ13C) showed lipid accumulation in the stomach muscle area and depletion in tail muscles. We conclude that it is important to choose an appropriate sample location within an animal based on which processes are to be studied. Care should be taken when attributing persistent pollutant levels or stable isotope data to specific environmental processes before controlling for within-animal variation in these variables. - Lipid content, ΣPCB and δ13C vary to a large extent within Atlantic salmon; therefore, the sampling technique for individual fish is of utmost importance for proper interpretation of data

  15. Low energy positron diffraction from Cu(111): Importance of surface loss processes at large angles of incidence

    International Nuclear Information System (INIS)

    Lessor, D.L.; Duke, C.B.; Lippel, P.H.; Brandes, G.R.; Canter, K.F.; Horsky, T.N.

    1990-10-01

    Intensities of positrons specularly diffracted from Cu(111) were measured at the Brandeis positron beam facility and analyzed for energies E ≥ 8 eV using an absorptive inner potential V_i = 4 eV. At lower energies strong energy dependences occur, associated both with multiple elastic scattering phenomena within atomic layers of Cu parallel to the surface and with the thresholds of inelastic channels (e.g., plasmon creation). Use of the free-electron calculation of V_i shows that the energy dependence of inelastic processes is necessary to obtain a satisfactory description of the absolute magnitude of the diffracted intensities below E = 50 eV. Detailed comparison of the calculated and observed diffraction intensities reveals the necessity of incorporating surface loss processes explicitly into the model in order to achieve a quantitative description of the measured intensities for E < 50 eV and angles of incidence above 40 degrees. 30 refs., 5 figs., 1 tab

  16. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish a large-scale network management, a concept essentially characterized by (1) broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and out sourcing. The article is part of a planned series.

  17. The front-end analog and digital signal processing electronics for the drift chambers of the Stanford Large Detector

    International Nuclear Information System (INIS)

    Haller, G.M.; Freytag, D.R.; Fox, J.; Olsen, J.; Paffrath, L.; Yim, A.; Honma, A.

    1990-10-01

    The front-end signal processing electronics for the drift-chambers of the Stanford Large Detector (SLD) at the Stanford Linear Collider is described. The system is implemented with printed-circuit boards which are shaped for direct mounting on the detector. Typically, a motherboard comprises 64 channels of transimpedance amplification and analog waveform sampling, A/D conversion, and associated control and readout circuitry. The loaded motherboard thus forms a processor which records low-level wave forms from 64 detector channels and transforms the information into a 64 k-byte serial data stream. In addition, the package performs calibration functions, measures leakage currents on the wires, and generates wire hit patterns for triggering purposes. The construction and operation of the electronic circuits utilizing monolithic, hybridized, and programmable components are discussed

  18. Large Eddy Simulation of Heat Entrainment Under Arctic Sea Ice

    Science.gov (United States)

    Ramudu, Eshwan; Gelderloos, Renske; Yang, Di; Meneveau, Charles; Gnanadesikan, Anand

    2018-01-01

    Arctic sea ice has declined rapidly in recent decades. The faster-than-projected retreat suggests that free-running large-scale climate models may not be accurately representing some key processes. The small-scale turbulent entrainment of heat from the mixed layer could be one such process. To better understand this mechanism, we model the Arctic Ocean's Canada Basin, which is characterized by a perennial, anomalously warm Pacific Summer Water (PSW) layer residing at the base of the mixed layer and a summertime Near-Surface Temperature Maximum (NSTM) within the mixed layer trapping heat from solar radiation. We use large eddy simulation (LES) to investigate heat entrainment for different ice-drift velocities and different initial temperature profiles. The value of LES is that the resolved turbulent fluxes are greater than the subgrid-scale fluxes for most of our parameter space. The results show that the presence of the NSTM enhances heat entrainment from the mixed layer. Additionally, no PSW heat is entrained over the parameter space considered. We propose a scaling law for the ocean-to-ice heat flux which depends on the initial temperature anomaly in the NSTM layer and the ice-drift velocity. A case study of "The Great Arctic Cyclone of 2012" gives a turbulent heat flux from the mixed layer that is approximately 70% of the total ocean-to-ice heat flux estimated from the PIOMAS model often used for short-term predictions. The present results highlight the need for large-scale climate models to account for the NSTM layer.
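
    The abstract does not give the functional form of the proposed scaling law; for orientation only, a standard bulk parameterization of the turbulent ocean-to-ice heat flux with the same two dependencies (NSTM temperature anomaly and ice-drift speed) reads:

        % Hedged sketch: generic bulk heat-flux parameterization, not the paper's law.
        \[
          F_H = \rho_w \, c_p \, c_H \, u \, \Delta T ,
        \]
        % rho_w: seawater density, c_p: specific heat of seawater,
        % c_H: a dimensionless transfer coefficient, u: ice-drift speed,
        % Delta T: the NSTM-layer temperature anomaly.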

  19. Analysis of Large-Strain Extrusion Machining with Different Chip Compression Ratios

    Directory of Open Access Journals (Sweden)

    Wen Jun Deng

    2012-01-01

    Full Text Available Large-Strain Extrusion Machining (LSEM) is a recently introduced process for deforming materials to very high plastic strains to produce ultra-fine nanostructured materials. Before the technique can be exploited, it is important to understand the deformation behavior of the workpiece and its relationship to the machining parameters and friction conditions. This paper reports a finite-element method (FEM) analysis of the LSEM process to understand the evolution of the temperature field, effective strain, and strain rate under different chip compression ratios. The cutting and thrust forces are also analyzed with respect to time. The results show that LSEM can produce very high strains by changing the value of the chip compression ratio, thereby enabling the production of nanostructured materials. The shape of the chip produced by LSEM can also be geometrically well constrained.
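
    For context, the textbook orthogonal-cutting kinematics (not the paper's FEM model) link the chip compression ratio to shear angle and shear strain; a minimal sketch with an assumed rake angle and illustrative ratios:

        import numpy as np

        alpha = np.deg2rad(5.0)          # assumed rake angle
        lam = np.array([2.0, 4.0, 8.0])  # chip compression ratio = t_chip / t_0 (illustrative)

        r = 1.0 / lam                                                    # cutting ratio
        phi = np.arctan(r * np.cos(alpha) / (1.0 - r * np.sin(alpha)))   # shear angle
        gamma = np.cos(alpha) / (np.sin(phi) * np.cos(phi - alpha))      # shear strain

        for L, g in zip(lam, gamma):
            print(f"compression ratio {L:.0f}  ->  shear strain {g:.2f}")

    Larger compression ratios give smaller shear angles and hence larger imposed strains, which is the lever LSEM uses to reach the severe-deformation regime.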

  20. Patterns and drivers of fish community assembly in a large marine ecosystem

    DEFF Research Database (Denmark)

    Pécuchet, Lauréne; Törnroos, Anna; Lindegren, Martin

    2016-01-01

    To determine assembly rules, ecological similarities of co-occurring species are often investigated. This can be evaluated using trait-based indices summarizing the species' niches in a given community. In order to investigate the underlying processes shaping community assembly in marine ecosystems, we investigated the patterns and drivers of fish community composition in the Baltic Sea, a semi-enclosed sea characterized by a pronounced environmental gradient. Our results showed a marked decline in species and functional richness, largely explained by decreasing salinities. In addition, habitat complexity...