WorldWideScience

Sample records for large slow processes

  1. Revealing the cluster of slow transients behind a large slow slip event.

    Science.gov (United States)

    Frank, William B; Rousset, Baptiste; Lasserre, Cécile; Campillo, Michel

    2018-05-01

    Capable of reaching magnitudes similar to large megathrust earthquakes [Mw (moment magnitude) > 7], slow slip events play a major role in accommodating tectonic motion on plate boundaries through predominantly aseismic rupture. We demonstrate here that large slow slip events are a cluster of short-duration slow transients. Using a dense catalog of low-frequency earthquakes as a guide, we investigate the Mw 7.5 slow slip event that occurred in 2006 along the subduction interface 40 km beneath Guerrero, Mexico. We show that while the long-period surface displacement, as recorded by the Global Positioning System, suggests a 6-month duration, the motion in the direction of tectonic release occurs only sporadically over 55 days, and its surface signature is attenuated by rapid relocking of the plate interface. Our proposed description of slow slip as a cluster of slow transients forces us to re-evaluate our understanding of the physics and scaling of slow earthquakes.
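
The Mw values quoted in this record follow the standard moment-magnitude relation Mw = (2/3)(log10 M0 − 9.1), with the scalar seismic moment M0 in N·m. A minimal sketch (the function names are illustrative, not from the paper):

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude Mw from the scalar seismic moment M0 in N*m,
    via the standard relation Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

def seismic_moment(mw):
    """Inverse relation: scalar moment M0 in N*m for a given Mw."""
    return 10.0 ** (1.5 * mw + 9.1)

# The Guerrero event discussed above was Mw 7.5:
m0 = seismic_moment(7.5)
print(f"Mw 7.5 corresponds to M0 = {m0:.2e} N*m")  # ~2.24e+20 N*m
```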

  2. Frequency response of slow beam extraction process

    International Nuclear Information System (INIS)

    Toyama, Takeshi; Sato, Hikaru; Marutsuka, Katsumi; Shirakata, Masashi.

    1994-01-01

    A servo control system has been incorporated into the practical slow extraction system in order to stabilize the spill structure at frequencies below a few kHz. Frequency responses of the components of the servo-spill control system and the open-loop frequency response were measured. The beam transfer function of the slow extraction process was derived from the measured data and approximated by a simple function, which is utilized to improve the performance of the servo loop. (author)

  3. Fast and slow spindles during the sleep slow oscillation: disparate coalescence and engagement in memory processing.

    Science.gov (United States)

    Mölle, Matthias; Bergmann, Til O; Marshall, Lisa; Born, Jan

    2011-10-01

    Thalamo-cortical spindles driven by the up-state of neocortical slow (<1 Hz) oscillations (SOs) represent a candidate mechanism of memory consolidation during sleep. We examined interactions between SOs and spindles in human slow wave sleep, focusing on the presumed existence of 2 kinds of spindles, i.e., slow frontocortical and fast centro-parietal spindles. Two experiments were performed in healthy humans (24.5 ± 0.9 y) investigating undisturbed sleep (Experiment I) and the effects of prior learning (word paired associates) vs. non-learning (Experiment II) on multichannel EEG recordings during sleep. Only fast spindles (12-15 Hz) were synchronized to the depolarizing SO up-state. Slow spindles (9-12 Hz) occurred preferentially at the transition into the SO down-state, i.e., during waning depolarization. Slow spindles also revealed a higher probability to follow rather than precede fast spindles. For sequences of individual SOs, fast spindle activity was largest for "initial" SOs, whereas SO amplitude and slow spindle activity were largest for succeeding SOs. Prior learning enhanced this pattern. The finding that fast and slow spindles occur at different times of the SO cycle points to disparate generating mechanisms for the 2 kinds of spindles. The reported temporal relationships during SO sequences suggest that fast spindles, driven by the SO up-state, feed back to enhance the likelihood of succeeding SOs together with slow spindles. By enforcing such SO-spindle cycles, particularly after prior learning, fast spindles possibly play a key role in sleep-dependent memory processing.

  4. Distinguishing Fast and Slow Processes in Accuracy - Response Time Data.

    Directory of Open Access Journals (Sweden)

    Frederik Coomans

    Full Text Available We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two 'one-process' models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a 'two-process' model which models accuracy contingent on response time. We find that the data clearly violate the restrictions imposed by both one-process models and require additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses.
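
The contrast between the 'one-process' and 'two-process' accounts can be made concrete with a toy generator in which accuracy is contingent on response speed, as in the two-process model. All parameter values below are illustrative assumptions, not estimates from the study:

```python
import random

def simulate_two_process(n_trials, p_fast=0.3, p_guess=0.5, p_solve=0.9, seed=1):
    """Toy 'two-process' generator: with probability p_fast the response is
    fast and a guess (accuracy p_guess); otherwise it is slow and reflects
    actual solving (accuracy p_solve). Returns (fast, correct) pairs coded 0/1."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_trials):
        fast = rng.random() < p_fast
        correct = rng.random() < (p_guess if fast else p_solve)
        data.append((int(fast), int(correct)))
    return data

data = simulate_two_process(100_000)
acc_fast = sum(c for f, c in data if f) / sum(f for f, _ in data)
acc_slow = sum(c for f, c in data if not f) / sum(1 - f for f, _ in data)
# Accuracy differs sharply by speed category, violating the conditional
# independence assumed by the simplest one-process model.
print(f"accuracy | fast: {acc_fast:.2f}, accuracy | slow: {acc_slow:.2f}")
```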

  5. Delineating psychomotor slowing from reduced processing speed in schizophrenia

    NARCIS (Netherlands)

    Morrens, M.; Hulstijn, W.; Matton, C.; Madani, Y.; Bouwel, L. van; Peuskens, J.; Sabbe, B.G.C.

    2008-01-01

    Introduction. Psychomotor slowing is an intrinsic feature of schizophrenia that is poorly delineated from generally reduced processing speed. Although the Symbol Digit Substitution Test (SDST) is widely used to assess psychomotor speed, the task also taps several higher-order cognitive processes.

  6. Study of Dynamic Characteristics of Slow-Changing Process

    Directory of Open Access Journals (Sweden)

    Yinong Li

    2000-01-01

    Full Text Available A vibration system with slow-changing parameters is a typical nonlinear system. Such systems often occur in the working and controlled process of some intelligent structures when vibration and deformation exist synchronously. In this paper, a system with slow-changing stiffness, damping and mass is analyzed in an intelligent structure. The relationship between the amplitude and the frequency of the system is studied, and its dynamic characteristic is also discussed. Finally, a piecewise linear method is developed on the basis of the asymptotic method. The simulation and the experiment show that a suitable slow-changing stiffness can restrain the amplitude of the system when the system passes through the resonant region.
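
The resonance-passage behaviour described in this record can be reproduced by directly integrating a damped, forced oscillator whose stiffness decreases slowly through the resonant value. The parameter values below are illustrative assumptions, not taken from the paper:

```python
import math

def sweep_response(k0=4.0, k1=0.25, c=0.1, f0=1.0, w=1.0, t_end=200.0, dt=0.001):
    """Integrate x'' + c x' + k(t) x = f0*cos(w t) with a stiffness k(t)
    decreasing slowly and linearly from k0 to k1, so the natural frequency
    sqrt(k(t)) sweeps through the drive frequency w. Semi-implicit Euler.
    Returns the peak |x| and the time at which it occurs."""
    x, v = 0.0, 0.0
    peak, t_peak = 0.0, 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        k = k0 + (k1 - k0) * t / t_end      # slow linear stiffness change
        v += dt * (f0 * math.cos(w * t) - c * v - k * x)
        x += dt * v
        if abs(x) > peak:
            peak, t_peak = abs(x), t
    return peak, t_peak

peak, t_peak = sweep_response()
# Resonance k(t) = w^2 = 1 is crossed at t = 160; the response peaks near
# this passage and stays small far from it.
print(f"peak |x| = {peak:.2f} near t = {t_peak:.0f}")
```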

  7. Large deviations in the presence of cooperativity and slow dynamics

    Science.gov (United States)

    Whitelam, Stephen

    2018-06-01

    We study simple models of intermittency, involving switching between two states, within the dynamical large-deviation formalism. Singularities appear in the formalism when switching is cooperative or when its basic time scale diverges. In the first case the unbiased trajectory distribution undergoes a symmetry breaking, leading to a change in shape of the large-deviation rate function for a particular dynamical observable. In the second case the symmetry of the unbiased trajectory distribution remains unbroken. Comparison of these models suggests that singularities of the dynamical large-deviation formalism can signal the dynamical equivalent of an equilibrium phase transition but do not necessarily do so.
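
For a concrete instance of the formalism, the scaled cumulant generating function (SCGF) of a time-additive observable of a two-state Markov chain can be read off from the largest eigenvalue of a tilted transition matrix. The sketch below uses a simple symmetric chain (non-cooperative, so no singularity arises) with an illustrative switching probability:

```python
import math

def scgf_occupation(s, p=0.1):
    """SCGF lambda(s) for the fraction of time a symmetric two-state Markov
    chain (switch probability p per step) spends in state 0. The observable
    counts steps spent in state 0, so the rows of the transition matrix that
    leave state 0 are tilted by e^s: M = [[(1-p) e^s, p e^s], [p, 1-p]].
    lambda(s) is the log of the largest eigenvalue of M."""
    a, b = (1 - p) * math.exp(s), p * math.exp(s)
    c, d = p, 1 - p
    tr, det = a + d, a * d - b * c
    lam_max = 0.5 * (tr + math.sqrt(tr * tr - 4 * det))
    return math.log(lam_max)

# Sanity checks: lambda(0) = 0, and by symmetry the mean occupation of
# state 0 is 1/2, so lambda'(0) = 0.5 (estimated by a central difference).
h = 1e-6
mean_occ = (scgf_occupation(h) - scgf_occupation(-h)) / (2 * h)
print(f"lambda(0) = {scgf_occupation(0.0):.3e}, lambda'(0) = {mean_occ:.3f}")
```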

  8. Large forging manufacturing process

    Science.gov (United States)

    Thamboo, Samuel V.; Yang, Ling

    2002-01-01

    A process for forging large components of Alloy 718 material so that the components do not exhibit abnormal grain growth includes the steps of: a) providing a billet with an average grain size between ASTM 0 and ASTM 3; b) heating the billet to a temperature of between 1750.degree. F. and 1800.degree. F.; c) upsetting the billet to obtain a component part with a minimum strain of 0.125 in at least selected areas of the part; d) reheating the component part to a temperature between 1750.degree. F. and 1800.degree. F.; e) upsetting the component part to a final configuration such that said selected areas receive no strains between 0.01 and 0.125; f) solution treating the component part at a temperature of between 1725.degree. F. and 1750.degree. F.; and g) aging the component part over predetermined times at different temperatures. A modified process achieves abnormal grain growth in selected areas of a component where desirable.

  9. Topography of Slow Sigma Power during Sleep is Associated with Processing Speed in Preschool Children.

    Science.gov (United States)

    Doucette, Margaret R; Kurth, Salome; Chevalier, Nicolas; Munakata, Yuko; LeBourgeois, Monique K

    2015-11-04

    Cognitive development is influenced by maturational changes in processing speed, a construct reflecting the rapidity of executing cognitive operations. Although cognitive ability and processing speed are linked to spindles and sigma power in the sleep electroencephalogram (EEG), little is known about such associations in early childhood, a time of major neuronal refinement. We calculated EEG power for slow (10-13 Hz) and fast (13.25-17 Hz) sigma power from all-night high-density electroencephalography (EEG) in a cross-sectional sample of healthy preschool children (n = 10, 4.3 ± 1.0 years). Processing speed was assessed as simple reaction time. On average, reaction time was 1409 ± 251 ms; slow sigma power was 4.0 ± 1.5 μV²; and fast sigma power was 0.9 ± 0.2 μV². Both slow and fast sigma power predominated over central areas. Only slow sigma power was correlated with processing speed in a large parietal electrode cluster (p < 0.05, r ranging from −0.6 to −0.8), such that greater power predicted faster reaction time. Our findings indicate regional correlates between sigma power and processing speed that are specific to early childhood and provide novel insights into the neurobiological features of the EEG that may underlie developing cognitive abilities.

  10. Topography of Slow Sigma Power during Sleep is Associated with Processing Speed in Preschool Children

    Directory of Open Access Journals (Sweden)

    Margaret R. Doucette

    2015-11-01

    Full Text Available Cognitive development is influenced by maturational changes in processing speed, a construct reflecting the rapidity of executing cognitive operations. Although cognitive ability and processing speed are linked to spindles and sigma power in the sleep electroencephalogram (EEG), little is known about such associations in early childhood, a time of major neuronal refinement. We calculated EEG power for slow (10–13 Hz) and fast (13.25–17 Hz) sigma power from all-night high-density electroencephalography (EEG) in a cross-sectional sample of healthy preschool children (n = 10, 4.3 ± 1.0 years). Processing speed was assessed as simple reaction time. On average, reaction time was 1409 ± 251 ms; slow sigma power was 4.0 ± 1.5 μV²; and fast sigma power was 0.9 ± 0.2 μV². Both slow and fast sigma power predominated over central areas. Only slow sigma power was correlated with processing speed in a large parietal electrode cluster (p < 0.05, r ranging from −0.6 to −0.8), such that greater power predicted faster reaction time. Our findings indicate regional correlates between sigma power and processing speed that are specific to early childhood and provide novel insights into the neurobiological features of the EEG that may underlie developing cognitive abilities.
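
The slow- and fast-sigma measures used in these two records are band-limited EEG power estimates. A minimal sketch of such a band-power computation, using a single Hann-windowed periodogram (the study's exact spectral pipeline is not specified here):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Average power of `signal` (1-D, sampling rate fs in Hz) inside the
    band [f_lo, f_hi] Hz, from a Hann-windowed one-sided periodogram."""
    x = np.asarray(signal, dtype=float)
    w = np.hanning(x.size)
    psd = np.abs(np.fft.rfft((x - x.mean()) * w)) ** 2 / (fs * (w ** 2).sum())
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    df = freqs[1] - freqs[0]
    return 2.0 * psd[mask].sum() * df        # factor 2: one-sided spectrum

# Synthetic check: an 11 Hz tone should show up in the slow-sigma band
# (10-13 Hz) but not in the fast band (13.25-17 Hz) used in the study.
fs = 250.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 11.0 * t) + 0.1 * rng.standard_normal(t.size)
slow = band_power(eeg, fs, 10.0, 13.0)
fast = band_power(eeg, fs, 13.25, 17.0)
print(f"slow sigma: {slow:.3f}, fast sigma: {fast:.4f}")
```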

  11. The space charge effects on the slow extraction process

    International Nuclear Information System (INIS)

    Ohmori, Chihiro.

    1992-06-01

    A calculation of slow extraction including space charge effects has been performed for the Compressor/Stretcher Ring (CSR) of the proposed Japanese Hadron Project. We have investigated the slow extraction of a 1 GeV proton beam with an average current of 100 μA. The calculation shows not only emittance growth of the extracted beam but also a decrease in the extraction efficiency and discontinuity of the beam spill. (author)

  12. Molecular dynamics simulations of sputtering of organic overlayers by slow, large clusters

    International Nuclear Information System (INIS)

    Rzeznik, L.; Czerwinski, B.; Garrison, B.J.; Winograd, N.; Postawa, Z.

    2008-01-01

    The ion-stimulated desorption of organic molecules by impact of large and slow clusters is examined using molecular dynamics (MD) computer simulations. The investigated system, represented by a monolayer of benzene deposited on Ag{111}, is irradiated with projectiles composed of thousands of noble gas atoms having a kinetic energy of 0.1-20 eV/atom. The sputtering yield of molecular species and the kinetic energy distributions are analyzed and compared to the results obtained for a PS4 overlayer. The simulations demonstrate quite clearly that the physics of ejection by large and slow clusters is distinct from the ejection events stimulated by the popular SIMS clusters, like C60, Au3 and SF5, at tens of keV energies.

  13. Optical signal processing using slow and fast light technologies

    DEFF Research Database (Denmark)

    Capmany, J.; Sales, Salvador; Xue, Weiqi

    2009-01-01

    We review the theory of slow and fast light effects due to coherent population oscillations in semiconductor waveguides, which can potentially be applied in microwave photonic systems as RF phase shifters. In order to satisfy the application requirement of 360 degrees RF phase shift at different…

  14. Simulations to investigate the effect of circuit imperfections on the slow extraction process

    CERN Document Server

    Van De Pontseele, Wouter

    2016-01-01

    The Super Proton Synchrotron (SPS) is a 400 GeV proton accelerator at CERN. Its main purpose is the last step in pre-acceleration of particles before injecting them into the Large Hadron Collider. Besides that, it is used for experiments directly connected to the SPS. For these fixed target experiments, a steady slow extraction of the SPS beam is necessary over thousands of turns. The SPS slow extraction process makes use of the third-integer resonance in combination with an electrostatic septum. The stability of the extracted intensity and the losses at the septum are influenced by the imperfections of the current circuit that drives the SPS quadrupoles. This report summarizes the studies that were carried out with MAD-X to simulate these imperfections.

  15. Identification of low order models for large scale processes

    NARCIS (Netherlands)

    Wattamwar, S.K.

    2010-01-01

    Many industrial chemical processes are complex, multi-phase and large scale in nature. These processes are characterized by various nonlinear physiochemical effects and fluid flows. Such processes often show coexistence of fast and slow dynamics during their time evolutions. The increasing demand

  16. Explicit and Implicit Processes Constitute the Fast and Slow Processes of Sensorimotor Learning.

    Science.gov (United States)

    McDougle, Samuel D; Bond, Krista M; Taylor, Jordan A

    2015-07-01

    A popular model of human sensorimotor learning suggests that a fast process and a slow process work in parallel to produce the canonical learning curve (Smith et al., 2006). Recent evidence supports the subdivision of sensorimotor learning into explicit and implicit processes that simultaneously subserve task performance (Taylor et al., 2014). We set out to test whether these two accounts of learning processes are homologous. Using a recently developed method to assay explicit and implicit learning directly in a sensorimotor task, along with a computational modeling analysis, we show that the fast process closely resembles explicit learning and the slow process approximates implicit learning. In addition, we provide evidence for a subdivision of the slow/implicit process into distinct manifestations of motor memory. We conclude that the two-state model of motor learning is a close approximation of sensorimotor learning, but it is unable to describe adequately the various implicit learning operations that forge the learning curve. Our results suggest that a wider net be cast in the search for the putative psychological mechanisms and neural substrates underlying the multiplicity of processes involved in motor learning. Copyright © 2015 the authors 0270-6474/15/359568-12$15.00/0.

  17. Dynamically slow processes in supercooled water confined between hydrophobic plates

    Energy Technology Data Exchange (ETDEWEB)

    Franzese, Giancarlo [Departamento de Fisica Fundamental, Universidad de Barcelona, Diagonal 647, Barcelona 08028 (Spain); Santos, Francisco de los, E-mail: gfranzese@ub.ed, E-mail: fdlsant@ugr.e [Departamento de Electromagnetismo y Fisica de la Materia, Universidad de Granada, Fuentenueva s/n, 18071 Granada (Spain)

    2009-12-16

    We study the dynamics of water confined between hydrophobic flat surfaces at low temperature. At different pressures, we observe different behaviors that we understand in terms of the hydrogen bond dynamics. At high pressure, the formation of the open structure of the hydrogen bond network is inhibited and the surfaces can be rapidly dried (dewetted) by formation of a large cavity with decreasing temperature. At lower pressure we observe strong non-exponential behavior of the correlation function, but with no strong increase of the correlation time. This behavior can be associated, on the one hand, to the rapid ordering of the hydrogen bonds that generates heterogeneities and, on the other hand, to the lack of a single timescale as a consequence of the cooperativity in the vicinity of the liquid-liquid critical point that characterizes the phase diagram at low temperature of the water model considered here. At very low pressures, the gradual formation of the hydrogen bond network is responsible for the large increase of the correlation time and, eventually, the dynamical arrest of the system, with a strikingly different dewetting process, characterized by the formation of many small cavities.

  18. Dynamically slow processes in supercooled water confined between hydrophobic plates

    International Nuclear Information System (INIS)

    Franzese, Giancarlo; Santos, Francisco de los

    2009-01-01

    We study the dynamics of water confined between hydrophobic flat surfaces at low temperature. At different pressures, we observe different behaviors that we understand in terms of the hydrogen bond dynamics. At high pressure, the formation of the open structure of the hydrogen bond network is inhibited and the surfaces can be rapidly dried (dewetted) by formation of a large cavity with decreasing temperature. At lower pressure we observe strong non-exponential behavior of the correlation function, but with no strong increase of the correlation time. This behavior can be associated, on the one hand, to the rapid ordering of the hydrogen bonds that generates heterogeneities and, on the other hand, to the lack of a single timescale as a consequence of the cooperativity in the vicinity of the liquid-liquid critical point that characterizes the phase diagram at low temperature of the water model considered here. At very low pressures, the gradual formation of the hydrogen bond network is responsible for the large increase of the correlation time and, eventually, the dynamical arrest of the system, with a strikingly different dewetting process, characterized by the formation of many small cavities.

  19. Sound asleep: Processing and retention of slow oscillation phase-targeted stimuli

    NARCIS (Netherlands)

    Cox, R.; Korjoukov, I.; de Boer, M.; Talamini, L.M.

    2014-01-01

    The sleeping brain retains some residual information processing capacity. Although direct evidence is scarce, a substantial literature suggests the phase of slow oscillations during deep sleep to be an important determinant for stimulus processing. Here, we introduce an algorithm for predicting slow oscillations in real-time.

  20. Large transverse momentum hadronic processes

    International Nuclear Information System (INIS)

    Darriulat, P.

    1977-01-01

    The possible relations between deep inelastic leptoproduction and large transverse momentum (p_t) processes in hadronic collisions are usually considered in the framework of the quark-parton picture. Experiments observing the structure of the final state in proton-proton collisions producing at least one large transverse momentum particle have led to the following conclusions: a large fraction of the produced particles are unaffected by the large p_t process. The other products are correlated to the large p_t particle. Depending upon the sign of the scalar product, they can be separated into two groups of "towards-movers" and "away-movers". The experimental evidence favouring such a picture is reviewed, and the properties of each of the three groups (underlying normal event, towards-movers, and away-movers) are discussed. Some phenomenological interpretations are presented. The exact nature of away- and towards-movers must be further investigated. Their apparent jet structure has to be confirmed. Angular correlations between leading away- and towards-movers are very informative. Quantum number flow, both within the set of away- and towards-movers, and between it and the underlying normal event, is predicted to behave very differently in different models.

  1. Slowed EEG rhythmicity in patients with chronic pancreatitis: evidence of abnormal cerebral pain processing?

    DEFF Research Database (Denmark)

    Olesen, Søren Schou; Hansen, Tine Maria; Gravesen, Carina

    2011-01-01

    Intractable pain usually dominates the clinical presentation of chronic pancreatitis (CP). Slowing of electroencephalogram (EEG) rhythmicity has been associated with abnormal cortical pain processing in other chronic pain disorders. The aim of this study was to investigate the spectral distribution...

  2. Proprioceptive deafferentation slows down the processing of visual hand feedback

    DEFF Research Database (Denmark)

    Balslev, Daniela; Miall, R Chris; Cole, Jonathan

    2007-01-01

    During visually guided movements both vision and proprioception inform the brain about the position of the hand, so interaction between these two modalities is presumed. Current theories suggest that this interaction occurs by sensory information from both sources being fused into a more reliable, multimodal, percept of hand location. In the literature on perception, however, there is evidence that different sensory modalities interact in the allocation of attention, so that a stimulus in one modality facilitates the processing of a stimulus in a different modality. We investigated whether proprioception facilitates the processing of visual information during motor control. Subjects used a computer mouse to move a cursor to a screen target. In 28% of the trials, pseudorandomly, the cursor was rotated or the target jumped. Reaction time for the trajectory correction in response to this perturbation…

  3. Phase of Spontaneous Slow Oscillations during Sleep Influences Memory-Related Processing of Auditory Cues.

    Science.gov (United States)

    Batterink, Laura J; Creery, Jessica D; Paller, Ken A

    2016-01-27

    Slow oscillations during slow-wave sleep (SWS) may facilitate memory consolidation by regulating interactions between hippocampal and cortical networks. Slow oscillations appear as high-amplitude, synchronized EEG activity, corresponding to upstates of neuronal depolarization and downstates of hyperpolarization. Memory reactivations occur spontaneously during SWS, and can also be induced by presenting learning-related cues associated with a prior learning episode during sleep. This technique, targeted memory reactivation (TMR), selectively enhances memory consolidation. Given that memory reactivation is thought to occur preferentially during the slow-oscillation upstate, we hypothesized that TMR stimulation effects would depend on the phase of the slow oscillation. Participants learned arbitrary spatial locations for objects that were each paired with a characteristic sound (e.g., cat-meow). Then, during SWS periods of an afternoon nap, one-half of the sounds were presented at low intensity. When object location memory was subsequently tested, recall accuracy was significantly better for those objects cued during sleep. We report here for the first time that this memory benefit was predicted by slow-wave phase at the time of stimulation. For cued objects, location memories were categorized according to amount of forgetting from pre- to post-nap. Conditions of high versus low forgetting corresponded to stimulation timing at different slow-oscillation phases, suggesting that learning-related stimuli were more likely to be processed and trigger memory reactivation when they occurred at the optimal phase of a slow oscillation. These findings provide insight into mechanisms of memory reactivation during sleep, supporting the idea that reactivation is most likely during cortical upstates. Slow-wave sleep (SWS) is characterized by synchronized neural activity alternating between active upstates and quiet downstates. The slow-oscillation upstates are thought to provide a…
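
Estimating the instantaneous phase of the slow oscillation is central to such phase-targeted stimulation. The study's own real-time method is not reproduced here; the sketch below is the standard offline analytic-signal (Hilbert transform) approach, implemented with numpy's FFT:

```python
import numpy as np

def instantaneous_phase(x):
    """Instantaneous phase of a (band-limited) signal via the analytic
    signal, computed with an FFT-based Hilbert transform (offline; a
    real-time system would instead have to predict the phase)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)                  # spectral mask for the analytic signal
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spec * h)
    return np.angle(analytic)

# A pure ~0.8 Hz "slow oscillation": cos(w t) has phase 0 at its peaks
# and +/- pi at its troughs.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
so = np.cos(2 * np.pi * 0.8 * t)
phase = instantaneous_phase(so)
peak_idx = np.argmax(so[100:400]) + 100      # a peak away from the edges
print(f"phase at a peak: {phase[peak_idx]:.2f} rad")
```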

  4. ASGARD: A LARGE SURVEY FOR SLOW GALACTIC RADIO TRANSIENTS. I. OVERVIEW AND FIRST RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Peter K. G.; Bower, Geoffrey C.; Croft, Steve; Keating, Garrett K.; Law, Casey J.; Wright, Melvyn C. H., E-mail: pwilliams@astro.berkeley.edu [Department of Astronomy, B-20 Hearst Field Annex 3411, University of California, Berkeley, CA 94720-3411 (United States)

    2013-01-10

    Searches for slow radio transients and variables have generally focused on extragalactic populations, and the basic parameters of Galactic populations remain poorly characterized. We present a large 3 GHz survey performed with the Allen Telescope Array (ATA) that aims to improve this situation: ASGARD, the ATA Survey of Galactic Radio Dynamism. ASGARD observations spanned two years with weekly visits to 23 deg² in two fields in the Galactic plane, totaling 900 hr of integration time on science fields and making it significantly larger than previous efforts. The typical blind unresolved source detection limit was 10 mJy. We describe the observations and data analysis techniques in detail, demonstrating our ability to create accurate wide-field images while effectively modeling and subtracting large-scale radio emission, allowing standard transient-and-variability analysis techniques to be used. We present early results from the analysis of two pointings: one centered on the microquasar Cygnus X-3 and one overlapping the Kepler field of view (l = 76°, b = +13.5°). Our results include images, catalog statistics, completeness functions, variability measurements, and a transient search. Out of 134 sources detected in these pointings, the only compellingly variable one is Cygnus X-3, and no transients are detected. We estimate number counts for potential Galactic radio transients and compare our current limits to previous work and our projection for the fully analyzed ASGARD data set.

  5. Timing of the Crab pulsar III. The slowing down and the nature of the random process

    International Nuclear Information System (INIS)

    Groth, E.J.

    1975-01-01

    The Crab pulsar arrival times are analyzed. The data are found to be consistent with a smooth slowing down with a braking index of 2.515 ± 0.005. Superposed on the smooth slowdown is a random process which has the same second moments as a random walk in the frequency. The strength of the random process is R⟨ε²⟩ = 0.53 (+0.24, −0.12) × 10⁻²² Hz² s⁻¹, where R is the mean rate of steps and ⟨ε²⟩ is the second moment of the step amplitude distribution. Neither the braking index nor the strength of the random process shows evidence of statistically significant time variations, although small fluctuations in the braking index and rather large fluctuations in the noise strength cannot be ruled out. There is a possibility that the random process contains a small component with the same second moments as a random walk in the phase. If so, a time scale of 3.5 days is indicated
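
The braking index quoted above is the dimensionless combination n = ν ν̈ / ν̇², which equals the exponent in a power-law spindown ν̇ = −K νⁿ. A quick numerical illustration (the frequency values are approximate Crab-like numbers, not taken from the paper):

```python
def braking_index(nu, nu_dot, nu_ddot):
    """Braking index n = nu * nu_ddot / nu_dot**2, which recovers the
    exponent n of a power-law spindown nu_dot = -K * nu**n."""
    return nu * nu_ddot / nu_dot ** 2

# Illustrative Crab-like spin parameters: nu ~ 30 Hz, nu_dot ~ -3.86e-10
# Hz/s, with nu_ddot chosen here to reproduce the reported n = 2.515.
nu, nu_dot, n_true = 30.2, -3.86e-10, 2.515
nu_ddot = n_true * nu_dot ** 2 / nu
print(f"n = {braking_index(nu, nu_dot, nu_ddot):.3f}")
```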

  6. Sound asleep: processing and retention of slow oscillation phase-targeted stimuli.

    Science.gov (United States)

    Cox, Roy; Korjoukov, Ilia; de Boer, Marieke; Talamini, Lucia M

    2014-01-01

    The sleeping brain retains some residual information processing capacity. Although direct evidence is scarce, a substantial literature suggests the phase of slow oscillations during deep sleep to be an important determinant for stimulus processing. Here, we introduce an algorithm for predicting slow oscillations in real-time. Using this approach to present stimuli directed at both oscillatory up and down states, we show neural stimulus processing depends importantly on the slow oscillation phase. During ensuing wakefulness, however, we did not observe differential brain or behavioral responses to these stimulus categories, suggesting no enduring memories were formed. We speculate that while simpler forms of learning may occur during sleep, neocortically based memories are not readily established during deep sleep.

  7. Sound asleep: processing and retention of slow oscillation phase-targeted stimuli.

    Directory of Open Access Journals (Sweden)

    Roy Cox

    Full Text Available The sleeping brain retains some residual information processing capacity. Although direct evidence is scarce, a substantial literature suggests the phase of slow oscillations during deep sleep to be an important determinant for stimulus processing. Here, we introduce an algorithm for predicting slow oscillations in real-time. Using this approach to present stimuli directed at both oscillatory up and down states, we show neural stimulus processing depends importantly on the slow oscillation phase. During ensuing wakefulness, however, we did not observe differential brain or behavioral responses to these stimulus categories, suggesting no enduring memories were formed. We speculate that while simpler forms of learning may occur during sleep, neocortically based memories are not readily established during deep sleep.

  8. Transient crustal movement in the northern Izu-Bonin arc starting in 2004: A large slow slip event or a slow back-arc rifting event?

    Science.gov (United States)

    Arisa, Deasy; Heki, Kosuke

    2016-07-01

    The Izu-Bonin arc lies along the convergent boundary where the Pacific Plate subducts beneath the Philippine Sea Plate. Horizontal velocities of continuous Global Navigation Satellite System stations on the Izu Islands move eastward by up to 1 cm/year relative to the stable part of the Philippine Sea Plate, suggesting active back-arc rifting behind the northern part of the arc. Here, we report that such eastward movements transiently accelerated in the middle of 2004, resulting in 3 cm of extra movement in 3 years. We compare three different mechanisms possibly responsible for this transient movement, i.e. (1) postseismic movement of the September 2004 earthquake sequence off the Kii Peninsula far to the west, (2) a temporary activation of the back-arc rifting to the west dynamically triggered by seismic waves from a nearby earthquake, and (3) a large slow slip event in the Izu-Bonin Trench to the east. By comparing crustal movements in different regions, the first possibility can be shown to be unlikely. It is difficult to rule out the second possibility, but current evidence supports the third: a large slow slip event with a moment magnitude of 7.5 may have occurred there.

  9. Myelin Breakdown Mediates Age-Related Slowing in Cognitive Processing Speed in Healthy Elderly Men

    Science.gov (United States)

    Lu, Po H.; Lee, Grace J.; Tishler, Todd A.; Meghpara, Michael; Thompson, Paul M.; Bartzokis, George

    2013-01-01

    Background: To assess the hypothesis that in a sample of very healthy elderly men selected to minimize risk for Alzheimer's disease (AD) and cerebrovascular disease, myelin breakdown in late-myelinating regions mediates age-related slowing in cognitive processing speed (CPS). Materials and methods: The prefrontal lobe white matter and the genu of…

  10. Hydrothermal processes in the Edmond deposits, slow- to intermediate-spreading Central Indian Ridge

    Science.gov (United States)

    Cao, Hong; Sun, Zhilei; Zhai, Shikui; Cao, Zhimin; Jiang, Xuejun; Huang, Wei; Wang, Libo; Zhang, Xilin; He, Yongjun

    2018-04-01

    The Edmond hydrothermal field, located on the Central Indian Ridge (CIR), has a distinct mineralization history owing to its unique magmatic, tectonic, and alteration processes. Here, we report the detailed mineralogical and geochemical characteristics of hydrothermal metal sulfides recovered from this area. Based on the mineralogical investigations, the Edmond hydrothermal deposits comprise high-temperature Fe-rich massive sulfides, medium-temperature Zn-rich sulfide chimneys and low-temperature Ca-rich sulfate mineral assemblages. According to these compositions, three distinct mineralization stages have been identified: (1) a low-temperature stage consisting largely of anhydrite and pyrite/marcasite; (2) a medium- to high-temperature stage distinguished by the mineral assemblage of pyrite, sphalerite and chalcopyrite; and (3) a low-temperature stage characterized by the mineral assemblage of colloidal pyrite/marcasite, barite, quartz and anglesite. Several lines of evidence suggest that the sulfides were influenced by pervasive low-temperature diffuse flows in this area. The hydrothermal deposits are relatively enriched in Fe (5.99-18.93 wt%), Zn (2.10-10.00 wt%) and Ca (0.02-19.15 wt%), but display low Cu (0.28-0.81 wt%). The mineralogical variety and low metal content of the sulfides both indicate that extensive water circulation is prevalent below the Edmond hydrothermal field. With regard to trace elements, the contents of Pb, Ba, Sr, As, Au, Ag, and Cd are significantly higher than those in other sediment-starved mid-ocean ridges, which is indicative of a contribution from felsic rock sources. Furthermore, the multiphase hydrothermal activity and the pervasive water circulation underneath are speculated to play important roles in element remobilization and enrichment. Our findings deepen our understanding of the complex mineralization processes in slow- to intermediate-spreading ridges globally.

  11. Protective systems and its protective switching elements on local failures of large slow-capacitor bank system

    International Nuclear Information System (INIS)

    Hasegawa, Mitsuo; Inoue, Kunikazu; Ueno, Isao.

    1994-01-01

    In various applications of pulsed power technologies, large capacitor bank systems are used to feed high-current impulses to different experimental devices. An accidental electric breakdown in one of the capacitors in a parallel connection of a large bank may result in serious damage such as mechanical explosion, oil effusion, or fire. In most fast banks, each unit capacitor has an output gap switch, which is expected to decouple the capacitors from one another. However, no such special element is usually adopted in slow bank systems, partly for economic reasons. We have developed a novel and inexpensive protective element for these relatively slow capacitor banks, utilizing a concept from enclosed-type fast breakers. The principle of operation of the protection elements is verified by a simulation experiment. Their practical effectiveness is also successfully demonstrated in the application to the system of a pulsed high-magnetic-field generator. (author)

  12. Movement - uncontrolled or slow

    Science.gov (United States)

    Dystonia; Involuntary slow and twisting movements; Choreoathetosis; Leg and arm movements - uncontrollable; Arm and leg movements - uncontrollable; Slow involuntary movements of large muscle groups; Athetoid movements

  13. Efficient querying of large process model repositories

    NARCIS (Netherlands)

    Jin, Tao; Wang, Jianmin; La Rosa, M.; Hofstede, ter A.H.M.; Wen, Lijie

    2013-01-01

    Recent years have seen an increased uptake of business process management technology in industries. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business

  14. How Radiologists Think: Understanding Fast and Slow Thought Processing and How It Can Improve Our Teaching.

    Science.gov (United States)

    van der Gijp, Anouk; Webb, Emily M; Naeger, David M

    2017-06-01

    Scholars have identified two distinct ways of thinking. This "Dual Process Theory" distinguishes a fast, nonanalytical way of thinking, called "System 1," and a slow, analytical way of thinking, referred to as "System 2." In radiology, we use both methods when interpreting and reporting images, and both should ideally be emphasized when educating our trainees. This review provides practical tips for improving radiology education, by enhancing System 1 and System 2 thinking among our trainees. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  15. MMRW-BOOKS, Legacy books on slowing down, thermalization, particle transport theory, random processes in reactors

    International Nuclear Information System (INIS)

    Williams, M.M.R.

    2007-01-01

    Description: Prof. M.M.R. Williams has now released three of his legacy books for free distribution: 1 - M.M.R. Williams: The Slowing Down and Thermalization of Neutrons, North-Holland Publishing Company - Amsterdam, 582 pages, 1966. Content: Part I - The Thermal Energy Region: 1. Introduction and Historical Review, 2. The Scattering Kernel, 3. Neutron Thermalization in an Infinite Homogeneous Medium, 4. Neutron Thermalization in Finite Media, 5. The Spatial Dependence of the Energy Spectrum, 6. Reactor Cell Calculations, 7. Synthetic Scattering Kernels. Part II - The Slowing Down Region: 8. Scattering Kernels in the Slowing Down Region, 9. Neutron Slowing Down in an Infinite Homogeneous Medium, 10. Neutron Slowing Down and Diffusion. 2 - M.M.R. Williams: Mathematical Methods in Particle Transport Theory, Butterworths, London, 430 pages, 1971. Content: 1. The General Problem of Particle Transport, 2. The Boltzmann Equation for Gas Atoms and Neutrons, 3. Boundary Conditions, 4. Scattering Kernels, 5. Some Basic Problems in Neutron Transport and Rarefied Gas Dynamics, 6. The Integral Form of the Transport Equation in Plane, Spherical and Cylindrical Geometries, 7. Exact Solutions of Model Problems, 8. Eigenvalue Problems in Transport Theory, 9. Collision Probability Methods, 10. Variational Methods, 11. Polynomial Approximations. 3 - M.M.R. Williams: Random Processes in Nuclear Reactors, Pergamon Press Oxford New York Toronto Sydney, 243 pages, 1974. Content: 1. Historical Survey and General Discussion, 2. Introductory Mathematical Treatment, 3. Applications of the General Theory, 4. Practical Applications of the Probability Distribution, 5. The Langevin Technique, 6. Point Model Power Reactor Noise, 7. The Spatial Variation of Reactor Noise, 8. Random Phenomena in Heterogeneous Reactor Systems, 9. Associated Fluctuation Problems, Appendix: Noise Equivalent Sources. Note to the user: Prof. M.M.R. Williams owns the copyright of these books and he authorises the OECD/NEA Data Bank

  16. Large scale processing of dielectric electroactive polymers

    DEFF Research Database (Denmark)

    Vudayagiri, Sindhu

    Efficient processing techniques are vital to the success of any manufacturing industry. The processing techniques determine the quality of the products and thus to a large extent the performance and reliability of the products that are manufactured. The dielectric electroactive polymer (DEAP...

  17. Turbidity and plant growth in large slow-flowing lowland rivers: progress report March 1989

    OpenAIRE

    Marker, A.F.H.

    1989-01-01

    The River Great Ouse is a highly managed large lowland river in eastern England. It drains rich arable land in the Midlands and eastern England; over the years nutrient concentrations have increased, and there is a general perception that the clarity of the water has decreased. The main river channels have been dredged a number of times, partly for flood control but also for recreational boating and navigation. The period covered by this first report has been used to devel...

  18. Shoot, shovel and shut up: cryptic poaching slows restoration of a large carnivore in Europe.

    Science.gov (United States)

    Liberg, Olof; Chapron, Guillaume; Wabakken, Petter; Pedersen, Hans Christian; Hobbs, N Thompson; Sand, Håkan

    2012-03-07

    Poaching is a widespread and widely appreciated problem for the conservation of many threatened species. Because poaching is illegal, there is a strong incentive for poachers to conceal their activities, and consequently, little data on the effects of poaching on population dynamics are available. Quantifying poaching mortality should be required knowledge when developing conservation plans for endangered species but is hampered by methodological challenges. We show that rigorous estimates of the effects of poaching relative to other sources of mortality can be obtained with a hierarchical state-space model combined with multiple sources of data. Using the Scandinavian wolf (Canis lupus) population as an illustrative example, we show that poaching accounted for approximately half of total mortality and that more than two-thirds of total poaching remained undetected by conventional methods, a source of mortality we term 'cryptic poaching'. Our simulations suggest that without poaching during the past decade, the population would have been almost four times as large in 2009. Such a severe impact of poaching on population recovery may be widespread among large carnivores. We believe that conservation strategies for large carnivores considering only observed data may not be adequate and should be revised by including and quantifying cryptic poaching.

  19. Low-Altitude and Slow-Speed Small Target Detection Based on Spectrum Zoom Processing

    Directory of Open Access Journals (Sweden)

    Xuwang Zhang

    2018-01-01

    This paper proposes a spectrum zoom processing based target detection algorithm for detecting the weak echo of low-altitude and slow-speed small (LSS) targets in heavy ground clutter environments, which can be used to retrofit existing radar systems. Starting with the existing range-Doppler frequency images, the proposed method first concatenates the data from the same Doppler frequency slot of different images and then applies spectrum zoom processing. After performing clutter suppression, target detection can finally be implemented. Through theoretical analysis and real-data verification, it is shown that the proposed algorithm obtains a preferable spectrum zoom result and improves the signal-to-clutter ratio (SCR) with a very low computational load.
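The paper's exact zoom procedure is not reproduced in the abstract. The following toy sketch (all names, array shapes, and parameters are assumptions) illustrates the core idea it describes: take the complex value of one Doppler slot from each successive range-Doppler frame, concatenate those samples into a slow-time sequence, and FFT that sequence to resolve residual Doppler finer than a single frame can.

```python
import numpy as np

def zoom_doppler(rd_frames, doppler_bin, frame_rate):
    """Concatenate one Doppler bin across successive range-Doppler
    frames (shape: n_frames x n_doppler_bins) and FFT the slow-time
    sequence; the fine-frequency axis resolves residual Doppler
    inside that coarse bin. Illustrative sketch only."""
    seq = rd_frames[:, doppler_bin]           # one complex sample per frame
    spec = np.fft.fftshift(np.fft.fft(seq))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(seq), d=1.0 / frame_rate))
    return freqs, np.abs(spec)

# Toy demo: a target with 0.625 Hz residual Doppler, 64 frames at
# 10 frames/s. The fine bin spacing is 10/64 ~ 0.156 Hz.
n_frames, frame_rate = 64, 10.0
t = np.arange(n_frames) / frame_rate
rd_frames = np.zeros((n_frames, 8), dtype=complex)
rd_frames[:, 3] = np.exp(2j * np.pi * 0.625 * t)  # energy in coarse bin 3
freqs, mag = zoom_doppler(rd_frames, doppler_bin=3, frame_rate=frame_rate)
print(freqs[np.argmax(mag)])                      # peak at the 0.625 Hz residual
```

The resolution improvement scales with the number of concatenated frames, at the cost of requiring the target to stay in the same coarse Doppler slot over that interval.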

  20. Multi-temporal mapping of a large, slow-moving earth flow for kinematic interpretation

    Science.gov (United States)

    Guerriero, Luigi; Coe, Jeffrey A.; Revellino, Paola; Guadagno, Francesco M.

    2014-01-01

    Periodic movement of large, thick landslides on discrete basal surfaces produces modifications of the topographic surface, creates faults and folds, and influences the locations of springs, ponds, and streams (Baum et al., 1993; Coe et al., 2009). The geometry of the basal slip surface, which can be controlled by geological structures (e.g., fold axes, faults, etc.; Revellino et al., 2010; Grelle et al., 2011), and spatial variation in the rate of displacement are responsible for differential deformation and kinematic segmentation of the landslide body. Thus, large landslides are often composed of several distinct kinematic elements. Each element represents a discrete kinematic domain within the main landslide that is broadly characterized by stretching (extension) of the upper part of the landslide and shortening (compression) near the landslide toe (Baum and Fleming, 1991; Guerriero et al., in review). On the basis of this knowledge, we used photo-interpretive and GPS field mapping methods to map structures on the surface of the Montaguto earth flow in the Apennine Mountains of southern Italy at a scale of 1:6,000 (Guerriero et al., 2013a; Fig. 1). The earth flow has been periodically active since at least 1954. The most extensive and destructive period of activity began on April 26, 2006, when an estimated 6 million m³ of material mobilized, covering and closing Italian National Road SS90 and damaging residential structures (Guerriero et al., 2013b). Our maps show the distribution and evolution of normal faults, thrust faults, strike-slip faults, flank ridges, and hydrological features at nine different dates (October 1954; June 1976; June 1991; June 2003; June 2005; May 2006; October 2007; July 2009; and March 2010) between 1954 and 2010. Within the earth flow we recognized several kinematic elements and associated structures (Fig. 2a). Within each kinematic element (e.g., the earth flow neck; Fig. 2b), the flow velocity was highest in the middle, and

  1. World's gas processing growth slows; U.S., Canada retain greatest share

    International Nuclear Information System (INIS)

    True, W.R.

    1994-01-01

    Growth in the world's natural-gas processing industry slowed somewhat in 1993 after strong expansion a year earlier. In 1993, slower growth was more evenly distributed among the world's regions than in 1992 with the US and Canada adding capacity along with the Middle East and Asia-Pacific. The US and Canada continue to lead the world in capacity with more than 107 bcfd; in throughput with almost 76 bcfd; and in production with nearly 115 million gpd (2.7 million b/d). The two countries also continued to lead the world in petroleum-derived sulfur production with more than 54% of the world's capacity and production last year. The paper discusses industry trends; the picture in the US; activities in Texas, Louisiana, Alaska, Alabama, Colorado, Kansas, and Oklahoma; new capacity worldwide; expansion plans in North America; and sulfur recovery

  2. One- and two-electron processes in collisions between hydrogen molecules and slow highly charged ions

    International Nuclear Information System (INIS)

    Wells, E.; Carnes, K.D.; Tawara, H.; Ali, R.; Sidky, Emil Y.; Illescas, Clara; Ben-Itzhak, I.

    2005-01-01

    A coincidence time-of-flight technique coupled with projectile charge state analysis was used to study electron capture in collisions between slow highly charged ions and hydrogen molecules. We found single electron capture with no target excitation to be the dominant process for both C6+ projectiles at a velocity of 0.8 atomic units and Ar11+ projectiles at v = 0.63 a.u. Double electron capture and transfer excitation, however, were found to be comparable and occur about 30% of the time relative to single capture. Most projectiles (96%) auto-ionize quickly following double capture into doubly excited states. The data are compared to classical and quantum mechanical model calculations

  3. Storage process of large solid radioactive wastes

    International Nuclear Information System (INIS)

    Morin, Bruno; Thiery, Daniel.

    1976-01-01

    Process for the storage of large size solid radioactive waste, consisting of contaminated objects such as cartridge filters, metal swarf, tools, etc, whereby such waste is incorporated in a thermohardening resin at room temperature, after prior addition of at least one inert charge to the resin. Cross-linking of the resin is then brought about [fr

  4. Enhanced IMC design of load disturbance rejection for integrating and unstable processes with slow dynamics.

    Science.gov (United States)

    Liu, Tao; Gao, Furong

    2011-04-01

    In view of the deficiencies in existing internal model control (IMC)-based methods for load disturbance rejection for integrating and unstable processes with slow dynamics, a modified IMC-based controller design is proposed to deal with step- or ramp-type load disturbance that is often encountered in engineering practice. By classifying the ways through which such load disturbance enters into the process, analytical controller formulae are correspondingly developed, based on a two-degree-of-freedom (2DOF) control structure that allows for separate optimization of load disturbance rejection from setpoint tracking. An obvious merit is that there is only a single adjustable parameter in the proposed controller, which in essence corresponds to the time constant of the closed-loop transfer function for load disturbance rejection, and can be monotonically tuned to meet a good trade-off between disturbance rejection performance and closed-loop robust stability. At the same time, robust tuning constraints are given to accommodate process uncertainties in practice. Illustrative examples from the recent literature are used to show the effectiveness and merits of the proposed method for different cases of load disturbance. Copyright © 2010. Published by Elsevier Ltd.

  5. The Very Large Array Data Processing Pipeline

    Science.gov (United States)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/submillimeter Array (ALMA) for both interferometric and single-dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface for verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven-year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500-hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science-ready data products. The pipeline is developed as part of the CASA software package by an

  6. Reassessing the 2006 Guerrero slow-slip event, Mexico : Implications for large earthquakes in the Guerrero Gap

    NARCIS (Netherlands)

    Bekaert, D.P.S.; Hooper, A.; Wright, T.J.

    2015-01-01

    In Guerrero, Mexico, slow-slip events have been observed in a seismic gap, where no earthquakes have occurred since 1911. A rupture of the entire gap today could result in a Mw 8.2–8.4 earthquake. However, it remains unclear how slow-slip events change the stress field in the Guerrero seismic region

  7. Probing the positron moderation process using high-intensity, highly polarized slow-positron beams

    Science.gov (United States)

    Van House, J.; Zitzewitz, P. W.

    1984-01-01

    A highly polarized (P = 0.48 ± 0.02) intense (500,000/sec) beam of 'slow' (ΔE ≈ 2 eV) positrons (e+) is generated, and it is shown that it is possible to achieve polarization as high as P = 0.69 ± 0.04 with reduced intensity. The measured polarization of the slow e+ emitted by five different positron moderators showed no dependence on the moderator atomic number (Z). It is concluded that only source positrons with final kinetic energy below 17 keV contribute to the slow-e+ beam, in disagreement with recent yield functions derived from low-energy measurements. Measurements of polarization and yield with absorbers of different Z between the source and moderator show the effects of the energy and angular distributions of the source positrons on P. The depolarization of fast e+ transmitted through high-Z absorbers has been measured. Applications of polarized slow-e+ beams are discussed.

  8. Processing and properties of large-sized ceramic slabs

    Energy Technology Data Exchange (ETDEWEB)

    Raimondo, M.; Dondi, M.; Zanelli, C.; Guarini, G.; Gozzi, A.; Marani, F.; Fossa, L.

    2010-07-01

    Large-sized ceramic slabs with dimensions up to 360×120 cm² and thickness down to 2 mm are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability in a rather wide window of firing schedules. The phase composition and compact microstructure of fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performances as well as mechanical and tribological properties conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestows on the slab a certain degree of flexibility, which is emphasized in ceramic-fiberglass composites. These outstanding performances make the large-sized slabs suitable for novel applications: building and construction (new floorings without dismantling the previous paving, ventilated facades, tunnel coverings, insulating panelling), indoor furniture (table tops, doors), and support for photovoltaic ceramic panels. (Author) 24 refs.

  9. Processing and properties of large-sized ceramic slabs

    International Nuclear Information System (INIS)

    Raimondo, M.; Dondi, M.; Zanelli, C.; Guarini, G.; Gozzi, A.; Marani, F.; Fossa, L.

    2010-01-01

    Large-sized ceramic slabs with dimensions up to 360×120 cm² and thickness down to 2 mm are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability in a rather wide window of firing schedules. The phase composition and compact microstructure of fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performances as well as mechanical and tribological properties conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestows on the slab a certain degree of flexibility, which is emphasized in ceramic-fiberglass composites. These outstanding performances make the large-sized slabs suitable for novel applications: building and construction (new floorings without dismantling the previous paving, ventilated facades, tunnel coverings, insulating panelling), indoor furniture (table tops, doors), and support for photovoltaic ceramic panels. (Author) 24 refs.

  10. Indices of slowness of information processing in head injury patients : Tests for selective attention related to ERP latencies

    NARCIS (Netherlands)

    Spikman, Jacoba M.; Naalt, van der Joukje; Weerden , van Tiemen; Zomeren , van Adriaan H.

    2004-01-01

    We explored the relation between neuropsychological (attention tests involving time constraints) and neurophysiological (N2 and P3 event-related potential (ERP) latencies) indices of slowness of information processing after closed head injury (CHI). A group of 44 CHI patients performed worse than

  11. Human myosin VIIa is a very slow processive motor protein on various cellular actin structures.

    Science.gov (United States)

    Sato, Osamu; Komatsu, Satoshi; Sakai, Tsuyoshi; Tsukasaki, Yoshikazu; Tanaka, Ryosuke; Mizutani, Takeomi; Watanabe, Tomonobu M; Ikebe, Reiko; Ikebe, Mitsuo

    2017-06-30

    Human myosin VIIa (MYO7A) is an actin-linked motor protein associated with human Usher syndrome (USH) type 1B, which causes human congenital hearing and visual loss. Although it has been thought that the role of human myosin VIIa is critical for USH1 protein tethering with actin and transportation along actin bundles in inner-ear hair cells, myosin VIIa's motor function remains unclear. Here, we studied the motor function of the tail-truncated human myosin VIIa dimer (HM7AΔTail/LZ) at the single-molecule level. We found that the HM7AΔTail/LZ moves processively on single actin filaments with a step size of 35 nm. Dwell-time distribution analysis indicated an average waiting time of 3.4 s, yielding ∼0.3 s⁻¹ for the mechanical turnover rate; hence, the velocity of HM7AΔTail/LZ was extremely slow, at 11 nm·s⁻¹. We also examined HM7AΔTail/LZ movement on various actin structures in demembranated cells. HM7AΔTail/LZ showed unidirectional movement on actin structures at cell edges, such as lamellipodia and filopodia. However, HM7AΔTail/LZ frequently missed steps on actin tracks and exhibited bidirectional movement at stress fibers, which was not observed with tail-truncated myosin Va. These results suggest that the movement of the human myosin VIIa motor protein is more efficient on lamellipodial and filopodial actin tracks than on stress fibers, which are composed of actin filaments with different polarity, and that the actin structures influence the characteristics of cargo transportation by human myosin VIIa. In conclusion, myosin VIIa movement appears to be suitable for translocating USH1 proteins on stereocilia actin bundles in inner-ear hair cells. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
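The reported figures are mutually consistent: one 35 nm step per ~3.4 s dwell gives the quoted ~0.3 s⁻¹ turnover rate, and step size times rate gives the ~11 nm·s⁻¹ velocity (11 follows from rounding the rate up to 0.3 s⁻¹). A quick arithmetic check:

```python
step_nm = 35.0          # observed step size (nm)
dwell_s = 3.4           # average dwell time between steps (s)

turnover = 1.0 / dwell_s        # mechanical turnover rate (s^-1)
velocity = step_nm * turnover   # processive velocity (nm/s)

print(f"{turnover:.2f} s^-1, {velocity:.1f} nm/s")  # ~0.29 s^-1, ~10.3 nm/s
```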

  12. Activated-Lignite-Based Super Large Granular Slow-Release Fertilizers Improve Apple Tree Growth: Synthesis, Characterizations, and Laboratory and Field Evaluations.

    Science.gov (United States)

    Tang, Yafu; Wang, Xinying; Yang, Yuechao; Gao, Bin; Wan, Yongshan; Li, Yuncong C; Cheng, Dongdong

    2017-07-26

    In this work, lignite, a low-grade coal, was modified using a solid-phase activation method with the aid of a Pd/CeO₂ nanoparticle catalyst to improve its pore structure and nutrient absorption. Results indicate that the adsorption ability of the activated lignite for NO₃⁻, NH₄⁺, H₂PO₄⁻, and K⁺ was significantly higher than that of raw lignite. The activated lignite was successfully combined with a polymeric slow-release fertilizer, which exhibits typical slow-release behavior, to prepare the super large granular activated lignite slow-release fertilizer (SAF). In addition to its slow-release ability, the SAF showed excellent water-retention capabilities. Soil column leaching experiments further confirmed the slow-release characteristics of the SAF, with fertilizer nutrient loss greatly reduced in comparison to traditional and slow-release fertilizers. Furthermore, field tests of the SAF in an orchard showed that the novel SAF was better than the other tested fertilizers in improving the growth of young apple trees. Findings from this study suggest that the newly developed SAF has great potential for use in apple cultivation and production systems in the future.

  13. Fairness, fast and slow: A review of dual process models of fairness

    DEFF Research Database (Denmark)

    Hallsson, Bjørn Gunnar; Hulme, Oliver; Siebner, Hartwig Roman

    2018-01-01

    -control to override with reasoning-based fairness concerns, or whether fairness itself can be intuitive. While we find strong support for rejecting the notion that self-interest is always intuitive, the literature has reached conflicting conclusions about the neurocognitive systems underpinning fairness. We propose … that this disagreement can largely be resolved in light of an extended Social Heuristics Hypothesis. Divergent findings may be attributed to the interpretation of behavioral effects of ego depletion or neurostimulation, reverse inference from brain activity to the underlying psychological process, and insensitivity

  14. Slow Antihydrogen

    International Nuclear Information System (INIS)

    Gabrielse, G.; Speck, A.; Storry, C.H.; Le Sage, D.; Guise, N.; Larochelle, P.C.; Grzonka, D.; Oelert, W.; Schepers, G.; Sefzick, T.; Pittner, H.; Herrmann, M.; Walz, J.; Haensch, T.W.; Comeau, D.; Hessels, E.A.

    2004-01-01

    Slow antihydrogen is now produced by two different production methods. In Method I, large numbers of H̄ atoms are produced during positron-cooling of antiprotons within a nested Penning trap. In a just-demonstrated Method II, lasers control the production of antihydrogen atoms via charge exchange collisions. Field ionization detection makes it possible to probe the internal structure of the antihydrogen atoms being produced - most recently revealing atoms that are too tightly bound to be well described by the guiding center atom approximation. The speed of antihydrogen atoms has recently been measured for the first time. After the requested overview, the recent developments are surveyed

  15. On the group approximation errors in description of neutron slowing-down at large distances from a source. Diffusion approach

    International Nuclear Information System (INIS)

    Kulakovskij, M.Ya.; Savitskij, V.I.

    1981-01-01

    The errors in multigroup calculations of the spatial and energy distribution of the neutron flux in a fast reactor shield, caused by the group and age approximations, are considered. It is shown that at small distances from a source the age theory describes the distribution of the slowing-down density rather well. As the distance increases, the age approximation underestimates the neutron fluxes, with the error growing rapidly. At small distances from the source (up to 15 free-path lengths in graphite) the multigroup diffusion approximation describes the distribution of the slowing-down density quite satisfactorily, and the results depend only weakly on the number of groups. As the distance increases, multigroup diffusion calculations considerably overestimate the slowing-down density. The conclusion is drawn that the errors inherent in the group approximation are opposite in sign to the error introduced by the age approximation and to some extent compensate each other

  16. Dehydrogenation in large ingot casting process

    International Nuclear Information System (INIS)

    Ubukata, Takashi; Suzuki, Tadashi; Ueda, Sou; Shibata, Takashi

    2009-01-01

    Forging components (for nuclear power plants) have become larger and larger because of the safety-driven reduction of weld lines. Consequently, they are manufactured from ingots weighing 200 tons or more. Dehydrogenation is one of the key issues in the large-ingot manufacturing process. For ingots of 200 tons or heavier, mold stream degassing (MSD) has been applied for dehydrogenation. Although JSW had developed mold stream degassing by argon (MSD-Ar) as a more effective dehydrogenation practice, MSD-Ar was not applied to these ingots because conventional refractory materials of the stopper rod for the Ar blowing hole had low durability. In this study, we have developed a new type of stopper rod through modification of both the refractory materials and the stopper rod construction, and have successfully expanded the application range of MSD-Ar up to ingots weighing 330 tons. Compared with conventional MSD, the hydrogen content in ingots after MSD-Ar decreased by 24 percent, as the dehydrogenation rate of MSD-Ar increased by 34 percent. (author)

  17. Imbricated slip rate processes during slow slip transients imaged by low-frequency earthquakes

    Science.gov (United States)

    Lengliné, O.; Frank, W.; Marsan, D.; Ampuero, J. P.

    2017-12-01

    Low Frequency Earthquakes (LFEs) often occur in conjunction with transient strain episodes, or Slow Slip Events (SSEs), in subduction zones. Their focal mechanism and location, consistent with shear failure on the plate interface, argue for a model where LFEs are discrete dynamic ruptures in an otherwise slowly slipping interface. SSEs are mostly observed by surface geodetic instruments with limited resolution, and it is likely that only the largest ones are detected. The time synchronization of LFEs and SSEs suggests that we could use the recorded LFEs to constrain the evolution of SSEs, and notably of the geodetically undetected small ones. However, inferring slow slip rate from the temporal evolution of LFE activity is complicated by the strong temporal clustering of LFEs. Here we apply dedicated statistical tools to retrieve the temporal evolution of SSE slip rates from the time history of LFE occurrences in two subduction zones, Mexico and Cascadia, and in the deep portion of the San Andreas fault at Parkfield. We find temporal characteristics of LFEs that are similar across these three different regions. The longer-term episodic slip transients present in these datasets show a slip rate decay with time after the passage of the SSE front, possibly as t^(-1/4). They are composed of multiple short-term transients with steeper slip rate decay, as t^(-α) with α between 1.4 and 2. We also find that the maximum slip rate of SSEs has a continuous distribution. Our results indicate that creeping faults host intermittent deformation at various scales resulting from the imbricated occurrence of numerous slow slip events of various amplitudes.
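The power-law decays quoted above (slip rate falling as t^(-1/4) behind the SSE front, and as t^(-α) with α between 1.4 and 2 for the short-term transients) are the kind of exponent one recovers from rate data by a log-log linear fit. A minimal sketch on synthetic, noise-free data; the generating exponent 1.5 is an arbitrary illustrative value, not one from the study:

```python
import math

def fit_power_law_exponent(times, rates):
    """Least-squares slope of log(rate) vs log(time), assuming rate ~ t**(-alpha)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # alpha

# Synthetic LFE-rate proxy decaying as t^(-1.5) after an SSE front passes
times = [1 + 0.5 * i for i in range(1, 40)]
rates = [t ** -1.5 for t in times]
alpha = fit_power_law_exponent(times, rates)
print(round(alpha, 3))  # 1.5 for noise-free data
```

In practice the LFE occurrence times are clustered, which is why the abstract stresses dedicated statistical tools rather than a direct fit like this one.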

  18. Using Low-Frequency Earthquakes to Investigate Slow Slip Processes and Plate Interface Structure Beneath the Olympic Peninsula, WA

    Science.gov (United States)

    Chestler, Shelley

    This dissertation seeks to further understand the LFE source process, the role LFEs play in generating slow slip, and the utility of using LFEs to examine plate interface structure. The work involves the creation and investigation of a 2-year-long catalog of low-frequency earthquakes beneath the Olympic Peninsula, Washington. In the first chapter, we calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, WA. LFE moments range from 1.4×10^10 to 1.9×10^12 N·m (M_W = 0.7-2.1). While regular earthquakes follow a power-law moment-frequency distribution with a b-value near 1 (the number of events increases by a factor of 10 for each unit increase in M_W), we find that for large LFEs the apparent b-value is ~6, while for small LFEs it is much lower, suggesting that LFE moments follow an exponential rather than a power-law distribution. The moment-frequency distributions of LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, sub-patch diameters, stress drops, and slip rates for LFEs during ETS events. We allow for LFEs to rupture smaller sub-patches within the LFE family patch. Models with 1-10 sub-patches produce slips of 0.1-1 mm, sub-patch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one sub-patch is often assumed, we believe 3-10 sub-patches are more likely. In the second chapter, using high-resolution relative low-frequency earthquake (LFE) locations, we calculate the patch areas (A_p) of LFE families. During Episodic Tremor and Slip (ETS) events, we define A_T as the area that slips during LFEs and S_T as the total amount of summed LFE slip
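The area-limited reasoning above rests on the standard seismological relation M0 = μ·A·s (moment = rigidity × patch area × slip). A small sketch of that arithmetic, using a hypothetical rigidity of 30 GPa and a hypothetical 1 mm of slip, neither taken from the dissertation:

```python
import math

MU = 3.0e10  # Pa, assumed crustal rigidity (illustrative value, not from the study)

def patch_diameter(moment_nm, slip_m, mu=MU):
    """Diameter of a circular patch implied by M0 = mu * A * s."""
    area = moment_nm / (mu * slip_m)          # m^2
    return 2.0 * math.sqrt(area / math.pi)    # m

# A mid-catalog LFE: M0 ~ 1e12 N-m with an assumed ~1 mm of slip
d = patch_diameter(1.0e12, 1.0e-3)
print(round(d))  # 206 (m)
```

With these assumed inputs the implied diameter falls inside the 80-275 m sub-patch range reported above, which is the sense in which moment, slip, and patch size constrain one another.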

  19. Process mining in the large: a tutorial

    NARCIS (Netherlands)

    Aalst, van der W.M.P.; Zimányi, E.

    2014-01-01

    Recently, process mining emerged as a new scientific discipline on the interface between process models and event data. On the one hand, conventional Business Process Management (BPM) and Workflow Management (WfM) approaches and tools are mostly model-driven with little consideration for event data.

  20. Frictional processes in smectite-rich gouges sheared at slow to high slip rates

    Science.gov (United States)

    Aretusini, Stefano; Mittempergher, Silvia; Gualtieri, Alessandro; Di Toro, Giulio

    2015-04-01

    The slipping zones of shallow sections of megathrusts and of large landslides are often smectite-rich (e.g., montmorillonite type). Consequently, similar "frictional" processes operating at high slip rates (> 1 m/s) might be responsible for the large slips estimated in megathrusts (50 m for the 2011 Tohoku Mw 9.1 earthquake) and measured in large landslides (500 m for the 1963 Vajont slide, Italy). At present, only rotary shear apparatuses can reproduce simultaneously the large slips and slip rates of these events. Notably, the frictional processes proposed so far (thermal and thermochemical pressurization, etc.) remain rather obscure. Here we present preliminary results obtained with the ROtary Shear Apparatus (ROSA) installed at Padua University. Thirty-one experiments were performed at ambient conditions on pure end-members of (1) smectite-rich standard powders (STx-1b: ~68 wt% Ca-montmorillonite, ~30 wt% opal-CT and ~2 wt% quartz), (2) quartz powders (qtz) and (3) on 80:20 = STx-1b:qtz mixtures. The gouges were sandwiched between two (1) hollow (25/15 mm external/internal diameter) or (2) solid (25 mm in diameter) stainless-steel cylinders and confined by inner and outer Teflon rings (only outer for solid cylinders). Gouges were sheared at a normal stress of 5 MPa, slip rates V from 300 μm/s to 1.5 m/s and total slip of 3 m. The deformed gouges were investigated with quantitative (Rietveld method with internal standard) X-ray powder diffraction (XRPD) and Scanning Electron Microscopy (SEM). In the smectite-rich standard end-member, (1) for 300 μm/s ≤ V ≤ 0.1 m/s, the initial friction coefficient (μi) was 0.6±0.05 whereas the steady-state friction coefficient (μss) was velocity- and slip-strengthening (μss = 0.85±0.05), and (2) for 0.1 m/s < V < 0.8 m/s, velocity and slip weakening (μi = 0.7±0.1 and μss = 0.25±0.05). In the 80:20 STx-1b:qtz mixtures, (1) for 300 μm/s ≤ V ≤ 0.1 m/s, μi was 0.7±0.05 and increased with slip to μss = 0.77±0

  1. Fairness, fast and slow: A review of dual process models of fairness.

    Science.gov (United States)

    Hallsson, Bjørn G; Siebner, Hartwig R; Hulme, Oliver J

    2018-06-01

    Fairness, the notion that people deserve or have rights to certain resources or kinds of treatment, is a fundamental dimension of moral cognition. Drawing on recent evidence from economics, psychology, and neuroscience, we ask whether self-interest is always intuitive, requiring self-control to override with reasoning-based fairness concerns, or whether fairness itself can be intuitive. While we find strong support for rejecting the notion that self-interest is always intuitive, the literature has reached conflicting conclusions about the neurocognitive systems underpinning fairness. We propose that this disagreement can largely be resolved in light of an extended Social Heuristics Hypothesis. Divergent findings may be attributed to the interpretation of behavioral effects of ego depletion or neurostimulation, reverse inference from brain activity to the underlying psychological process, and insensitivity to social context and inter-individual differences. To better dissect the neurobiological basis of fairness, we outline how future research should embrace cross-disciplinary methods that combine psychological manipulations with neuroimaging, and that can probe inter-individual, and cultural heterogeneities. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Slow sedimentary processes on-a-chip: experiments on porous flow effects on granular bed creep

    Science.gov (United States)

    Houssais, M.; Maldarelli, C.; Shattuck, M.; Morris, J. F.

    2017-12-01

    The dynamics of steep soils are hard to capture: they exhibit very slow granular creep most of the time and sometimes, mostly during or after rain, turn into a landslide, a very fast avalanche flow. The conditions for the transition from soil creep to avalanching remain largely not understood beyond empirical safety-factor criteria, functions of rain intensity and duration. On another front, in marine environments with fast deposition, compaction drives vertical porous flow, which changes the bed's shear resistance and over time forms bed-scale patterns (pipes, dishes) or mechanical heterogeneities. Capturing how the slow creep dynamics depend on porous flow would allow much more accurate landscape evolution modeling. We present here preliminary results of an experimental investigation of one of the major triggering conditions for soil destabilization: rain infiltration and, more generally, porous flow through a tilted granular bed. In a quasi-2D microfluidic channel, a flat sediment bed made of spherical particles is prepared in fully submerged conditions. It is thereafter tilted (at a slope below the critical slope for avalanching) and simultaneously subjected to a weak vertical porous flow (well below the critical flow for liquefaction by positive pressure gradients). The two control parameters are varied, and local particle concentration and motion are measured. Interestingly, although the bed stays in the sub-critical creeping regime, we observe an acceleration of the downslope bed deformation as the porous flow and the bed slope are increased, until the criterion for avalanching is reached. These results show similarities with the case of a tilted dry sediment bed under controlled vibrations, opening the discussion of a potential universal model of landslide triggering by frequent seismic and rainstorm events.

  3. Energy conversion assessment of vacuum, slow and fast pyrolysis processes for low and high ash paper waste sludge

    International Nuclear Information System (INIS)

    Ridout, Angelo J.; Carrier, Marion; Collard, François-Xavier; Görgens, Johann

    2016-01-01

    Highlights: • Vacuum, slow and fast pyrolysis of low and high ash paper waste sludge (PWS) is compared. • Reactor temperature and pellet size optimised to maximise liquid and solid product yields. • Gross energy recovery from solid and liquid was assessed. • Fast pyrolysis of low and high ash PWS offers higher energy conversions. - Abstract: The performance of vacuum, slow and fast pyrolysis processes to transfer energy from the paper waste sludge (PWS) to liquid and solid products was compared. Paper waste sludges with low and high ash content (8.5 and 46.7 wt.%) were converted under optimised conditions for temperature and pellet size to maximise both product yields and energy content. Comparison of the gross energy conversions, as a combination of the bio-oil/tarry phase and char (EC_sum), revealed that the fast pyrolysis performance was between 18.5% and 20.1% higher for the low ash PWS, and 18.4% and 36.5% higher for high ash PWS, when compared to the slow and vacuum pyrolysis processes respectively. For both PWSs, this finding was mainly attributed to higher production of condensable organic compounds and lower water yields during FP. The low ash PWS chars, fast pyrolysis bio-oils and vacuum pyrolysis tarry phase products had high calorific values (~18-23 MJ kg⁻¹) making them promising for energy applications. Considering the low calorific values of the chars from alternative pyrolysis processes (~4-7 MJ kg⁻¹), the high ash PWS should rather be converted to fast pyrolysis bio-oil to maximise the recovery of usable energy products.
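The gross energy conversion compared above combines product yields with calorific values relative to the feedstock. A sketch of the presumed form of EC_sum, with purely illustrative yields and heating values (none taken from the paper):

```python
def gross_energy_conversion(yields_cv, cv_feedstock):
    """EC_sum (%) = sum over products of (mass yield * calorific value),
    normalized by the feedstock calorific value."""
    return 100.0 * sum(y * cv for y, cv in yields_cv) / cv_feedstock

# Hypothetical fast-pyrolysis split for a low-ash sludge (illustrative numbers):
# 45 wt% bio-oil at 20 MJ/kg, 25 wt% char at 21 MJ/kg, feedstock at 17 MJ/kg
ec = gross_energy_conversion([(0.45, 20.0), (0.25, 21.0)], 17.0)
print(round(ec, 1))  # 83.8
```

The paper's point is that the fast-pyrolysis split pushes more of this sum into the bio-oil term, which matters most when the char term is weak (high-ash sludge).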

  4. Statistical processing of large image sequences.

    Science.gov (United States)

    Khellah, F; Fieguth, P; Murray, M J; Allen, M

    2005-01-01

    The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.
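For reference, the recursion that the authors' reduced-storage approach emulates is the standard Kalman predict/update cycle, shown here for a scalar state with illustrative parameters; the paper's contribution is making the analogous update tractable for 512 x 512 fields, which this sketch does not attempt:

```python
def kalman_step(x, p, z, a=1.0, q=0.01, r=0.25):
    """One predict/update cycle of a scalar Kalman filter.
    x: state estimate, p: its variance, z: new measurement,
    a: state transition, q: process noise, r: measurement noise."""
    # Predict
    x_pred = a * x
    p_pred = a * p * a + q
    # Update
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Track a roughly constant quantity (e.g., a temperature pixel) through noisy readings
x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:
    x, p = kalman_step(x, p, z)
print(round(x, 2), round(p, 3))
```

For an N-pixel image the full filter carries an N x N covariance, which is exactly the storage burden the mixture-of-stationary-models approximation avoids.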

  5. Changes in the carotenoid metabolism of capsicum fruits during application of modelized slow drying process for paprika production.

    Science.gov (United States)

    Pérez-Gálvez, Antonio; Hornero-Méndez, Dámaso; Mínguez-Mosquera, María Isabel

    2004-02-11

    A temperature profile simulating the traditional slow drying process of red pepper fruits, which is conducted in La Vera region (Spain) for paprika production, was developed. Carotenoid and ascorbic acid content, as well as moisture of fruits, were monitored during the slow drying process designed. Data obtained suggested that the evolution of carotenoid concentration, the main quality trait for paprika, directly depend on the physical conditions imposed. During the drying process, three different stages could be observed in relation to the carotenoids. The first stage corresponds to a physiological adaptation to the new imposed conditions that implied a decrease (ca. 20%) in the carotenoid content during the first 24 h. After that short period and during 5 days, a second stage was noticed, recovering the biosynthetic (carotenogenic) capability of the fruits, which denotes an accommodation of the fruits to the new environmental conditions. During the following 48 h (third stage) a sharp increase in the carotenoid content was observed. This last phenomenon seems to be related with an oxidative-thermal stress, which took place during the first stage, inducing a carotenogenesis similar to that occurring in over-ripening fruits. Results demonstrate that a fine control of the temperature and moisture content would help to positively modulate carotenogenesis and minimize catabolism, making it possible to adjust the drying process to the ripeness stage of fruits with the aim of improving carotenoid retention and therefore quality of the resulting product. In the case of ascorbic acid, data demonstrated that this compound is very sensitive to the drying process, with a decrease of about 76% during the first 24 h and remaining only at trace levels during the rest of the process. 
Therefore, no antioxidant role should be expected from ascorbic acid during the whole process and in the corresponding final product (paprika), despite that red pepper fruit is well-known to be rich

  6. A slow atomic diffusion process in high-entropy glass-forming metallic melts

    Science.gov (United States)

    Chen, Changjiu; Wong, Kaikin; Krishnan, Rithin P.; Embs, Jan P.; Chathoth, Suresh M.

    2018-04-01

    Quasi-elastic neutron scattering has been used to study atomic relaxation processes in high-entropy glass-forming metallic melts with different glass-forming ability (GFA). The momentum transfer dependence of mean relaxation time shows a highly collective atomic transport process in the alloy melts with the highest and lowest GFA. However, a jump diffusion process is the long-range atomic transport process in the intermediate GFA alloy melt. Nevertheless, atomic mobility close to the melting temperature of these alloy melts is quite similar, and the temperature dependence of the diffusion coefficient exhibits a non-Arrhenius behavior. The atomic mobility in these high-entropy melts is much slower than that of the best glass-forming melts at their respective melting temperatures.

  7. When fast logic meets slow belief: Evidence for a parallel-processing model of belief bias

    OpenAIRE

    Trippas, Dries; Thompson, Valerie A.; Handley, Simon J.

    2016-01-01

    Two experiments pitted the default-interventionist account of belief bias against a parallel-processing model. According to the former, belief bias occurs because a fast, belief-based evaluation of the conclusion pre-empts a working-memory demanding logical analysis. In contrast, according to the latter both belief-based and logic-based responding occur in parallel. Participants were given deductive reasoning problems of variable complexity and instructed to decide whether the conclusion was ...

  8. Evolution amplified processing with temporally dispersed slow neuronal connectivity in primates.

    Science.gov (United States)

    Caminiti, Roberto; Ghaziri, Hassan; Galuske, Ralf; Hof, Patrick R; Innocenti, Giorgio M

    2009-11-17

    The corpus callosum (CC) provides the main route of communication between the 2 hemispheres of the brain. In monkeys, chimpanzees, and humans, callosal axons of distinct size interconnect functionally different cortical areas. Thinner axons in the genu and in the posterior body of the CC interconnect the prefrontal and parietal areas, respectively, and thicker axons in the midbody and in the splenium interconnect primary motor, somatosensory, and visual areas. At all locations, axon diameter, and hence its conduction velocity, increases slightly in the chimpanzee compared with the macaque because of an increased number of large axons but not between the chimpanzee and man. This, together with the longer connections in larger brains, doubles the expected conduction delays between the hemispheres, from macaque to man, and amplifies their range about 3-fold. These changes can have several consequences for cortical dynamics, particularly on the cycle of interhemispheric oscillators.

  9. When fast logic meets slow belief: Evidence for a parallel-processing model of belief bias.

    Science.gov (United States)

    Trippas, Dries; Thompson, Valerie A; Handley, Simon J

    2017-05-01

    Two experiments pitted the default-interventionist account of belief bias against a parallel-processing model. According to the former, belief bias occurs because a fast, belief-based evaluation of the conclusion pre-empts a working-memory demanding logical analysis. In contrast, according to the latter both belief-based and logic-based responding occur in parallel. Participants were given deductive reasoning problems of variable complexity and instructed to decide whether the conclusion was valid on half the trials or to decide whether the conclusion was believable on the other half. When belief and logic conflict, the default-interventionist view predicts that it should take less time to respond on the basis of belief than logic, and that the believability of a conclusion should interfere with judgments of validity, but not the reverse. The parallel-processing view predicts that beliefs should interfere with logic judgments only if the processing required to evaluate the logical structure exceeds that required to evaluate the knowledge necessary to make a belief-based judgment, and vice versa otherwise. Consistent with this latter view, for the simplest reasoning problems (modus ponens), judgments of belief resulted in lower accuracy than judgments of validity, and believability interfered more with judgments of validity than the converse. For problems of moderate complexity (modus tollens and single-model syllogisms), the interference was symmetrical, in that validity interfered with belief judgments to the same degree that believability interfered with validity judgments. For the most complex (three-term multiple-model syllogisms), conclusion believability interfered more with judgments of validity than vice versa, in spite of the significant interference from conclusion validity on judgments of belief.

  10. Localized atrophy of the thalamus and slowed cognitive processing speed in MS patients.

    Science.gov (United States)

    Bergsland, Niels; Zivadinov, Robert; Dwyer, Michael G; Weinstock-Guttman, Bianca; Benedict, Ralph HB

    2016-09-01

    Deep gray matter (DGM) atrophy is common in multiple sclerosis (MS), but no studies have investigated surface-based structure changes over time with respect to healthy controls (HCs). Moreover, the relationship between cognition and the spatio-temporal evolution of DGM atrophy is poorly understood. To explore DGM structural differences between MS and HCs over time in relation to neuropsychological (NP) outcomes. The participants were 44 relapsing-remitting and 20 secondary progressive MS patients and 22 HCs. All were scanned using 3T magnetic resonance imaging (MRI) at baseline and 3-year follow-up. NP examination emphasized consensus standard tests of processing speed and memory. We performed both volumetric and shape analysis of DGM structures and assessed their relationships with cognition. Compared to HCs, MS patients presented with significantly smaller DGM volumes. For the thalamus and caudate, differences in shape were mostly localized along the lateral ventricles. NP outcomes were related to both volume and shape of the DGM structures. Over 3 years, decreased cognitive processing speed was related to localized atrophy on the anterior and superior surface of the left thalamus. These findings highlight the role of atrophy in the anterior nucleus of the thalamus and its relation to cognitive decline in MS. © The Author(s), 2015.

  11. Question of automation of periodical slow process in nuclear power stations

    International Nuclear Information System (INIS)

    Berta, S.

    1985-01-01

    In Hungary one 440 MW PWR unit is in service and a second one is being commissioned. These two units share one background complex. A further two PWR units are under construction, together with another background complex. In PWR technology these background technological processes ensure safe and contamination-free operation. Their task is to prevent the escape of noxious chemical or radioactive by-products. The main technological parts of the background complex are: Fuel cell resting; Four different water filtering systems; Hydrogen contact burners; Two gas filtering systems; Liquid waste and contaminated resin handling; Preparation of chemical solutions. The aim of this article is to study the possibilities of automation of the background complex

  12. Possibility of a crossed-beam experiment involving slow-neutron capture by unstable nuclei - ``rapid-process tron''

    Science.gov (United States)

    Yamazaki, T.; Katayama, I.; Uwamino, Y.

    1993-02-01

    The possibility of a crossed-beam facility of slow neutrons capturing unstable nuclei is examined in connection with the Japanese Hadron Project. With a pulsed proton beam of 50 Hz repetition and a 100 μA average beam current, one obtains a spallation neutron source of 2.4 × 10⁸ thermal neutrons/cm³/spill over a 60 cm length, with a 3 ms average duration time, by using a D₂O moderator. By confining radioactive nuclei (10⁹ ions) in a beam circulation ring of 0.3 MHz revolution frequency, so that the nuclei pass through the neutron source, one obtains a collision luminosity of 3.9 × 10²⁴/cm²/s. A new research domain aimed at studying rapid processes in nuclear genetics in a laboratory will be created.
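The quoted luminosity is consistent with a simple density-times-flux estimate, L ~ n_neutron × N_ions × f_rev × path length. A back-of-the-envelope sketch using only the figures given above; the residual factor of ~0.9 against the quoted value presumably reflects duty-cycle and overlap details not modeled here:

```python
# All inputs are the figures quoted in the abstract above
n_neutron = 2.4e8   # thermal neutrons / cm^3 per spill
N_ions = 1.0e9      # stored radioactive ions
f_rev = 0.3e6       # ring revolution frequency, Hz
length = 60.0       # cm of neutron source traversed per turn

# Naive collision luminosity, cm^-2 s^-1
lum = n_neutron * N_ions * f_rev * length
print(f"{lum:.1e}")  # 4.3e+24, same order as the quoted 3.9e24
```

This kind of scaling also shows the design levers: luminosity grows linearly in stored ions, neutron density, and source length.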

  13. Ions and electrons thermal effects on the fast-slow mode conversion process in a three components plasma

    International Nuclear Information System (INIS)

    Fidone, I.; Gomberoff, L.

    1977-07-01

    Fast-slow mode conversion in a deuterium plasma with a small amount of hydrogen impurity, for frequencies close to the two-ion hybrid frequency, is investigated. It is shown that while electron thermal effects tend to inhibit the wave conversion process, ion thermal effects tend to restore, qualitatively, the cold-plasma properties, therefore favouring the energy exchange between the two modes. The aforementioned effects are competitive for ζ₀ᵉ = 1/(n_∥ v_e) ≥ 1. For ζ₀ᵉ ≤ 1, electron thermal effects, in particular Landau damping, dominate over ion Larmor radius effects, drastically diminishing the wave conversion efficiency. For ζ₀ᵉ ≪ 1, the coupling between the modes disappears altogether

  14. Broadband true time delay for microwave signal processing, using slow light based on stimulated Brillouin scattering in optical fibers.

    Science.gov (United States)

    Chin, Sanghoon; Thévenaz, Luc; Sancho, Juan; Sales, Salvador; Capmany, José; Berger, Perrine; Bourderionnet, Jérôme; Dolfi, Daniel

    2010-10-11

    We experimentally demonstrate a novel technique to process broadband microwave signals, using all-optically tunable true time delay in optical fibers. The configuration to achieve true time delay basically consists of two main stages: photonic RF phase shifter and slow light, based on stimulated Brillouin scattering in fibers. Dispersion properties of fibers are controlled, separately at optical carrier frequency and in the vicinity of microwave signal bandwidth. This way time delay induced within the signal bandwidth can be manipulated to correctly act as true time delay with a proper phase compensation introduced to the optical carrier. We completely analyzed the generated true time delay as a promising solution to feed phased array antenna for radar systems and to develop dynamically reconfigurable microwave photonic filters.
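The defining property of true time delay, as opposed to a constant phase shift, is a phase that grows linearly with frequency, φ = 2πfτ; this linearity is what avoids beam squint when feeding a phased-array antenna. A minimal numeric sketch with an assumed 100 ps delay over an illustrative 2-6 GHz band (values not from the experiment):

```python
import math

def ttd_phase_deg(freq_hz, tau_s):
    """Phase (degrees) of an ideal true-time-delay element: phi = 2*pi*f*tau."""
    return math.degrees(2 * math.pi * freq_hz * tau_s)

# A 100 ps true time delay: phase is proportional to frequency across the band,
# unlike a phase shifter, which would apply the same phase at every frequency.
for f_ghz in (2, 4, 6):
    print(f_ghz, round(ttd_phase_deg(f_ghz * 1e9, 100e-12), 1))  # 72.0, 144.0, 216.0
```

The SBS slow-light stage in the abstract supplies the group delay within the signal band, and the photonic RF phase shifter supplies the carrier-phase compensation needed to make the combination behave like this ideal element.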

  15. Slow briefs: slow food....slow architecture

    OpenAIRE

    Crotch, Joanna

    2012-01-01

    We are moving too fast… fast lives, fast cars, fast food… and fast architecture. We are caught up in a world that allows no time to stop and think; to appreciate and enjoy all the really important things in our lives. Recent responses to this seemingly unstoppable trend are the growing movements of Slow Food and Cittaslow. Both initiatives are, within their own realms, attempting to reverse speed, homogeneity, expediency and globalisation, considering the values of regionality, patience, craft, ...

  16. Large deviations for Gaussian processes in Hölder norm

    International Nuclear Information System (INIS)

    Fatalov, V R

    2003-01-01

    Some results are proved on the exact asymptotic representation of large deviation probabilities for Gaussian processes in the Hölder norm. The following classes of processes are considered: the Wiener process, the Brownian bridge, fractional Brownian motion, and stationary Gaussian processes with power-law covariance function. The investigation uses the method of double sums for Gaussian fields

  17. Implementing the National Council of Teachers of Mathematics Standards: A slow process

    Directory of Open Access Journals (Sweden)

    Joseph M. Furner

    2004-10-01

    The purpose of this study was to look at in-service teachers' pedagogical beliefs about the National Council of Teachers of Mathematics Standards (1989 & 2000). The Standards' Belief Instrument (Zollman and Mason, 1992) was administered to teachers. An ANOVA was used to look for a significant difference between teachers with five years or less of experience teaching mathematics and those with more than five years of teaching experience. One expectation was that teachers who are recent graduates of teacher education programmes may have had more training on the NCTM Standards. Although there were no statistically significant differences between the two groups, this study did support the expectation. Current training with in-service teachers shows that many of the teachers are familiar with neither the National Council of Teachers of Mathematics nor their Standards. It seems from this study, then, that the implementation process of the NCTM Standards, and perhaps any standards or best-practices and new curriculum implementation, is very sluggish.
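For two groups, the ANOVA used here reduces to a single F statistic (equivalent to the square of a two-sample t). A self-contained sketch with hypothetical Standards' Belief Instrument scores; the numbers are invented for illustration and are not the study's data:

```python
def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical belief scores: <=5 years vs >5 years of teaching experience
novice = [52, 55, 58, 54, 56]
veteran = [50, 53, 51, 55, 52]
print(round(one_way_anova_f([novice, veteran]), 2))  # 4.51
```

Whether an F this size is significant depends on the degrees of freedom (here 1 and 8) and the chosen alpha level; the study reported no significant difference between its groups.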

  18. The advantage of being slow: The quasi-neutral contact process.

    Directory of Open Access Journals (Sweden)

    Marcelo Martins de Oliveira

    According to the competitive exclusion principle, in a finite ecosystem, extinction occurs naturally when two or more species compete for the same resources. An important question that arises is: when coexistence is not possible, which mechanisms confer an advantage to a given species against the other(s)? In general, it is expected that the species with the higher reproductive/death ratio will win the competition, but other mechanisms, such as asymmetry in interspecific competition or unequal diffusion rates, have been found to change this scenario dramatically. In this work, we examine competitive advantage in the context of quasi-neutral population models, including stochastic models with spatial structure as well as macroscopic (mean-field) descriptions. We employ a two-species contact process in which the "biological clock" of one species is a factor of α slower than that of the other species. Our results provide new insights into how stochasticity and competition interact to determine extinction in finite spatial systems. We find that a species with a slower biological clock has an advantage if resources are limited, winning the competition against a species with a faster clock, in relatively small systems. Periodic or stochastic environmental variations also favor the slower species, even in much larger systems.

  19. The advantage of being slow: The quasi-neutral contact process.

    Science.gov (United States)

    de Oliveira, Marcelo Martins; Dickman, Ronald

    2017-01-01

    According to the competitive exclusion principle, in a finite ecosystem, extinction occurs naturally when two or more species compete for the same resources. An important question that arises is: when coexistence is not possible, which mechanisms confer an advantage to a given species against the other(s)? In general, it is expected that the species with the higher reproductive/death ratio will win the competition, but other mechanisms, such as asymmetry in interspecific competition or unequal diffusion rates, have been found to change this scenario dramatically. In this work, we examine competitive advantage in the context of quasi-neutral population models, including stochastic models with spatial structure as well as macroscopic (mean-field) descriptions. We employ a two-species contact process in which the "biological clock" of one species is a factor of α slower than that of the other species. Our results provide new insights into how stochasticity and competition interact to determine extinction in finite spatial systems. We find that a species with a slower biological clock has an advantage if resources are limited, winning the competition against a species with a faster clock, in relatively small systems. Periodic or stochastic environmental variations also favor the slower species, even in much larger systems.
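A spatially explicit contact process is beyond a short sketch, but the clock-scaling idea can be illustrated in a well-mixed caricature: two species with identical birth/death ratios compete for a finite pool of sites, with one species' event clock slowed by a factor α. All parameters are illustrative, and this simplified model is not the paper's lattice simulation:

```python
import random

def compete(n_sites=40, lam=4.0, alpha=2.0, seed=1):
    """Well-mixed caricature of the quasi-neutral two-species contact process.
    'slow' runs both its birth and death clocks a factor alpha slower than
    'fast', so the birth/death ratio (lam) is identical for both.
    Returns the surviving species."""
    rng = random.Random(seed)
    pop = {"fast": n_sites // 2, "slow": n_sites // 2}
    rate = {"fast": 1.0, "slow": 1.0 / alpha}
    while pop["fast"] > 0 and pop["slow"] > 0:
        empty = n_sites - pop["fast"] - pop["slow"]
        events, weights = [], []
        for s in ("fast", "slow"):
            events += [(s, +1), (s, -1)]
            # Birth into an empty site, and death, both scaled by the clock rate
            weights += [rate[s] * lam * pop[s] * empty / n_sites,
                        rate[s] * pop[s]]
        s, delta = rng.choices(events, weights=weights)[0]
        pop[s] += delta
    return "fast" if pop["fast"] > 0 else "slow"

# Fraction of small-system competitions won by the slower species
wins = sum(compete(seed=i) == "slow" for i in range(100))
print(wins / 100)
```

Because the two clocks share the same birth/death ratio, any systematic bias in the win fraction reflects the interplay of turnover rate and demographic noise that the paper analyzes, rather than a fitness difference.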

  20. Processing and properties of large-sized ceramic slabs

    Directory of Open Access Journals (Sweden)

    Fossa, L.

    2010-10-01

Large-sized ceramic slabs – with dimensions up to 360×120 cm² and thickness down to 2 mm – are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability in a rather wide window of firing schedules. The phase composition and compact microstructure of fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performances, as well as mechanical and tribological properties, conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestows on the slab a certain degree of flexibility, which is emphasized in ceramic-fiberglass composites. These outstanding performances make the large-sized slabs suitable for novel applications: building and construction (new floorings laid without dismantling the previous paving, ventilated façades, tunnel coverings, insulating panelling), indoor furniture (table tops, doors), and supports for photovoltaic ceramic panels.

Large-format pieces, with dimensions up to 360×120 cm and thickness under 2 mm, have been manufactured using innovative fabrication methods, starting from porcelain stoneware compositions and employing wet ball milling, spray drying, slow-rate pressing without an extrusion die, single-stage fast drying and firing, and a finishing step that includes bonding fiberglass to the ceramic body and trimming of the final piece.

  1. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments. The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  2. Taking account of sample finite dimensions in processing measurements of double differential cross sections of slow neutron scattering

    International Nuclear Information System (INIS)

    Lisichkin, Yu.V.; Dovbenko, A.G.; Efimenko, B.A.; Novikov, A.G.; Smirenkina, L.D.; Tikhonova, S.I.

    1979-01-01

A method is described for taking the finite dimensions of a sample into account when processing measurements of double differential cross sections (DDCS) of slow neutron scattering. The need for a corrective approach is shown: in particular, DDCS must be pre-processed with the attenuation coefficients of singly scattered neutrons (SSN) taken into account, both for measurements on the sample with a container and on the container alone. The correction for multiple scattering (MS), calculated on the basis of the dynamic model, should be obtained with resolution effects taken into account. To minimize the influence of the dynamic model used in the calculations, it is preferable to make absolute measurements of DDCS and to use the subtraction method. The method was implemented as a set of programs for the BESM-5 computer. The FISC program computes the SSN attenuation coefficients and the MS correction. The DDS program computes a model DDCS averaged over the resolution function of the instrument. The SCATL program prepares the input information needed by FISC and can compute the scattering law for all materials. Results of applying the method to experimental DDCS of water measured with the DIN-1M spectrometer are presented.

  3. Epigenomic maintenance through dietary intervention can facilitate DNA repair process to slow down the progress of premature aging.

    Science.gov (United States)

    Ghosh, Shampa; Sinha, Jitendra Kumar; Raghunath, Manchala

    2016-09-01

    DNA damage caused by various sources remains one of the most researched topics in the area of aging and neurodegeneration. Increased DNA damage causes premature aging. Aging is plastic and is characterised by the decline in the ability of a cell/organism to maintain genomic stability. Lifespan can be modulated by various interventions like calorie restriction, a balanced diet of macro and micronutrients or supplementation with nutrients/nutrient formulations such as Amalaki rasayana, docosahexaenoic acid, resveratrol, curcumin, etc. Increased levels of DNA damage in the form of double stranded and single stranded breaks are associated with decreased longevity in animal models like WNIN/Ob obese rats. Erroneous DNA repair can result in accumulation of DNA damage products, which in turn result in premature aging disorders such as Hutchinson-Gilford progeria syndrome. Epigenomic studies of the aging process have opened a completely new arena for research and development of drugs and therapeutic agents. We propose here that agents or interventions that can maintain epigenomic stability and facilitate the DNA repair process can slow down the progress of premature aging, if not completely prevent it. © 2016 IUBMB Life, 68(9):717-721, 2016. © 2016 International Union of Biochemistry and Molecular Biology.

  4. Broadband Reflective Coating Process for Large FUVOIR Mirrors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — ZeCoat Corporation will develop and demonstrate a set of revolutionary coating processes for making broadband reflective coatings suitable for very large mirrors (4+...

  5. Slow perceptual processing at the core of developmental dyslexia: a parameter-based assessment of visual attention.

    Science.gov (United States)

    Stenneken, Prisca; Egetemeir, Johanna; Schulte-Körne, Gerd; Müller, Hermann J; Schneider, Werner X; Finke, Kathrin

    2011-10-01

The cognitive causes as well as the neurological and genetic basis of developmental dyslexia, a complex disorder of written language acquisition, are intensely discussed with regard to multiple-deficit models. Accumulating evidence has revealed dyslexics' impairments in a variety of tasks requiring visual attention. The heterogeneity of these experimental results, however, points to the need for measures that are sufficiently sensitive to differentiate between impaired and preserved attentional components within a unified framework. This first parameter-based group study of attentional components in developmental dyslexia addresses potentially altered attentional components that have recently been associated with parietal dysfunctions in dyslexia. We aimed to isolate the general attentional resources that might underlie reduced span performance, i.e., either a deficient working memory storage capacity, or a slowing in visual perceptual processing speed, or both. Furthermore, by analysing attentional selectivity in dyslexia, we addressed a potential lateralized abnormality of visual attention, i.e., a previously suggested rightward spatial deviation compared to normal readers. We investigated a group of high-achieving young adults with persisting dyslexia and matched normal readers in an experimental whole report and a partial report of briefly presented letter arrays. Possible deviations in the parametric values of the dyslexic compared to the control group were taken as markers for the underlying deficit. The dyslexic group showed a striking reduction in perceptual processing speed (by 26% compared to controls) while their working memory storage capacity was in the normal range. In addition, a spatial deviation of attentional weighting compared to the control group was confirmed in dyslexic readers, which was larger in participants with a more severe dyslexic disorder. In general, the present study supports the relevance of perceptual processing speed in disorders…

  6. The effect of deep and slow breathing on pain perception, autonomic activity, and mood processing--an experimental study.

    Science.gov (United States)

    Busch, Volker; Magerl, Walter; Kern, Uwe; Haas, Joachim; Hajak, Göran; Eichhammer, Peter

    2012-02-01

Deep and slow breathing (DSB) techniques, as a component of various relaxation techniques, have been reported as complementary approaches in the treatment of chronic pain syndromes, but the relevance of relaxation for alleviating pain during a breathing intervention was not evaluated so far. In order to disentangle the effects of relaxation and respiration, we investigated two different DSB techniques at the same respiration rates and depths on pain perception, autonomic activity, and mood in 16 healthy subjects. In the attentive DSB intervention, subjects were asked to breathe guided by a respiratory feedback task requiring a high degree of concentration and constant attention. In the relaxing DSB intervention, the subjects relaxed during the breathing training. The skin conductance levels, indicating sympathetic tone, were measured during the breathing maneuvers. Thermal detection and pain thresholds for cold and hot stimuli and profile of mood states were examined before and after the breathing sessions. The mean detection and pain thresholds showed a significant increase resulting from the relaxing DSB, whereas no significant changes of these thresholds were found associated with the attentive DSB. The mean skin conductance levels indicating sympathetic activity decreased significantly during the relaxing DSB intervention but not during the attentive DSB. Both breathing interventions showed similar reductions in negative feelings (tension, anger, and depression). Our results suggest that the way of breathing decisively influences autonomic and pain processing, thereby identifying DSB in concert with relaxation as the essential feature in the modulation of sympathetic arousal and pain perception.

  7. Fast or slow-foods? Describing natural variations in oral processing characteristics across a wide range of Asian foods.

    Science.gov (United States)

    Forde, C G; Leong, C; Chia-Ming, E; McCrickerd, K

    2017-02-22

The structural properties of foods have a functional role to play in oral processing behaviours and sensory perception, and also impact on meal size and the experience of fullness. This study adopted a new approach by using behavioural coding analysis of eating behaviours to explore how a range of food textures manifest as the microstructural properties of eating and expectations of fullness. A selection of 47 Asian foods were served in fixed quantities to a panel of participants (N = 12) and their eating behaviours were captured via web-camera recordings. Behavioural coding analysis was completed on the recordings to extract total bites, chews and swallows and cumulative time of the food spent in the mouth. From these measurements a series of microstructural properties including average bite size (g), chews per bite, oro-sensory exposure time (seconds) and average eating rate (g min⁻¹) were derived per food. The sensory and macronutrient properties of each food were correlated with the microstructure of eating to compare the differences in eating behaviour on a gram for gram basis. There were strong relationships between the perceived food textural properties and its eating behaviours and a food's total water content was the best predictor of its eating rate. Foods that were eaten at a slower eating rate, with smaller bites and more chews per bite were rated as higher in the expected fullness. These relationships are important as oral processing behaviours and beliefs about the potential satiating value of food influence portion decisions and moderate meal size. These data support the idea that naturally occurring differences in the food structure and texture could be used to design meals that slow the rate of eating and maximise fullness.
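The microstructural measures named above are simple ratios of the raw behavioural counts. A minimal helper (field names are illustrative, not the study's code) might look like:

```python
def oral_processing_measures(portion_g, bites, chews, in_mouth_s):
    """Derive the microstructural eating measures named in the abstract
    from raw behavioural counts (illustrative field names)."""
    return {
        'bite_size_g': portion_g / bites,                 # average bite size
        'chews_per_bite': chews / bites,
        'oro_sensory_exposure_s': in_mouth_s,             # cumulative in-mouth time
        'eating_rate_g_per_min': portion_g * 60.0 / in_mouth_s,
    }

# e.g. a 150 g portion eaten in 12 bites with 240 chews and 200 s in mouth:
m = oral_processing_measures(150, 12, 240, 200)
# -> 12.5 g bites, 20 chews per bite, 45 g/min eating rate
```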

  8. [Dual process in large number estimation under uncertainty].

    Science.gov (United States)

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  9. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif

    2017-01-07

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  11. Slow information processing after very severe closed head injury : impaired access to declarative knowledge and intact application and acquisition of procedural knowledge

    NARCIS (Netherlands)

    Timmerman, ME; Brouwer, WH

    As an explanation of the pattern of slow information processing after closed head injury (CHI), hypotheses of impaired access to declarative memory and intact application and acquisition of procedural memory after CHI are presented. These two hypotheses were tested by means of four cognitive

  12. Cohesive zone model for intergranular slow crack growth in ceramics: influence of the process and the microstructure

    International Nuclear Information System (INIS)

    Romero de la Osa, M; Olagnon, C; Chevalier, J; Estevez, R; Tallaron, C

    2011-01-01

Ceramic polycrystals are prone to slow crack growth (SCG), which is stress- and environmentally assisted, similarly to observations reported for silica glasses. The kinetics of fracture are known to depend on the load level, the temperature and the relative humidity. In addition, evidence is available on the influence of the microstructure on the SCG rate, with an increase in the crack velocity with decreasing grain size. Crack propagation takes place beyond a load threshold, which is grain-size dependent. We present a cohesive zone model for the intergranular failure process. The methodology accounts for an intrinsic opening that governs the length of the cohesive zone and allows the investigation of grain size effects. A rate- and temperature-dependent cohesive model is proposed (Romero de la Osa M, Estevez R et al 2009 J. Mech. Adv. Mater. Struct. 16 623–31) to mimic the reaction–rupture mechanism. The formulation is inspired by Michalske and Freiman's picture (Michalske and Freiman 1983 J. Am. Ceram. Soc. 66 284–8) together with a recent study by Zhu et al (2005 J. Mech. Phys. Solids 53 1597–623) of the reaction–rupture mechanism. The present investigation extends a previous work (Romero de la Osa et al 2009 Int. J. Fracture 158 157–67) in which the problem was formulated. Here, we explore the influence of the microstructure in terms of grain size, elastic properties and residual thermal stresses originating from cooling from the sintering temperature down to ambient conditions. Their influence on SCG under static loading is reported and the predictions are compared with experimental trends. We show that the initial stress state is responsible for the grain-size dependence reported experimentally for SCG. Furthermore, accounting for the initial stresses enables the prediction of a load threshold below which no crack growth is observed: crack arrest takes place when the crack path meets a region in compression.

  13. Slow cortical potential and theta/beta neurofeedback training in adults: effects on attentional processes, and motor system excitability

    Directory of Open Access Journals (Sweden)

Petra Studer

    2014-07-01

Neurofeedback (NF) is being successfully applied, among others, in children with ADHD and as a peak-performance training in healthy subjects. However, the neuronal mechanisms mediating a successful NF training have not yet been sufficiently uncovered for either theta/beta (T/B) or slow cortical potential (SCP) training, the two protocols established in NF for ADHD. In the present randomized controlled investigation in adults without a clinical diagnosis (n = 59), the specificity of the effects of these two NF protocols on attentional processes and motor system excitability was examined, focusing on the underlying neuronal mechanisms. NF training consisted of 10 double sessions, and self-regulation skills were analyzed. Pre- and post-training assessments encompassed performance and event-related potential measures during an attention task, and motor system excitability assessed by transcranial magnetic stimulation. Some NF-protocol-specific effects were obtained. However, due to the limited sample size, medium effects did not reach the level of significance. Self-regulation abilities during negativity trials of the SCP training were associated with increased contingent negative variation amplitudes, indicating improved resource allocation during cognitive preparation. Theta/beta training was associated with increased response speed, and decreased target-P3 amplitudes after successful theta/beta regulation suggested reduced attentional resources necessary for stimulus evaluation. Motor system excitability effects after theta/beta training paralleled the effects of methylphenidate. Overall, our results are limited by the insufficiently acquired self-regulation skills, but some specific effects between good and poor learners could be described. Future studies with larger sample sizes and sufficient acquisition of self-regulation skills are needed to further evaluate the protocol-specific effects on attention and motor system excitability.

  14. Anthropogenic control on geomorphic process rates: can we slow down the erosion rates? (Geomorphology Outstanding Young Scientist Award & Penck Lecture)

    Science.gov (United States)

    Vanacker, V.

    2012-04-01

The surface of the Earth is changing rapidly, largely in response to anthropogenic perturbation. Direct anthropogenic disturbance of natural environments may be much larger in many places than the (projected) indirect effects of climate change. There is now ample evidence that humans have significantly altered geomorphic process rates, mainly through changes in vegetation composition, density and cover. While much attention has been given to the impact of vegetation degradation on geomorphic process rates, I suggest that the pathway of restoration is equally important to investigate. First, vegetation recovery after crop abandonment has a rapid and drastic impact on geomorphic process rates. Our data from degraded catchments in the tropical Andes show that erosion rates can be reduced by a factor of up to 100 when the protective vegetation cover is increased. During vegetation restoration, the combined effects of the reduction in surface runoff, sediment production and hydrological connectivity are stronger than the individual effects taken together. Therefore, changes in erosion and sedimentation during restoration are not simply the reverse of those observed during degradation. Second, anthropogenic perturbation causes a profound but often temporary change in geomorphic process rates. Reconstruction of soil erosion rates in Spain shows that modern erosion rates in well-vegetated areas are similar to long-term rates, despite evidence of strong pulses in historical erosion rates after vegetation clearance and agriculture. The soil-vegetation system may thus be resilient to short pulses of accelerated erosion (and deposition), as a dynamic coupling between soil erosion and soil production may exist even in degraded environments.

  15. The influence of slow cooling on Y211 size and content in single-grain YBCO bulk superconductor through the infiltration-growth process

    Energy Technology Data Exchange (ETDEWEB)

    Ouerghi, A [Systems and Applied Mechanics Laboratory LASMAP, Polytechnic School of Tunisia, Rue El Kawarezmi La Marsa 743, Université de Carthage Tunis (Tunisia); Moutalbi, N., E-mail: nahed.moutalbi@yahoo.fr [Systems and Applied Mechanics Laboratory LASMAP, Polytechnic School of Tunisia, Rue El Kawarezmi La Marsa 743, Université de Carthage Tunis (Tunisia); Noudem, J.G. [CRISMAT-ENSICAEN (UMR-CNRS 6508), Université de Caen-Basse-Normandie, F-14050 Caen (France); LUSAC, Université de Caen-Basse-Normandie F-50130 Cherbourg-Octeville (France); M' chirgui, A. [Systems and Applied Mechanics Laboratory LASMAP, Polytechnic School of Tunisia, Rue El Kawarezmi La Marsa 743, Université de Carthage Tunis (Tunisia)

    2017-03-15

Highlights: • YBCO bulk superconductors are produced by an optimized seeded infiltration and growth process. • The slow-cooling time, within a fixed slow-cooling temperature window, considerably affects the surface morphology and the bulk's microstructure. • The Y211 particle size and content depend on the slow-cooling time, and their distribution behavior changes from one position to another. • There is an optimum slow-cooling time, estimated at 88 h, beyond which the shrinkage of both the liquid phase and the Y211 pellet is maximal, without any improvement of the crystal grain growth. • The magnetic trapped-flux distribution of a given sample brings out the single-grain characteristic. - Abstract: Highly textured YBa2Cu3O7−δ (Y123) superconductors were produced using a modified top-seeded infiltration growth (TSIG) process. The liquid source is made of only Y123 powder, whereas the solid source is composed of Y2BaCuO5 (Y211) powder. We aim to control the amount of liquid that infiltrates the solid pellet, which in turn controls the final amount of Y2BaCuO5 particles in the Y123 matrix. The effect of the slow-cooling kinetics on sample morphology, grain growth and final microstructure was also investigated. It is shown that an appropriate slow-cooling time can also contribute to controlling the amount of Y211 inclusions in the final structure of the Y123 bulk. We report herein the Y211 particle size and density distribution in the whole Y123 matrix. The present work shows that the finest Y211 particles are located under the seed and that their size and density increase with distance from the seed.

  16. Drell–Yan process at Large Hadron Collider

    Indian Academy of Sciences (India)

the Drell–Yan process [1] first studied with muon final states. In Standard … Two large-statistics sets of signal events, based on the value of the dimuon invariant mass, … quality control criteria are applied to this globally reconstructed muon.

  17. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach in flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced-dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convective activity. For individual sites, we investigate the exceedance probability with which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs. large-scale).
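Kernel PCA, the dimensionality-reduction step mentioned above, can be sketched compactly. The version below is the plain unsupervised variant (the study uses a supervised extension) with an RBF kernel and power iteration for the leading component; all parameter names are illustrative:

```python
import math

def rbf_kernel_pca_1d(X, gamma=0.5, iters=500):
    """Project samples onto the leading component of a centred RBF kernel
    matrix via power iteration (plain, unsupervised kernel PCA; the study
    uses a supervised variant).  Returns one score per sample, up to scale."""
    n = len(X)
    # RBF (Gaussian) kernel matrix
    K = [[math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(X[i], X[j])))
          for j in range(n)] for i in range(n)]
    # double-centre the kernel matrix (centres the data in feature space)
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    Kc = [[K[i][j] - row[i] - row[j] + tot for j in range(n)] for i in range(n)]
    # power iteration for the leading eigenvector; start off-centre, since
    # the all-ones vector lies in the null space of the centred matrix
    v = [1.0] + [0.0] * (n - 1)
    for _ in range(iters):
        w = [sum(Kc[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w)) or 1e-12
        v = [x / norm for x in w]
    # scores are Kc @ v, i.e. eigenvalue times eigenvector entries
    return [sum(Kc[i][j] * v[j] for j in range(n)) for i in range(n)]

# Two well-separated 2-D clusters land on opposite sides of the component:
scores = rbf_kernel_pca_1d([[0, 0], [0.1, 0], [0, 0.1],
                            [5, 5], [5.1, 5], [5, 5.1]])
```

Clustering in the resulting low-dimensional space (as the abstract describes) can then use any standard method on the scores.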

  18. Nonterrestrial material processing and manufacturing of large space systems

    Science.gov (United States)

    Von Tiesenhausen, G.

    1979-01-01

    Nonterrestrial processing of materials and manufacturing of large space system components from preprocessed lunar materials at a manufacturing site in space is described. Lunar materials mined and preprocessed at the lunar resource complex will be flown to the space manufacturing facility (SMF), where together with supplementary terrestrial materials, they will be final processed and fabricated into space communication systems, solar cell blankets, radio frequency generators, and electrical equipment. Satellite Power System (SPS) material requirements and lunar material availability and utilization are detailed, and the SMF processing, refining, fabricating facilities, material flow and manpower requirements are described.

  19. Resin infusion of large composite structures modeling and manufacturing process

    Energy Technology Data Exchange (ETDEWEB)

    Loos, A.C. [Michigan State Univ., Dept. of Mechanical Engineering, East Lansing, MI (United States)

    2006-07-01

The resin infusion processes resin transfer molding (RTM), resin film infusion (RFI) and vacuum-assisted resin transfer molding (VARTM) are cost-effective techniques for the fabrication of complex-shaped composite structures. The dry fibrous preform is placed in the mold, consolidated, resin impregnated and cured in a single-step process. The fibrous preforms are often constructed near net shape using highly automated textile processes such as knitting, weaving and braiding. In this paper, the infusion processes RTM, RFI and VARTM are discussed along with the advantages of each technique compared with traditional composite fabrication methods such as prepreg tape lay-up and autoclave cure. The large number of processing variables and the complex material behavior during infiltration and cure make experimental optimization of the infusion processes costly and inefficient. Numerical models have been developed which can be used to simulate the resin infusion processes. The model formulation and solution procedures for the VARTM process are presented. A VARTM process simulation of a carbon fiber preform is presented to demonstrate the type of information that can be generated by the model and to compare the model predictions with experimental measurements. Overall, the predicted flow front positions, resin pressures and preform thicknesses agree well with the measured values. The results of the simulation show the potential cost and performance benefits that can be realized by using a simulation model as part of the development process.

  20. In situ hybridisation of a large repertoire of muscle-specific transcripts in fish larvae: the new superficial slow-twitch fibres exhibit characteristics of fast-twitch differentiation.

    Science.gov (United States)

    Chauvigné, F; Ralliere, C; Cauty, C; Rescan, P Y

    2006-01-01

Much of the present information on muscle differentiation in fish concerns the early embryonic stages. To learn more about the maturation and the diversification of the fish myotomal fibres in later stages of ontogeny, we investigated, by means of in situ hybridisation, the developmental expression of a large repertoire of muscle-specific genes in trout larvae from hatching to yolk resorption. At hatching, transcripts for fast and slow muscle protein isoforms, namely myosins, tropomyosins, troponins and myosin binding protein C, were present in the deep fast and the superficial slow areas of the myotome, respectively. During the myotome expansion that follows hatching, the expression of fast isoforms became progressively confined to the borders of the fast muscle mass, whereas, in contrast, slow muscle isoform transcripts were uniformly expressed in all the slow fibres. Transcripts for several enzymes involved in oxidative metabolism, such as citrate synthase, cytochrome oxidase component IV and succinate dehydrogenase, were present throughout the whole myotome of hatching embryos but in later stages became concentrated in slow fibres as well as in lateral fast fibres. Surprisingly, the slow fibres that are added externally to the single superficial layer of the embryonic (original) slow muscle fibres expressed not only slow-twitch muscle isoforms but also, transiently, a subset of fast-twitch muscle isoforms including MyLC1, MyLC3, MyHC and myosin binding protein C. Taken together these observations show that the growth of the myotome of the fish larvae is associated with complex patterns of muscular gene expression and demonstrate the unexpected presence of fast muscle isoform-expressing fibres in the most superficial part of the slow muscle.

  1. Research on Francis Turbine Modeling for Large Disturbance Hydropower Station Transient Process Simulation

    Directory of Open Access Journals (Sweden)

    Guangtao Zhang

    2015-01-01

In the field of hydropower station transient process simulation (HSTPS), the characteristic graph-based iterative hydroturbine model (CGIHM) has been widely used when large-disturbance hydroturbine modeling is involved. However, with this model, iteration must be used to calculate speed and pressure, and slow convergence or non-convergence may be encountered for reasons such as a peculiar characteristic-graph profile or an inappropriate iterative or interpolation algorithm. Other conventional large-disturbance hydroturbine models also have drawbacks and are difficult to use widely in HSTPS. Therefore, to obtain an accurate simulation result, a simple method for hydroturbine modeling is proposed. With this method, both the initial operating point and the transfer coefficients of the linear hydroturbine model keep changing during simulation. Hence, it can reflect the nonlinearity of the hydroturbine and be used for Francis turbine simulation under large-disturbance conditions. To validate the proposed method, both large-disturbance and small-disturbance simulations of a single hydro unit supplying a resistive, isolated load were conducted. The simulation results were shown to be consistent with those of a field test. Consequently, the proposed method is an attractive option for HSTPS involving Francis turbine modeling under large-disturbance conditions.
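The core idea of re-deriving the operating point and transfer coefficients at every step can be sketched as successive linearisation. In the toy model below, the torque characteristic, load, and constants are made-up illustrative values, not Francis turbine data or the paper's model:

```python
def simulate_successive_linearisation(y_path, h=1.0, m_load=0.7,
                                      Ta=5.0, dt=0.01, eps=1e-4):
    """Toy sketch of a linear turbine model whose operating point and
    transfer coefficients are refreshed at every step, so the simulation
    tracks a nonlinear characteristic.  The torque surface, load and
    constants are made-up illustrative values."""
    def torque(x, y, head):  # assumed nonlinear torque characteristic
        return y * head ** 1.5 * (1.2 - 0.4 * x) - 0.1 * x ** 2

    x = 1.0                  # per-unit rotor speed, initial operating point
    out = []
    for y in y_path:         # gate-opening trajectory (the disturbance)
        # transfer coefficients from finite differences at the CURRENT point
        ex = (torque(x + eps, y, h) - torque(x - eps, y, h)) / (2 * eps)
        ey = (torque(x, y + eps, h) - torque(x, y - eps, h)) / (2 * eps)
        m = torque(x, y, h)  # the linear model is exact at its expansion point
        x += dt * (m - m_load) / Ta   # rotor acceleration (swing equation)
        out.append((x, m, ex, ey))
    return out

# A 20% gate-closing disturbance applied halfway through the run:
trace = simulate_successive_linearisation([1.0] * 25 + [0.8] * 25)
```

In a full simulator the refreshed coefficients would feed the coupled penstock and generator equations; here they are simply recorded to show the relinearisation step.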

  2. QCD phenomenology of large-p_T processes

    International Nuclear Information System (INIS)

    Stroynowski, R.

    1979-11-01

    Quantum Chromodynamics (QCD) provides a framework for the possible high-accuracy calculations of large-p_T processes. The description of the large-transverse-momentum phenomena is introduced in terms of the parton model, and the modifications expected from QCD are described by using as an example single-particle distributions. The present status of available data (π, K, p, p-bar, eta, particle ratios, beam ratios, direct photons, nuclear target dependence), the evidence for jets, and the future prospects are reviewed. 80 references, 33 figures, 3 tables

  3. Tapping to a slow tempo in the presence of simple and complex meters reveals experience-specific biases for processing music.

    Directory of Open Access Journals (Sweden)

    Sangeeta Ullal-Gupta

    Full Text Available Musical meters vary considerably across cultures, yet relatively little is known about how culture-specific experience influences metrical processing. In Experiment 1, we compared American and Indian listeners' synchronous tapping to slow sequences. Inter-tone intervals contained silence or to-be-ignored rhythms that were designed to induce a simple meter (familiar to Americans and Indians) or a complex meter (familiar only to Indians). A subset of trials contained an abrupt switch from one rhythm to another to assess the disruptive effects of contradicting the initially implied meter. In the unfilled condition, both groups tapped earlier than the target and showed large tap-tone asynchronies (measured in relative phase). When inter-tone intervals were filled with simple-meter rhythms, American listeners tapped later than targets, but their asynchronies were smaller and declined more rapidly. Likewise, asynchronies rose sharply following a switch away from simple-meter but not from complex-meter rhythm. By contrast, Indian listeners performed similarly across all rhythm types, with asynchronies rapidly declining over the course of complex- and simple-meter trials. For these listeners, a switch from either simple or complex meter increased asynchronies. Experiment 2 tested American listeners but doubled the duration of the synchronization phase prior to (and after) the switch. Here, compared with simple meters, complex-meter rhythms elicited larger asynchronies that declined at a slower rate; however, asynchronies increased after the switch for all conditions. Our results provide evidence that ease of meter processing depends to a great extent on the amount of experience with specific meters.

  4. Tapping to a slow tempo in the presence of simple and complex meters reveals experience-specific biases for processing music.

    Science.gov (United States)

    Ullal-Gupta, Sangeeta; Hannon, Erin E; Snyder, Joel S

    2014-01-01

    Musical meters vary considerably across cultures, yet relatively little is known about how culture-specific experience influences metrical processing. In Experiment 1, we compared American and Indian listeners' synchronous tapping to slow sequences. Inter-tone intervals contained silence or to-be-ignored rhythms that were designed to induce a simple meter (familiar to Americans and Indians) or a complex meter (familiar only to Indians). A subset of trials contained an abrupt switch from one rhythm to another to assess the disruptive effects of contradicting the initially implied meter. In the unfilled condition, both groups tapped earlier than the target and showed large tap-tone asynchronies (measured in relative phase). When inter-tone intervals were filled with simple-meter rhythms, American listeners tapped later than targets, but their asynchronies were smaller and declined more rapidly. Likewise, asynchronies rose sharply following a switch away from simple-meter but not from complex-meter rhythm. By contrast, Indian listeners performed similarly across all rhythm types, with asynchronies rapidly declining over the course of complex- and simple-meter trials. For these listeners, a switch from either simple or complex meter increased asynchronies. Experiment 2 tested American listeners but doubled the duration of the synchronization phase prior to (and after) the switch. Here, compared with simple meters, complex-meter rhythms elicited larger asynchronies that declined at a slower rate; however, asynchronies increased after the switch for all conditions. Our results provide evidence that ease of meter processing depends to a great extent on the amount of experience with specific meters.
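
Asynchronies "measured in relative phase", as in both versions of this abstract, can be computed by wrapping the tap-tone offset into one metrical cycle. The function below is an illustrative formula, not the authors' exact pipeline.

```python
# Relative-phase asynchrony: each tap-tone offset is divided by the
# inter-onset interval (IOI) and wrapped into [-0.5, 0.5), so negative
# values mean tapping early and positive values tapping late.

def relative_phase(tap_times, tone_times, ioi):
    phases = []
    for tap, tone in zip(tap_times, tone_times):
        phi = (tap - tone) / ioi
        phases.append((phi + 0.5) % 1.0 - 0.5)  # wrap into [-0.5, 0.5)
    return phases
```

For a 1 s IOI, a tap at 0.9 s against a tone at 1.0 s gives a phase of -0.1 (early), and a tap at 2.1 s against a tone at 2.0 s gives +0.1 (late).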

  5. Slow cortical potential and theta/beta neurofeedback training in adults: effects on attentional processes, and motor system excitability

    OpenAIRE

    Petra eStuder; Oliver eKratz; Holger eGevensleben; Aribert eRothenberger; Gunther H Moll; Martin eHautzinger; Hartmut eHeinrich; Hartmut eHeinrich

    2014-01-01

    Neurofeedback (NF) is being successfully applied, among others, in children with ADHD and as a peak performance training in healthy subjects. However, the neuronal mechanisms mediating a successful NF training have not yet been sufficiently uncovered for both theta/beta (T/B), and slow cortical potential (SCP) training, two protocols established in NF in ADHD. In the present randomized controlled investigation in adults without a clinical diagnosis (n = 59), the specificity of the effects of ...

  6. Slow cortical potential and theta/beta neurofeedback training in adults: effects on attentional processes and motor system excitability

    OpenAIRE

    Studer, Petra; Kratz, Oliver; Gevensleben, Holger; Rothenberger, Aribert; Moll, Gunther H.; Hautzinger, Martin; Heinrich, Hartmut

    2014-01-01

    Neurofeedback (NF) is being successfully applied, among others, in children with attention deficit/hyperactivity disorder (ADHD) and as a peak performance training in healthy subjects. However, the neuronal mechanisms mediating a successful NF training have not yet been sufficiently uncovered for both theta/beta (T/B), and slow cortical potential (SCP) training, two protocols established in NF in ADHD. In the present, randomized, controlled investigation in adults without a clinical diagnosis...

  7. Measuring the In-Process Figure, Final Prescription, and System Alignment of Large Optics and Segmented Mirrors Using Lidar Metrology

    Science.gov (United States)

    Ohl, Raymond; Slotwinski, Anthony; Eegholm, Bente; Saif, Babak

    2011-01-01

    The fabrication of large optics is traditionally a slow process, and fabrication capability is often limited by measurement capability. While techniques exist to measure mirror figure with nanometer precision, measurements of large-mirror prescription are typically limited to submillimeter accuracy. Using a lidar instrument enables one to measure the optical surface rough figure and prescription in virtually all phases of fabrication without moving the mirror from its polishing setup. This technology improves the uncertainty of mirror prescription measurement to the micron regime.

  8. Large sample hydrology in NZ: Spatial organisation in process diagnostics

    Science.gov (United States)

    McMillan, H. K.; Woods, R. A.; Clark, M. P.

    2013-12-01

    A key question in hydrology is how to predict the dominant runoff generation processes in any given catchment. This knowledge is vital for a range of applications in forecasting hydrological response and related processes such as nutrient and sediment transport. A step towards this goal is to map dominant processes in locations where data is available. In this presentation, we use data from 900 flow gauging stations and 680 rain gauges in New Zealand, to assess hydrological processes. These catchments range in character from rolling pasture, to alluvial plains, to temperate rainforest, to volcanic areas. By taking advantage of so many flow regimes, we harness the benefits of large-sample and comparative hydrology to study patterns and spatial organisation in runoff processes, and their relationship to physical catchment characteristics. The approach we use to assess hydrological processes is based on the concept of diagnostic signatures. Diagnostic signatures in hydrology are targeted analyses of measured data which allow us to investigate specific aspects of catchment response. We apply signatures which target the water balance, the flood response and the recession behaviour. We explore the organisation, similarity and diversity in hydrological processes across the New Zealand landscape, and how these patterns change with scale. We discuss our findings in the context of the strong hydro-climatic gradients in New Zealand, and consider the implications for hydrological model building on a national scale.
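
Two of the signature families mentioned here (water balance and recession behaviour) reduce to short computations on gauged series. The sketch below uses generic formulas, a runoff ratio and a linear-reservoir recession constant, which are common choices but not necessarily the exact signatures the authors applied.

```python
import numpy as np

# Illustrative diagnostic signatures on daily series (units: mm/day).

def runoff_ratio(flow_mm, rain_mm):
    """Water-balance signature: fraction of rainfall leaving as streamflow."""
    return np.sum(flow_mm) / np.sum(rain_mm)

def recession_constant(flow):
    """Recession signature: fit Q(t+1) = Q(t) * exp(-k) over strictly
    receding, positive-flow time steps and return the mean k."""
    q = np.asarray(flow, dtype=float)
    receding = (q[1:] < q[:-1]) & (q[1:] > 0)
    return float(-np.log(q[1:][receding] / q[:-1][receding]).mean())
```

A purely geometric recession 10, 8, 6.4, 5.12 yields k = -ln(0.8) ≈ 0.223 per day; mapping such signatures across the 900 gauged catchments is what exposes the spatial organisation discussed in the abstract.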

  9. Large Eddy Simulation of Cryogenic Injection Processes at Supercritical Pressure

    Science.gov (United States)

    Oefelein, Joseph C.

    2002-01-01

    This paper highlights results from the first of a series of hierarchical simulations aimed at assessing the modeling requirements for application of the large eddy simulation technique to cryogenic injection and combustion processes in liquid rocket engines. The focus is on liquid-oxygen-hydrogen coaxial injectors at a condition where the liquid-oxygen is injected at a subcritical temperature into a supercritical environment. For this situation a diffusion dominated mode of combustion occurs in the presence of exceedingly large thermophysical property gradients. Though continuous, these gradients approach the behavior of a contact discontinuity. Significant real gas effects and transport anomalies coexist locally in colder regions of the flow, with ideal gas and transport characteristics occurring within the flame zone. The current focal point is on the interfacial region between the liquid-oxygen core and the coaxial hydrogen jet where the flame anchors itself.

  10. On Building and Processing of Large Digitalized Map Archive

    Directory of Open Access Journals (Sweden)

    Milan Simunek

    2011-07-01

    Full Text Available A long list of problems needs to be solved during long-term work on a virtual model of Prague, the aim of which is to show the historical development of the city in virtual reality. This paper presents an integrated solution to the digitalizing, cataloguing and processing of a large number of maps from different periods and from a variety of sources. A specialized GIS software application was developed to allow for fast georeferencing (using an evolutionary algorithm), for cataloguing in an internal database, and subsequently for an easy lookup of relevant maps, so that the maps can be processed further to serve as the main input for proper modeling of the changing face of the city through time.

  11. Possible implications of large scale radiation processing of food

    International Nuclear Information System (INIS)

    Zagorski, Z.P.

    1990-01-01

    Large scale irradiation has been discussed in terms of the participation of processing cost in the final value of the improved product. Another factor has been taken into account, namely the saturation of the market with the new product. In the case of successful projects the participation of irradiation cost is low, and the demand for the better product is covered. The limited availability of sources makes even modest saturation of the market difficult if all food is to be subjected to correct radiation treatment. The implementation of food preservation requires a decided selection of those kinds of food which comply with all conditions, i.e. acceptance by regulatory bodies, real improvement of quality, and economy. The last condition favours the use of low-energy electron beams. The conditions for successful processing are best fulfilled in the group of dry food, expensive spices in particular. (author)

  12. Possible implications of large scale radiation processing of food

    Science.gov (United States)

    Zagórski, Z. P.

    Large scale irradiation has been discussed in terms of the participation of processing cost in the final value of the improved product. Another factor has been taken into account, namely the saturation of the market with the new product. In the case of successful projects the participation of irradiation cost is low, and the demand for the better product is covered. The limited availability of sources makes even modest saturation of the market difficult if all food is to be subjected to correct radiation treatment. The implementation of food preservation requires a decided selection of those kinds of food which comply with all conditions, i.e. acceptance by regulatory bodies, real improvement of quality, and economy. The last condition favours the use of low-energy electron beams. The conditions for successful processing are best fulfilled in the group of dry food, expensive spices in particular.

  13. Hierarchical optimal control of large-scale nonlinear chemical processes.

    Science.gov (United States)

    Ramezani, Mohammad Hossein; Sadati, Nasser

    2009-01-01

    In this paper, a new approach is presented for optimal control of large-scale chemical processes. In this approach, the chemical process is decomposed into smaller sub-systems at the first level, and a coordinator at the second level, for which a two-level hierarchical control strategy is designed. For this purpose, each sub-system in the first level can be solved separately, by using any conventional optimization algorithm. In the second level, the solutions obtained from the first level are coordinated using a new gradient-type strategy, which is updated by the error of the coordination vector. The proposed algorithm is used to solve the optimal control problem of a complex nonlinear chemical stirred tank reactor (CSTR), where its solution is also compared with the ones obtained using the centralized approach. The simulation results show the efficiency and the capability of the proposed hierarchical approach, in finding the optimal solution, over the centralized method.
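
The two-level scheme described here (independent first-level solutions, coordinated by a gradient-type update on the coordination error) can be illustrated on a deliberately tiny problem. The quadratic subproblems and step size below are invented for illustration; they are not the paper's CSTR example.

```python
# Two-level hierarchical sketch: each subsystem minimises its own cost
# 0.5*a*u**2 + lam*u for the current coordination variable lam, and the
# coordinator adjusts lam with a gradient step on the coupling error
# sum(u) = target.

def solve_subsystem(a, lam):
    """Closed-form first-level minimiser: d/du (0.5*a*u**2 + lam*u) = 0."""
    return -lam / a

def coordinate(a_list, target, step=0.4, iters=200):
    lam = 0.0
    for _ in range(iters):
        u = [solve_subsystem(a, lam) for a in a_list]  # level 1, in parallel
        error = sum(u) - target                        # coupling violation
        lam += step * error                            # level 2 update
    return lam, u
```

With `a_list=[1.0, 2.0]` and `target=3.0`, the coordinator settles at lam = -2 and allocations u = [2, 1]; in the paper's setting each `solve_subsystem` would instead be a conventional optimizer run on one decomposed piece of the chemical process.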

  14. Combined process automation for large-scale EEG analysis.

    Science.gov (United States)

    Sfondouris, John L; Quebedeaux, Tabitha M; Holdgraf, Chris; Musto, Alberto E

    2012-01-01

    Epileptogenesis is a dynamic process producing increased seizure susceptibility. Electroencephalography (EEG) data provides information critical in understanding the evolution of epileptiform changes throughout epileptic foci. We designed an algorithm to facilitate efficient large-scale EEG analysis via linked automation of multiple data processing steps. Using EEG recordings obtained from electrical stimulation studies, the following steps of EEG analysis were automated: (1) alignment and isolation of pre- and post-stimulation intervals, (2) generation of user-defined band frequency waveforms, (3) spike-sorting, (4) quantification of spike and burst data and (5) power spectral density analysis. This algorithm allows for quicker, more efficient EEG analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
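
The linked automation of steps (1)-(5) can be mimicked with a minimal chained pipeline; the threshold detector and toy sine trace below are stand-ins for the paper's actual spike-sorting and band-filtering stages.

```python
import numpy as np

# Minimal linked pipeline: (1) isolate an interval, then (3)-(4) detect and
# count threshold crossings. Band filtering and spectral analysis would be
# further stages chained the same way.

def isolate_interval(signal, fs, start_s, stop_s):
    """Step 1: cut out a [start_s, stop_s) window from a sampled trace."""
    return signal[int(start_s * fs):int(stop_s * fs)]

def detect_spikes(signal, threshold):
    """Steps 3-4 (toy version): indices of upward threshold crossings."""
    above = signal > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 5.0 * t)             # toy 5 Hz oscillation
window = isolate_interval(trace, fs, 0.0, 0.5)  # first half second
events = detect_spikes(window, 0.9)             # crossings near each peak
```

Because each stage consumes the previous stage's output, the whole chain can run unattended over many recordings, which is the point of the automation described above.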

  15. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing trend of computational requirements in JAERI, introduction of a high-speed computer with vector processing faculty (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes to pipelined-vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed with respect to large-scale nuclear codes by examining the following items: 1) The present feature of computational load in JAERI is analyzed by compiling the computer utilization statistics. 2) Vector processing efficiency is estimated for the ten heavily-used nuclear codes by analyzing their dynamic behaviors run on a scalar machine. 3) Vector processing efficiency is measured for the other five nuclear codes by using the current vector processors, FACOM 230-75 APU and CRAY-1. 4) Effectiveness of applying a high-speed vector processor to nuclear codes is evaluated by taking account of the characteristics in JAERI jobs. Problems of vector processors are also discussed from the view points of code performance and ease of use. (author)

  16. Processing and properties of large grain (RE)BCO

    International Nuclear Information System (INIS)

    Cardwell, D.A.

    1998-01-01

    The potential of high temperature superconductors to generate large magnetic fields and to carry current with low power dissipation at 77 K is particularly attractive for a variety of permanent magnet applications. As a result, large grain bulk (RE)-Ba-Cu-O ((RE)BCO) materials have been developed by melt process techniques in an attempt to fabricate practical materials for use in high field devices. This review outlines the current state of the art in this field of processing, including seeding requirements for the controlled fabrication of these materials, and the origin of striking growth features such as the formation of a facet plane around the seed, platelet boundaries and (RE)2BaCuO5 (RE-211) inclusions in the seeded melt grown microstructure. An observed variation in critical current density in large grain (RE)BCO samples is accounted for by Sm contamination of the material in the vicinity of the seed and by the development of a non-uniform growth morphology at ∼4 mm from the seed position. (RE)Ba2Cu3O7-δ (RE-123) dendrites are observed to form and broaden preferentially within the a/b plane of the lattice in this growth regime. Finally, trapped fields in excess of 3 T have been reported in irradiated U-doped YBCO, and (RE)1+xBa2-xCu3Oy (RE=Sm, Nd) materials have been observed to carry transport current in fields of up to 10 T at 77 K. This underlines the potential of bulk (RE)BCO materials for practical permanent magnet type applications. (orig.)

  17. Processes with large p_T in quantum chromodynamics

    International Nuclear Information System (INIS)

    Slepchenko, L.A.

    1981-01-01

    Necessary data on deep inelastic processes and processes of hard hadron collisions, and their interpretation in QCD, are presented. Power-law scaling of exclusive and inclusive cross sections at large transverse momenta, and the electromagnetic and inelastic form factors (structure functions) of hadrons, are discussed. In searching for a method of taking account of QCD effects, scaling violation is considered. It is shown that for large transverse momenta the deep inelastic l-h scattering is represented as scattering off a compound system (hadron) in the impulse approximation. Within the parton model, the hadron cross section is calculated through renormalized parton structure functions. A proof of factorization in the leading logarithmic approximation of QCD is obtained by means of a quark-gluon diagram technique. The cross section of the hadron reaction in factorized form, analogous to the l-h scattering, is calculated. It is shown that (a) summing diagrams with gluon emission generates scaling violation in the renormalized structure functions (SF) of quarks and gluons, and a running coupling constant arises simultaneously; (b) the character of the Bjorken scaling violation of the SF is the same as in deep inelastic lepton scattering. QCD problems which cannot be solved within the framework of perturbation theory are discussed. The evolution of SF describing the bound state of a hadron and the hadron light cone are studied. Radiative corrections arising in two-loop and higher approximations are evaluated. QCD corrections to point-like power asymptotics of processes at high energies and momentum transfers are studied using the example of the inclusive production of quark and gluon jets. Quark counting rules for the anomalous dimensions of QCD are obtained. It is concluded that the considered limit of the inclusive cross sections is close to

  18. Application of large radiation sources in chemical processing industry

    International Nuclear Information System (INIS)

    Krishnamurthy, K.

    1977-01-01

    Large radiation sources and their application in chemical processing industry are described. A reference has also been made to the present developments in this field in India. Radioactive sources, notably 60 Co, are employed in production of wood-plastic and concrete-polymer composites, vulcanised rubbers, polymers, sulfochlorinated paraffin hydrocarbons and in a number of other applications which require deep penetration and high reliability of source. Machine sources of electrons are used in production of heat shrinkable plastics, insulation materials for cables, curing of paints etc. Radiation sources have also been used for sewage hygienisation. As for the scene in India, 60 Co sources, gamma chambers and batch irradiators are manufactured. A list of the on-going R and D projects and organisations engaged in research in this field is given. (M.G.B.)

  19. Accelerated decomposition techniques for large discounted Markov decision processes

    Science.gov (United States)

    Larach, Abdelhadi; Chafik, S.; Daoui, C.

    2017-12-01

    Many hierarchical techniques to solve large Markov decision processes (MDPs) are based on the partition of the state space into strongly connected components (SCCs) that can be classified into some levels. In each level, smaller problems named restricted MDPs are solved, and then these partial solutions are combined to obtain the global solution. In this paper, we first propose a novel algorithm, which is a variant of Tarjan's algorithm that simultaneously finds the SCCs and their belonging levels. Second, a new definition of the restricted MDPs is presented to ameliorate some hierarchical solutions in discounted MDPs using value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and the experiment results are presented to illustrate the benefit of the proposed decomposition algorithms.
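
A compact version of the decomposition step (SCCs plus their levels in the condensation DAG, after which each restricted MDP can be solved level by level) might look like the generic sketch below; it is standard Tarjan plus a separate level assignment, not the authors' simultaneous single-pass variant.

```python
# Decompose a transition graph {state: [successor states]} into strongly
# connected components (Tarjan) and assign each SCC a level: terminal SCCs
# get level 0, and every other SCC sits one above its highest successor.

def tarjan_scc(graph):
    index, low, on_stack, stack = {}, {}, set(), []
    sccs, counter = [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w); comp.append(w)
                if w == v:
                    break
            sccs.append(frozenset(comp))

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

def scc_levels(graph, sccs):
    """Level = longest chain of SCC-DAG edges down to a terminal SCC."""
    of = {v: c for c in sccs for v in c}
    memo = {}

    def level(c):
        if c not in memo:
            memo[c] = 1 + max((level(of[w]) for v in c for w in graph.get(v, ())
                               if of[w] is not c), default=-1)
        return memo[c]

    return {c: level(c) for c in sccs}
```

Solving the level-0 restricted MDPs first and propagating their values upward is what lets value iteration run on many small problems instead of one large one.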

  20. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    Quite a few studies have been conducted in order to implement oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effective effort of carbon capture and storage. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609MW utility boiler is numerically studied, in which...... calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also result in a higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same...

  1. Hydrothermal processes above the Yellowstone magma chamber: Large hydrothermal systems and large hydrothermal explosions

    Science.gov (United States)

    Morgan, L.A.; Shanks, W.C. Pat; Pierce, K.L.

    2009-01-01

    Hydrothermal explosions are violent and dramatic events resulting in the rapid ejection of boiling water, steam, mud, and rock fragments from source craters that range from a few meters up to more than 2 km in diameter; associated breccia can be emplaced as much as 3 to 4 km from the largest craters. Hydrothermal explosions occur where shallow interconnected reservoirs of steam- and liquid-saturated fluids with temperatures at or near the boiling curve underlie thermal fields. Sudden reduction in confining pressure causes fluids to flash to steam, resulting in significant expansion, rock fragmentation, and debris ejection. In Yellowstone, hydrothermal explosions are a potentially significant hazard for visitors and facilities and can damage or even destroy thermal features. The breccia deposits and associated craters formed from hydrothermal explosions are mapped as mostly Holocene (the Mary Bay deposit is older) units throughout Yellowstone National Park (YNP) and are spatially associated with the 0.64-Ma Yellowstone caldera and the active Norris-Mammoth tectonic corridor. In Yellowstone, at least 20 large (>100 m in diameter) hydrothermal explosion craters have been identified; the scale of the individual associated events dwarfs similar features in geothermal areas elsewhere in the world. Large hydrothermal explosions in Yellowstone have occurred over the past 16 ka, averaging ~1 every 700 yr; similar events are likely in the future.
Our studies of large hydrothermal explosion events indicate: (1) none are directly associated with eruptive volcanic or shallow intrusive events; (2) several historical explosions have been triggered by seismic events; (3) lithic clasts and comingled matrix material that form hydrothermal explosion deposits are extensively altered, indicating that explosions occur in areas subjected to intense hydrothermal processes; (4) many lithic clasts contained in explosion breccia deposits preserve evidence of repeated fracturing

  2. The front-end electronics and slow control of large area SiPM for the SST-1M camera developed for the CTA experiment

    Czech Academy of Sciences Publication Activity Database

    Aguilar, J.A.; Bilnik, W.; Borkowski, J.; Mandát, Dušan; Pech, Miroslav; Schovánek, Petr

    Roč. 830, Sep (2016), s. 219-232 ISSN 0168-9002 R&D Projects: GA MŠk LM2015046; GA MŠk LE13012; GA MŠk LG14019 Institutional support: RVO:68378271 Keywords : CTA * SiPM * G-APD * preamplifier * front-end * slow-control * compensation Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 1.362, year: 2016

  3. Large earthquake rupture process variations on the Middle America megathrust

    Science.gov (United States)

    Ye, Lingling; Lay, Thorne; Kanamori, Hiroo

    2013-11-01

    The megathrust fault between the underthrusting Cocos plate and overriding Caribbean plate recently experienced three large ruptures: the August 27, 2012 (Mw 7.3) El Salvador; September 5, 2012 (Mw 7.6) Costa Rica; and November 7, 2012 (Mw 7.4) Guatemala earthquakes. All three events involve shallow-dipping thrust faulting on the plate boundary, but they had variable rupture processes. The El Salvador earthquake ruptured from about 4 to 20 km depth, with a relatively large centroid time of ˜19 s, low seismic moment-scaled energy release, and a depleted teleseismic short-period source spectrum similar to that of the September 2, 1992 (Mw 7.6) Nicaragua tsunami earthquake that ruptured the adjacent shallow portion of the plate boundary. The Costa Rica and Guatemala earthquakes had large slip in the depth range 15 to 30 km, and more typical teleseismic source spectra. Regional seismic recordings have higher short-period energy levels for the Costa Rica event relative to the El Salvador event, consistent with the teleseismic observations. A broadband regional waveform template correlation analysis is applied to categorize the focal mechanisms for larger aftershocks of the three events. Modeling of regional wave spectral ratios for clustered events with similar mechanisms indicates that interplate thrust events have corner frequencies, normalized by a reference model, that increase down-dip from anomalously low values near the Middle America trench. Relatively high corner frequencies are found for thrust events near Costa Rica; thus, variations along strike of the trench may also be important. Geodetic observations indicate trench-parallel motion of a forearc sliver extending from Costa Rica to Guatemala, and low seismic coupling on the megathrust has been inferred from a lack of boundary-perpendicular strain accumulation. The slip distributions and seismic radiation from the large regional thrust events indicate relatively strong seismic coupling near Nicoya, Costa

  4. Slowing down modernity: A critique : A critique

    OpenAIRE

    Vostal , Filip

    2017-01-01

    International audience; The connection between modernization and social acceleration is now a prominent theme in critical social analysis. Taking a cue from these debates, I explore attempts that aim to 'slow down modernity' by resisting the dynamic tempo of various social processes and experiences. Slowdown is now widely treated as a largely unquestioned remedy, expected to deliver an unhurried tempo conducive to a good and ethical life, mental well-being and accountable democracy. In princip...

  5. LoCuSS: THE SLOW QUENCHING OF STAR FORMATION IN CLUSTER GALAXIES AND THE NEED FOR PRE-PROCESSING

    Energy Technology Data Exchange (ETDEWEB)

    Haines, C. P. [Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Correo Central, Santiago (Chile); Pereira, M. J.; Egami, E.; Rawle, T. D. [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Smith, G. P.; Ziparo, F.; McGee, S. L. [School of Physics and Astronomy, University of Birmingham, Edgbaston, Birmingham, B15 2TT (United Kingdom); Babul, A. [Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road, Victoria, BC, V8P 1A1 (Canada); Finoguenov, A. [Department of Physics, University of Helsinki, Gustaf Hällströmin katu 2a, FI-0014 Helsinki (Finland); Okabe, N. [Academia Sinica Institute of Astronomy and Astrophysics (ASIAA), P.O. Box 23-141, Taipei 10617, Taiwan (China); Moran, S. M., E-mail: cphaines@das.uchile.cl [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States)

    2015-06-10

    We present a study of the spatial distribution and kinematics of star-forming galaxies in 30 massive clusters at 0.15 < z < 0.30, combining wide-field Spitzer 24 μm and GALEX near-ultraviolet imaging with highly complete spectroscopy of cluster members. The fraction (f_SF) of star-forming cluster galaxies rises steadily with cluster-centric radius, increasing fivefold by 2r_200, but remains well below field values even at 3r_200. This suppression of star formation at large radii cannot be reproduced by models in which star formation is quenched in infalling field galaxies only once they pass within r_200 of the cluster, but is consistent with some of them being first pre-processed within galaxy groups. Despite the increasing f_SF-radius trend, the surface density of star-forming galaxies actually declines steadily with radius, falling ∼15× from the core to 2r_200. This requires star formation to survive within recently accreted spirals for 2–3 Gyr to build up the apparent over-density of star-forming galaxies within clusters. The velocity dispersion profile of the star-forming galaxy population shows a sharp peak of 1.44σ_ν at 0.3r_500, and is 10%–35% higher than that of the inactive cluster members at all cluster-centric radii, while their velocity distribution shows a flat, top-hat profile within r_500. All of these results are consistent with star-forming cluster galaxies being an infalling population, but one that must also survive ∼0.5–2 Gyr beyond passing within r_200. By comparing the observed distribution of star-forming galaxies in the stacked caustic diagram with predictions from the Millennium simulation, we obtain a best-fit model in which star formation rates decline exponentially on quenching timescales of 1.73 ± 0.25 Gyr upon accretion into the cluster.
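
The best-fit model quoted at the end, exponential decline of star formation after accretion, is compact enough to state directly; the function below simply evaluates SFR(t)/SFR(0) = exp(-t/τ) with the paper's τ = 1.73 Gyr.

```python
import math

# Exponentially declining star formation after accretion into the cluster:
# SFR(t) = SFR(0) * exp(-t / tau), with tau = 1.73 +/- 0.25 Gyr (best fit).

def sfr_fraction(t_gyr, tau_gyr=1.73):
    """Fraction of the initial star-formation rate remaining after t_gyr."""
    return math.exp(-t_gyr / tau_gyr)
```

After one quenching timescale (1.73 Gyr) a galaxy retains 1/e ≈ 37% of its initial star-formation rate, a slow enough decline for accreted spirals to remain star-forming well inside r_200.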

  6. The use of quasi-isothermal modulated temperature differential scanning calorimetry for the characterization of slow crystallization processes in lipid-based solid self-emulsifying systems.

    Science.gov (United States)

    Otun, Sarah O; Meehan, Elizabeth; Qi, Sheng; Craig, Duncan Q M

    2015-04-01

    Slow or incomplete crystallization may be a significant manufacturing issue for solid lipid-based dosage forms, yet little information is available on this phenomenon. In this investigation we suggest a novel means by which slow solidification may be monitored in Gelucire 44/14 using quasi-isothermal modulated temperature DSC (QiMTDSC). Conventional linear heating and cooling DSC methods were employed, along with hot stage microscopy (HSM), for basic thermal profiling of Gelucire 44/14. QiMTDSC experiments were performed on cooling from the melt, using a range of incremental decreases in temperature and isothermal measurement periods. DSC and HSM highlighted the main (primary) crystallization transition; solid fat content analysis and kinetic analysis were used to profile the solidification process. The heat capacity profile from QiMTDSC indicated that after an initial energetic primary crystallization, the lipid underwent a slower period of crystallization which continued to manifest at much lower temperatures than indicated by standard DSC. We present evidence that Gelucire 44/14 undergoes an initial crystallization followed by a secondary, slower process. QiMTDSC appears to be a promising tool in the investigation of this secondary crystallization process.

  7. Process evaluation of treatment times in a large radiotherapy department

    International Nuclear Information System (INIS)

    Beech, R.; Burgess, K.; Stratford, J.

    2016-01-01

    Purpose/objective: The Department of Health (DH) recognises access to appropriate and timely radiotherapy (RT) services as crucial in improving cancer patient outcomes, especially when facing a predicted increase in cancer diagnosis. There is a lack of ‘real-time’ data regarding daily demand of a linear accelerator, the impact of increasingly complex techniques on treatment times, and whether current scheduling reflects time needed for RT delivery, which would be valuable in highlighting current RT provision. Material/methods: A systematic quantitative process evaluation was undertaken in a large regional cancer centre, including a satellite centre, between January and April 2014. Data collected included treatment room-occupancy time, RT site, RT and verification technique and patient mobility status. Data was analysed descriptively; average room-occupancy times were calculated for RT techniques and compared to historical standardised treatment times within the department. Results: Room-occupancy was recorded for over 1300 fractions, over 50% of which overran their allotted treatment time. In a focused sample of 16 common techniques, 10 overran their allocated timeslots. Verification increased room-occupancy by six minutes (50%) over non-imaging. Treatments for patients requiring mobility assistance took four minutes (29%) longer. Conclusion: The majority of treatments overran their standardised timeslots. Although technique advancement has reduced RT delivery time, room-occupancy has not necessarily decreased. Verification increases room-occupancy and needs to be considered when moving towards adaptive techniques. Mobility affects room-occupancy and will become increasingly significant in an ageing population. This evaluation assesses validity of current treatment times in this department, and can be modified and repeated as necessary. - Highlights: • A process evaluation examined room-occupancy for various radiotherapy techniques. • Appointment lengths

  8. Neutral processes forming large clones during colonization of new areas.

    Science.gov (United States)

    Rafajlović, M; Kleinhans, D; Gulliksson, C; Fries, J; Johansson, D; Ardehed, A; Sundqvist, L; Pereyra, R T; Mehlig, B; Jonsson, P R; Johannesson, K

    2017-08-01

    In species reproducing both sexually and asexually, clones are often more common in recently established populations. Earlier studies have suggested that this pattern arises due to natural selection favouring generally or locally successful genotypes in new environments. Alternatively, as we show here, this pattern may result from neutral processes during species' range expansions. We model a dioecious species expanding into a new area in which all individuals are capable of both sexual and asexual reproduction, and all individuals have equal survival rates and dispersal distances. Even under conditions that favour sexual recruitment in the long run, colonization starts with an asexual wave. After colonization is completed, a sexual wave erodes clonal dominance. If individuals reproduce more than one season, and with only local dispersal, a few large clones typically dominate for thousands of reproductive seasons. Adding occasional long-distance dispersal, more dominant clones emerge, but they persist for a shorter period of time. The general mechanism involved is simple: edge effects at the expansion front favour asexual (uniparental) recruitment where potential mates are rare. Specifically, our model shows that neutral processes (with respect to genotype fitness) during the population expansion, such as random dispersal and demographic stochasticity, produce genotype patterns that differ from the patterns arising in a selection model. The comparison with empirical data from a post-glacially established seaweed species (Fucus radicans) shows that in this case, a neutral mechanism is strongly supported. © 2017 The Authors. Journal of Evolutionary Biology published by John Wiley & Sons Ltd on behalf of the European Society for Evolutionary Biology.

  9. Coaxial slow source

    International Nuclear Information System (INIS)

    Brooks, R.D.; Jarboe, T.R.

    1990-01-01

    Field reversed configurations (FRCs) are a class of compact toroid with no toroidal field. The field reversed theta pinch technique has been successfully used for formation of FRCs since their inception in 1958. In this method an initial bias field is produced. After ionization of the fill gas, the current in the coil is rapidly reversed, producing the radial implosion of a current sheath. At the ends of the coil the reversed field lines rapidly tear and reconnect with the bias field lines until no more bias flux remains. At this point, vacuum reversed field accumulates around the configuration, which contracts axially until an equilibrium is reached. When extrapolating the use of such a technique to reactor-size plasmas, two main shortcomings are found. First, the initial bias field, and hence flux in a given device, which can be reconnected to form the configuration is limited from above by destructive axial dynamics. Second, the voltages required to produce rapid current reversal in the coil are very large. Clearly, a low voltage formation technique without limitations on flux addition is desirable. The Coaxial Slow Source (CSS) device was designed to meet this need. It has two coaxial theta pinch coils. Coaxial coil geometry allows for the addition of as much magnetic flux to the annular plasma between them as can be generated inside the inner coil. Furthermore, the device can be operated at charging voltages less than 10 kV and on resistive diffusion, rather than implosive, time scales. The inner coil is a novel, concentric, helical design so as to allow it to be cantilevered on one end to permit translation of the plasma. Following translation off the inner coil the Annular Field Reversed Configuration would be re-formed as a true FRC. In this paper we investigate the formation process in the new parallel configuration, CSSP, in which the inner and outer coils are connected in parallel to the main capacitor bank.

  10. FROM SLOW FOOD TO SLOW TOURISM

    Directory of Open Access Journals (Sweden)

    Bac Dorin Paul

    2014-12-01

    One of the effects of globalization is the faster pace of our lives. This rhythm can be noticed in all aspects of life: travel, work, shopping, etc., and it has serious negative effects. It has become common knowledge that stress and speed generate serious medical issues. Food and eating habits in the modern world have taken their toll on our health. However, some people took a stand and argued for a new kind of lifestyle. It all started in the field of gastronomy, where a new movement emerged – Slow Food, based on the ideas and philosophy of Carlo Petrini. Slow Food represents an important adversary to the concept of fast food, promoting local products, enjoyable meals and healthy food. The philosophy of the Slow Food movement developed in several directions: Cittaslow, slow travel and tourism, slow religion, slow money, etc. The present paper recounts the evolution of the concept and its development during the most recent years. We present how the philosophy of slow food was applied in all the other fields it reached, along with some critical points of view. We also focus on the presence of the slow movement in Romania, although it is at a very early stage of development. The main objectives of the present paper are: to present the chronological and ideological evolution of the slow movement; to establish a clear separation between slow travel and slow tourism, as many mistake one for the other; and to review the presence of the slow movement in Romania. Regarding the research methodology, information was gathered from relevant academic papers and books and also from interviews and discussions with local entrepreneurs. The research is mostly theoretical and empirical, as slow food and slow tourism are emerging research themes in academic circles.

  11. Living Slow and Being Moral : Life History Predicts the Dual Process of Other-Centered Reasoning and Judgments.

    Science.gov (United States)

    Zhu, Nan; Hawk, Skyler T; Chang, Lei

    2018-06-01

    Drawing from the dual process model of morality and life history theory, the present research examined the role of cognitive and emotional processes as bridges between basic environmental challenges (i.e., unpredictability and competition) and other-centered moral orientation (i.e., prioritizing the welfare of others). In two survey studies, cognitive and emotional processes represented by future-oriented planning and emotional attachment, respectively (Study 1, N = 405), or by perspective taking and empathic concern, respectively (Study 2, N = 424), positively predicted other-centeredness in prosocial moral reasoning (Study 1) and moral judgment dilemmas based on rationality or intuition (Study 2). Cognitive processes were more closely related to rational aspects of other-centeredness, whereas the emotional processes were more closely related to the intuitive aspects of other-centeredness (Study 2). Finally, the cognitive and emotional processes also mediated negative effects of unpredictability (i.e., negative life events and childhood financial insecurity), as well as positive effects of individual-level, contest competition (i.e., educational and occupational competition) on other-centeredness. Overall, these findings support the view that cognitive and emotional processes do not necessarily contradict each other. Rather, they might work in concert to promote other-centeredness in various circumstances and might be attributed to humans' developmental flexibility in the face of environmental challenges.

  12. Slow wave cyclotron maser

    International Nuclear Information System (INIS)

    Kho, T.H.; Lin, A.T.

    1988-01-01

    Cyclotron masers, such as Gyrotrons and Autoresonance Masers, are fast wave devices: the electromagnetic wave's phase velocity, v_p, is greater than the electron beam velocity, v_b. To be able to convert the beam kinetic energy into radiation in these devices, the beam must have an initial transverse momentum, usually obtained by propagating the beam through a transverse wiggler magnet, or along a nonuniform guide magnetic field, before entry into the interaction region. Either process introduces a significant amount of thermal spread in the beam, which degrades the performance of the maser. However, if the wave phase velocity v_p < v_b, the beam kinetic energy can be converted directly into radiation without the requirement of an initial transverse beam momentum, making a slow wave cyclotron maser a potentially simpler and more compact device. The authors present the linear and nonlinear physics of the slow wave cyclotron maser and examine its potential for practical application

  13. Summer Decay Processes in a Large Tabular Iceberg

    Science.gov (United States)

    Wadhams, P.; Wagner, T. M.; Bates, R.

    2012-12-01

    Summer Decay Processes in a Large Tabular Iceberg Peter Wadhams (1), Till J W Wagner(1) and Richard Bates(2) (1) Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK (2) Scottish Oceans Institute, School of Geography and Geosciences, University of St Andrews, St. Andrews, Scotland KY16 9AL We present observational results from an experiment carried out during July-August 2012 on a giant grounded tabular iceberg off Baffin Island. The iceberg studied was part of the Petermann Ice Island B1 (PIIB1) which calved off the Petermann Glacier in NW Greenland in 2010. Since 2011 it has been aground in 100 m of water on the Baffin Island shelf at 69 deg 06'N, 66 deg 06'W. As part of the project a set of high resolution GPS sensors and tiltmeters was placed on the ice island to record rigid body motion as well as flexural responses to wind, waves, current and tidal forces, while a Waverider buoy monitored incident waves and swell. On July 31, 2012 a major breakup event was recorded, with a piece of 25,000 sq m surface area calving off the iceberg. At the time of breakup, GPS sensors were collecting data both on the main berg as well as on the newly calved piece, while two of us (PW and TJWW) were standing on the broken-out portion which rose by 0.6 m to achieve a new isostatic equilibrium. Crucially, there was no significant swell at the time of breakup, which suggests a melt-driven decay process rather than wave-driven flexural break-up. The GPS sensors recorded two disturbances during the hour preceding the breakup, indicative of crack growth and propagation. Qualitative observation during the two weeks in which our research ship was moored to, or was close to, the ice island edge indicates that an important mechanism for summer ablation is successive collapses of the overburden from above an unsupported wave cut, which creates a submerged ram fringing the berg. A model of buoyancy stresses induced by

  14. A KPI-based process monitoring and fault detection framework for large-scale processes.

    Science.gov (United States)

    Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Yang, Xu; Ding, Steven X; Peng, Kaixiang

    2017-05-01

    Large-scale processes, consisting of multiple interconnected subprocesses, are commonly encountered in industrial systems, whose performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches are not developed with a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes, which considers the static and dynamic relationships between process and KPI variables. For the static case, a least squares-based approach is developed that provides an explicit link with least-squares regression, which gives better performance than partial least squares. For the dynamic case, using the kernel representation of each subprocess, an instrument variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
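The static, least-squares branch of such a KPI-based PM-FD scheme can be sketched as follows: fit a least-squares model from process variables to the KPI on fault-free training data, then flag samples whose KPI residual exceeds a threshold. This is a toy illustration on synthetic data, not the paper's algorithm; the variable names and the 3-sigma limit are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal operation" data: process variables X, KPI y = X @ w + noise
n, p = 500, 4
X = rng.normal(size=(n, p))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ w_true + 0.01 * rng.normal(size=n)

# Static KPI model: ordinary least squares fit on fault-free training data
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ w_hat
threshold = 3.0 * resid.std()  # simple 3-sigma residual limit (an assumption)

def is_faulty(x_new, y_new):
    """Flag a sample whose KPI deviates from the least-squares prediction."""
    return bool(abs(y_new - x_new @ w_hat) > threshold)

x0 = rng.normal(size=p)
normal_sample = is_faulty(x0, x0 @ w_true)        # consistent with the model
faulty_sample = is_faulty(x0, x0 @ w_true + 1.0)  # KPI offset fault
```

The dynamic case in the paper reduces to this static picture via instrument variables, so the same residual test applies once the subprocess dynamics are absorbed into the regressors.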

  15. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    Science.gov (United States)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention paid to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned by an affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
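To illustrate the decomposition step, the sketch below groups controlled variables that move together into candidate subsystems. It uses a simple greedy correlation grouping as a stand-in for the affinity propagation clustering and CCA-based input selection described above; the synthetic plant and the threshold are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic plant: six controlled variables driven by two latent subprocesses
s1 = rng.normal(size=1000)
s2 = rng.normal(size=1000)
Y = np.column_stack([
    s1 + 0.05 * rng.normal(size=1000),
    s1 + 0.05 * rng.normal(size=1000),
    s1 + 0.05 * rng.normal(size=1000),
    s2 + 0.05 * rng.normal(size=1000),
    s2 + 0.05 * rng.normal(size=1000),
    s2 + 0.05 * rng.normal(size=1000),
])

def decompose(Y, rho=0.8):
    """Greedily group controlled variables by absolute pairwise correlation;
    each resulting group plays the role of one subsystem."""
    C = np.abs(np.corrcoef(Y, rowvar=False))
    unassigned = list(range(Y.shape[1]))
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)
        group = [seed] + [j for j in unassigned if C[seed, j] > rho]
        unassigned = [j for j in unassigned if j not in group]
        clusters.append(group)
    return clusters

clusters = decompose(Y)  # → [[0, 1, 2], [3, 4, 5]]
```

Each recovered group would then get its own distributed model, updated online as new sample blocks arrive.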

  16. Cloning, overexpression, crystallization and preliminary X-ray crystallographic analysis of a slow-processing mutant of penicillin G acylase from Kluyvera citrophila

    International Nuclear Information System (INIS)

    Varshney, Nishant Kumar; Ramasamy, Sureshkumar; Brannigan, James A.; Wilkinson, Anthony J.; Suresh, C. G.

    2013-01-01

    The pac gene encoding penicillin G acylase from K. citrophila was cloned and a slow-processing site-directed mutant was prepared, expressed, purified and crystallized. Triclinic and monoclinic crystal forms were obtained which diffracted to 2.5 and 3.5 Å resolution, respectively. Kluyvera citrophila penicillin G acylase (KcPGA) has recently attracted increased attention relative to the well studied and commonly used Escherichia coli PGA (EcPGA) because KcPGA is more resilient to harsh conditions and is easier to immobilize for the industrial hydrolysis of natural penicillins to generate the 6-aminopenicillanic acid (6-APA) nucleus, which is the starting material for semi-synthetic antibiotic production. Like other penicillin acylases, KcPGA is synthesized as a single-chain inactive pro-PGA, which upon autocatalytic processing becomes an active heterodimer of α and β chains. Here, the cloning of the pac gene encoding KcPGA and the preparation of a slow-processing mutant precursor are reported. The purification, crystallization and preliminary X-ray analysis of crystals of this precursor protein are described. The protein crystallized in two different space groups, P1, with unit-cell parameters a = 54.0, b = 124.6, c = 135.1 Å, α = 104.1, β = 101.4, γ = 96.5°, and C2, with unit-cell parameters a = 265.1, b = 54.0, c = 249.2 Å, β = 104.4°, using the sitting-drop vapour-diffusion method. Diffraction data were collected at 100 K and the phases were determined using the molecular-replacement method. The initial maps revealed electron density for the spacer peptide.

  17. Dynamics of soil CO2 efflux under varying atmospheric CO2 concentrations reveal dominance of slow processes.

    Science.gov (United States)

    Kim, Dohyoung; Oren, Ram; Clark, James S; Palmroth, Sari; Oishi, A Christopher; McCarthy, Heather R; Maier, Chris A; Johnsen, Kurt

    2017-09-01

    We evaluated the effect on soil CO2 efflux (F_CO2) of sudden changes in photosynthetic rates by altering the CO2 concentration in plots subjected to +200 ppmv for 15 years. Five-day intervals of exposure to elevated CO2 (eCO2) ranging from 1.0 to 1.8 times ambient did not affect F_CO2. F_CO2 did not decrease until 4 months after termination of the long-term eCO2 treatment, longer than the 10 days observed for the decrease of F_CO2 after experimental blocking of C flow belowground, but shorter than the ~13 months it took for F_CO2 to increase following the initiation of eCO2. The reduction of F_CO2 upon termination of enrichment (~35%) cannot be explained by the reduction of leaf area (~15%) and associated carbohydrate production and allocation, suggesting a disproportionate contraction of the belowground ecosystem components; this was consistent with the reductions in base respiration and F_CO2-temperature sensitivity. These asymmetric responses pose a tractable challenge to process-based models attempting to isolate the effect of individual processes on F_CO2. © 2017 John Wiley & Sons Ltd.

  18. Design Methodology of Process Layout considering Various Equipment Types for Large scale Pyro processing Facility

    International Nuclear Information System (INIS)

    Yu, Seung Nam; Lee, Jong Kwang; Lee, Hyo Jik

    2016-01-01

    At present, each item of process equipment required for integrated processing is being examined, based on experience acquired during the Pyroprocess Integrated Inactive Demonstration Facility (PRIDE) project, and considering the requirements and desired performance enhancement of KAPF as a new facility beyond PRIDE. Essentially, KAPF will be required to handle hazardous materials such as spent nuclear fuel, which must be processed in an isolated and shielded area separate from the operator location. Moreover, an inert-gas atmosphere must be maintained, because of the radiation and deliquescence of the materials. KAPF must also achieve the goal of significantly increased yearly production beyond that of the previous facility; therefore, several parts of the production line must be automated. This article presents the method considered for the conceptual design of both the production line and the overall layout of the KAPF process equipment. This study has proposed a design methodology that can be utilized as a preliminary step for the design of a hot-cell-type, large-scale facility, in which the various types of processing equipment operated by the remote handling system are integrated. The proposed methodology applies to part of the overall design procedure and contains various weaknesses. However, if the designer is required to maximize the efficiency of the installed material-handling system while considering operation restrictions and maintenance conditions, this kind of design process can accommodate the essential components that must be employed simultaneously in a general hot-cell system.

  19. DB-XES : enabling process discovery in the large

    NARCIS (Netherlands)

    Syamsiyah, A.; van Dongen, B.F.; van der Aalst, W.M.P.; Ceravolo, P.; Guetl, C.; Rinderle-Ma, S.

    2018-01-01

    Dealing with the abundance of event data is one of the main process discovery challenges. Current process discovery techniques are able to efficiently handle imported event log files that fit in the computer’s memory. Once data files get bigger, scalability quickly drops since the speed required to

  20. Benchmarking processes for managing large international space programs

    Science.gov (United States)

    Mandell, Humboldt C., Jr.; Duke, Michael B.

    1993-01-01

    The relationship between management style and program costs is analyzed to determine the feasibility of financing large international space missions. The incorporation of management systems is considered to be essential to realizing low cost spacecraft and planetary surface systems. Several companies ranging from large Lockheed 'Skunk Works' to small companies including Space Industries, Inc., Rocket Research Corp., and Orbital Sciences Corp. were studied. It is concluded that to lower the prices, the ways in which spacecraft and hardware are developed must be changed. Benchmarking of successful low cost space programs has revealed a number of prescriptive rules for low cost managements, including major changes in the relationships between the public and private sectors.

  1. Excitation processes in slow K+--He, Ne, Ar, H2, N2 and Na+--He collisions

    International Nuclear Information System (INIS)

    Kikiani, B.I.; Gochitashvili, M.R.; Kvizhinadze, R.V.; Ankudinov, V.A.

    1984-01-01

    Quasimolecular features of excitation processes in K + --He, Ne, Ar, H 2 , N 2 and Na + --He collisions were investigated by measuring the cross sections for the emission of the resonance lines of potassium (lambda = 766.5 and 769.9 nm), sodium (lambda = 589 and 589.6 nm), and helium (lambda = 584 nm) atoms at ion energies in the range 0.5--10 keV. In Na + --He collisions, the resonance-line excitation functions obtained for sodium and helium atoms exhibit oscillations that are in antiphase and are due to phase interference between the quasimolecular states of the system of colliding particles. Experimental data on K + --Ar collisions are interpreted in terms of schematic correlation diagrams for molecular orbitals. The excitation mechanisms for K + --N 2 and K + --Ar have been found to be similar, and this leads to the conclusion that the quasimolecular model used for the ion-atom case is also valid for the ion-molecule case. It is shown that the excitation of the 4p-state of the potassium atom in the K + --Ar case is due to a Landau-Zener type of interaction in the region of the quasicrossing of (KAr) + terms. Analysis of the excitation of this state in K + --N 2 collisions also shows that the capture of an electron into the excited 4p-state of the potassium atom is due to a nonadiabatic transition in the region of quasicrossing of energy terms of the same symmetry

  2. Really big data: Processing and analysis of large datasets

    Science.gov (United States)

    Modern animal breeding datasets are large and getting larger, due in part to the recent availability of DNA data for many animals. Computational methods for efficiently storing and analyzing those data are under development. The amount of storage space required for such datasets is increasing rapidl...

  3. Initial crystallization and growth in melt processing of large-domain YBa2Cu3Ox for magnetic levitation

    International Nuclear Information System (INIS)

    Shi, D.

    1994-10-01

    Crystallization temperature in YBa 2 Cu 3 O x (123) during the peritectic reaction has been studied by differential thermal analysis (DTA) and optical microscopy. It has been found that YBa 2 Cu 3 O x experiences partial melting near 1,010 C during heating, while crystallization takes place over a much lower temperature range upon cooling, indicating a delayed nucleation process. A series of experiments has been conducted to search for the initial crystallization temperature in the Y 2 BaCuO x + liquid phase field. The authors have found that the slow-cool period (1 C/h) for the 123 grain texturing can start at as low as 960 C. This novel processing has resulted in high-quality, large-domain, strongly pinned 123 magnetic levitators

  4. Drell–Yan process at Large Hadron Collider

    Indian Academy of Sciences (India)

    Drell–Yan process at LHC, q q̄ → Z/γ* → ℓ+ ℓ-, is one of the benchmarks for confirmation of the Standard Model at the TeV energy scale. Since the theoretical prediction for the rate is precise and the final state is clean as well as relatively easy to measure, the process can be studied at the LHC even at relatively low luminosity.

  5. Manufacturing process to reduce large grain growth in zirconium alloys

    International Nuclear Information System (INIS)

    Rosecrans, P.M.

    1987-01-01

    A method is described of treating cold worked zirconium alloys to reduce large grain growth during thermal treatment above the recrystallization temperature. The method comprises heating the zirconium alloy at a temperature of about 1300 °F to 1350 °F for about 1 to 3 hours subsequent to cold working the zirconium alloy and prior to the thermal treatment at a temperature of between 1450 °F and 1550 °F, the thermal treatment temperature being above the recrystallization temperature

  6. Manufacturing Process Simulation of Large-Scale Cryotanks

    Science.gov (United States)

    Babai, Majid; Phillips, Steven; Griffin, Brian

    2003-01-01

    NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting between the various manufacturing approaches. By creating 3-D manufacturing process simulations, the varying approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows for cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI. As part of the SLI, The Boeing Company was awarded a basic period contract to research and propose options for both a metallic and a composite cryotank.
Boeing then entered into a task agreement with the Marshall Space Flight Center to provide manufacturing

  7. Process component inventory in a large commercial reprocessing facility

    International Nuclear Information System (INIS)

    Canty, M.J.; Berliner, A.; Spannagel, G.

    1983-01-01

    Using a computer simulation program, the equilibrium operation of the Pu-extraction and purification processes of a reference commercial reprocessing facility was investigated. Particular attention was given to the long-term net fluctuations of Pu inventories in hard-to-measure components such as the solvent extraction contactors. Comparing the variance of these inventories with the measurement variance for Pu contained in feed, analysis and buffer tanks, it was concluded that direct or indirect periodic estimation of contactor inventories would not contribute significantly to improving the quality of closed material balances over the process MBA

  8. How to collect and process large polyhedral viruses of insects

    Science.gov (United States)

    W. D. Rollinson; F. B. Lewis

    1962-01-01

    Polyhedral viruses have proved highly effective and very practical for control of certain pine sawflies; and a method of collecting and processing the small polyhedra (5 microns or less) characteristic of sawflies has been described. There is experimental evidence that the virus diseases of many Lepidopterous insects can be used similarly for direct control. The...

  9. ARMA modelling of neutron stochastic processes with large measurement noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Kostic, Lj.; Pesic, M.

    1994-01-01

    An autoregressive moving average (ARMA) model of the neutron fluctuations with large measurement noise is derived from Langevin stochastic equations and validated using time series data obtained during prompt neutron decay constant measurements at the zero power reactor RB in Vinca. Model parameters are estimated using the maximum likelihood (ML) off-line algorithm and an adaptive pole estimation algorithm based on the recursive prediction error method (RPE). The results show that subcriticality can be determined from real data with high measurement noise using a much shorter statistical sample than in standard methods. (author)
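A toy example of the underlying difficulty (not the authors' ML/RPE estimators): an AR(1) signal observed through additive white measurement noise is, formally, an ARMA(1,1) process, and a naive autocorrelation estimate of the pole is biased toward zero, which is why noise-aware ARMA estimation is needed. The parameters and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) "reactor noise" signal: x[t] = a * x[t-1] + e[t]
a_true = 0.9
n = 200_000
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = a_true * x[t - 1] + e[t]

# Observe it through large additive white measurement noise
y = x + 2.0 * rng.normal(size=n)

def ar1_pole(z):
    """Naive lag-1 autocorrelation estimate of the AR(1) pole."""
    z = z - z.mean()
    return (z[:-1] @ z[1:]) / (z @ z)

a_clean = ar1_pole(x)  # close to 0.9
a_noisy = ar1_pole(y)  # biased toward zero by the measurement noise
```

Running this, a_clean recovers roughly 0.9 while a_noisy collapses toward 0.5: the classic attenuation that an explicit ARMA noise model corrects.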

  10. Habitat loss as the main cause of the slow recovery of fish faunas of regulated large rivers in Europe: The transversal floodplain gradient

    NARCIS (Netherlands)

    Aarts, B.G.W.; Van den Brink, F.W.B.; Nienhuis, P.H.

    2004-01-01

    In large European rivers the chemical water quality has improved markedly in recent decades, yet the recovery of the fish fauna is not proceeding accordingly. Important causes are the loss of habitats in the main river channels and their floodplains, and the diminished hydrological connectivity

  11. Processing large remote sensing image data sets on Beowulf clusters

    Science.gov (United States)

    Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Schmidt, Gail

    2003-01-01

    High-performance computing is often concerned with the speed at which floating-point calculations can be performed. The architectures of many parallel computers and/or their network topologies are based on these investigations. Often, benchmarks resulting from these investigations are compiled with little regard to how a large dataset would move about in these systems. This part of the Beowulf study addresses that concern by looking at specific applications software and system-level modifications. Applications include an implementation of a smoothing filter for time-series data, a parallel implementation of the decision tree algorithm used in the Landcover Characterization project, a parallel Kriging algorithm used to fit point data collected in the field on invasive species to a regular grid, and modifications to the Beowulf project's resampling algorithm to handle larger, higher resolution datasets at a national scale. Systems-level investigations include a feasibility study on Flat Neighborhood Networks and modifications of that concept with Parallel File Systems.
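
As a point of reference for the first application mentioned, the serial core of a moving-average smoothing filter for time-series data is only a few lines (a generic sketch, not the USGS implementation; parallelizing it on a cluster amounts to distributing overlapping chunks of the series across nodes):

```python
import numpy as np

def smooth(series, window=3):
    """Moving-average smoothing: each output is the mean of `window` inputs."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

data = np.array([1.0, 2.0, 8.0, 2.0, 1.0, 2.0, 8.0, 2.0, 1.0])
out = smooth(data, 3)
print(out)   # length 9 - 3 + 1 = 7; out[1] is mean(2, 8, 2) = 4.0
```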

  12. LSSA large area silicon sheet task continuous Czochralski process development

    Science.gov (United States)

    Rea, S. N.

    1978-01-01

    A Czochralski crystal growing furnace was converted to a continuous growth facility by installation of a premelter to provide molten silicon flow into the primary crucible. The basic furnace is operational and several trial crystals were grown in the batch mode. Numerous premelter configurations were tested both in laboratory-scale equipment and in the actual furnace. The best arrangement tested to date is a vertical, cylindrical graphite heater containing a small fused silica test tube liner in which the incoming silicon is melted and flows into the primary crucible. Economic modeling of the continuous Czochralski process indicates that for 10 cm diameter crystal, 100 kg furnace runs of four or five crystals each are near-optimal. Costs tend to asymptote at the 100 kg level, so little additional cost improvement occurs for larger runs. For these conditions, a crystal cost of around $20/sq m of equivalent wafer area, exclusive of polysilicon and slicing, was obtained.

  13. The Quanzhou large earthquake: environment impact and deep process

    Science.gov (United States)

    WANG, Y.; Gao*, R.; Ye, Z.; Wang, C.

    2017-12-01

    The Quanzhou earthquake is the largest earthquake on China's southeast coast in history. The ancient city of Quanzhou and its adjacent areas suffered serious damage. Analysis of the impact of the Quanzhou earthquake on human activities, ecological environment and social development will provide an example for research on environment and human interaction. According to historical records, on the night of December 29, 1604, a Ms 8.0 earthquake occurred in the sea area east of Quanzhou (25.0°N, 119.5°E) with a focal depth of 25 kilometers. Its effects extended to a maximum distance of 220 kilometers from the epicenter and caused serious damage. Quanzhou, which had been known as one of the world's largest trade ports during the Song and Yuan periods, was heavily damaged by this earthquake. The destruction of the ancient city was severe and widespread. City walls collapsed in Putian, Nanan, Tongan and other places. The East and West Towers of Kaiyuan Temple, famous in history for their magnificent architecture, were seriously damaged. An enormous earthquake can therefore exert devastating effects on human activities and social development. It is estimated that an earthquake of more than Ms 5.0 in the economically developed coastal areas of China can directly cause economic losses of more than one hundred million yuan. This devastating large earthquake that severely destroyed the city of Quanzhou was triggered under a tectonic-extensional circumstance. In this coastal area of Fujian Province, the crust gradually thins eastward from inland to coast (less than 29 km thick beneath the coast), the lithosphere is also rather thin (60-70 km), and the Poisson's ratio of the crust here is relatively high. The historical Quanzhou earthquake was probably correlated with the NE-striking Littoral Fault Zone, which is characterized by right-lateral slip and exhibits the most active seismicity in the coastal area of Fujian. Meanwhile, tectonic

  14. Comparative Study of Surface-lattice-site Resolved Neutralization of Slow Multicharged Ions during Large-angle Quasi-binary Collisions with Au(110): Simulation and Experiment

    International Nuclear Information System (INIS)

    Meyer, F.W.

    2001-01-01

    In this article we extend our earlier studies of the azimuthal dependences of low energy projectiles scattered in large angle quasi-binary collisions from Au(110). Measurements are presented for 20 keV Ar9+ at normal incidence, which are compared with our earlier measurements for this ion at 5 keV and 10° incidence angle. A deconvolution procedure based on MARLOWE simulation results carried out at both energies provides information about the energy dependence of projectile neutralization during interactions just with the atoms along the top ridge of the reconstructed Au(110) surface corrugation, in comparison to, e.g., interactions with atoms lying on the sidewalls. To test the sensitivity of the agreement between the MARLOWE results and the experimental measurements, we show simulation results obtained for a non-reconstructed Au(110) surface with 20 keV Ar projectiles, and for different scattering potentials that are intended to simulate the effects on the scattering trajectory of a projectile inner shell vacancy surviving the binary collision. In addition, simulation results are shown for a number of different total scattering angles, to illustrate their utility in finding optimum values for this parameter prior to the actual measurements.

  15. Investigation and monitoring in support of the structural mitigation of large slow moving landslides: an example from Ca' Lita (Northern Apennines, Reggio Emilia, Italy)

    Science.gov (United States)

    Corsini, A.; Borgatti, L.; Caputo, G.; de Simone, N.; Sartini, G.; Truffelli, G.

    2006-01-01

    The Ca' Lita landslide is a large and deep-seated mass movement located in the Secchia River Valley, in the sector of the Northern Apennines falling within Reggio Emilia Province, about 70 km west of Bologna (Northern Italy). It consists of a composite landslide system that affects Cretaceous to Eocene flysch rock masses and chaotic complexes. Many of the components making up the landslide system resumed activity between 2002 and 2004, and are now threatening some hamlets and an important road serving the upper watershed area of the River Secchia, where many villages and key industrial facilities are located. This paper presents the analysis and quantification of displacement rates and depths of the mass movements, based on geological and geomorphological surveys, differential DEM analysis, and interpretation of underground stratigraphic and monitoring data collected during the investigation campaign undertaken to design cost-effective mitigation structures, conducted in collaboration between public offices and research institutes.

  16. Flux free growth of large FeSe1/2Te1/2 superconducting single crystals by an easy high temperature melt and slow cooling method

    Directory of Open Access Journals (Sweden)

    P. K. Maheshwari

    2015-09-01

    Full Text Available We report the successful growth of flux-free large single crystals of superconducting FeSe1/2Te1/2 with typical dimensions of up to a few cm. AC and DC magnetic measurements revealed a superconducting transition temperature (Tc) of around 11.5 K, and the isothermal M-H loops showed typical type-II superconducting behavior. The lower critical field (Hc1), estimated by measuring the low-field isothermal magnetization in the superconducting regime, is found to be above 200 Oe at 0 K. The temperature-dependent electrical resistivity ρ(T) showed the Tc (onset) to be 14 K and Tc(ρ = 0) at 11.5 K. The electrical resistivity under various magnetic fields, i.e., ρ(T,H) for H//ab and H//c, demonstrated the difference in the width of Tc with an applied field of 14 Tesla to be nearly 2 K, confirming the anisotropic nature of the superconductivity. The upper critical and irreversibility fields at absolute zero temperature, i.e., Hc2(0) and Hirr(0), determined by the conventional one-band Werthamer-Helfand-Hohenberg (WHH) equation for the criteria of normal-state resistivity (ρn) falling to 90% (onset) and 10% (offset), are 76.9 Tesla and 37.45 Tesla, respectively, for H//c, and 135.4 Tesla and 71.41 Tesla, respectively, for H//ab. The coherence length at zero temperature is estimated to be above 20 Å using Ginzburg-Landau theory. The activation energy for FeSe1/2Te1/2 in both directions, H//c and H//ab, is determined using the Thermally Activated Flux Flow (TAFF) model.
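
The one-band WHH estimate used in such analyses reduces to Hc2(0) = -0.693 Tc (dHc2/dT)|Tc, and the Ginzburg-Landau coherence length then follows from the flux quantum. A sketch using the abstract's H//c onset value (the slope here is back-calculated for illustration, not taken from the paper's fits):

```python
import math

PHI_0 = 2.0678e-15                      # magnetic flux quantum, Wb

def whh_hc2_0(tc, slope):
    """One-band WHH estimate: Hc2(0) = -0.693 * Tc * (dHc2/dT at Tc)."""
    return -0.693 * tc * slope

def gl_xi_0(hc2_0):
    """Ginzburg-Landau coherence length (m) from Hc2(0) in Tesla."""
    return math.sqrt(PHI_0 / (2 * math.pi * hc2_0))

tc = 14.0                               # K, onset Tc from the abstract
slope = -76.9 / (0.693 * tc)            # T/K, back-calculated for illustration
hc2 = whh_hc2_0(tc, slope)              # recovers the quoted 76.9 T for H//c
xi_angstrom = gl_xi_0(hc2) * 1e10
print(f"Hc2(0) = {hc2:.1f} T, xi(0) = {xi_angstrom:.1f} A")
```

The resulting coherence length of roughly 20.7 Å is consistent with the "above 20 Å" quoted in the abstract.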

  17. The Use of Quasi-Isothermal Modulated Temperature Differential Scanning Calorimetry for the Characterization of Slow Crystallization Processes in Lipid-Based Solid Self-Emulsifying Systems

    OpenAIRE

    Otun, Sarah O.; Meehan, Elizabeth; Qi, Sheng; Craig, Duncan Q. M.

    2014-01-01

    Purpose Slow or incomplete crystallization may be a significant manufacturing issue for solid lipid-based dosage forms, yet little information is available on this phenomenon. In this investigation we suggest a novel means by which slow solidification may be monitored in Gelucire 44/14 using quasi-isothermal modulated temperature DSC (QiMTDSC). Methods Conventional linear heating and cooling DSC methods were employed, along with hot stage microscopy (HSM), for basic thermal profiling of Geluc...

  18. A low-fat, whole-food vegan diet, as well as other strategies that down-regulate IGF-I activity, may slow the human aging process.

    Science.gov (United States)

    McCarty, Mark F

    2003-06-01

    A considerable amount of evidence is consistent with the proposition that systemic IGF-I activity acts as a pacesetter in the aging process. A reduction in IGF-I activity is the common characteristic of rodents whose maximal lifespan has been increased by a wide range of genetic or dietary measures, including caloric restriction. The lifespans of breeds of dogs and strains of rats tend to be inversely proportional to their mature weight and IGF-I levels. The link between IGF-I and aging appears to be evolutionarily conserved; in worms and flies, lifespan is increased by reduction-of-function mutations in signaling intermediates homologous to those which mediate insulin/IGF-I activity in mammals. The fact that an increase in IGF-I activity plays a key role in the induction of sexual maturity is consistent with a broader role for IGF-I in aging regulation. If down-regulation of IGF-I activity could indeed slow aging in humans, a range of practical measures for achieving this may be at hand. These include a low-fat, whole-food, vegan diet, exercise training, soluble fiber, insulin sensitizers, appetite suppressants, and agents such as flax lignans, oral estrogen, or tamoxifen that decrease hepatic synthesis of IGF-I. Many of these measures would also be expected to decrease risk for common age-related diseases. Regimens combining several of these approaches might have a sufficient impact on IGF-I activity to achieve a useful retardation of the aging process. However, in light of the fact that IGF-I promotes endothelial production of nitric oxide and may be of especial importance to cerebrovascular health, additional measures for stroke prevention, most notably salt restriction, may be advisable when attempting to down-regulate IGF-I activity as a pro-longevity strategy.

  19. Valid knowledge for the professional design of large and complex design processes

    NARCIS (Netherlands)

    Aken, van J.E.

    2004-01-01

    The organization and planning of design processes, which we may regard as design process design, is an important issue. Especially for large and complex design-processes traditional approaches to process design may no longer suffice. The design literature gives quite some design process models. As

  20. Too slow, for Milton

    OpenAIRE

    Armstrong, N.

    2011-01-01

    Too slow, for Milton was written in 2011, as part of a memorial project for Milton Babbitt. The piece borrows harmonies from Babbitt's Composition for 12 Instruments (harmonies which Babbitt had in turn borrowed from Schoenberg's Ode to Napoleon), but unfolds them as part of a musical texture characterised by repetition, resonance, and a slow rate of change. As Babbitt once told me that my music was 'too slow', this seemed an appropriately obstinate form of homage.

  1. The dream-lag effect: Selective processing of personally significant events during Rapid Eye Movement sleep, but not during Slow Wave Sleep.

    Science.gov (United States)

    van Rijn, E; Eichenlaub, J-B; Lewis, P A; Walker, M P; Gaskell, M G; Malinowski, J E; Blagrove, M

    2015-07-01

    Incorporation of details from waking life events into Rapid Eye Movement (REM) sleep dreams has been found to be highest on the night after, and then 5-7 nights after events (termed, respectively, the day-residue and dream-lag effects). In experiment 1, 44 participants kept a daily log for 10 days, reporting major daily activities (MDAs), personally significant events (PSEs), and major concerns (MCs). Dream reports were collected from REM and Slow Wave Sleep (SWS) in the laboratory, or from REM sleep at home. The dream-lag effect was found for the incorporation of PSEs into REM dreams collected at home, but not for MDAs or MCs. No dream-lag effect was found for SWS dreams, or for REM dreams collected in the lab after SWS awakenings earlier in the night. In experiment 2, the 44 participants recorded reports of their spontaneously recalled home dreams over the 10 nights following the instrumental awakenings night, which thus acted as a controlled stimulus with two salience levels, high (sleep lab) and low (home awakenings). The dream-lag effect was found for the incorporation into home dreams of references to the experience of being in the sleep laboratory, but only for participants who had reported concerns beforehand about being in the sleep laboratory. The delayed incorporation of events from daily life into dreams has been proposed to reflect REM sleep-dependent memory consolidation. However, an alternative emotion processing or emotional impact of events account, distinct from memory consolidation, is supported by the finding that SWS dreams do not evidence the dream-lag effect. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Cristalização de melão pelo processo lento de açucaramento Crystallization of melon fruit through slow sugary process

    Directory of Open Access Journals (Sweden)

    Ângelo Shigueyuki Morita

    2005-06-01

    Full Text Available This work was conducted in the Food Technology Laboratory of the Escola Superior de Agricultura de Mossoró (ESAM), Mossoró-RN, Brazil, to evaluate the possibility of processing melon pulp as a crystallized fruit. The varieties Gália, Pele de Sapo and Orange Flesh were tested using the slow sugaring process. The pulps were scooped out in the form of small balls and placed successively in sucrose solutions at 20, 30, 40, 50, 60 and 70 °Brix, brought to a boil and then left to rest for 24 hours in each sucrose solution. The fruits were then placed in a drying oven at 50°C for 6 hours, reaching a final moisture content between 26.16 and 27.53%. Moisture content, pH and total soluble solids were determined, and the products were submitted to sensory analysis. Crystallization of melon proved technically feasible, the Pele de Sapo variety was the best accepted, and no change in pulp colour was observed for any of the varieties.

  3. Slow dynamics in translation-invariant quantum lattice models

    Science.gov (United States)

    Michailidis, Alexios A.; Žnidarič, Marko; Medvedyeva, Mariya; Abanin, Dmitry A.; Prosen, Tomaž; Papić, Z.

    2018-03-01

    Many-body quantum systems typically display fast dynamics and ballistic spreading of information. Here we address the open problem of how slow the dynamics can be after a generic breaking of integrability by local interactions. We develop a method based on degenerate perturbation theory that reveals slow dynamical regimes and delocalization processes in general translation invariant models, along with accurate estimates of their delocalization time scales. Our results shed light on the fundamental questions of the robustness of quantum integrable systems and the possibility of many-body localization without disorder. As an example, we construct a large class of one-dimensional lattice models where, despite the absence of asymptotic localization, the transient dynamics is exceptionally slow, i.e., the dynamics is indistinguishable from that of many-body localized systems for the system sizes and time scales accessible in experiments and numerical simulations.

  4. Very slow neutrons

    International Nuclear Information System (INIS)

    Frank, A.

    1983-01-01

    The history of research on very slow neutrons is briefly presented and their basic properties are explained. The methods of obtaining very slow neutrons are described and the problems of their preservation are discussed. The existence of very slow neutrons makes it possible to perform experiments which may deepen the knowledge of the fundamental properties of neutrons. Their wavelength approximates that of visible radiation. The possibilities and use of neutron optical systems (the neutron microscope) are discussed; such a system could be an effective instrument for the study of detailed structure, especially of organic substances. (B.S.)

  5. Transformer Industry Productivity Slows.

    Science.gov (United States)

    Otto, Phyllis Flohr

    1981-01-01

    Annual productivity increases averaged 2.4 percent during 1963-79, slowing since 1972 to 1.5 percent; computer-assisted design and product standardization aided growth in output per employee-hour. (Author)

  6. Large-Deviation Results for Discriminant Statistics of Gaussian Locally Stationary Processes

    Directory of Open Access Journals (Sweden)

    Junichi Hirukawa

    2012-01-01

    Full Text Available This paper discusses the large-deviation principle of discriminant statistics for Gaussian locally stationary processes. First, large-deviation theorems for quadratic forms and the log-likelihood ratio for a Gaussian locally stationary process with a mean function are proved. Their asymptotics are described by the large deviation rate functions. Second, we consider situations where the processes are misspecified as stationary. In these misspecified cases, we formally construct the log-likelihood ratio discriminant statistics and derive large deviation theorems for them. Since they are complicated, they are evaluated and illustrated by numerical examples. We find that misspecifying the process as stationary seriously affects the discrimination.

  7. Informational support of the investment process in a large city economy

    Directory of Open Access Journals (Sweden)

    Tamara Zurabovna Chargazia

    2016-12-01

    Full Text Available Large cities possess sufficient potential to participate in investment processes both at the national and international levels. A potential investor's awareness of the possibilities and prospects of a city's development is of great importance when making a decision. So, by providing a potential investor with relevant, concise and reliable information, the local authorities increase the intensity of the investment process in the city economy, and vice versa. The hypothesis is that a large city administration can substantially activate the investment processes in the economy of the corresponding territorial entity using the tools of information provision. The purpose of this article is to develop measures for the improvement of the investment portal of a large city as an important instrument of information provision, which will make it possible to invigorate the investment processes at the level under analysis. The reasons for the unsatisfactory information provision on the investment process in a large city economy are deeply analyzed; the national and international experience in this sphere is studied; advantages and disadvantages of the information provision of the investment process in the economy of the city of Makeyevka are considered; and the investment portals of different cities are compared. Technical approaches for improving the investment portal of a large city are suggested. The research results can be used to improve the investment policy of large cities.

  8. The dream-lag effect: selective processing of personally significant events during Rapid Eye Movement sleep, but not during Slow Wave Sleep

    OpenAIRE

    van Rijn, E.; Eichenlaub, J.-B.; Lewis, Penelope A.; Walker, M.P.; Gaskell, M.G.; Malinowski, J.E.; Blagrove, M.

    2015-01-01

    Incorporation of details from waking life events into Rapid Eye Movement (REM) sleep dreams has been found to be highest on the night after, and then 5-7 nights after events (termed, respectively, the day-residue and dream-lag effects). In experiment 1, 44 participants kept a daily log for 10 days, reporting major daily activities (MDAs), personally significant events (PSEs), and major concerns (MCs). Dream reports were collected from REM and Slow Wave Sleep (SWS) in the laboratory, or from ...

  9. Process variations in surface nano geometries manufacture on large area substrates

    DEFF Research Database (Denmark)

    Calaon, Matteo; Hansen, Hans Nørgaard; Tosello, Guido

    2014-01-01

    The need to transport, treat and measure increasingly small biomedical samples has pushed the integration of a far-reaching number of nanofeatures over substrate sizes that are large with respect to the working-area windows of conventional processes. Dimensional stability of nano fabrication processe...

  10. Feasibility of large volume casting cementation process for intermediate level radioactive waste

    International Nuclear Information System (INIS)

    Chen Zhuying; Chen Baisong; Zeng Jishu; Yu Chengze

    1988-01-01

    The recent tendency of radioactive waste treatment and disposal both in China and abroad is reviewed. The feasibility of the large volume casting cementation process for treating and disposing of the intermediate level radioactive waste from a spent fuel reprocessing plant in shallow land is assessed on the basis of analyses of the experimental results (such as formulation study, measurement of solidified radioactive waste properties, etc.). It can be concluded that the large volume casting cementation process is a promising, safe and economical process. It is feasible to dispose of the intermediate level radioactive waste from the reprocessing plant if the chosen disposal site has reasonable geological and geographical conditions and some additional effective protection measures are taken.

  11. Progress in Root Cause and Fault Propagation Analysis of Large-Scale Industrial Processes

    Directory of Open Access Journals (Sweden)

    Fan Yang

    2012-01-01

    Full Text Available In large-scale industrial processes, a fault can easily propagate between process units due to the interconnections of material and information flows. Thus the problem of fault detection and isolation for these processes is more concerned about the root cause and fault propagation before applying quantitative methods in local models. Process topology and causality, as the key features of the process description, need to be captured from process knowledge and process data. The modelling methods from these two aspects are overviewed in this paper. From process knowledge, structural equation modelling, various causal graphs, rule-based models, and ontological models are summarized. From process data, cross-correlation analysis, Granger causality and its extensions, frequency domain methods, information-theoretical methods, and Bayesian nets are introduced. Based on these models, inference methods are discussed to find root causes and fault propagation paths under abnormal situations. Some future work is proposed in the end.
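
Of the data-driven methods surveyed, Granger causality is among the most widely used; its core idea, that lagged values of a cause variable improve prediction of an effect variable but not vice versa, fits in a few lines of least squares. A toy sketch on synthetic data (not an excerpt from any of the surveyed tools), where a fault in x propagates to y with a two-sample delay:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                     # "upstream" process variable
y = np.zeros(n)
for t in range(2, n):                      # disturbance propagates x -> y, delay 2
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 2] + 0.1 * rng.normal()

def rss(target, design):
    """Residual sum of squares of an ordinary least squares fit."""
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ beta
    return resid @ resid

def granger_gain(cause, effect, lags=2):
    """Fractional drop in residual sum of squares when lags of `cause`
    are added to an autoregression of `effect` (larger = more causal)."""
    T = effect[lags:]
    own = np.column_stack([np.ones(n - lags)] +
                          [effect[lags - k:n - k] for k in range(1, lags + 1)])
    full = np.column_stack([own] +
                           [cause[lags - k:n - k] for k in range(1, lags + 1)])
    return 1.0 - rss(T, full) / rss(T, own)

forward = granger_gain(x, y)               # large: x Granger-causes y
backward = granger_gain(y, x)              # near zero: no reverse causality
print(f"x->y gain {forward:.3f}, y->x gain {backward:.3f}")
```

In practice the gain is turned into an F-statistic and tested for significance, and the pairwise results are assembled into a causal map of the plant.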

  12. Extraterrestrial processing and manufacturing of large space systems. Volume 3: Executive summary

    Science.gov (United States)

    Miller, R. H.; Smith, D. B. S.

    1979-01-01

    Facilities and equipment are defined for refining, to commercial grade, lunar material that is delivered to a 'space manufacturing facility' in beneficiated, primary-processed quality. The manufacturing facilities and equipment for producing elements of large space systems from these materials are also defined, and programmatic assessments of the concepts are provided. In-space production processes for solar cells (by vapor deposition) and arrays, structures and joints, conduits, waveguides, RF equipment, radiators, wire cables, converters, and others are described.

  13. SPS slow extraction septa

    CERN Multimedia

    CERN PhotoLab

    1979-01-01

    SPS long straight section (LSS) with a series of 5 septum tanks for slow extraction (view in the direction of the proton beam). There are 2 of these: in LSS2, towards the N-Area; in LSS6 towards the W-Area. See also Annual Report 1975, p.175.

  14. AGS slow extraction improvements

    International Nuclear Information System (INIS)

    Glenn, J.W.; Smith, G.A.; Sandberg, J.N.; Repeta, L.; Weisberg, H.

    1979-01-01

    Improvement of the straightness of the F5 copper septum increased the AGS slow extraction efficiency from approx. 80% to approx. 90%. Installation of an electrostatic septum at H20, 24 betatron wavelengths upstream of F5, further improved the extraction efficiency to approx. 97%.

  15. PF slow positron source

    International Nuclear Information System (INIS)

    Shirakawa, A.; Enomoto, A.; Kurihara, T.

    1993-01-01

    A new slow-positron source is under construction at the Photon Factory. Positrons are produced by bombarding a tantalum rod with high-energy electrons; they are moderated in multiple tungsten vanes. We report here the present status of this project. (author)

  16. Large break frequency for the SRS (Savannah River Site) production reactor process water system

    International Nuclear Information System (INIS)

    Daugherty, W.L.; Awadalla, N.G.; Sindelar, R.L.; Bush, S.H.

    1989-01-01

    The objective of this paper is to present the results and conclusions of an evaluation of the large break frequency for the process water system (primary coolant system), including the piping, reactor tank, heat exchangers, expansion joints and other process water system components. This evaluation was performed to support the ongoing PRA effort and to complement deterministic analyses addressing the credibility of a double-ended guillotine break. This evaluation encompasses three specific areas: the failure probability of large process water piping directly from imposed loads, the indirect failure probability of piping caused by the seismic-induced failure of surrounding structures, and the failure of all other process water components. The first two of these areas are discussed in detail in other papers. This paper primarily addresses the failure frequency of components other than piping, and includes the other two areas as contributions to the overall process water system break frequency

  17. Medical Students Perceive Better Group Learning Processes when Large Classes Are Made to Seem Small

    Science.gov (United States)

    Hommes, Juliette; Arah, Onyebuchi A.; de Grave, Willem; Schuwirth, Lambert W. T.; Scherpbier, Albert J. J. A.; Bos, Gerard M. J.

    2014-01-01

    Objective Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups due to students being unfamiliar to fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. Design A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n = 50) as the intervention groups; a control group (n = 102) was mixed with the remaining students (the non-randomised group n∼100) to create one large subset. Setting The undergraduate curriculum of the Maastricht Medical School, applying the Problem-Based Learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6–10 weeks. Intervention The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset who hardly enrolled with the same students in formal activities. Main Outcome Measures Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context and perceptions of the intervention. Results Formal group learning processes were perceived more positive in the intervention groups from the second study year on, with a mean increase of β = 0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention from the first week involved in the medical curriculum (E-I indexes>−0.69). Interviews tapped mainly positive effects and negligible negative side effects of the intervention. Conclusion Better group learning processes can be

  18. Medical students perceive better group learning processes when large classes are made to seem small.

    Science.gov (United States)

    Hommes, Juliette; Arah, Onyebuchi A; de Grave, Willem; Schuwirth, Lambert W T; Scherpbier, Albert J J A; Bos, Gerard M J

    2014-01-01

Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups due to students being unfamiliar with fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n=50) as the intervention groups; a control group (n=102) was mixed with the remaining students (the non-randomised group n∼100) to create one large subset. The setting was the undergraduate curriculum of the Maastricht Medical School, applying the Problem-Based Learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6-10 weeks. The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset, who hardly ever enrolled with the same students in formal activities. Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context and perceptions of the intervention. Formal group learning processes were perceived more positively in the intervention groups from the second study year on, with a mean increase of β=0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention from the first week of the medical curriculum (E-I indexes>-0.69). Interviews tapped mainly positive effects and negligible negative side effects of the intervention. Better group learning processes can be achieved in large medical schools by making large classes seem small.

  19. RESOURCE SAVING TECHNOLOGICAL PROCESS OF LARGE-SIZE DIE THERMAL TREATMENT

    Directory of Open Access Journals (Sweden)

    L. A. Glazkov

    2009-01-01

The paper presents the development of a technological process for hardening large-size parts made of die steel. The proposed process applies a water-air mixture instead of the conventional hardening medium, industrial oil. Developing the new technological process required solving the following problems: reducing the duration of thermal treatment, reducing the expenditure of power resources (natural gas and mineral oil), eliminating fire danger and increasing the ecological efficiency of the process.

  20. A methodology for fault diagnosis in large chemical processes and an application to a multistage flash desalination process: Part II

    International Nuclear Information System (INIS)

    Tarifa, Enrique E.; Scenna, Nicolas J.

    1998-01-01

In Part I, an efficient method for identifying faults in large processes was presented. The whole plant is divided into sectors by structural, functional, or causal decomposition. A signed directed graph (SDG), representing the interactions among process variables, is the model used for each sector. This qualitative model is used to carry out qualitative simulation of all possible faults; the output of this step is information about the process behaviour, which is used to build rules. When a symptom is detected in one sector, its rules are evaluated using on-line data and fuzzy logic to yield the diagnosis. In this paper the proposed methodology is applied to a multistage flash (MSF) desalination process, which is composed of sequential flash chambers. It was designed for a pilot plant that produces drinkable water for a community in Argentina; that is, it is a real case. Owing to its large number of variables, recycles, phase changes, etc., this process is a good challenge for the proposed diagnosis method.
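The SDG-plus-fuzzy-rules scheme the abstract describes can be sketched roughly as follows. The graph, variable names, and membership function below are hypothetical illustrations, not the authors' actual model: a fault is propagated qualitatively through signed edges, and each candidate fault is scored by the weakest matching symptom (fuzzy AND as a minimum).

```python
# Hypothetical sketch of SDG-based fault diagnosis with fuzzy rule matching.
# Edge signs, variable names, and the membership function are illustrative only.

SDG = {  # (source, target) -> sign of influence
    ("feed_temp", "stage_temp"): +1,
    ("stage_temp", "distillate_flow"): +1,
    ("fouling", "stage_temp"): -1,
}

def propagate(fault, sign, graph):
    """Qualitative simulation: predict the deviation sign of each affected variable."""
    pattern = {fault: sign}
    changed = True
    while changed:
        changed = False
        for (src, dst), edge_sign in graph.items():
            if src in pattern and dst not in pattern:
                pattern[dst] = pattern[src] * edge_sign
                changed = True
    return pattern

def membership(deviation):
    """Fuzzy degree (0..1) that a measured deviation is significant."""
    return max(0.0, min(1.0, abs(deviation)))

def diagnose(measurements, faults, graph):
    """Score each candidate fault by its weakest matching symptom (fuzzy AND = min)."""
    scores = {}
    for fault in faults:
        pattern = propagate(fault, +1, graph)
        degrees = []
        for var, predicted in pattern.items():
            if var == fault:
                continue
            observed = measurements.get(var, 0.0)
            # agreement: the observed deviation carries the predicted sign
            degrees.append(membership(observed) if observed * predicted > 0 else 0.0)
        scores[fault] = min(degrees) if degrees else 0.0
    return max(scores, key=scores.get), scores
```

With measured drops in stage temperature and distillate flow, the fouling hypothesis (whose predicted pattern matches) outscores a feed-temperature fault, mimicking how the sector rules single out one fault from on-line data.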

  1. Slow movement execution in event-related potentials (P300).

    Science.gov (United States)

    Naruse, Kumi; Sakuma, Haruo; Hirai, Takane

    2002-02-01

    We examined whether slow movement execution has an effect on cognitive and information processing by measuring the P300 component. 8 subjects performed a continuous slow forearm rotational movement using 2 task speeds. Slow (a 30-50% decrease from the subject's Preferred speed) and Very Slow (a 60-80% decrease). The mean coefficient of variation for rotation speed under Very Slow was higher than that under Slow, showing that the subjects found it difficult to perform the Very Slow task smoothly. The EEG score of alpha-1 (8-10 Hz) under Slow Condition was increased significantly more than under the Preferred Condition; however, the increase under Very Slow was small when compared with Preferred. After performing the task. P300 latency under Very Slow increased significantly as compared to that at pretask. Further, P300 amplitude decreased tinder both speed conditions when compared to that at pretask, and a significant decrease was seen under the Slow Condition at Fz, whereas the decrease under the Very Slow Condition was small. These differences indicated that a more complicated neural composition and an increase in subjects' attention might have been involved when the task was performed under the Very Slow Condition. We concluded that slow movement execution may have an influence on cognitive function and may depend on the percentage of decrease from the Preferred speed of the individual.

  2. Visualization of the Flux Rope Generation Process Using Large Quantities of MHD Simulation Data

    Directory of Open Access Journals (Sweden)

    Y Kubota

    2013-03-01

We present a new concept of analysis using visualization of large quantities of simulation data. The time development of 3D objects with high temporal resolution provides the opportunity for scientific discovery. We visualize large quantities of simulation data using the visualization application 'Virtual Aurora' based on AVS (Advanced Visual Systems) and the parallel distributed processing at the "Space Weather Cloud" in NICT based on Gfarm technology. We introduce two results of high-temporal-resolution visualization: the magnetic flux rope generation process and dayside reconnection using a system of magnetic field line tracing.

  3. Visual analysis of inter-process communication for large-scale parallel computing.

    Science.gov (United States)

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also the communication between the different processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.
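One way past the per-process-row limit, sketched here purely as an illustration (the binning scheme and event format are assumptions, not the authors' actual design), is to aggregate communication events into a fixed-size time-by-process-group matrix, so the rendered image size no longer depends on the process count:

```python
# Illustrative aggregation of per-process communication events into a
# fixed-resolution heatmap; the event format (proc_id, timestamp, wait_time)
# is a hypothetical trace schema, not a real tool's output.

def comm_heatmap(events, n_procs, t_max, rows=64, cols=256):
    """Return a rows x cols grid of summed communication wait time.
    Many processes share each row, so 16,384 processes still fit on screen."""
    grid = [[0.0] * cols for _ in range(rows)]
    for proc, t, wait in events:
        r = min(rows - 1, proc * rows // n_procs)   # process-group row
        c = min(cols - 1, int(t / t_max * cols))    # time bin
        grid[r][c] += wait
    return grid
```

Rendering such a grid as a heatmap preserves the where-and-when of communication delays while keeping the visualization scalable, which is the property the proposed approach targets.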

  4. Sustainable Development of Slow Fashion Businesses: Customer Value Approach

    Directory of Open Access Journals (Sweden)

    Sojin Jung

    2016-06-01

As an alternative to the prevalent fast fashion model, slow fashion has emerged as a way of enhancing sustainability in the fashion industry, yet how slow fashion can enhance profitability is still largely unknown. Based on a customer value creation framework, this study empirically tested a structural model that specified the slow fashion attributes that contribute to creating perceived customer value, which subsequently increases a consumer’s intention to buy and pay a price premium for slow fashion products. An analysis of 221 U.S. consumer data revealed that delivering exclusive product value is significantly critical in creating customer value for slow fashion, and customer value, in turn, positively affects consumers’ purchase intentions. Further analysis also revealed that different slow fashion attributes distinctively affect customer value. This provides potential strategies on which slow fashion businesses can focus to secure an economically sustainable business model, thereby continuously improving environmental and social sustainability with the slow fashion ideal.

  5. PLANNING QUALITY ASSURANCE PROCESSES IN A LARGE SCALE GEOGRAPHICALLY SPREAD HYBRID SOFTWARE DEVELOPMENT PROJECT

    Directory of Open Access Journals (Sweden)

    Святослав Аркадійович МУРАВЕЦЬКИЙ

    2016-02-01

Key aspects of operational activities in large-scale, geographically distributed software development projects are discussed, and the structure of the QA processes required in such projects is examined. Up-to-date methods for integrating quality assurance processes into software development processes are given. Existing groups of software development methodologies (sequential, agile, and PRINCE2-based) are reviewed, with a condensed overview of the quality assurance processes in each group. Common challenges that sequential and agile models face in a large, geographically distributed hybrid software development project are reviewed, and recommendations are given for tackling those challenges. Conclusions are drawn about the choice of the best methodology and its application to the particular project.

  6. Slow-transit Constipation.

    Science.gov (United States)

    Bharucha, Adil E.; Philips, Sidney F.

    2001-08-01

Idiopathic slow-transit constipation is a clinical syndrome predominantly affecting women, characterized by intractable constipation and delayed colonic transit. This syndrome is attributed to disordered colonic motor function. The disorder spans a spectrum of variable severity, ranging from patients who have relatively mild delays in transit but are otherwise indistinguishable from irritable bowel syndrome to patients with colonic inertia or chronic megacolon. The diagnosis is made after excluding colonic obstruction, metabolic disorders (hypothyroidism, hypercalcemia), drug-induced constipation, and pelvic floor dysfunction (as discussed by Wald). Most patients are treated with one or more pharmacologic agents, including dietary fiber supplementation, saline laxatives (milk of magnesia), osmotic agents (lactulose, sorbitol, and polyethylene glycol 3350), and stimulant laxatives (bisacodyl and glycerol). A subtotal colectomy is effective and occasionally is indicated for patients with medically refractory, severe slow-transit constipation, provided pelvic floor dysfunction has been excluded or treated.

  7. Large-scale continuous process to vitrify nuclear defense waste: operating experience with nonradioactive waste

    International Nuclear Information System (INIS)

    Cosper, M.B.; Randall, C.T.; Traverso, G.M.

    1982-01-01

    The developmental program underway at SRL has demonstrated the vitrification process proposed for the sludge processing facility of the DWPF on a large scale. DWPF design criteria for production rate, equipment lifetime, and operability have all been met. The expected authorization and construction of the DWPF will result in the safe and permanent immobilization of a major quantity of existing high level waste. 11 figures, 4 tables

  8. Process γ*γ → σ at large virtuality of γ*

    International Nuclear Information System (INIS)

    Volkov, M.K.; Radzhabov, A.E.; Yudichev, V.L.

    2004-01-01

The process γ*γ → σ is investigated in the framework of the SU(2) x SU(2) chiral NJL model, where γ* and γ are photons of large and small virtuality, respectively, and σ is a scalar meson. The form factor of the process is derived for arbitrary virtuality of γ* in the Euclidean kinematic domain. The asymptotic behavior of this form factor resembles that of the γ*γ → π form factor.

  9. Increasing a large petrochemical company efficiency by improvement of decision making process

    OpenAIRE

    Kirin Snežana D.; Nešić Lela G.

    2010-01-01

The paper shows the results of research conducted in a large petrochemical company, in a country in transition, with the aim to "shed light" on the decision-making process from the aspect of the personal characteristics of the employees, in order to use the results to improve the decision-making process and increase company efficiency. The research was conducted by a survey, i.e. by filling out a questionnaire specially made for this purpose, in real conditions, during working hours. The sample of...

  10. The testing of thermal-mechanical-hydrological-chemical processes using a large block

    International Nuclear Information System (INIS)

    Lin, W.; Wilder, D.G.; Blink, J.A.; Blair, S.C.; Buscheck, T.A.; Chesnut, D.A.; Glassley, W.E.; Lee, K.; Roberts, J.J.

    1994-01-01

The radioactive decay heat from nuclear waste packages may, depending on the thermal load, create coupled thermal-mechanical-hydrological-chemical (TMHC) processes in the near-field environment of a repository. A group of tests on a large block (LBT) is planned to provide a timely opportunity to test and calibrate some of the TMHC model concepts. The LBT is advantageous for testing and verifying model concepts because the boundary conditions are controlled and the block can be characterized before and after the experiment. A block of Topopah Spring tuff of about 3 x 3 x 4.5 m will be sawed and isolated at Fran Ridge, Nevada Test Site. Small blocks of the rock adjacent to the large block will be collected for laboratory testing of some individual thermal-mechanical, hydrological, and chemical processes. A constant load of about 4 MPa will be applied to the top and sides of the large block. The sides will be sealed with moisture and thermal barriers. The large block will be heated with one heater in each borehole and guard heaters on the sides so that a dry-out zone and a condensate zone will exist simultaneously. Temperature, moisture content, pore pressure, chemical composition, stress and displacement will be measured throughout the block during the heating and cool-down phases. The results from the experiments on small blocks and the tests on the large block will provide a better understanding of some concepts of the coupled TMHC processes.

  11. The Faculty Promotion Process. An Empirical Analysis of the Administration of Large State Universities.

    Science.gov (United States)

    Luthans, Fred

One phase of academic management, the faculty promotion process, is systematically described and analyzed. The study encompasses three parts: (1) the justification of the use of management concepts in the analysis of academic administration; (2) a descriptive presentation of promotion policies and practices in 46 large state universities; and (3)…

  12. Hadronic processes with large transfer momenta and quark counting rules in multiparticle dual amplitude

    International Nuclear Information System (INIS)

    Akkelin, S.V.; Kobylinskij, N.A.; Martynov, E.S.

    1989-01-01

A dual N-particle amplitude satisfying the quark counting rules for processes with large transfer momenta is constructed. The multiparticle channels are shown to give an essential contribution to the decreasing power of the amplitude in the hard kinematic limit. 19 refs.; 9 figs.
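For reference, the quark counting rules invoked here are the standard dimensional counting rules of Brodsky and Farrar: at fixed scattering angle (large s with t/s fixed), an exclusive cross section must fall as

```latex
% Brodsky--Farrar dimensional counting rule for exclusive fixed-angle scattering
\frac{d\sigma}{dt}\,(AB \to CD) \;\sim\; s^{\,2-n}\, f(t/s),
\qquad n = n_A + n_B + n_C + n_D ,
```

where n_X counts the constituent quark fields of hadron X (3 for a baryon, 2 for a meson), so for example pp elastic scattering scales as s^(-10). This is the power behaviour a dual multiparticle amplitude has to reproduce in the hard kinematic limit.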

  13. Large scale production and downstream processing of a recombinant porcine parvovirus vaccine

    NARCIS (Netherlands)

    Maranga, L.; Rueda, P.; Antonis, A.F.G.; Vela, C.; Langeveld, J.P.M.; Casal, J.I.; Carrondo, M.J.T.

    2002-01-01

    Porcine parvovirus (PPV) virus-like particles (VLPs) constitute a potential vaccine for prevention of parvovirus-induced reproductive failure in gilts. Here we report the development of a large scale (25 l) production process for PPV-VLPs with baculovirus-infected insect cells. A low multiplicity of

  14. On conservation of the baryon chirality in the processes with large momentum transfer

    International Nuclear Information System (INIS)

    Ioffe, B.L.

    1976-01-01

The hypothesis of baryon chirality conservation in processes with large momentum transfer is suggested and some arguments in its favour are made. Experimental implications of this assumption for weak and electromagnetic form factors of transitions in the baryon octet and of the transitions N → Δ, N → Σ* are considered.

  15. Large deviations for the Fleming-Viot process with neutral mutation and selection

    OpenAIRE

    Dawson, Donald; Feng, Shui

    1998-01-01

    Large deviation principles are established for the Fleming-Viot processes with neutral mutation and selection, and the corresponding equilibrium measures as the sampling rate goes to 0. All results are first proved for the finite allele model, and then generalized, through the projective limit technique, to the infinite allele model. Explicit expressions are obtained for the rate functions.

  16. Areas prone to slow slip events impede earthquake rupture propagation and promote afterslip

    Science.gov (United States)

    Rolandone, Frederique; Nocquet, Jean-Mathieu; Mothes, Patricia A.; Jarrin, Paul; Vallée, Martin; Cubas, Nadaya; Hernandez, Stephen; Plain, Morgan; Vaca, Sandro; Font, Yvonne

    2018-01-01

    At subduction zones, transient aseismic slip occurs either as afterslip following a large earthquake or as episodic slow slip events during the interseismic period. Afterslip and slow slip events are usually considered as distinct processes occurring on separate fault areas governed by different frictional properties. Continuous GPS (Global Positioning System) measurements following the 2016 Mw (moment magnitude) 7.8 Ecuador earthquake reveal that large and rapid afterslip developed at discrete areas of the megathrust that had previously hosted slow slip events. Regardless of whether they were locked or not before the earthquake, these areas appear to persistently release stress by aseismic slip throughout the earthquake cycle and outline the seismic rupture, an observation potentially leading to a better anticipation of future large earthquakes. PMID:29404404

  17. Slow, slow, quick, quick, slow: Saudi Arabia's 'Gas Initiative'

    International Nuclear Information System (INIS)

    Robins, Philip

    2004-01-01

    This article sets out to analyse the Saudi gas initiative in the context of the decision-making process in Saudi Arabia between 1998 and 2002. It describes the overall context in which the initiative was made. It focuses on the personalities and institutions that were important in its birth and its evolution. The article argues that a mixture of personalities (especially that of Crown Prince Abdullah and foreign minister Saud al-Faisal) and institutions (especially a clutch of new bodies formed in 1999 and 2000) were pivotal in the emergence of the initiative. It also looks at the obstacles that were placed in the way of the initiative, arguing that Saudi Aramco and the minister of oil, Ali Naimi, were key blocking players. Over time, the Saudi gas initiative has come to be seen as a benchmark of the wider cause of economic liberalization in the Kingdom. The lack of progress in the initiative since the initial indicative contract awards in June 2001 has reflected the lack of movement in the general reformist strategy

  18. Quality Improvement Process in a Large Intensive Care Unit: Structure and Outcomes.

    Science.gov (United States)

    Reddy, Anita J; Guzman, Jorge A

    2016-11-01

    Quality improvement in the health care setting is a complex process, and even more so in the critical care environment. The development of intensive care unit process measures and quality improvement strategies are associated with improved outcomes, but should be individualized to each medical center as structure and culture can differ from institution to institution. The purpose of this report is to describe the structure of quality improvement processes within a large medical intensive care unit while using examples of the study institution's successes and challenges in the areas of stat antibiotic administration, reduction in blood product waste, central line-associated bloodstream infections, and medication errors. © The Author(s) 2015.

  19. A Proactive Complex Event Processing Method for Large-Scale Transportation Internet of Things

    OpenAIRE

    Wang, Yongheng; Cao, Kening

    2014-01-01

    The Internet of Things (IoT) provides a new way to improve the transportation system. The key issue is how to process the numerous events generated by IoT. In this paper, a proactive complex event processing method is proposed for large-scale transportation IoT. Based on a multilayered adaptive dynamic Bayesian model, a Bayesian network structure learning algorithm using search-and-score is proposed to support accurate predictive analytics. A parallel Markov decision processes model is design...

  20. Simulation research on the process of large scale ship plane segmentation intelligent workshop

    Science.gov (United States)

    Xu, Peng; Liao, Liangchuang; Zhou, Chao; Xue, Rui; Fu, Wei

    2017-04-01

The large-scale ship plane segmentation intelligent workshop is a new concept, with no prior research in related fields in China or abroad. The mode of production should be transformed from the existing Industry 2.0 (or partial Industry 3.0) pattern, that is, from "human brain analysis and judgment + machine manufacturing" to "machine analysis and judgment + machine manufacturing". This transformation involves a great many tasks on the management and technology sides, such as workshop structure evolution, development of intelligent equipment and changes in the business model, and with them the reformation of the whole workshop. The process simulation in this project verifies the general layout and process flow of the large-scale ship plane segmentation intelligent workshop and analyses its working efficiency, which is significant for the next step of the transformation of the plane segmentation intelligent workshop.

  1. Processing graded feedback: electrophysiological correlates of learning from small and large errors.

    Science.gov (United States)

    Luft, Caroline Di Bernardi; Takase, Emilio; Bhattacharya, Joydeep

    2014-05-01

    Feedback processing is important for learning and therefore may affect the consolidation of skills. Considerable research demonstrates electrophysiological differences between correct and incorrect feedback, but how we learn from small versus large errors is usually overlooked. This study investigated electrophysiological differences when processing small or large error feedback during a time estimation task. Data from high-learners and low-learners were analyzed separately. In both high- and low-learners, large error feedback was associated with higher feedback-related negativity (FRN) and small error feedback was associated with a larger P300 and increased amplitude over the motor related areas of the left hemisphere. In addition, small error feedback induced larger desynchronization in the alpha and beta bands with distinctly different topographies between the two learning groups: The high-learners showed a more localized decrease in beta power over the left frontocentral areas, and the low-learners showed a widespread reduction in the alpha power following small error feedback. Furthermore, only the high-learners showed an increase in phase synchronization between the midfrontal and left central areas. Importantly, this synchronization was correlated to how well the participants consolidated the estimation of the time interval. Thus, although large errors were associated with higher FRN, small errors were associated with larger oscillatory responses, which was more evident in the high-learners. Altogether, our results suggest an important role of the motor areas in the processing of error feedback for skill consolidation.

  2. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  3. The key network communication technology in large radiation image cooperative process system

    International Nuclear Information System (INIS)

    Li Zheng; Kang Kejun; Gao Wenhuan; Wang Jingjin

    1998-01-01

Large container inspection system (LCIS) based on radiation imaging technology is a powerful tool for the customs to check the contents inside a large container without opening it. An image distributed network system is composed of an operation manager station, image acquisition station, environment control station, inspection processing station, check-in station, check-out station and database station, using advanced network technology. Mass data, such as container image data, container general information, manifest scanning data, commands and status, must be transferred on-line between the different stations. Advanced network communication technology is presented.

  4. Asymptotic description of two metastable processes of solidification for the case of large relaxation time

    International Nuclear Information System (INIS)

    Omel'yanov, G.A.

    1995-07-01

    The non-isothermal Cahn-Hilliard equations in the n-dimensional case (n = 2,3) are considered. The interaction length is proportional to a small parameter, and the relaxation time is proportional to a constant. The asymptotic solutions describing two metastable processes are constructed and justified. The soliton type solution describes the first stage of separation in alloy, when a set of ''superheated liquid'' appears inside the ''solid'' part. The Van der Waals type solution describes the free interface dynamics for large time. The smoothness of temperature is established for large time and the Mullins-Sekerka problem describing the free interface is derived. (author). 46 refs
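For orientation, the isothermal prototype of the equations studied here is the standard Cahn-Hilliard equation; the form below is the textbook version with a double-well potential, not the paper's coupled non-isothermal system, which adds a temperature equation:

```latex
% Isothermal Cahn--Hilliard equation; \varepsilon is the small interaction length
\partial_t u \;=\; \Delta\bigl(-\varepsilon^{2}\,\Delta u + W'(u)\bigr),
\qquad W(u) \;=\; \tfrac{1}{4}\,(u^{2}-1)^{2},
```

where u is the order parameter (concentration). The small parameter ε sets the interface width, which is why sharp-interface limits such as the Mullins-Sekerka free-boundary problem emerge as ε → 0.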

  5. Vegetative deviation in patients with slow-repair processes in the post-operative wound and effect of the combined use of low-intensity laser therapy and pantovegin electrophoresis

    Directory of Open Access Journals (Sweden)

    Dugieva M.Z.

    2013-12-01

The aim of this study was to evaluate the influence of the combined use of low-intensity infrared laser therapy applied to the thymus area and pantovegin electrophoresis on the vegetative status of patients with slowed reparative processes in the postoperative wound. The study material comprised 190 patients after gynecological laparotomy. The article presents data on changes in vegetative status in postoperative gynecological patients with slowed reparative wound-healing processes; in this group of patients parasympathicotonia prevails in the postoperative period. The combination of low-intensity infrared laser therapy applied to the thymus area and pantovegin electrophoresis achieved a more rapid normalization of the observed changes, with a transition to eutonia. This physiotherapeutic method is recommended for patients with slowed reparative processes in the wound.

  6. Slow shock characteristics as a function of distance from the X-line in the magnetotail

    International Nuclear Information System (INIS)

    Lee, L.C.; Lin, Y.; Shi, Y.; Tsurutani, B.T.

    1989-01-01

Both particle and MHD simulations are performed to study the characteristics of slow shocks in the magnetotail. The particle simulations indicate that switch-off shocks exhibit large-amplitude rotational wave trains, while magnetotail slow shocks with an intermediate Mach number M_An ≅ 0.98 do not display such rotational wave trains. The MHD simulations show that the spontaneous reconnection process in the near-earth plasma sheet leads to the formation of a pair of slow shocks tailward of the reconnection line (X-line). The properties of slow shocks are found to vary as a function of the distance from the X-line due to the formation of the plasmoid. Slow shocks in most regions of the magnetotail are found to be non-switch-off shocks with M_An < 0.98. The present results are used to discuss the lack of large-amplitude rotational wave trains at slow shocks in the deep magnetotail. copyright American Geophysical Union 1989

  7. A methodology for fault diagnosis in large chemical processes and an application to a multistage flash desalination process: Part I

    International Nuclear Information System (INIS)

    Tarifa, Enrique E.; Scenna, Nicolas J.

    1998-01-01

This work presents a new strategy for fault diagnosis in large chemical processes (E.E. Tarifa, Fault diagnosis in complex chemical plants: plants of large dimensions and batch processes. Ph.D. thesis, Universidad Nacional del Litoral, Santa Fe, 1995). A special decomposition of the plant into sectors is made, after which each sector is studied independently. These steps are carried out off-line and produce vital information for the diagnosis system. This system works on-line and is based on a two-tier strategy. When a fault occurs, the upper level identifies the faulty sector; the lower level then carries out an in-depth study that focuses only on the critical sectors to identify the fault. The loss of information produced by the process partition may cause spurious diagnoses. This problem is overcome at the second level using qualitative simulation and fuzzy logic. In the second part of this work, the new methodology is tested to evaluate its performance in practical cases. A multistage flash (MSF) desalination system is chosen because it is a complex system, with many recycles and variables to be supervised. The steps for knowledge base generation and all the blocks included in the diagnosis system are analyzed. Evaluation of the diagnosis performance is carried out using a rigorous dynamic simulator.
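The two-tier dispatch can be caricatured in a few lines; the sector map and the per-sector diagnoser functions below are hypothetical placeholders for the off-line knowledge base the abstract describes, not the paper's actual rules:

```python
# Illustrative two-tier fault-diagnosis dispatch (all names hypothetical):
# the upper level maps symptomatic variables to plant sectors, and only the
# implicated sectors' detailed diagnosers run on-line.

SECTOR_OF = {
    "brine_temp": "heat_recovery",
    "vacuum": "ejector",
    "level_1": "flash_stage_1",
}

def upper_level(symptoms):
    """Tier 1: identify candidate faulty sectors from where symptoms appear."""
    return {SECTOR_OF[v] for v in symptoms if v in SECTOR_OF}

def diagnose(symptoms, sector_diagnosers):
    """Tier 2: invoke the in-depth diagnoser only for the implicated sectors,
    so the full-plant model is never evaluated on-line."""
    results = {}
    for sector in upper_level(symptoms):
        results[sector] = sector_diagnosers[sector](symptoms)
    return results
```

The payoff of the decomposition is visible in the structure: the cost of a diagnosis scales with the number of implicated sectors, not with the size of the whole plant.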

  8. Moditored unsaturated soil transport processes as a support for large scale soil and water management

    Science.gov (United States)

    Vanclooster, Marnik

    2010-05-01

    The current societal demand for sustainable soil and water management is very large. The drivers of global and climate change exert many pressures on soil and water ecosystems, endangering appropriate ecosystem functioning. Unsaturated soil transport processes play a key role in soil-water system functioning, as they control the fluxes of water and nutrients from the soil to plants (the pedo-biosphere link), the infiltration flux of precipitated water to groundwater, and the evaporative flux, and hence the feedback from the soil to the climate system. Yet, unsaturated soil transport processes are difficult to quantify, since they are affected by huge variability of the governing properties at different space-time scales and by the intrinsic non-linearity of the transport processes. The incompatibility between the scale at which processes can reasonably be characterized, the scale at which the theoretical process can correctly be described, and the scale at which the soil and water system needs to be managed calls for further development of scaling procedures in unsaturated zone science. It also calls for a better integration of theoretical and modelling approaches to elucidate transport processes at the appropriate scales, compatible with the sustainable soil and water management objective. Moditoring science, i.e. the interdisciplinary research domain where modelling and monitoring science are linked, is currently evolving significantly in the unsaturated zone hydrology area. In this presentation, a review of current moditoring strategies/techniques will be given and illustrated for solving large-scale soil and water management problems. This will also allow identifying research needs in the interdisciplinary domain of modelling and monitoring, and improving the integration of unsaturated zone science in solving soil and water management issues. A focus will be given to examples of large-scale soil and water management problems in Europe.

  9. Development of polymers for large scale roll-to-roll processing of polymer solar cells

    DEFF Research Database (Denmark)

    Carlé, Jon Eggert

    Development of polymers for large scale roll-to-roll processing of polymer solar cells Conjugated polymers' potential to both absorb light and transport current, as well as the prospect of low-cost and large-scale production, has made these kinds of materials attractive in solar cell research....... The research field of polymer solar cells (PSCs) is rapidly progressing along three lines: improvement of efficiency and stability together with the introduction of large-scale production methods. All three lines are explored in this work. The thesis describes low band gap polymers and why these are needed....... Polymers of this type display broader absorption, resulting in better overlap with the solar spectrum and potentially higher current density. Synthesis, characterization and device performance of three series of polymers illustrating how the absorption spectrum of polymers can be manipulated synthetically...

  10. Modelling hydrologic and hydrodynamic processes in basins with large semi-arid wetlands

    Science.gov (United States)

    Fleischmann, Ayan; Siqueira, Vinícius; Paris, Adrien; Collischonn, Walter; Paiva, Rodrigo; Pontes, Paulo; Crétaux, Jean-François; Bergé-Nguyen, Muriel; Biancamaria, Sylvain; Gosset, Marielle; Calmant, Stephane; Tanimoun, Bachir

    2018-06-01

    Hydrological and hydrodynamic models are core tools for simulation of large basins and complex river systems associated with wetlands. Recent studies have pointed towards the importance of online coupling strategies, representing feedbacks between floodplain inundation and vertical hydrology. Especially across semi-arid regions, soil-floodplain interactions can be strong. In this study, we included a two-way coupling scheme in a large scale hydrological-hydrodynamic model (MGB) and tested different model structures, in order to assess which processes are important to be simulated in large semi-arid wetlands and how these processes interact with water budget components. To demonstrate benefits from this coupling over a validation case, the model was applied to the Upper Niger River basin encompassing the Niger Inner Delta, a vast semi-arid wetland in the Sahel Desert. Simulation was carried out from 1999 to 2014 with daily TMPA 3B42 precipitation as forcing, using both in-situ and remotely sensed data for calibration and validation. Model outputs were in good agreement with discharge and water levels at stations both upstream and downstream of the Inner Delta (Nash-Sutcliffe Efficiency (NSE) >0.6 for most gauges), as well as for flooded areas within the Delta region (NSE = 0.6; r = 0.85). Model estimates of annual water losses across the Delta varied between 20.1 and 30.6 km³/yr, while annual evapotranspiration ranged between 760 mm/yr and 1130 mm/yr. Evaluation of model structure indicated that representation of both floodplain channel hydrodynamics (storage, bifurcations, lateral connections) and vertical hydrological processes (floodplain water infiltration into the soil column; evapotranspiration from soil and vegetation and evaporation of open water) is necessary to correctly simulate flood wave attenuation and evapotranspiration along the basin. Two-way coupled models are necessary to better understand processes in large semi-arid wetlands.
Finally, such coupled
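    The skill score reported above can be stated compactly. A minimal sketch of the Nash-Sutcliffe Efficiency (NSE): NSE = 1 − SS_res/SS_tot, where 1 is a perfect fit and 0 is no better than predicting the observed mean. The discharge values below are invented for illustration, not the study's data.

```python
# Minimal sketch of the Nash-Sutcliffe Efficiency (NSE) used above to
# judge model skill. Discharge values are invented for illustration.
def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency of simulated values against observations."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [100.0, 150.0, 200.0, 180.0, 120.0]   # hypothetical discharge, m^3/s
sim = [110.0, 140.0, 190.0, 185.0, 125.0]
print(round(nse(obs, sim), 3))  # -> 0.949
```

    NSE can go negative when the model is worse than the observed mean, which is why thresholds like NSE > 0.6 are used as acceptance criteria.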

  11. Analysis of reforming process of large distorted ring in final enlarging forging

    International Nuclear Information System (INIS)

    Miyazawa, Takeshi; Murai, Etsuo

    2002-01-01

    In the construction of reactors or pressure vessels for oil chemical plants and nuclear power stations, mono-block open-die forged rings are often utilized. Generally, a large forged ring is manufactured by means of enlarging forging with reductions of the wall thickness. During the enlarging process the circular ring is often distorted and becomes elliptical in shape, and shape control of the ring is a complicated task. This problem becomes still worse in the forging of larger rings. In order to achieve precision forging of large rings, we have developed a forging method using a V-shaped anvil. The V-shaped anvil is geometrically adjusted to fit the distorted ring in the final circle and automatically reforms the shape of the ring during enlarging forging. This paper analyzes the reforming process of a distorted ring with a computer program based on F.E.M. and examines the effect on the precision of ring forging. (author)

  12. A mesh density study for application to large deformation rolling process evaluation

    International Nuclear Information System (INIS)

    Martin, J.A.

    1997-12-01

    When addressing large deformation through an elastic-plastic analysis the mesh density is paramount in determining the accuracy of the solution. However, given the nonlinear nature of the problem, a highly-refined mesh will generally require a prohibitive amount of computer resources. This paper addresses finite element mesh optimization studies considering accuracy of results and computer resource needs as applied to large deformation rolling processes. In particular, the simulation of the thread rolling manufacturing process is considered using the MARC software package and a Cray C90 supercomputer. Both mesh density and adaptive meshing on final results for both indentation of a rigid body to a specified depth and contact rolling along a predetermined length are evaluated

  13. Constructing large scale SCI-based processing systems by switch elements

    International Nuclear Information System (INIS)

    Wu, B.; Kristiansen, E.; Skaali, B.; Bogaerts, A.; Divia, R.; Mueller, H.

    1993-05-01

    The goal of this paper is to study some of the design criteria for the switch elements to form the interconnection of large scale SCI-based processing systems. The approved IEEE standard 1596 makes it possible to couple up to 64K nodes together. In order to connect thousands of nodes to construct large scale SCI-based processing systems, one has to interconnect these nodes by switch elements to form different topologies. A summary of the requirements and key points of interconnection networks and switches is presented. Two models of the SCI switch elements are proposed. The authors investigate several examples of systems constructed for 4-switches with simulations and the results are analyzed. Some issues and enhancements are discussed to provide the ideas behind the switch design that can improve performance and reduce latency. 29 refs., 11 figs., 3 tabs

  14. Large-scale functional networks connect differently for processing words and symbol strings.

    Science.gov (United States)

    Liljeström, Mia; Vartiainen, Johanna; Kujala, Jan; Salmelin, Riitta

    2018-01-01

    Reconfigurations of synchronized large-scale networks are thought to be central neural mechanisms that support cognition and behavior in the human brain. Magnetoencephalography (MEG) recordings together with recent advances in network analysis now allow for sub-second snapshots of such networks. In the present study, we compared frequency-resolved functional connectivity patterns underlying reading of single words and visual recognition of symbol strings. Word reading emphasized coherence in a left-lateralized network with nodes in classical perisylvian language regions, whereas symbol processing recruited a bilateral network, including connections between frontal and parietal regions previously associated with spatial attention and visual working memory. Our results illustrate the flexible nature of functional networks, whereby processing of different form categories, written words vs. symbol strings, leads to the formation of large-scale functional networks that operate at distinct oscillatory frequencies and incorporate task-relevant regions. These results suggest that category-specific processing should be viewed not so much as a local process but as a distributed neural process implemented in signature networks. For words, increased coherence was detected particularly in the alpha (8-13 Hz) and high gamma (60-90 Hz) frequency bands, whereas increased coherence for symbol strings was observed in the high beta (21-29 Hz) and low gamma (30-45 Hz) frequency range. These findings attest to the role of coherence in specific frequency bands as a general mechanism for integrating stimulus-dependent information across brain regions.
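    The band-limited coherence measure central to this study can be illustrated on synthetic data. The sketch below is a generic stand-in (not the authors' MEG pipeline): two channels share a 10 Hz drive plus independent noise, and Welch-averaged magnitude-squared coherence is compared between the alpha (8-13 Hz) and high-beta (21-29 Hz) bands named in the abstract.

```python
# Synthetic stand-in for two MEG channels: both see a common 10 Hz
# (alpha-band) drive plus independent noise; coherence should peak there.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 250.0                              # sampling rate, Hz
t = np.arange(0, 20, 1 / fs)            # 20 s of data
shared = np.sin(2 * np.pi * 10 * t)     # common 10 Hz drive
x = shared + 0.5 * rng.standard_normal(t.size)
y = shared + 0.5 * rng.standard_normal(t.size)

# Welch-averaged magnitude-squared coherence between the two channels.
f, cxy = coherence(x, y, fs=fs, nperseg=512)
alpha = (f >= 8) & (f <= 13)            # bands from the study
high_beta = (f >= 21) & (f <= 29)
print(round(float(cxy[alpha].mean()), 2), round(float(cxy[high_beta].mean()), 2))
```

    In the real analysis, such band-specific coherence is computed between source-localized regions and thresholded to define the frequency-resolved networks discussed above.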

  15. Microarray Data Processing Techniques for Genome-Scale Network Inference from Large Public Repositories.

    Science.gov (United States)

    Chockalingam, Sriram; Aluru, Maneesha; Aluru, Srinivas

    2016-09-19

    Pre-processing of microarray data is a well-studied problem. Furthermore, all popular platforms come with their own recommended best practices for differential analysis of genes. However, for genome-scale network inference using microarray data collected from large public repositories, these methods filter out a considerable number of genes. This is primarily due to the effects of aggregating a diverse array of experiments with different technical and biological scenarios. Here we introduce a pre-processing pipeline suitable for inferring genome-scale gene networks from large microarray datasets. We show that partitioning of the available microarray datasets according to biological relevance into tissue- and process-specific categories significantly extends the limits of downstream network construction. We demonstrate the effectiveness of our pre-processing pipeline by inferring genome-scale networks for the model plant Arabidopsis thaliana using two different construction methods and a collection of 11,760 Affymetrix ATH1 microarray chips. Our pre-processing pipeline and the datasets used in this paper are made available at http://alurulab.cc.gatech.edu/microarray-pp.
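    As a generic illustration of one standard microarray pre-processing step (not the paper's specific pipeline), quantile normalization forces every chip to share the same empirical intensity distribution, which is a common prerequisite before aggregating chips across experiments:

```python
# Generic illustration of quantile normalization, a standard microarray
# pre-processing step (not the paper's pipeline): every chip (column) is
# forced to share the same empirical intensity distribution.
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize the columns of X (genes x chips)."""
    order = np.argsort(X, axis=0)                    # per-chip gene ranking
    ranked_means = np.sort(X, axis=0).mean(axis=1)   # mean intensity per rank
    out = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        out[order[:, j], j] = ranked_means           # same values, chip's own order
    return out

X = np.array([[5.0, 4.0],
              [2.0, 1.0],
              [3.0, 6.0]])
print(quantile_normalize(X))
```

    After normalization, every column contains the same set of values, only permuted by each chip's gene ranking; tie handling is ignored in this sketch.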

  16. Extraterrestrial processing and manufacturing of large space systems, volume 1, chapters 1-6

    Science.gov (United States)

    Miller, R. H.; Smith, D. B. S.

    1979-01-01

    Space program scenarios for production of large space structures from lunar materials are defined. The concept of the space manufacturing facility (SMF) is presented. The manufacturing processes and equipment for the SMF are defined and the conceptual layouts are described for the production of solar cells and arrays, structures and joints, conduits, waveguides, RF equipment radiators, wire cables, and converters. A 'reference' SMF was designed and its operation requirements are described.

  17. Large-scale methanol plants. [Based on Japanese-developed process

    Energy Technology Data Exchange (ETDEWEB)

    Tado, Y

    1978-02-01

    A study was made on how to produce methanol economically, as methanol is expected to be a growth item for use as a pollution-free energy material or for chemical use, centering on the following subjects: (1) improvement of thermal economy, (2) improvement of process, and (3) hardware problems attending the expansion of scale. The results of this study have already been adopted in actual plants with good results, and large-scale methanol plants are going to be realized.

  18. Large-scale calculations of the beta-decay rates and r-process nucleosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Borzov, I N; Goriely, S [Inst. d'Astronomie et d'Astrophysique, Univ. Libre de Bruxelles, Campus Plaine, Bruxelles (Belgium); Pearson, J M [Inst. d'Astronomie et d'Astrophysique, Univ. Libre de Bruxelles, Campus Plaine, Bruxelles (Belgium); Lab. de Physique Nucleaire, Univ. de Montreal, Montreal (Canada)]

    1998-06-01

    An approximation to a self-consistent model of the ground state and {beta}-decay properties of neutron-rich nuclei is outlined. The structure of the {beta}-strength functions in stable and short-lived nuclei is discussed. The results of large-scale calculations of the {beta}-decay rates for spherical and slightly deformed nuclides of relevance to the r-process are analysed and compared with the results of existing global calculations and recent experimental data. (orig.)

  19. A large-scale circuit mechanism for hierarchical dynamical processing in the primate cortex

    OpenAIRE

    Chaudhuri, Rishidev; Knoblauch, Kenneth; Gariel, Marie-Alice; Kennedy, Henry; Wang, Xiao-Jing

    2015-01-01

    We developed a large-scale dynamical model of the macaque neocortex, which is based on recently acquired directed- and weighted-connectivity data from tract-tracing experiments, and which incorporates heterogeneity across areas. A hierarchy of timescales naturally emerges from this system: sensory areas show brief, transient responses to input (appropriate for sensory processing), whereas association areas integrate inputs over time and exhibit persistent activity (suitable for decision-makin...

  20. A framework for the direct evaluation of large deviations in non-Markovian processes

    International Nuclear Information System (INIS)

    Cavallaro, Massimo; Harris, Rosemary J

    2016-01-01

    We propose a general framework to simulate stochastic trajectories with arbitrarily long memory dependence and efficiently evaluate large deviation functions associated to time-extensive observables. This extends the ‘cloning’ procedure of Giardiná et al (2006 Phys. Rev. Lett. 96 120603) to non-Markovian systems. We demonstrate the validity of this method by testing non-Markovian variants of an ion-channel model and the totally asymmetric exclusion process, recovering results obtainable by other means. (letter)
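    The 'cloning' estimator named above can be illustrated on a deliberately trivial i.i.d. process, for which the scaled cumulant generating function ψ(s) = ln E[exp(sX)] per step is known in closed form. This sketch shows only the bare population-dynamics scheme; the memory-dependence machinery that is the paper's actual contribution is omitted.

```python
# Illustration only: the "cloning" estimator on a trivial i.i.d. process.
# Each clone is weighted by exp(s*x) at every step; the population growth
# rate estimates psi(s) = ln E[exp(s*X)] per step.
import math
import random

def cloning_scgf(s, n_clones=5000, t_steps=100, p=0.5, seed=1):
    """Estimate psi(s) for Bernoulli(p) increments via cloning."""
    rng = random.Random(seed)
    log_growth = 0.0
    population = [0] * n_clones   # clones carry no state in this toy case
    for _ in range(t_steps):
        weights = []
        for _ in population:
            x = 1 if rng.random() < p else 0     # one increment of the process
            weights.append(math.exp(s * x))
        mean_w = sum(weights) / n_clones
        log_growth += math.log(mean_w)           # growth factor of this step
        # Resampling step: trivial here (clones are identical), but this is
        # what biases the trajectory ensemble for processes with memory.
        population = rng.choices(population, weights=weights, k=n_clones)
    return log_growth / t_steps

s = 1.0
print(abs(cloning_scgf(s) - math.log(0.5 * math.exp(s) + 0.5)) < 0.05)
```

    For a genuinely non-Markovian process, each clone would carry its trajectory history, and the resampling step is what makes the estimator tractable where direct averaging of exponential weights fails.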

  1. The effects of large scale processing on caesium leaching from cemented simulant sodium nitrate waste

    International Nuclear Information System (INIS)

    Lee, D.J.; Brown, D.J.

    1982-01-01

    The effects of large scale processing on the properties of cemented simulant sodium nitrate waste have been investigated. Leach tests have been performed on full-size drums, cores and laboratory samples of cement formulations containing Ordinary Portland Cement (OPC), Sulphate Resisting Portland Cement (SRPC) and a blended cement (90% ground granulated blast furnace slag/10% OPC). In addition, development of the cement hydration exotherms with time and the temperature distribution in 220 dm³ samples have been followed. (author)

  2. Large Data at Small Universities: Astronomical processing using a computer classroom

    Science.gov (United States)

    Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen

    2016-06-01

    The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these required resources is sometimes difficult for research groups working at smaller universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the University, the resource impact on the investigator is generally low. By using open source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an “embarrassingly parallel” manner, gains in speed are accomplished without requiring the investigator to learn how to write routines using highly specialized methodology. We demonstrate this concept here applied to (1) photometry of large-format images and (2) statistical significance tests for X-ray lightcurve analysis. In these scenarios, we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that the usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
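    The "embarrassingly parallel" pattern described above can be sketched with Python's standard multiprocessing module standing in for the classroom network. The per-image "photometry" task is a stand-in, not the authors' routine: it just sums flux above a fixed background.

```python
# Sketch of the "embarrassingly parallel" pattern: independent work items
# mapped across worker processes with no inter-task communication. The
# per-image task is a toy stand-in for a photometry routine.
from multiprocessing import Pool

def measure(image):
    """Toy per-image task: total flux above a background of 10 counts."""
    background = 10.0
    return sum(max(pixel - background, 0.0) for pixel in image)

if __name__ == "__main__":
    # Each "image" is independent, so Pool.map distributes them across
    # cores; the same structure scales to many desktop machines on a
    # local network.
    images = [[10.0, 12.0, 15.0], [11.0, 10.0, 10.0], [20.0, 10.0, 10.0]]
    with Pool(processes=2) as pool:
        print(pool.map(measure, images))  # -> [7.0, 1.0, 10.0]
```

    Because the tasks share nothing, speed-up scales almost linearly with worker count, which matches the scaling the abstract reports.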

  3. Curbing variations in packaging process through Six Sigma way in a large-scale food-processing industry

    Science.gov (United States)

    Desai, Darshak A.; Kotadiya, Parth; Makwana, Nikheel; Patel, Sonalinkumar

    2015-03-01

    Indian industries need overall operational excellence for sustainable profitability and growth in the present age of global competitiveness. Among different quality and productivity improvement techniques, Six Sigma has emerged as one of the most effective breakthrough improvement strategies. Though Indian industries are exploring this improvement methodology to their advantage and reaping the benefits, not much has been presented and published regarding the experience of Six Sigma in the food-processing industries. This paper is an effort to exemplify the application of a Six Sigma quality improvement drive to one of the large-scale food-processing sectors in India. The paper discusses the phase-wise implementation of define, measure, analyze, improve, and control (DMAIC) on one of the chronic problems, variations in the weight of milk powder pouches. The paper wraps up with the improvements achieved and the projected bottom-line gain to the unit by application of the Six Sigma methodology.
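    In the measure/analyze phases of such a DMAIC project, process capability indices are the standard way to relate pouch-weight variation to specification limits. The 500 ± 6 g specification and the sample statistics below are hypothetical, not the paper's data.

```python
# Illustration for the measure/analyze phases of a DMAIC project: process
# capability indices for a weight characteristic. Spec limits and sample
# statistics are hypothetical.
def process_capability(mean, std, lsl, usl):
    """Cp (spread vs. tolerance width) and Cpk (spread vs. nearest limit)."""
    cp = (usl - lsl) / (6 * std)
    cpk = min(usl - mean, mean - lsl) / (3 * std)
    return cp, cpk

# Hypothetical milk-powder pouch weights: target 500 g, spec 500 +/- 6 g.
cp, cpk = process_capability(mean=501.0, std=1.5, lsl=494.0, usl=506.0)
print(round(cp, 2), round(cpk, 2))  # -> 1.33 1.11
```

    Cpk below Cp signals an off-center process: reducing variation (lower std) raises both indices, while re-centering the mean closes the gap between them.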

  4. Response of deep and shallow tropical maritime cumuli to large-scale processes

    Science.gov (United States)

    Yanai, M.; Chu, J.-H.; Stark, T. E.; Nitta, T.

    1976-01-01

    The bulk diagnostic method of Yanai et al. (1973) and a simplified version of the spectral diagnostic method of Nitta (1975) are used for a more quantitative evaluation of the response of various types of cumuliform clouds to large-scale processes, using the same data set in the Marshall Islands area for a 100-day period in 1956. The dependence of the cloud mass flux distribution on radiative cooling, large-scale vertical motion, and evaporation from the sea is examined. It is shown that typical radiative cooling rates in the tropics tend to produce a bimodal distribution of mass spectrum exhibiting deep and shallow clouds. The bimodal distribution is further enhanced when the large-scale vertical motion is upward, and a nearly unimodal distribution of shallow clouds prevails when the radiative cooling is compensated by the heating due to the large-scale subsidence. Both deep and shallow clouds are modulated by large-scale disturbances. The primary role of surface evaporation is to maintain the moisture flux at the cloud base.

  5. Large quantity production of carbon and boron nitride nanotubes by mechano-thermal process

    International Nuclear Information System (INIS)

    Chen, Y.; Fitzgerald, J.D.; Chadderton, L.; Williams, J.S.; Campbell, S.J.

    2002-01-01

    Full text: Nanotube materials including carbon and boron nitride have excellent properties compared with bulk materials. The seamless graphene cylinders with a high length-to-diameter ratio make them superstrong fibers. A large amount of hydrogen can be stored in nanotubes as a future clean fuel source. These applications require large quantities of nanotube materials. However, nanotube production in large quantity, with fully controlled quality and low costs, remains a challenge for the most popular synthesis methods such as arc discharge, laser heating and catalytic chemical decomposition. Discovery of new synthesis methods is still crucial for future industrial application. The new low-temperature mechano-thermal process discovered by the current author provides an opportunity to develop a commercial method for bulk production. This mechano-thermal process consists of a mechanical ball milling and a thermal annealing process. Using this method, both carbon and boron nitride nanotubes were produced. I will present the mechano-thermal method as the new bulk production technique in the conference. The lecture will summarise the main results obtained. In the case of carbon nanotubes, different nanosized structures including multi-walled nanotubes, nanocells, and nanoparticles have been produced in a graphite sample using a mechano-thermal process, consisting of mechanical milling at room temperature for up to 150 hours and subsequent thermal annealing at 1400 deg C. Metal particles have played an important catalytic role in the formation of different tubular structures, while the defect structure of the milled graphite appears to be responsible for the formation of small tubes. It is found that the mechanical treatment of graphite powder produces a disordered and microporous structure, which provides nucleation sites for nanotubes as well as free carbon atoms. Multiwalled carbon nanotubes appear to grow via growth of the (002) layers during thermal annealing.
In the case of BN

  6. Process automation system for integration and operation of Large Volume Plasma Device

    International Nuclear Information System (INIS)

    Sugandhi, R.; Srivastava, P.K.; Sanyasi, A.K.; Srivastav, Prabhakar; Awasthi, L.M.; Mattoo, S.K.

    2016-01-01

    Highlights: • Analysis and design of process automation system for Large Volume Plasma Device (LVPD). • Data flow modeling for process model development. • Modbus based data communication and interfacing. • Interface software development for subsystem control in LabVIEW. - Abstract: The Large Volume Plasma Device (LVPD) has been successfully contributing towards understanding of the plasma turbulence driven by the Electron Temperature Gradient (ETG), considered a major contributor to plasma loss in fusion devices. The large size of the device imposes certain difficulties in operation, such as access for the diagnostics, manual control of subsystems, and monitoring of a large number of signals. To achieve integrated operation of the machine, automation is essential for enhanced performance and operational efficiency. Recently, the machine has been undergoing major upgradation for new physics experiments. The new operation and control system consists of the following: (1) a PXIe based fast data acquisition system for the equipped diagnostics; (2) a Modbus based Process Automation System (PAS) for the subsystem controls; and (3) a Data Utilization System (DUS) for efficient storage, processing and retrieval of the acquired data. In the ongoing development, a data flow model of the machine's operation has been developed. As a proof of concept, the following two subsystems have been successfully integrated: (1) the Filament Power Supply (FPS) for the heating of the W-filament based plasma source and (2) the Probe Positioning System (PPS) for control of 12 linear probe drives over a travel length of 100 cm. The process model of the vacuum production system has been prepared and validated against acquired pressure data. In the next upgrade, all the subsystems of the machine will be integrated in a systematic manner. The automation backbone is based on a 4-wire multi-drop serial interface (RS485) using the Modbus communication protocol. Software is developed on LabVIEW platform using

  7. Process automation system for integration and operation of Large Volume Plasma Device

    Energy Technology Data Exchange (ETDEWEB)

    Sugandhi, R., E-mail: ritesh@ipr.res.in; Srivastava, P.K.; Sanyasi, A.K.; Srivastav, Prabhakar; Awasthi, L.M.; Mattoo, S.K.

    2016-11-15

    Highlights: • Analysis and design of process automation system for Large Volume Plasma Device (LVPD). • Data flow modeling for process model development. • Modbus based data communication and interfacing. • Interface software development for subsystem control in LabVIEW. - Abstract: The Large Volume Plasma Device (LVPD) has been successfully contributing towards understanding of the plasma turbulence driven by the Electron Temperature Gradient (ETG), considered a major contributor to plasma loss in fusion devices. The large size of the device imposes certain difficulties in operation, such as access for the diagnostics, manual control of subsystems, and monitoring of a large number of signals. To achieve integrated operation of the machine, automation is essential for enhanced performance and operational efficiency. Recently, the machine has been undergoing major upgradation for new physics experiments. The new operation and control system consists of the following: (1) a PXIe based fast data acquisition system for the equipped diagnostics; (2) a Modbus based Process Automation System (PAS) for the subsystem controls; and (3) a Data Utilization System (DUS) for efficient storage, processing and retrieval of the acquired data. In the ongoing development, a data flow model of the machine's operation has been developed. As a proof of concept, the following two subsystems have been successfully integrated: (1) the Filament Power Supply (FPS) for the heating of the W-filament based plasma source and (2) the Probe Positioning System (PPS) for control of 12 linear probe drives over a travel length of 100 cm. The process model of the vacuum production system has been prepared and validated against acquired pressure data. In the next upgrade, all the subsystems of the machine will be integrated in a systematic manner. The automation backbone is based on a 4-wire multi-drop serial interface (RS485) using the Modbus communication protocol. Software is developed on LabVIEW platform using

  8. The power of event-driven analytics in Large Scale Data Processing

    CERN Multimedia

    CERN. Geneva; Marques, Paulo

    2011-01-01

    FeedZai is a software company specialized in creating high-throughput, low-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differs from previous history, creating timely business intelligence updated to the second. The tool does predictive analytics and trend analysis, displaying data on real-time web-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition, in the Digital Models category, being considered one of the "top-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic of large-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse in...

  9. Slow Off-rates and Strong Product Binding Are Required for Processivity and Efficient Degradation of Recalcitrant Chitin by Family 18 Chitinases.

    Science.gov (United States)

    Kurašin, Mihhail; Kuusk, Silja; Kuusk, Piret; Sørlie, Morten; Väljamäe, Priit

    2015-11-27

    Processive glycoside hydrolases are the key components of enzymatic machineries that decompose recalcitrant polysaccharides, such as chitin and cellulose. The intrinsic processivity (P(Intr)) of cellulases has been shown to be governed by the rate constant of dissociation from polymer chain (koff). However, the reported koff values of cellulases are strongly dependent on the method used for their measurement. Here, we developed a new method for determining koff, based on measuring the exchange rate of the enzyme between a non-labeled and a (14)C-labeled polymeric substrate. The method was applied to the study of the processive chitinase ChiA from Serratia marcescens. In parallel, ChiA variants with weaker binding of the N-acetylglucosamine unit either in substrate-binding site -3 (ChiA-W167A) or the product-binding site +1 (ChiA-W275A) were studied. Both ChiA variants showed increased off-rates and lower apparent processivity on α-chitin. The rate of the production of insoluble reducing groups on the reduced α-chitin was an order of magnitude higher than koff, suggesting that the enzyme can initiate several processive runs without leaving the substrate. On crystalline chitin, the general activity of the wild type enzyme was higher, and the difference was magnifying with hydrolysis time. On amorphous chitin, the variants clearly outperformed the wild type. A model is proposed whereby strong interactions with polymer in the substrate-binding sites (low off-rates) and strong binding of the product in the product-binding sites (high pushing potential) are required for the removal of obstacles, like disintegration of chitin microfibrils. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
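    The link between off-rate and processivity stated above can be made concrete with a minimal competing-rates model (a textbook simplification, not the paper's kinetic analysis): if every position along the chain is a race between catalysis (k_cat) and dissociation (k_off), the expected number of catalytic steps per binding event is k_cat/k_off. The rate constants below are invented for illustration.

```python
# Textbook competing-rates simplification (not the paper's analysis): at
# each position the enzyme either catalyzes (rate k_cat) or dissociates
# (rate k_off), so the run length is geometric with mean k_cat / k_off.
def intrinsic_processivity(k_cat, k_off):
    """Expected catalytic steps before dissociation."""
    return k_cat / k_off

# Invented rate constants (s^-1): halving k_off doubles processivity,
# while raising it (as in the binding-site variants) lowers it.
print(round(intrinsic_processivity(k_cat=2.0, k_off=0.02)))  # -> 100
```

    This makes the paper's observation quantitative: variants with increased off-rates necessarily show lower apparent processivity, independent of any change in catalytic rate.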

  10. Large Scale Gaussian Processes for Atmospheric Parameter Retrieval and Cloud Screening

    Science.gov (United States)

    Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.

    2017-12-01

    Current Earth-observation (EO) applications for image classification have to deal with an unprecedented amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the super-spectral Copernicus Sentinels, EnMAP and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like WorldView-3 also pose big challenges to data processing. The challenge is not only attached to optical sensors but also to infrared sounders and radar images, which have increased in spectral, spatial and temporal resolution. Besides, we should not forget the availability of the extremely large remote sensing data archives already collected by several past missions, such as ENVISAT, COSMO-SkyMed, Landsat, SPOT, or Seviri/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large-scale kernel methods for both atmospheric parameter retrieval and cloud detection using infrared sounding IASI data and optical Seviri/MSG imagery. We propose novel Gaussian Processes (GPs) to train problems with millions of instances and a high number of input features. The algorithms can cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and Seviri/MSG, and engineered randomized kernel functions and emulation for temperature, moisture and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. Excellent compromises between accuracy and scalability are obtained in all applications.
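    The random Fourier features speed-up named above can be illustrated in a few lines. This is a generic sketch of the Rahimi-Recht approximation for an RBF kernel, not the authors' retrieval code; the feature count D and kernel width gamma are arbitrary illustrative choices.

```python
# Generic sketch of random Fourier features (RFF) for an RBF kernel:
# an explicit feature map phi with phi(x) @ phi(z) ~ k(x, z), letting
# kernel machines train in O(n * D) rather than O(n^2) or worse.
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 2000               # input dimension, number of random features
gamma = 0.5                  # RBF kernel: k(x, z) = exp(-gamma * ||x - z||^2)

# Spectral sampling for this kernel: frequencies w ~ N(0, 2*gamma*I).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)

def phi(X):
    """Map rows of X (n, d) to D random Fourier features."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

x = rng.standard_normal(d)
z = rng.standard_normal(d)
exact = np.exp(-gamma * np.sum((x - z) ** 2))
approx = (phi(x[None, :]) @ phi(z[None, :]).T).item()
print(round(exact, 3), round(approx, 3))
```

    The approximation error shrinks as 1/sqrt(D), so D trades accuracy against the linear-in-n training cost that makes million-instance GP problems feasible.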

  11. A practical process for light-water detritiation at large scales

    Energy Technology Data Exchange (ETDEWEB)

    Boniface, H.A. [Atomic Energy of Canada Limited, Chalk River, ON (Canada); Robinson, J., E-mail: jr@tyne-engineering.com [Tyne Engineering, Burlington, ON (Canada); Gnanapragasam, N.V.; Castillo, I.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    AECL and Tyne Engineering have recently completed a preliminary engineering design for a modest-scale tritium removal plant for light water, intended for installation at AECL's Chalk River Laboratories (CRL). This plant design was based on the Combined Electrolysis and Catalytic Exchange (CECE) technology developed at CRL over many years and demonstrated there and elsewhere. The general features and capabilities of this design have been reported, as well as the versatility of the design for separating any pair of the three hydrogen isotopes. The same CECE technology could be applied directly to very large-scale wastewater detritiation, such as is the case at Fukushima Daiichi Nuclear Power Station. However, since the CECE process scales linearly with throughput, the required capital and operating costs are substantial for such large-scale applications. This paper discusses some options for reducing the costs of very large-scale detritiation, including: reducing tritium removal effectiveness; energy recovery; improving the tolerance of impurities; and use of less expensive or more efficient equipment. A brief comparison with alternative processes is also presented. (author)

  12. Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge

    Science.gov (United States)

    Park, Heon-Joon; Lee, Changyeol

    2017-04-01

    Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted numerous times. Among the controlling factors, gravitational acceleration (g) on the scale models was treated as a constant (Earth's gravity) in most analogue model studies, and only a few considered larger gravitational accelerations by using a centrifuge (an apparatus generating a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down and accelerated deformation driven by density differences, such as in salt diapirs, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST), allows a large surface area of the scale models, up to 70 by 70 cm, under a maximum capacity of 240 g-tons. Using this centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of a back-arc basin. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (grant number 2014R1A6A3A04056405).

  13. Comparison of Theta/Beta, Slow Cortical Potential, and Adaptive Neurofeedback Training in Adults: Training Effects on Attentional Processes, Motor System, and Mood

    OpenAIRE

    Studer, Petra

    2011-01-01

    Neurofeedback (NF) training is being applied in an increasing number of clinical and peak-performance fields. The aim of the present investigation in adults was three-fold: 1) to shed further light on the neuronal mechanisms underlying different NF protocols with respect to attentional processes and motor system excitability, 2) to examine the effects of different neurofeedback protocols on well-being / mood, and 3) to evaluate the effects of an adaptive type of NF training. Neurof...

  14. Traditional Procurement is too Slow

    Directory of Open Access Journals (Sweden)

    Ann Kong

    2012-11-01

    This paper reports on an exploratory interview survey of construction project participants aimed at identifying the reasons for the decrease in use of the traditional, lump-sum procurement system in Malaysia. The results show that most people believe it is too slow. This appears to be due in part to the contiguous nature of the various phases and stages of the process, and especially the separation of the design and construction phases. The delays caused by disputes between the various parties are also seen as a contributory factor, the most prominent cause being the frequency of variations, with design and scope changes a particular source of discontent. It is concluded that scaling up the whole time-related reward/penalty system may be the most appropriate measure for future practice.

  15. Carotenoid content of the varieties Jaranda and Jariza (Capsicum annuum L.) and response during the industrial slow drying and grinding steps in paprika processing.

    Science.gov (United States)

    Mínguez-Mosquera, M I; Pérez-Gálvez, A; Garrido-Fernández, J

    2000-07-01

    Fruits of the pepper varieties Jaranda and Jariza (Capsicum annuum L.), which ripen as a group enabling a single harvest, showed a uniform carotenoid content high enough (7.9 g/kg) for the production of paprika. The drying system at mild temperature showed that fruits with a moisture content of 85-88% generated a dry product with a carotenoid content equal to or higher than the initial one. Those high moisture levels allowed the fruits a longer period of metabolic activity, increasing the yellow fraction, the red fraction, or both, as a function of which biosynthetic process was predominant. This indicates under-ripeness of the fruits at the drying step. The results obtained allow us to establish that both varieties, Jaranda and Jariza, fit the dehydration process employed, yielding a dry fruit with a carotenoid concentration similar to the initial one. During the grinding of the dry fruit, the heat generated by the hammers of the mill caused degradation of the yellow fraction, while the red fraction was maintained. The ripeness state of the harvested fruits and the appropriateness or severity of the processing steps are indicated by the ratio of red to yellow (R/Y) and/or red to total (R/T) pigments, since fluctuations in both fractions and in total pigments are reflected in and monitored by these parameters.

  16. Neutron slowing-down time in matter

    Energy Technology Data Exchange (ETDEWEB)

    Chabod, Sebastien P., E-mail: sebastien.chabod@lpsc.in2p3.fr [LPSC, Universite Joseph Fourier Grenoble 1, CNRS/IN2P3, Institut Polytechnique de Grenoble, 38000 Grenoble (France)

    2012-03-21

    We formulate the neutron slowing-down time through elastic collisions in a homogeneous, non-absorbing, infinite medium. Our approach allows taking into account, for the first time, the energy dependence of the scattering cross-section as well as the energy and temporal distribution of the source neutron population. Starting from this development, we investigate the specific case of the propagation in matter of a mono-energetic neutron pulse. We then quantify the perturbation of the neutron slowing-down time induced by resonances in the scattering cross-section. We show that a resonance can induce a permanent reduction of the slowing-down time, preceded by two discontinuities: a first one at the resonance peak position and an echo appearing later. From this study, we suggest that a temperature increase of the propagating medium in the presence of large resonances could modestly accelerate the neutron moderation.
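    For orientation, the classical energy-independent-cross-section limit of this problem gives the familiar collision-count estimate via the mean logarithmic energy decrement ξ. A sketch of that standard slowing-down kinematics (not the time-dependent formulation of the paper):

    ```python
    import math

    def xi(A):
        """Mean logarithmic energy loss per elastic collision off a nucleus
        of mass number A: xi = 1 + alpha*ln(alpha)/(1-alpha),
        alpha = ((A-1)/(A+1))^2; xi = 1 exactly for hydrogen."""
        if A == 1:
            return 1.0
        alpha = ((A - 1) / (A + 1)) ** 2
        return 1.0 + alpha * math.log(alpha) / (1.0 - alpha)

    def n_collisions(E0, E, A):
        """Mean number of elastic collisions to slow from E0 to E."""
        return math.log(E0 / E) / xi(A)

    # 2 MeV fission neutron moderated to 1 eV:
    n_hydrogen = n_collisions(2e6, 1.0, 1)    # ~15 collisions
    n_carbon = n_collisions(2e6, 1.0, 12)     # ~92 collisions
    ```

    The abstract's point is precisely that resonances break this energy-independent picture, adding discontinuities to the slowing-down time that the constant-ξ estimate cannot capture.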

  17. Kinetic slow mode-type solitons

    Directory of Open Access Journals (Sweden)

    K. Baumgärtel

    2005-01-01

    One-dimensional hybrid code simulations are presented, carried out in order both to study solitary waves of the slow mode branch in an isotropic, collisionless, medium-β plasma (βi = 0.25) and to test the fluid-based soliton interpretation of Cluster-observed strong magnetic depressions (Stasiewicz et al., 2003; Stasiewicz, 2004) against kinetic theory. In the simulations, a variety of strongly oblique, large-amplitude solitons are seen, including solitons with Alfvénic polarization, similar to those predicted by Hall-MHD theory, and robust, almost non-propagating solitary structures of slow magnetosonic type with strong magnetic field depressions and perpendicular ion heating, which have no counterpart in fluid theory. The results support the soliton-based interpretation of the Cluster observations, but reveal substantial deficiencies of Hall-MHD theory in describing slow mode-type solitons in a plasma of moderate beta.

  18. Investigation of Slow-wave Activity Saturation during Surgical Anesthesia Reveals a Signature of Neural Inertia in Humans.

    Science.gov (United States)

    Warnaby, Catherine E; Sleigh, Jamie W; Hight, Darren; Jbabdi, Saad; Tracey, Irene

    2017-10-01

    Previously, we showed experimentally that saturation of slow-wave activity provides a potentially individualized neurophysiologic endpoint for perception loss during anesthesia. Furthermore, it is clear that induction and emergence from anesthesia are not symmetrically reversible processes. The observed hysteresis is potentially underpinned by a neural inertia mechanism, as proposed in animal studies. In an advanced secondary analysis of 393 individual electroencephalographic data sets, we used slow-wave activity dose-response relationships to parameterize slow-wave activity saturation during induction and emergence from surgical anesthesia. We determined whether neural inertia exists in humans by comparing slow-wave activity dose responses on induction and emergence. Slow-wave activity saturation occurs for different anesthetics and when opioids and muscle relaxants are used during surgery. There was wide interpatient variability in the hypnotic concentrations required to achieve slow-wave activity saturation. Age negatively correlated with power at slow-wave activity saturation. On emergence, we observed abrupt decreases in slow-wave activity dose responses coincident with recovery of behavioral responsiveness in ~33% of individuals. These patients were more likely to have lower power at slow-wave activity saturation, to be older, and to suffer from short-term confusion on emergence. Slow-wave activity saturation during surgical anesthesia implies that large variability in dosing is required to achieve a targeted potential loss of perception in individual patients. A signature of neural inertia in humans is the maintenance of slow-wave activity even in the presence of very low hypnotic concentrations during emergence from anesthesia.

  19. Leveraging human oversight and intervention in large-scale parallel processing of open-source data

    Science.gov (United States)

    Casini, Enrico; Suri, Niranjan; Bradshaw, Jeffrey M.

    2015-05-01

    The popularity of cloud computing, along with the increased availability of cheap storage, has led to the necessity of elaborating and transforming large volumes of open-source data in parallel. One way to handle such extensive volumes of information properly is to take advantage of distributed computing frameworks like Map-Reduce. Unfortunately, an entirely automated approach that excludes human intervention is often unpredictable and error-prone. Highly accurate data processing and decision-making can be achieved by supporting an automatic process with human collaboration, in a variety of environments such as warfare, cyber security and threat monitoring. Although this mutual participation seems easily exploitable, human-machine collaboration in the field of data analysis presents several challenges. First, due to the asynchronous nature of human intervention, it is necessary to verify that once a correction is made, all the necessary reprocessing is done in chain. Second, it is often necessary to minimize the amount of reprocessing in order to optimize the usage of limited resources. To address these strict requirements, this paper introduces improvements to an innovative approach for human-machine collaboration in the processing of large amounts of open-source data in parallel.

  20. Controlled elaboration of large-area plasmonic substrates by plasma process

    International Nuclear Information System (INIS)

    Pugliara, A; Despax, B; Makasheva, K; Bonafos, C; Carles, R

    2015-01-01

    Elaboration in a controlled way of large-area and efficient plasmonic substrates is achieved by combining sputtering of silver nanoparticles (AgNPs) and plasma polymerization of the embedding dielectric matrix in an axially asymmetric, capacitively coupled RF discharge maintained at low gas pressure. The plasma parameters and deposition conditions were optimized according to the optical response of these substrates. Structural and optical characterizations of the samples confirm the process efficiency. The obtained results indicate that to deposit a single layer of large and closely situated AgNPs, a high injected power and short sputtering times must be privileged. The plasma-elaborated plasmonic substrates appear to be very sensitive to any stimuli that affect their plasmonic response. (paper)

  1. APD arrays and large-area APDs via a new planar process

    CERN Document Server

    Farrell, R; Vanderpuye, K; Grazioso, R; Myers, R; Entine, G

    2000-01-01

    A fabrication process has been developed which allows the beveled-edge-type of avalanche photodiode (APD) to be made without the need for the artful bevel formation steps. This new process, applicable to both APD arrays and to discrete detectors, greatly simplifies manufacture and should lead to significant cost reduction for such photodetectors. This is achieved through a simple innovation that allows isolation around the device or array pixel to be brought into the plane of the surface of the silicon wafer, hence a planar process. A description of the new process is presented along with performance data for a variety of APD device and array configurations. APD array pixel gains in excess of 10 000 have been measured. Array pixel coincidence timing resolution of less than 5 ns has been demonstrated. An energy resolution of 6% for 662 keV gamma-rays using a CsI(Tl) scintillator on a planar-processed large-area APD has been recorded. Discrete APDs with active areas up to 13 cm² have been operated.

  2. Process optimization of large-scale production of recombinant adeno-associated vectors using dielectric spectroscopy.

    Science.gov (United States)

    Negrete, Alejandro; Esteban, Geoffrey; Kotin, Robert M

    2007-09-01

    A well-characterized manufacturing process for the large-scale production of recombinant adeno-associated vectors (rAAV) for gene therapy applications is required to meet current and future demands for pre-clinical and clinical studies and potential commercialization. Economic considerations argue in favor of suspension culture-based production. Currently, the only feasible method for large-scale rAAV production utilizes baculovirus expression vectors and insect cells in suspension cultures. To maximize yields and achieve reproducibility between batches, online monitoring of various metabolic and physical parameters is useful for characterizing early stages of baculovirus-infected insect cells. In this study, rAAVs were produced at 40-l scale, yielding ~1 × 10^15 particles. During the process, dielectric spectroscopy was performed by real-time scanning at radio frequencies between 300 kHz and 10 MHz, and the corresponding permittivity values were correlated with rAAV production. Both infected and uninfected cultures reached a maximum permittivity value; however, only the permittivity profile of infected cultures reached a second maximum. This second maximum was correlated with the optimal harvest time for rAAV production. Analysis of rAAV indicated that harvesting around 48 h post-infection (hpi) and at 72 hpi produced similar quantities of biologically active rAAV. Thus, if operated continuously, the 24-h reduction in the rAAV production process gives sufficient time for an additional 18 runs a year, corresponding to an extra production of ~2 × 10^16 particles. As part of large-scale optimization studies, this new finding will facilitate the bioprocessing scale-up of rAAV and other bioproducts.
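    Locating that second permittivity maximum is, computationally, a simple peak-detection task on the monitored signal. A minimal sketch on a synthetic trace (the two-peak shape, time grid, and all numbers are illustrative assumptions, not the measured data):

    ```python
    import numpy as np

    def local_maxima(signal, min_separation=5):
        """Indices of local maxima at least `min_separation` samples apart."""
        idx = [i for i in range(1, len(signal) - 1)
               if signal[i - 1] < signal[i] >= signal[i + 1]]
        picked = []
        for i in idx:
            if not picked or i - picked[-1] >= min_separation:
                picked.append(i)
        return picked

    # Synthetic permittivity trace with a growth peak and a later,
    # infection-related second peak (hypothetical shape):
    t = np.linspace(0, 96, 97)                 # hours post-infection
    trace = (np.exp(-((t - 24) / 8) ** 2)
             + 0.8 * np.exp(-((t - 48) / 6) ** 2))
    peaks = local_maxima(trace)                # two maxima, near 24 h and 48 h
    ```

    In the study, only infected cultures produce the second peak, so its presence and timing can serve directly as the harvest trigger.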

  3. CRISPR transcript processing: a mechanism for generating a large number of small interfering RNAs

    Directory of Open Access Journals (Sweden)

    Djordjevic Marko

    2012-07-01

    Background: CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated sequences) is a recently discovered prokaryotic defense system against foreign DNA, including viruses and plasmids. A CRISPR cassette is transcribed as a continuous transcript (pre-crRNA), which is processed by Cas proteins into small RNA molecules (crRNAs) that are responsible for defense against invading viruses. Experiments in E. coli report that overexpression of cas genes generates a large number of crRNAs from only a few pre-crRNAs. Results: We develop a minimal model of CRISPR processing, which we parameterize based on available experimental data. From the model, we show that the system can generate a large amount of crRNAs based on only a small decrease in the amount of pre-crRNAs. The relationship between the decrease of pre-crRNAs and the increase of crRNAs corresponds to strong linear amplification. Interestingly, this strong amplification crucially depends on fast non-specific degradation of pre-crRNA by an unidentified nuclease. We show that overexpression of cas genes above a certain level does not result in a further increase of crRNA, but that this saturation can be relieved if the rate of CRISPR transcription is increased. We furthermore show that a small increase of the CRISPR transcription rate can substantially decrease the extent of cas gene activation necessary to achieve a desired amount of crRNA. Conclusions: The simple mathematical model developed here is able to explain existing experimental observations on CRISPR transcript processing in Escherichia coli. The model shows that a competition between specific pre-crRNA processing and non-specific degradation determines the steady-state levels of crRNA and is responsible for strong linear amplification of crRNAs when cas genes are overexpressed. The model further shows how the disappearance of only a few pre-crRNA molecules normally present in the cell can lead to a large (two
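    The amplification argument can be reproduced with a bare-bones steady-state calculation over a simplified reading of the minimal model (all rate constants below are hypothetical, chosen only to make the effect visible):

    ```python
    def steady_state(T, k, d, d_c):
        """Steady-state pre-crRNA and crRNA levels for the minimal scheme:
        transcription at rate T -> pre-crRNA -(processing k)-> crRNA
        -(decay d_c)-> degraded, with non-specific pre-crRNA decay at rate d."""
        pre = T / (k + d)
        cr = k * pre / d_c
        return pre, cr

    # Overexpressing cas (raising k) only slightly lowers pre-crRNA once
    # k >> d, while crRNA saturates at T/d_c; raising T lifts that ceiling.
    low = steady_state(T=1.0, k=0.1, d=1.0, d_c=0.01)
    high = steady_state(T=1.0, k=10.0, d=1.0, d_c=0.01)

    # Linear amplification: the flux balance T = (k + d)*pre implies
    # delta_crRNA = (d/d_c) * delta_pre-crRNA, a slope of 100 here.
    ```

    With d much larger than d_c, a sub-unit drop in pre-crRNA yields a crRNA increase two orders of magnitude larger, which is the "strong linear amplification" the abstract describes.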

  4. High-energy, large-momentum-transfer processes: Ladder diagrams in φ3 theory. Pt. 1

    International Nuclear Information System (INIS)

    Osland, P.; Wu, T.T.; Harvard Univ., Cambridge, MA

    1987-01-01

    Relativistic quantum field theories may give us useful guidance to understanding high-energy, large-momentum-transfer processes, where the center-of-mass energy is much larger than the transverse momentum transfers, which are in turn much larger than the masses of the participating particles. With this possibility in mind, we study the ladder diagrams in φ³ theory. In this paper, some of the necessary techniques are developed and applied to the simplest cases of the fourth- and sixth-order ladder diagrams. (orig.)

  5. Loss aversion, large deviation preferences and optimal portfolio weights for some classes of return processes

    Science.gov (United States)

    Duffy, Ken; Lobunets, Olena; Suhov, Yuri

    2007-05-01

    We propose a model of a loss-averse investor who aims to maximize his expected wealth under certain constraints, namely that he avoids, with high probability, incurring a (suitably defined) unacceptable loss. The methodology employed comes from the theory of large deviations. We explore a number of fundamental properties of the model and illustrate its desirable features. We demonstrate its utility by analyzing assets that follow some commonly used financial return processes: Fractional Brownian Motion, Jump Diffusion, Variance Gamma and Truncated Lévy.
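    In the simplest i.i.d. Gaussian special case (not one of the heavier-tailed processes the paper actually treats), a large-deviation loss constraint reduces to a closed-form cap on the risky weight. A sketch with hypothetical parameters:

    ```python
    import math

    def max_weight(mu, sigma, L, n, beta):
        """Largest weight w in a single risky asset with i.i.d. Gaussian
        per-period returns N(mu, sigma^2) such that the Cramer/Chernoff
        bound P(n-period loss >= L) <= exp(-beta) holds.

        Rate function of the sample mean: I(x) = (x - w*mu)^2 / (2 w^2 sigma^2);
        the constraint n * I(-L/n) >= beta binds at equality for the max w."""
        # Solve n * (L/n + w*mu)^2 / (2 w^2 sigma^2) = beta for w > 0:
        #   L/n + w*mu = w * sigma * sqrt(2*beta/n)
        c = sigma * math.sqrt(2.0 * beta / n)
        if c <= mu:
            return float('inf')   # drift dominates; constraint never binds
        return (L / n) / (c - mu)

    # Hypothetical daily parameters: mu = 0.05%, sigma = 1%, cap the chance
    # of losing 10% over 250 days at exp(-ln 100) = 1%.
    w_cap = max_weight(0.0005, 0.01, 0.1, 250, math.log(100))   # ~0.28
    ```

    The paper's contribution is precisely that the same construction carries over to return processes where no such closed form exists and the rate function must be handled process by process.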

  6. Using LabVIEW for the design and control of digital signal processing systems. Simulation of the ultra slow extraction at COSY

    International Nuclear Information System (INIS)

    Heinrichs, G.; Rongen, H.; Jamal, R.

    1994-01-01

    For the ultraslow extraction system of the COoler SYnchrotron (COSY), a direct digital synthesis system is being developed. LabVIEW from National Instruments has been chosen as a tool for the simulation of the digital signal processing algorithms as well as the generation of test sequences. In order to generate adjustable band-limited noise centered at a carrier frequency, alternative algorithms have been studied. LabVIEW permits the interactive variation of relevant system parameters by means of a graphical language, in order to study the quality of the frequency band limitation as a function of noise parameters, digital accuracy and frequency range, and to generate test sequences by means of a real-time function generator. Advantages and limitations of LabVIEW for such applications are discussed. (orig.)

  7. Breeding and Genetics Symposium: really big data: processing and analysis of very large data sets.

    Science.gov (United States)

    Cole, J B; Newman, S; Foertter, F; Aguilar, I; Coffey, M

    2012-03-01

    15 m. Large data sets also create challenges for the delivery of genetic evaluations that must be overcome in a way that does not disrupt the transition from conventional to genomic evaluations. Processing time is important, especially as real-time systems for on-farm decisions are developed. The ultimate value of these systems is to decrease time-to-results in research, increase accuracy in genomic evaluations, and accelerate rates of genetic improvement.

  8. Research on the drawing process with a large total deformation wires of AZ31 alloy

    International Nuclear Information System (INIS)

    Bajor, T; Muskalski, Z; Suliga, M

    2010-01-01

    Magnesium and its alloys have been extensively studied in recent years, not only because of their potential applications as light-weight engineering materials, but also owing to their biodegradability. Due to their hexagonal close-packed crystallographic structure, cold plastic processing of magnesium alloys is difficult. Preliminary research carried out by the authors has indicated that applying the KOBO method, based on the effect of cyclic strain path change, to the deformation of magnesium alloys provides the possibility of obtaining a fine-grained material suitable for further cold plastic processing with large total deformation. The main purpose of this work is to present research findings concerning a detailed analysis of the mechanical properties and the changes occurring in the structure of AZ31 alloy wire during the multistage cold drawing process. The appropriate selection of drawing parameters and the application of multistep heat treatment operations enable the deformation of the AZ31 alloy in the cold drawing process with a total draft of about 90%.

  9. Individual differences influence two-digit number processing, but not their analog magnitude processing: a large-scale online study.

    Science.gov (United States)

    Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba

    2017-12-23

    Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to the individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study of 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.
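    The unit-decade compatibility effect mentioned above turns on a simple property of digit pairs: a pair is compatible when the decade and unit comparisons point to the same response. A sketch of that classification (the function name and the restriction to between-decade pairs with distinct digits are ours):

    ```python
    def compatible(a, b):
        """True if the two-digit pair (a, b) is unit-decade compatible:
        the decade comparison and the unit comparison agree.
        E.g. 42 vs 57: 4 < 5 and 2 < 7 -> compatible;
             47 vs 62: 4 < 6 but 7 > 2 -> incompatible."""
        da, ua = divmod(a, 10)
        db, ub = divmod(b, 10)
        # Defined here only for between-decade pairs with distinct units.
        return (da < db) == (ua < ub) and da != db and ua != ub

    # Incompatible pairs typically yield slower responses -- the effect
    # whose size differed by gender in the study above.
    ```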

  10. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    Energy Technology Data Exchange (ETDEWEB)

    Arbanas, G.; Dunn, M.E.; Wiarda, D., E-mail: arbanasg@ornl.gov, E-mail: dunnme@ornl.gov, E-mail: wiardada@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2011-07-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The {sup 235}U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)
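    The speed-up reported here comes from replacing a triple-nested loop with a vendor-optimized BLAS GEMM call. The effect is easy to reproduce at small scale (matrix sizes are illustrative; NumPy dispatches `@` to whatever BLAS it was built against):

    ```python
    import time
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 120))
    B = rng.standard_normal((120, 80))

    def naive_matmul(A, B):
        """Triple-nested loop, analogous to the original SAMMY implementation."""
        n, k = A.shape
        _, m = B.shape
        C = np.zeros((n, m))
        for i in range(n):
            for j in range(m):
                s = 0.0
                for p in range(k):
                    s += A[i, p] * B[p, j]
                C[i, j] = s
        return C

    t0 = time.perf_counter()
    C_naive = naive_matmul(A, B)
    t_naive = time.perf_counter() - t0

    t0 = time.perf_counter()
    C_blas = A @ B          # BLAS GEMM under the hood
    t_blas = time.perf_counter() - t0
    ```

    At the 16,000×20,000 sizes quoted in the abstract, the same substitution is what turns days of computation into about a minute on a GPU or multicore CPU.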

  11. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    International Nuclear Information System (INIS)

    Arbanas, G.; Dunn, M.E.; Wiarda, D.

    2011-01-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The ²³⁵U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)

  12. Preparation by the nano-casting process of novel porous carbons from large pore zeolite templates

    International Nuclear Information System (INIS)

    F Gaslain; J Parmentier; V Valtchev; J Patarin; C Vix Guterl

    2005-01-01

    The development of new and growing industrial applications such as gas storage (e.g. methane or hydrogen) or electric double-layer capacitors has focussed the attention of many research groups. For this kind of application, porous carbons with finely tailored microporosity (i.e. pore diameter ≤ 1 nm) appear as very promising materials due to their high surface area and their specific pore size distribution. In order to meet these requirements, attention has been paid to the feasibility of preparing microporous carbons by the nano-casting process. Since the sizes and shapes of the pores and walls respectively become the walls and pores of the resultant carbons, using templates with different framework topologies leads to various carbon replicas. Work performed with commercially available zeolites employed as templates [1-4] showed that the most promising candidate is the FAU-type zeolite, a large-pore zeolite with a three-dimensional channel system. The promising results obtained on FAU-type matrices encouraged us to study microporous carbon formation on large-pore zeolites synthesized in our laboratory, such as EMC-1 (International Zeolite Association framework type FAU), zeolite β (BEA) or EMC-2 (EMT). The carbon replicas were prepared largely following the nano-casting method proposed for zeolite Y by the Kyotani research group [4]: either by liquid impregnation of furfuryl alcohol (FA) followed by carbonization, or by chemical vapour deposition (CVD) of propylene, or by a combination of these two processes. Heat treatment of the mixed zeolite/carbon materials could also follow, in order to improve the structural ordering of the carbon. After removal of the inorganic template by an acidic treatment, the carbon materials obtained were characterised by several analytical techniques (XRD, N₂ and CO₂ adsorption, electron microscopy, etc.). The unique characteristics of these carbons are discussed in detail in this paper and compared to those

  13. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining.

    Science.gov (United States)

    Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-09-09

    Photogrammetry methods are increasingly used as a 3D technique for large-scale metrology applications in industry. Optical targets are placed on an object and images are taken around it; measurement traceability is provided by precise, off-process pre-calibrated digital cameras and scale bars. From the 2D target image coordinates, the target 3D coordinates and camera views are jointly computed. One application of photogrammetry is the measurement of raw part surfaces prior to machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. That approach incurs a high computation time, leading in practice to time-consuming and user-dependent iterative review and re-processing until an adequate set of images is taken, limiting its potential for fast, easy-to-use and precise measurement. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer-grade desktop PC, enabling quasi-real-time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach because of its potential for the highest precision when using low-cost, non-specialized digital cameras. Measurement traceability is set only by scale bars present in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special-purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image, up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g. 0.1 mm error in 1 m) with
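    The role of the scale bar described above can be made concrete with a small sketch (illustrative only; the function and variable names are mine, not the paper's). A photogrammetric reconstruction is internally consistent but dimensionless until a certified bar length fixes its scale:

```python
import numpy as np

def apply_scale_bar(points, bar_ids, bar_length):
    """Rescale a (unitless) 3D reconstruction so that the distance
    between two scale-bar targets equals the calibrated bar length.

    points     : (N, 3) array of reconstructed target coordinates
    bar_ids    : pair of row indices of the scale-bar end targets
    bar_length : certified bar length in metres
    """
    a, b = bar_ids
    measured = np.linalg.norm(points[a] - points[b])
    return points * (bar_length / measured)

# Toy scene: the reconstruction is internally consistent but 2x too small.
truth = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 0.5, 0.2]])
recon = truth * 0.5
scaled = apply_scale_bar(recon, (0, 1), bar_length=1.0)
```

    In the real system several bars and a full bundle adjustment are involved; the point here is only that traceability enters through a known length inside the measured scene.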

  14. Human gamma oscillations during slow wave sleep.

    Directory of Open Access Journals (Sweden)

    Mario Valderrama

    Full Text Available Neocortical local field potentials have shown that gamma oscillations occur spontaneously during slow-wave sleep (SWS). At the macroscopic EEG level in the human brain, no such evidence had been reported so far. In this study, using simultaneous scalp and intracranial EEG recordings in 20 epileptic subjects, we examined gamma oscillations in the cerebral cortex during SWS. We report that gamma oscillations in low (30-50 Hz) and high (60-120 Hz) frequency bands recurrently emerged in all investigated regions and that their amplitudes coincided with specific phases of the cortical slow wave. In most cases, multiple oscillatory bursts in different frequency bands from 30 to 120 Hz were correlated with the positive peaks of scalp slow waves (the "IN-phase" pattern), confirming previous animal findings. In addition, we report another gamma pattern that appears preferentially during the negative phase of the slow wave (the "ANTI-phase" pattern). This new pattern presented dominant peaks in the high gamma range and was preferentially expressed in the temporal cortex. Finally, we found that the spatial coherence between cortical sites exhibiting gamma activity was local and fell off quickly when computed between distant sites. Overall, these results provide the first human evidence that gamma oscillations can be observed in macroscopic EEG recordings during sleep. They support the concept that these high-frequency activities might be associated with phasic increases of neural activity during slow oscillations. Such patterned activity in the sleeping brain could play a role in the off-line processing of cortical networks.
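    The kind of phase-amplitude relation reported above can be reproduced on synthetic data with a few lines of NumPy (an illustrative sketch, not the study's pipeline; the brick-wall FFT filter and all names are simplifications of mine):

```python
import numpy as np

fs = 500.0                                  # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)                # 20 s of synthetic "EEG"
slow = np.sin(2 * np.pi * 1.0 * t)          # ~1 Hz slow oscillation
gamma = 0.2 * (slow > 0) * np.sin(2 * np.pi * 40 * t)  # phase-locked bursts
sig = slow + gamma + 0.05 * np.random.default_rng(0).normal(size=t.size)

def bandpass(x, lo, hi, fs):
    """Zero-phase brick-wall bandpass via the FFT (fine for a demo)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, n=x.size)

def envelope(x):
    """Amplitude envelope from the analytic signal (FFT-based Hilbert)."""
    n = x.size
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

g_amp = envelope(bandpass(sig, 30, 50, fs))
in_phase = g_amp[slow > 0].mean()    # gamma amplitude on the positive phase
anti_phase = g_amp[slow < 0].mean()  # gamma amplitude on the negative phase
```

    On this toy signal the gamma envelope is several times larger during the positive slow-wave phase, mimicking the "IN-phase" pattern.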

  15. Geoinformation web-system for processing and visualization of large archives of geo-referenced data

    Science.gov (United States)

    Gordov, E. P.; Okladnikov, I. G.; Titov, A. G.; Shulgina, T. M.

    2010-12-01

    Developed working model of information-computational system aimed at scientific research in area of climate change is presented. The system will allow processing and analysis of large archives of geophysical data obtained both from observations and modeling. Accumulated experience of developing information-computational web-systems providing computational processing and visualization of large archives of geo-referenced data was used during the implementation (Gordov et al, 2007; Okladnikov et al, 2008; Titov et al, 2009). Functional capabilities of the system comprise a set of procedures for mathematical and statistical analysis, processing and visualization of data. At present five archives of data are available for processing: 1st and 2nd editions of NCEP/NCAR Reanalysis, ECMWF ERA-40 Reanalysis, JMA/CRIEPI JRA-25 Reanalysis, and NOAA-CIRES XX Century Global Reanalysis Version I. To provide data processing functionality a computational modular kernel and class library providing data access for computational modules were developed. Currently a set of computational modules for climate change indices approved by WMO is available. Also a special module providing visualization of results and writing to Encapsulated Postscript, GeoTIFF and ESRI shape files was developed. As a technological basis for representation of cartographical information in Internet the GeoServer software conforming to OpenGIS standards is used. Integration of GIS-functionality with web-portal software to provide a basis for web-portal’s development as a part of geoinformation web-system is performed. Such geoinformation web-system is a next step in development of applied information-telecommunication systems offering to specialists from various scientific fields unique opportunities of performing reliable analysis of heterogeneous geophysical data using approved computational algorithms. It will allow a wide range of researchers to work with geophysical data without specific programming

  16. Slow feature analysis: unsupervised learning of invariances.

    Science.gov (United States)

    Wiskott, Laurenz; Sejnowski, Terrence J

    2002-04-01

    Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
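    The core of SFA is to whiten the (expanded) input signal and then find the directions whose time derivative has minimal variance; this fits in a few lines of NumPy. The sketch below implements the linear case, omitting the nonlinear expansion step described above:

```python
import numpy as np

def sfa(x):
    """Linear slow feature analysis.

    x : (T, n) signal. Returns a matrix W whose rows extract
    unit-variance features ordered from slowest- to fastest-varying
    (smallest mean squared time derivative first).
    """
    x = x - x.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(x, rowvar=False))
    S = E / np.sqrt(d)                  # whitening: cov(x @ S) = I
    z = x @ S
    dz = np.diff(z, axis=0)             # discrete time derivative
    dd, F = np.linalg.eigh(np.cov(dz, rowvar=False))  # ascending eigvals
    return (S @ F).T                    # rows = slow-feature filters

# Toy input: two mixtures of one slow and one fast sinusoidal source.
t = np.linspace(0, 2 * np.pi, 2000)
slow_src, fast_src = np.sin(t), np.sin(37 * t)
X = np.column_stack([slow_src + fast_src, slow_src - fast_src])
y = (X - X.mean(axis=0)) @ sfa(X).T
slowest = y[:, 0]                       # should recover the slow source
```

    A quadratic expansion of X before calling `sfa()` would give the nonlinear variant described in the abstract.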

  17. Subpixelic measurement of large 1D displacements: principle, processing algorithms, performances and software.

    Science.gov (United States)

    Guelpa, Valérian; Laurent, Guillaume J; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric

    2014-03-12

    This paper presents a visual measurement method able to sense 1D rigid-body displacements with very high resolution, large range and a high processing rate. Sub-pixel resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited to Fourier-like phase calculations, leading to high resolution, while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 µs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated, which is to be compared with the 168 µm measurement range.
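    The twin-grid principle can be demonstrated in one dimension with synthetic data (an illustrative sketch; the paper's supplementary sources, not this code, are the reference implementation). Each grid yields a precise but ambiguous phase; their difference gives a coarse, unambiguous position that selects the correct fringe order:

```python
import numpy as np

P1, P2 = 10.0, 11.0                 # twin grid periods (pixels)
BEAT = P1 * P2 / (P2 - P1)          # unambiguous range: 110 pixels

def pattern(x, shift):
    """1D intensity profile of both grids displaced by `shift` pixels."""
    return (np.cos(2 * np.pi * (x - shift) / P1)
            + np.cos(2 * np.pi * (x - shift) / P2))

def grid_phase(sig, x, period):
    """Fourier-like least-squares phase of one grid component."""
    w = 2 * np.pi * x / period
    return np.arctan2(sig @ np.sin(w), sig @ np.cos(w))

def measure(sig, x):
    ph1, ph2 = grid_phase(sig, x, P1), grid_phase(sig, x, P2)
    # Coarse, unambiguous position from the phase *difference* (beat).
    coarse = ((ph1 - ph2) % (2 * np.pi)) / (2 * np.pi) * BEAT
    # Fine position from grid 1 alone, ambiguous modulo P1 ...
    fine = (ph1 % (2 * np.pi)) / (2 * np.pi) * P1
    # ... so pick the fringe order consistent with the coarse estimate.
    return fine + P1 * np.round((coarse - fine) / P1)

x = np.arange(440.0)                # 4 beat periods -> exact DFT bins
true_shift = 37.25                  # much larger than either grid period
est = measure(pattern(x, true_shift), x)
```

    The displacement of 37.25 px is recovered although each individual grid phase is only defined modulo its own period.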

  18. Decision process in MCDM with large number of criteria and heterogeneous risk preferences

    Directory of Open Access Journals (Sweden)

    Jian Liu

    Full Text Available A new decision process is proposed to address two challenges in multi-criteria decision making (MCDM): a large number of criteria and decision makers with heterogeneous risk preferences. First, from the perspective of objective data, the effective criteria are extracted based on the similarity relations between criterion values, and the criteria are weighted accordingly. Second, theoretical models of risk-preference expectations are built, based on the possibility and similarity between criterion values, to resolve the case of different interval numbers sharing the same expectation. The risk preferences (risk-seeking, risk-neutral and risk-averse) are then embedded in the decision process, and the optimal decision object is selected according to the risk preferences of the decision makers based on the corresponding theoretical model. Finally, a new information aggregation algorithm is proposed, based on maximizing the fairness of the decision results for the group decision, to accommodate the coexistence of decision makers with heterogeneous risk preferences. The scientific rationality of this new method is verified through the analysis of a real case. Keywords: Heterogeneous, Risk preferences, Fairness, Decision process, Group decision
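    One simple way to embed heterogeneous risk preferences when criteria are interval-valued is to let each preference choose its own representative of the interval. This is a deliberately simplified illustration of mine, not the possibility/similarity model of the paper:

```python
import numpy as np

def score(intervals, weights, preference):
    """Aggregate interval-valued criteria under a risk preference.

    intervals : (n_alt, n_crit, 2) array of [lower, upper] values
    weights   : criterion weights summing to 1
    preference: 'averse'  -> use lower bounds,
                'seeking' -> use upper bounds,
                'neutral' -> use interval midpoints
    """
    lo, hi = intervals[..., 0], intervals[..., 1]
    rep = {'averse': lo, 'seeking': hi, 'neutral': (lo + hi) / 2}[preference]
    return rep @ weights

vals = np.array([[[0.4, 0.9], [0.5, 0.6]],     # alternative A: wide upside
                 [[0.6, 0.7], [0.5, 0.6]]])    # alternative B: safe floor
w = np.array([0.5, 0.5])
best_seeking = int(np.argmax(score(vals, w, 'seeking')))  # prefers A
best_averse = int(np.argmax(score(vals, w, 'averse')))    # prefers B
```

    Different preferences pick different winners from the same data, which is the phenomenon the paper's fairness-maximizing aggregation has to reconcile.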

  19. Event processing time prediction at the CMS experiment of the Large Hadron Collider

    International Nuclear Information System (INIS)

    Cury, Samir; Gutsche, Oliver; Kcira, Dorian

    2014-01-01

    The physics event reconstruction is one of the biggest challenges for the computing of the LHC experiments. Among the different tasks that the computing systems of the CMS experiment perform, reconstruction takes most of the available CPU resources. The reconstruction time of single collisions varies with event complexity. Measurements were made to determine this correlation quantitatively, creating the means to predict it from the data-taking conditions of the input samples. Currently, the data processing system splits tasks into groups with the same number of collisions and does not account for variations in the processing time. These variations can be large and can lead to a considerable increase in the time it takes for CMS workflows to finish. The goal of this study was to use estimates of processing time to split workflows into jobs more efficiently. By considering the CPU time needed for each job, the spread of the job-length distribution in a workflow is reduced.
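    The proposed splitting can be illustrated with the classic longest-processing-time heuristic (my choice for the sketch; the paper does not prescribe a specific algorithm): jobs are balanced by total predicted CPU time rather than by event count:

```python
import heapq

def split_by_time(costs, n_jobs):
    """Assign event groups to jobs, longest first, always to the
    currently lightest job (LPT heuristic). Returns the list of
    group indices assigned to each job."""
    heap = [(0.0, j) for j in range(n_jobs)]   # (current load, job id)
    heapq.heapify(heap)
    jobs = [[] for _ in range(n_jobs)]
    for i in sorted(range(len(costs)), key=lambda i: -costs[i]):
        load, j = heapq.heappop(heap)
        jobs[j].append(i)
        heapq.heappush(heap, (load + costs[i], j))
    return jobs

# Event groups with very uneven predicted reconstruction times (seconds).
costs = [120, 5, 5, 5, 5, 110, 10, 100]
jobs = split_by_time(costs, 3)
loads = [sum(costs[i] for i in g) for g in jobs]
```

    Splitting the same groups by count alone (e.g. chunks of equal length) would put the three expensive groups wherever they happen to fall; balancing by predicted time keeps the job-length spread small.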

  20. Study of Drell-Yan process in CMS experiment at Large Hadron Collider

    CERN Document Server

    Jindal, Monika

    The proton-proton collisions at the Large Hadron Collider (LHC) mark the beginning of a new era in high energy physics. They enable discoveries at the high-energy frontier and also allow the study of Standard Model physics with high precision. The new physics discoveries and the precision measurements can be achieved with highly efficient and accurate detectors like the Compact Muon Solenoid (CMS). In this thesis, we report the measurement of the differential production cross-section of the Drell-Yan process, $q\bar{q} \rightarrow Z/\gamma^{*} \rightarrow \mu^{+}\mu^{-}$, in proton-proton collisions at the center-of-mass energy $\sqrt{s} =$ 7 TeV using the CMS experiment at the LHC. This measurement is based on the analysis of data corresponding to an integrated luminosity of $\int\mathcal{L}\,dt$ = 36.0 $\pm$ 1.4 pb$^{-1}$. The measurement of the production cross-section of the Drell-Yan process provides a first test of the Standard Model in a new energy domain and may reveal exotic physics processes. The Drell...

  1. SIproc: an open-source biomedical data processing platform for large hyperspectral images.

    Science.gov (United States)

    Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David

    2017-04-10

    There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
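    The adaptive-streaming idea can be illustrated (without the GPU part) by processing a disk-resident cube in fixed-size chunks, so peak memory is bounded by the chunk size rather than the file size. All names here are mine, not SIproc's API:

```python
import numpy as np
import os
import tempfile

# Simulate a hyperspectral cube (pixels x bands) stored on disk.
path = os.path.join(tempfile.mkdtemp(), "cube.dat")
n_pix, n_bands = 100_000, 16
cube = np.memmap(path, dtype=np.float32, mode="w+", shape=(n_pix, n_bands))
rng = np.random.default_rng(1)
for start in range(0, n_pix, 10_000):          # fill chunk-wise as well
    cube[start:start + 10_000] = rng.normal(loc=np.arange(n_bands),
                                            size=(10_000, n_bands))
cube.flush()

def band_means(path, n_pix, n_bands, chunk=10_000):
    """Stream the cube from disk, accumulating per-band sums so that
    peak memory stays at one chunk regardless of file size."""
    mm = np.memmap(path, dtype=np.float32, mode="r", shape=(n_pix, n_bands))
    acc = np.zeros(n_bands)
    for start in range(0, n_pix, chunk):
        acc += mm[start:start + chunk].sum(axis=0)
    return acc / n_pix

means = band_means(path, n_pix, n_bands)
```

    The same out-of-core pattern extends to per-band statistics, normalization, or classification passes over cubes far larger than RAM; SIproc additionally overlaps such streaming with GPU computation.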

  2. In-database processing of a large collection of remote sensing data: applications and implementation

    Science.gov (United States)

    Kikhtenko, Vladimir; Mamash, Elena; Chubarov, Dmitri; Voronina, Polina

    2016-04-01

    Large archives of remote sensing data are now available to scientists, yet the need to work with individual satellite scenes or product files constrains studies that span a wide temporal range or spatial extent. The resources (storage capacity, computing power and network bandwidth) required for such studies are often beyond the capabilities of individual geoscientists. This problem has been tackled before in remote sensing research and has inspired several information systems, some of which, such as NASA Giovanni [1] and Google Earth Engine, have already proved their utility for science. Analysis tasks involving large volumes of numerical data are not unique to the Earth sciences. Recent advances in data science are enabled by the development of in-database processing engines that bring processing closer to storage, use declarative query languages to facilitate parallel scalability, and provide a high-level abstraction of the whole dataset. We build on the idea of bridging the gap between file archives containing remote sensing data and databases by integrating the files into a relational database as foreign data sources and performing analytical processing inside the database engine. A higher-level query language can thereby efficiently address problems of arbitrary size: from accessing the data associated with a specific pixel or grid cell to complex aggregation over spatial or temporal extents spanning a large number of individual data files. This approach was implemented using PostgreSQL for a Siberian regional archive of satellite data products holding hundreds of terabytes of measurements from multiple sensors and missions taken over a decade-long span. While preserving the original storage layout, and therefore compatibility with existing applications, the in-database processing engine provides a toolkit for provisioning remote sensing data in scientific workflows and applications. The use of SQL, a widely used high-level declarative query language, simplifies interoperability

  3. Slow Tourism: Exploring the discourses

    Directory of Open Access Journals (Sweden)

    J. Guiver

    2016-05-01

    Full Text Available ‘Slow travel’ and ‘slow tourism’ are relatively new, but contested, concepts. This paper examines the meanings ascribed to them in the academic literature and websites targeted at potential tourists. It finds concurrence on aspects of savouring time at the destination and investing time to appreciate the locality, its people, history, culture and products, but detects different emphases. The academic literature stresses the benefits to the destination and global sustainability, while the websites focus on the personal benefits and ways of becoming a ‘slow tourist’. Food and drink epitomise the immersion in and absorption of the destination and the multi-dimensional tourism experience, contrasted with the superficiality of mainstream tourism. The paper discusses whether tourists practising slow tourism without using the label are slow tourists or not.

  4. Slow Earthquake Hunters: A New Citizen Science Project to Identify and Catalog Slow Slip Events in Geodetic Data

    Science.gov (United States)

    Bartlow, N. M.

    2017-12-01

    Slow Earthquake Hunters is a new citizen science project to detect, catalog, and monitor slow slip events. Slow slip events, also called "slow earthquakes", occur when faults slip too slowly to generate significant seismic radiation. They typically take between a few days and over a year to occur, and are most often found on subduction zone plate interfaces. While slow slip events are not dangerous in and of themselves, recent evidence suggests that monitoring them is important for earthquake hazard assessment, as they have been known to trigger damaging "regular" earthquakes. Because they do not radiate seismically, slow slip events are detected with a variety of methods, most commonly continuous geodetic Global Positioning System (GPS) stations. There is now a wealth of GPS data in some regions that experience slow slip events, but a reliable automated method to detect them in GPS data remains elusive. This project aims to recruit human users to view GPS time series data, with some post-processing to highlight slow slip signals, and to flag slow slip events for further analysis by the scientific team. Slow Earthquake Hunters will begin with data from the Cascadia subduction zone, where geodetically detectable slow slip events with a duration of at least a few days recur at regular intervals. The project will then expand to other areas with slow slip events or other transient geodetic signals, including other subduction zones and areas with strike-slip faults. This project has not yet been rolled out to the public and is in a beta testing phase. This presentation will show results from an initial pilot group of student participants at the University of Missouri and solicit feedback on the future of Slow Earthquake Hunters.
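    A minimal version of the post-processing that makes slow slip visible in a GPS time series might look like the following (a toy sketch of mine, not the project's actual pipeline): remove the secular trend fitted on a quiet interval, then flag residuals that exceed the noise level:

```python
import numpy as np

def flag_transients(t, disp, train=150, k=5.0):
    """Fit the secular (interseismic) linear trend on an early, quiet
    stretch of the series, then flag epochs whose residual from that
    trend exceeds k times the training-period noise level."""
    A = np.column_stack([t, np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A[:train], disp[:train], rcond=None)
    resid = disp - A @ coef
    return np.abs(resid) > k * resid[:train].std()

# Synthetic daily GPS east component (mm): steady plate motion, noise,
# and a 20-day slow-slip reversal starting on day 180.
rng = np.random.default_rng(2)
t = np.arange(365.0)
disp = 0.05 * t + rng.normal(scale=0.5, size=t.size)
disp[180:200] -= np.linspace(0.0, 6.0, 20)   # transient ramps in ...
disp[200:] -= 6.0                            # ... then the offset persists
flags = flag_transients(t, disp)
```

    Plots of `resid` rather than raw positions are what a citizen scientist would inspect; real series add seasonal terms, antenna offsets, and common-mode noise that this sketch ignores.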

  5. Eclogitization of the Subducted Oceanic Crust and Its Implications for the Mechanism of Slow Earthquakes

    Science.gov (United States)

    Wang, Xinyang; Zhao, Dapeng; Suzuki, Haruhiko; Li, Jiabiao; Ruan, Aiguo

    2017-12-01

    The generating mechanism and process of slow earthquakes can help us better understand the seismogenic process and the petrological evolution of the subduction system, but they are still not well understood. In this work we present robust P and S wave tomography and Poisson's ratio images of the subducting Philippine Sea Plate beneath the Kii peninsula in Southwest Japan. Our results clearly reveal the spatial extent and variation of a low-velocity, high-Poisson's-ratio layer which is interpreted as the remnant of the subducted oceanic crust. The low-velocity layer disappears at depths >50 km, which is attributed to crustal eclogitization and the consumption of fluids. The crustal eclogitization and the destruction of the impermeable seal play a key role in the generation of slow earthquakes. The Moho depth of the overlying plate is an important factor affecting the depth range of slow earthquakes in warm subduction zones, owing to the transition of interface permeability from low to high there. A possible mechanism for the deep slow earthquakes is rupture of the dehydrated oceanic crust and shear slip at the transition zone, in response to the crustal eclogitization and the temporal stress/strain field. A potential cause of the slow-event gap beneath easternmost Shikoku and the Kii channel is the premature rupture of the subducted oceanic crust due to the large tensional force.

  6. Unmasking the linear behaviour of slow motor adaptation to prolonged convergence.

    Science.gov (United States)

    Erkelens, Ian M; Thompson, Benjamin; Bobier, William R

    2016-06-01

    Adaptation to changing environmental demands is central to maintaining optimal motor system function. Current theories suggest that adaptation in both the skeletal-motor and oculomotor systems involves a combination of fast (reflexive) and slow (recalibration) mechanisms. Here we used the oculomotor vergence system as a model to investigate the mechanisms underlying slow motor adaptation. Unlike reaching with the upper limbs, vergence is less susceptible to changes in cognitive strategy that can affect the behaviour of motor adaptation. We tested the hypothesis that mechanisms of slow motor adaptation reflect early neural processing by assessing the linearity of adaptive responses over a large range of stimuli. Using varied disparity stimuli in conflict with accommodation, the slow adaptation of tonic vergence was found to exhibit a linear response, whereby the rate (R² = 0.85, P < 0.0001) and amplitude (R² = 0.65, P < 0.0001) of the adaptive effects increased proportionally with stimulus amplitude. These results suggest that this slow adaptive mechanism is an early neural process, implying a fundamental physiological nature that is potentially dominated by subcortical and cerebellar substrates.

  7. Large-scale simulation of ductile fracture process of microstructured materials

    International Nuclear Information System (INIS)

    Tian Rong; Wang Chaowei

    2011-01-01

    The promise of computational science in the extreme-scale computing era is to reduce and decompose macroscopic complexities into microscopic simplicities, at the expense of high spatial and temporal resolution of computing. In materials science and engineering, the direct combination of 3D microstructure data sets and 3D large-scale simulations provides a unique opportunity to develop a comprehensive understanding of nano/microstructure-property relationships, in order to systematically design materials with specific desired properties. In this paper, we present a framework for simulating the ductile fracture process zone in microstructural detail. The experimentally reconstructed microstructural data set is directly embedded into a FE mesh model to improve the simulation fidelity of microstructure effects on fracture toughness. To the best of our knowledge, this is the first time that fracture toughness has been linked directly to multiscale microstructures in a realistic 3D numerical model. (author)

  8. Processing large sensor data sets for safeguards : the knowledge generation system.

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Maikel A.; Smartt, Heidi Anne; Matthews, Robert F.

    2012-04-01

    Modern nuclear facilities, such as reprocessing plants, present inspectors with significant challenges due in part to the sheer amount of equipment that must be safeguarded. The Sandia-developed and patented Knowledge Generation system was designed to automatically analyze large amounts of safeguards data to identify anomalous events of interest by comparing sensor readings with those expected from a process of interest and operator declarations. This paper describes a demonstration of the Knowledge Generation system using simulated accountability tank sensor data to represent part of a reprocessing plant. The demonstration indicated that Knowledge Generation has the potential to address several problems critical to the future of safeguards. It could be extended to facilitate remote inspections and trigger random inspections. Knowledge Generation could analyze data to establish trust hierarchies, to facilitate safeguards use of operator-owned sensors.
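    The comparison of sensor readings against operator declarations can be sketched in a few lines (an illustration of mine; the actual Knowledge Generation system is far richer, combining many sensors and a process model):

```python
import numpy as np

def flag_anomalies(measured, declared, tol):
    """Flag epochs where a sensor reading departs from the level
    expected under the operator's declared process by more than tol."""
    return np.abs(measured - declared) > tol

# Declared accountability-tank level: fill, hold, drain (arbitrary units).
declared = np.concatenate([np.linspace(0.0, 10.0, 50),
                           np.full(50, 10.0),
                           np.linspace(10.0, 0.0, 50)])
rng = np.random.default_rng(3)
measured = declared + rng.normal(scale=0.05, size=declared.size)
measured[70:80] -= 1.0            # undeclared 1-unit withdrawal
flags = flag_anomalies(measured, declared, tol=0.5)
```

    Only the epochs of the undeclared withdrawal are flagged; everything consistent with the declaration passes, which is the behaviour an inspector wants from automated triage.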

  9. High-energy, large-momentum-transfer processes: Ladder diagrams in φ³ theory

    International Nuclear Information System (INIS)

    Newton, C.L.J.

    1990-01-01

    Relativistic quantum field theories may help one to understand high-energy, large-momentum-transfer processes, where the center-of-mass energy is much larger than the transverse momentum transfers, which are in turn much larger than the masses of the participating particles. With this possibility in mind, the author studies ladder diagrams in φ³ theory. He shows that in the limit s ≫ |t| ≫ m², the scattering amplitude for the N-rung ladder diagram takes the form s^{-1}|t|^{-N+1} times a homogeneous polynomial of degree 2N - 2 in ln s and ln |t|. This polynomial takes different forms depending on the relation of ln |t| to ln s. More precisely, the asymptotic formula for the N-rung ladder diagram has points of non-analyticity when ln |t| = γ ln s for γ = 1/2, 1/3, …, 1/(N-2).
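    In conventional notation, the stated asymptotics can be written as (my transcription of the abstract's prose; the overall constant is suppressed):

```latex
A_N(s,t) \;\sim\; \frac{1}{s\,|t|^{\,N-1}}\; P_{2N-2}\!\left(\ln s,\ \ln|t|\right),
\qquad s \gg |t| \gg m^2 ,
```

    where $P_{2N-2}$ is a homogeneous polynomial of degree $2N-2$ whose form changes across the lines $\ln|t| = \gamma \ln s$, $\gamma = 1/2,\ 1/3,\ \dots,\ 1/(N-2)$.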

  10. Validation of the process control system of an automated large scale manufacturing plant.

    Science.gov (United States)

    Neuhaus, H; Kremers, H; Karrer, T; Traut, R H

    1998-02-01

    The validation procedure for the process control system of a plant for the large-scale production of human albumin from plasma fractions is described. A validation master plan is developed, defining the system and elements to be validated, the interfaces with other systems together with the validation limits, a general validation concept and the supporting documentation. Based on this master plan, the validation protocols are developed. For the validation, the system is subdivided into a field level, which is the equipment part, and an automation level. The automation level is further subdivided into sections according to the different software modules. Based on a risk categorization of the modules, the qualification activities are defined. The test scripts for the different qualification levels (installation, operational and performance qualification) are developed according to a previously performed risk analysis.

  11. Plasma processing of large curved surfaces for superconducting rf cavity modification

    Directory of Open Access Journals (Sweden)

    J. Upadhyay

    2014-12-01

    Full Text Available Plasma-based surface modification of niobium is a promising alternative to wet etching of superconducting radio frequency (SRF cavities. We have demonstrated surface layer removal in an asymmetric nonplanar geometry, using a simple cylindrical cavity. The etching rate is highly correlated with the shape of the inner electrode, radio-frequency (rf circuit elements, gas pressure, rf power, chlorine concentration in the Cl_{2}/Ar gas mixtures, residence time of reactive species, and temperature of the cavity. Using variable radius cylindrical electrodes, large-surface ring-shaped samples, and dc bias in the external circuit, we have measured substantial average etching rates and outlined the possibility of optimizing plasma properties with respect to maximum surface processing effect.

  12. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    Science.gov (United States)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is made on what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture, on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology of forecasting large earthquakes, the entire process of one cycle for a typical large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II) and the phase of rupture nucleation at the critical stage where an adequate amount of the elastic strain energy has been stored (phase III). Phase II plays a critical role in physical modeling of intermediate-term forecasting, and phase III in physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step for establishing the methodology for forecasting large earthquakes.

  13. Performance evaluation of the DCMD desalination process under bench scale and large scale module operating conditions

    KAUST Repository

    Francis, Lijo; Ghaffour, NorEddine; Alsaadi, Ahmad Salem; Nunes, Suzana Pereira; Amy, Gary L.

    2014-01-01

    The flux performance of different hydrophobic microporous flat-sheet commercial membranes made of polytetrafluoroethylene (PTFE) and polypropylene (PP) was tested for Red Sea water desalination using the direct contact membrane distillation (DCMD) process, under bench-scale (high δT) and large-scale module (low δT) operating conditions. Membranes were characterized for their surface morphology, water contact angle, thickness, porosity, pore size and pore size distribution. The DCMD process performance was optimized using a locally designed and fabricated module, aiming to maximize the flux at different levels of the operating parameters, mainly the feed water and coolant inlet temperatures at different temperature differences across the membrane (δT). A water vapor flux of 88.8 kg/m²h was obtained using a PTFE membrane at high δT (60°C). In addition, the flux performance was compared with the first generation of a new locally synthesized and fabricated membrane made of a different class of polymer under the same conditions. A total salt rejection of 99.99% and a boron rejection of 99.41% were achieved under extreme operating conditions. On the other hand, a detailed water characterization revealed that low molecular weight non-ionic molecules (at the ppb level) were transported with the water vapor molecules through the membrane structure. The membrane which provided the highest flux was then tested under large-scale module operating conditions. The average flux of the latter study (low δT) was found to be eight times lower than that of the bench-scale (high δT) operating conditions.

  14. The large-scale process of microbial carbonate precipitation for nickel remediation from an industrial soil.

    Science.gov (United States)

    Zhu, Xuejiao; Li, Weila; Zhan, Lu; Huang, Minsheng; Zhang, Qiuzhuo; Achal, Varenyam

    2016-12-01

    Microbial carbonate precipitation is known as an efficient process for the remediation of heavy metals from contaminated soils. In the present study, a urease-positive bacterial isolate, identified as Bacillus cereus NS4 through 16S rDNA sequencing, was utilized on a large scale to remove nickel from industrial soil contaminated by the battery industry. The soil was highly contaminated, with an initial total nickel concentration of approximately 900 mg kg⁻¹. The soluble-exchangeable fraction was reduced to 38 mg kg⁻¹ after treatment. The primary objective of metal stabilization was achieved by reducing the bioavailability through immobilizing the nickel in the urease-driven carbonate precipitation. The nickel removal in the soils contributed to the transformation of nickel from mobile species into stable biominerals, identified by XRD as calcite, vaterite, aragonite and nickelous carbonate. It was shown that during the precipitation of calcite, Ni²⁺, with an ionic radius close to that of Ca²⁺, was incorporated into the CaCO₃ crystal. The biominerals were also characterized using SEM-EDS to observe the crystal shape, and Raman and FTIR spectroscopy to identify the bonding responsible for Ni immobilization during bioremediation. The electronic structure and chemical-state information of the detected elements during the MICP bioremediation process was studied by XPS. This is the first study in which microbial carbonate precipitation was used for the large-scale remediation of metal-contaminated industrial soil.

  16. ParaText : scalable solutions for processing and searching very large document collections : final LDRD report.

    Energy Technology Data Exchange (ETDEWEB)

    Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-09-01

    This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.

  17. Engaging the public with low-carbon energy technologies: Results from a Scottish large group process

    International Nuclear Information System (INIS)

    Howell, Rhys; Shackley, Simon; Mabon, Leslie; Ashworth, Peta; Jeanneret, Talia

    2014-01-01

    This paper presents the results of a large group process conducted in Edinburgh, Scotland investigating public perceptions of climate change and low-carbon energy technologies, specifically carbon dioxide capture and storage (CCS). The quantitative and qualitative results reported show that the participants were broadly supportive of efforts to reduce carbon dioxide emissions, and that there is an expressed preference for renewable energy technologies to be employed to achieve this. CCS was considered in detail during the research due to its climate mitigation potential; results show that the workshop participants were cautious about its deployment. The paper discusses a number of interrelated factors which appear to influence perceptions of CCS; factors such as the perceived costs and benefits of the technology, and people's personal values and trust in others all impacted upon participants’ attitudes towards the technology. The paper thus argues for the need to provide the public with broad-based, balanced and trustworthy information when discussing CCS, and to take seriously the full range of factors that influence public perceptions of low-carbon technologies. - Highlights: • We report the results of a Scottish large group workshop on energy technologies. • There is strong public support for renewable energy and mixed opinions towards CCS. • The workshop was successful in initiating discussion around climate change and energy technologies. • Issues of trust, uncertainty, costs, benefits, values and emotions all inform public perceptions. • Need to take seriously the full range of factors that inform perceptions

  18. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Full Text Available Abstract Background Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massive parallel computing power of GPUs.

  19. Large wood mobility processes in low-order Chilean river channels

    Science.gov (United States)

    Iroumé, Andrés; Mao, Luca; Andreoli, Andrea; Ulloa, Héctor; Ardiles, María Paz

    2015-01-01

    Large wood (LW) mobility was studied over several time periods in channel segments of four low-order mountain streams in southern Chile. All wood pieces found within the bankfull channels, and on the streambanks extending into the channel, with diameters greater than 10 cm and lengths greater than 1 m, were measured and their positions referenced. Thirty-six percent of the measured wood pieces were tagged to investigate log mobility. All segments were first surveyed in summer and then after consecutive rainy winter periods. Annual LW mobility ranged between 0 and 28%. Eighty-four percent of the moved LW had diameters ≤ 40 cm and 92% had lengths ≤ 7 m. Large wood mobility was higher in periods when the maximum water level (Hmax) exceeded the channel bankfull depth (HBk) than in periods with flows less than HBk, but the difference was not statistically significant. The dimensions of moved LW showed no significant differences between periods with flows exceeding and with flows below bankfull stage. Statistically significant relationships were found between annual LW mobility (%) and unit stream power (for Hmax) and Hmax/HBk. The mean diameter of transported wood pieces per period was significantly correlated with unit stream power for H15% and H50% (the levels above which the flow remains for 15% and 50% of the time, respectively). These results contribute to an understanding of the complexity of LW mobilization processes in mountain streams and can be used to assess and prevent potential damage caused by LW mobilization during floods.
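
    Unit stream power, the predictor used in the correlations above, is conventionally defined as ω = ρ·g·Q·S/w (W/m²), with discharge Q, slope S and channel width w. A minimal sketch; the discharge, slope and width values below are illustrative and not taken from the study:

```python
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def unit_stream_power(discharge_m3s, slope, width_m):
    """Cross-section-averaged unit stream power, omega = rho*g*Q*S/w (W/m^2)."""
    return RHO_WATER * G * discharge_m3s * slope / width_m

# Illustrative values for a steep low-order mountain channel
omega = unit_stream_power(discharge_m3s=2.5, slope=0.05, width_m=6.0)
print(round(omega, 1))  # ~204.4 W/m^2
```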

  20. Large Eddy Simulation of Transient Flow, Solidification, and Particle Transport Processes in Continuous-Casting Mold

    Science.gov (United States)

    Liu, Zhongqiu; Li, Linmin; Li, Baokuan; Jiang, Maofa

    2014-07-01

    The current study developed a coupled computational model to simulate the transient fluid flow, solidification, and particle transport processes in a slab continuous-casting mold. Transient flow of molten steel in the mold is calculated using large eddy simulation. An enthalpy-porosity approach is used for the analysis of solidification processes. The transport of bubbles and non-metallic inclusions inside the liquid pool is calculated using a Lagrangian approach based on the transient flow field. A criterion for particle entrapment in the solidified shell is developed using the user-defined functions of the FLUENT software (ANSYS, Inc., Canonsburg, PA). The predicted results of this model are compared with measurements from ultrasonic testing of rolled steel plates and from water model experiments. The transient asymmetrical flow pattern inside the liquid pool shows quite satisfactory agreement with the corresponding measurements. The predicted complex instantaneous velocity field is composed of various small recirculation zones and multiple vortices. The transport of particles inside the liquid pool and their entrapment in the solidified shell are not symmetric. The Magnus force can reduce the entrapment ratio of particles in the solidified shell, especially for smaller particles, but the effect is not pronounced. The Marangoni force can play an important role in controlling the motion of particles, markedly increasing the entrapment ratio of particles in the solidified shell.

  1. Distributed processing and network of data acquisition and diagnostics control for Large Helical Device (LHD)

    International Nuclear Information System (INIS)

    Nakanishi, H.; Kojima, M.; Hidekuma, S.

    1997-11-01

    The LHD (Large Helical Device) data processing system has been designed to deal with the huge amount of diagnostics data, 600-900 MB per 10-second short-pulse experiment, in preparation for the first plasma experiment in March 1998. The recent increase in data volume made it necessary to adopt a fully distributed system structure which uses multiple data transfer paths in parallel and separates all of the computer functions into clients and servers. The fundamental element installed for every diagnostic device consists of two kinds of server computers: the data acquisition PC/Windows NT and the real-time diagnostics control VME/VxWorks. To cope with the diversified kinds of both device control channels and diagnostics data, object-oriented methods are utilized throughout the development of this system. This not only reduces the development burden, but also widens software portability and flexibility. 100 Mbps FDDI-based fast networks will re-integrate the distributed server computers so that they behave as one virtual macro-machine for users. The network methods applied to the LHD data processing system are based entirely on TCP/IP internet technology, which gives remote collaborators the same access for operation as local participants. (author)

  2. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adhinarayanan, Vignesh [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Turton, Terece [Univ. of Texas, Austin, TX (United States); Feng, Wu [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Rogers, David Honegger [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance the energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: a simple example avoids unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results produced through sampling.
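
    As a rough illustration of the regular-sampling baseline described above, one can bin unstructured points onto a regular grid overlay and keep one representative per occupied cell. This is a minimal sketch under our own assumptions (function name, cell-spacing parameter and sample points are illustrative, not from the report):

```python
def regular_sample(points, cell):
    """Bin unstructured 2D points into a regular grid of spacing `cell`
    and keep one representative point per occupied cell."""
    buckets = {}
    for (x, y) in points:
        key = (int(x // cell), int(y // cell))
        buckets.setdefault(key, (x, y))  # first point in the cell wins
    return list(buckets.values())

pts = [(0.1, 0.2), (0.3, 0.1), (1.5, 0.4), (2.2, 2.1)]
print(sorted(regular_sample(pts, 1.0)))
# (0.3, 0.1) is dropped: it shares cell (0, 0) with (0.1, 0.2)
```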

  3. Large-scale membrane transfer process: its application to single-crystal-silicon continuous membrane deformable mirror

    International Nuclear Information System (INIS)

    Wu, Tong; Sasaki, Takashi; Hane, Kazuhiro; Akiyama, Masayuki

    2013-01-01

    This paper describes a large-scale membrane transfer process developed for the construction of large-scale membrane devices via the transfer of continuous single-crystal-silicon membranes from one substrate to another. This technique is applied for fabricating a large stroke deformable mirror. A bimorph spring array is used to generate a large air gap between the mirror membrane and the electrode. A 1.9 mm × 1.9 mm × 2 µm single-crystal-silicon membrane is successfully transferred to the electrode substrate by Au–Si eutectic bonding and the subsequent all-dry release process. This process provides an effective approach for transferring a free-standing large continuous single-crystal-silicon to a flexible suspension spring array with a large air gap. (paper)

  4. Process Improvement to Enhance Quality in a Large Volume Labor and Birth Unit.

    Science.gov (United States)

    Bell, Ashley M; Bohannon, Jessica; Porthouse, Lisa; Thompson, Heather; Vago, Tony

    Using the Lean process, frontline clinicians identified areas that needed improvement, developed and implemented successful strategies that addressed each gap, and enhanced the quality and safety of care for a large-volume perinatal service.

  5. Torque measurements reveal large process differences between materials during high solid enzymatic hydrolysis of pretreated lignocellulose

    Directory of Open Access Journals (Sweden)

    Palmqvist Benny

    2012-08-01

    Full Text Available Abstract Background A common trend in research on 2nd generation bioethanol is the focus on intensifying the process and increasing the concentration of water insoluble solids (WIS) throughout the process. However, increasing the WIS content is not without problems. For example, the viscosity of pretreated lignocellulosic materials is known to increase drastically with increasing WIS content. Further, at elevated viscosities, problems arise related to poor mixing of the material, such as poor distribution of the enzymes and/or difficulties with temperature and pH control, which result in possible yield reduction. Achieving good mixing is unfortunately not without cost, since the power required to operate the impeller at high viscosities can be substantial. This highly important scale-up problem can easily be overlooked. Results In this work, we monitor the impeller torque (and hence power input) in a stirred tank reactor throughout high solid enzymatic hydrolysis (Arundo donax and spruce). Two different process modes were evaluated, where either the impeller speed or the impeller power input was kept constant. Results from hydrolysis experiments at a fixed impeller speed of 10 rpm show that a very rapid decrease in impeller torque occurs during hydrolysis of pretreated arundo (i.e. it loses its fiber network strength), whereas the fiber strength is retained for a longer time in the spruce material. This translates into a relatively low, rather WIS-independent, energy input for arundo, whereas the stirring power demand for spruce is substantially larger and quite WIS-dependent. By operating the impeller at a constant power input (instead of a constant impeller speed), it is shown that power input greatly affects the glucose yield of pretreated spruce, whereas the hydrolysis of arundo seems unaffected. Conclusions The results clearly highlight the large differences between the arundo and spruce materials, both in terms of

  6. Development of Integrated Die Casting Process for Large Thin-Wall Magnesium Applications

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Jon T. [General Motors LLC, Warren, MI (United States); Wang, Gerry [Meridian Lightweight Technologies, Plymouth MI (United States); Luo, Alan [General Motors LLC, Warren, MI (United States)

    2017-11-29

    The purpose of this project was to develop a process and product which would utilize magnesium die casting and result in energy savings when compared to the baseline steel product. The specific product chosen was a side door inner panel for a mid-size car. The scope of the project included: re-design of major structural parts of the door, design and build of the tooling required to make the parts, making of parts, assembly of doors, and testing (both physical and simulation) of doors. Additional work was done on alloy development, vacuum die casting, and overcasting, all in order to improve the performance of the doors and reduce cost. The project achieved the following objectives: 1. Demonstrated ability to design a large thin-wall magnesium die casting. 2. Demonstrated ability to manufacture a large thin-wall magnesium die casting in AM60 alloy. 3. Tested via simulations and/or physical tests the mechanical behavior and corrosion behavior of magnesium die castings and/or lightweight experimental automotive side doors which incorporate a large, thin-wall, powder coated, magnesium die casting. Under some load cases, the results revealed cracking of the casting, which can be addressed with re-design and better material models for CAE analysis. No corrosion of the magnesium panel was observed. 4. Using life cycle analysis models, compared the energy consumption and global warming potential of the lightweight door with those of a conventional steel door, both during manufacture and in service. Compared to a steel door, the lightweight door requires more energy to manufacture but less energy during operation (i.e., fuel consumption when driving vehicle). Similarly, compared to a steel door, the lightweight door has higher global warming potential (GWP) during manufacture, but lower GWP during operation. 5. Compared the conventional magnesium die casting process with the “super-vacuum” die casting process. Results achieved with cast tensile bars suggest some

  7. Recycling process of Mn-Al doped large grain UO2 pellets

    International Nuclear Information System (INIS)

    Nam, Ik Hui; Yang, Jae Ho; Rhee, Young Woo; Kim, Dong Joo; Kim, Jong Hun; Kim, Keon Sik; Song, Kun Woo

    2010-01-01

    To reduce fuel cycle costs and the total mass of spent light water reactor (LWR) fuels, it is necessary to extend the fuel discharge burn-up. Research on fuel pellets focuses on increasing the pellet density and grain size, to increase the uranium content and the high-burnup safety margins for LWRs. KAERI is developing a large-grain UO₂ pellet for the same purpose. Doping with small amounts of additives is used to increase the grain size and the high-temperature deformation of UO₂ pellets. Various promising additive candidates have been developed over the last 3 years, and the MnO-Al₂O₃ doped UO₂ fuel pellet is one of the most promising. In a commercial UO₂ fuel pellet manufacturing process, defective UO₂ pellets or scraps are produced, and these should be reused. A common recycling method for defective UO₂ pellets or scraps is to oxidize them in air at about 450 °C to make U₃O₈ powder, which is then added to UO₂ powder. In the oxidation of a UO₂ pellet, the oxygen propagates along the grain boundaries. The U₃O₈ formation on the grain boundaries causes a spallation of the grains, so the size and shape of the U₃O₈ powder depend strongly on the initial grain size of the UO₂ pellets. In the case of Mn-Al doped large-grain pellets, the average grain size is about 45 µm, about 5 times larger than that of a typical undoped UO₂ pellet, which has a grain size of about 8-10 µm. This big difference in grain size is expected to cause a big difference in recycled U₃O₈ powder morphology. Addition of U₃O₈ to UO₂ leads to a drop in the pellet density, impeding grain growth and forming graph-like pore segregates. Such degradation of the UO₂ pellet properties by adding the recycled U₃O₈ powder depends on the U₃O₈ powder properties, so it is necessary to understand the properties of the recycled U₃O₈ and their effect on the pellet. This paper shows a preliminary result for the recycled U₃O₈ powder which was obtained by

  8. Slow viscous flow

    CERN Document Server

    Langlois, William E

    2014-01-01

    Leonardo wrote, 'Mechanics is the paradise of the mathematical sciences, because by means of it one comes to the fruits of mathematics' ; replace 'Mechanics' by 'Fluid mechanics' and here we are." -    from the Preface to the Second Edition Although the exponential growth of computer power has advanced the importance of simulations and visualization tools for elaborating new models, designs and technologies, the discipline of fluid mechanics is still large, and turbulence in flows remains a challenging problem in classical physics. Like its predecessor, the revised and expanded Second Edition of this book addresses the basic principles of fluid mechanics and solves fluid flow problems where viscous effects are the dominant physical phenomena. Much progress has occurred in the nearly half a century that has passed since the edition of 1964. As predicted, aspects of hydrodynamics once considered offbeat have risen to importance. For example, the authors have worked on problems where variations in viscosity a...

  9. Slow light in moving media

    Science.gov (United States)

    Leonhardt, U.; Piwnicki, P.

    2001-06-01

    We review the theory of light propagation in moving media with extremely low group velocity. We intend to clarify the most elementary features of monochromatic slow light in a moving medium and, whenever possible, to give an instructive simplified picture.

  10. Birth control - slow release methods

    Science.gov (United States)

    Contraception - slow-release hormonal methods; Progestin implants; Progestin injections; Skin patch; Vaginal ring ... might want to consider a different birth control method. SKIN PATCH The skin patch is placed on ...

  11. High-Throughput Tabular Data Processor - Platform independent graphical tool for processing large data sets.

    Science.gov (United States)

    Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz

    2018-01-01

    High-throughput technologies generate a considerable amount of data which often requires bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform-independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data that is produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expressions, or command-line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease-predisposing variants in next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merging, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Its flexibility in terms of input file handling provides long-term potential functionality in high-throughput analysis pipelines, as the program is not limited by the currently existing applications and data formats. HTDP is available as Open Source software (https://github.com/pmadanecki/htdp).
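
    The kind of filter step HTDP exposes through its GUI can be sketched in a few lines with standard tooling. The sketch below uses Python's csv module on a BED-like tab-delimited table; the function name and file content are illustrative, not HTDP's API:

```python
import csv
import io

def filter_rows(delimited_text, column, keep, delimiter="\t"):
    """Keep rows whose value in `column` is in the set `keep` --
    the basic filter step applied to character-delimited column data."""
    reader = csv.DictReader(io.StringIO(delimited_text), delimiter=delimiter)
    return [row for row in reader if row[column] in keep]

bed_like = "chrom\tstart\tend\nchr1\t100\t200\nchr2\t150\t250\nchr1\t300\t400\n"
hits = filter_rows(bed_like, "chrom", {"chr1"})
print(len(hits))  # 2
```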

  12. INVESTIGATION OF LAUNCHING PROCESS FOR STEEL REINFORCED CONCRETE FRAMEWORK OF LARGE BRIDGES

    Directory of Open Access Journals (Sweden)

    V. A. Grechukhin

    2017-01-01

    Full Text Available Bridges are among the most complicated, labour-consuming and expensive components of the roadway network of the Republic of Belarus, so their construction and operation must be carried out at a high technological level. One modern industrial method is the cyclic longitudinal launching of large frameworks, which makes it possible to avoid expensive auxiliary facilities and to reduce the construction period. There are several variants of longitudinal launching, depending on crossing conditions and span length: without a launching girder, with a launching girder, with a top strut-framed beam in the form of a cable-stayed system, or with a strut-framed beam located under the span. With the cyclic longitudinal launching method, the manufacturing of the span is concentrated on the shore. The main task of the investigation is to select an economical, quick and technologically simple type of cyclic longitudinal launching with minimum resource and labour inputs. Launching over temporary supports specially constructed within the span was compared with launching over the permanent supports with the help of a launching girder. Conclusions drawn from calculations of the structural elements of the span, based on the bearing capacity of element sections during launching, during grouting of the reinforced concrete slab, and at the operation stage, have shown that span assembly with temporary supports does not reduce steel consumption in comparison with the variant excluding them. The results of the investigation have been applied in cooperation with the state enterprise "Belgiprodor" in designing a bridge across the river Sozh.

  13. Slowed ageing, welfare, and population problems.

    Science.gov (United States)

    Wareham, Christopher

    2015-10-01

    Biological studies have demonstrated that it is possible to slow the ageing process and extend lifespan in a wide variety of organisms, perhaps including humans. Making use of the findings of these studies, this article examines two problems concerning the effect of life extension on population size and welfare. The first--the problem of overpopulation--is that as a result of life extension too many people will co-exist at the same time, resulting in decreases in average welfare. The second--the problem of underpopulation--is that life extension will result in too few people existing across time, resulting in decreases in total welfare. I argue that overpopulation is highly unlikely to result from technologies that slow ageing. Moreover, I claim that the problem of underpopulation relies on claims about life extension that are false in the case of life extension by slowed ageing. The upshot of these arguments is that the population problems discussed provide scant reason to oppose life extension by slowed ageing.

  14. Slow rupture of frictional interfaces

    OpenAIRE

    Sinai, Yohai Bar; Brener, Efim A.; Bouchbinder, Eran

    2011-01-01

    The failure of frictional interfaces and the spatiotemporal structures that accompany it are central to a wide range of geophysical, physical and engineering systems. Recent geophysical and laboratory observations indicated that interfacial failure can be mediated by slow slip rupture phenomena which are distinct from ordinary, earthquake-like, fast rupture. These discoveries have influenced the way we think about frictional motion, yet the nature and properties of slow rupture are not comple...

  15. ObspyDMT: a Python toolbox for retrieving and processing large seismological data sets

    Directory of Open Access Journals (Sweden)

    K. Hosseini

    2017-10-01

    Full Text Available We present obspyDMT, a free, open-source software toolbox for the query, retrieval, processing and management of seismological data sets, including very large, heterogeneous and/or dynamically growing ones. ObspyDMT simplifies and speeds up user interaction with data centers, in more versatile ways than existing tools. The user is shielded from the complexities of interacting with different data centers and data exchange protocols and is provided with powerful diagnostic and plotting tools to check the retrieved data and metadata. While primarily a productivity tool for research seismologists and observatories, easy-to-use syntax and plotting functionality also make obspyDMT an effective teaching aid. Written in the Python programming language, it can be used as a stand-alone command-line tool (requiring no knowledge of Python) or can be integrated as a module with other Python codes. It facilitates data archiving, preprocessing, instrument correction and quality control - routine but nontrivial tasks that can consume much user time. We describe obspyDMT's functionality, design and technical implementation, accompanied by an overview of its use cases. As an example of a typical problem encountered in seismogram preprocessing, we show how to check for inconsistencies in response files of two example stations. We also demonstrate the fully automated request, remote computation and retrieval of synthetic seismograms from the Synthetics Engine (Syngine) web service of the Data Management Center (DMC) at the Incorporated Research Institutions for Seismology (IRIS).
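
    One of the routine preprocessing tasks mentioned, removing a linear trend from a trace, can be sketched without any seismological library. This pure-Python least-squares detrend is illustrative only and is not obspyDMT's API:

```python
def detrend(samples):
    """Remove the best-fit (least-squares) linear trend from a sequence --
    a routine preprocessing step applied to seismograms."""
    n = len(samples)
    x_mean = (n - 1) / 2
    y_mean = sum(samples) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(samples))
    den = sum((x - x_mean) ** 2 for x in range(n))
    slope = num / den
    return [y - (y_mean + slope * (x - x_mean)) for x, y in enumerate(samples)]

trace = [0.5 * i + 2.0 for i in range(100)]  # a purely linear "drift"
flat = detrend(trace)
print(max(abs(v) for v in flat) < 1e-6)  # residual is numerically zero
```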

  16. PART 2: LARGE PARTICLE MODELLING Simulation of particle filtration processes in deformable media

    Directory of Open Access Journals (Sweden)

    Gernot Boiger

    2008-06-01

    Full Text Available In filtration processes it is necessary to consider both the interaction of the fluid with the solid parts and the effect of particles carried in the fluid and accumulated on the solid. While part 1 of this paper deals with the modelling of fluid-structure interaction effects, the accumulation of dirt particles is addressed in this paper. A closer look is taken at the implementation of a spherical, Lagrangian particle model suitable for small and large particles. As dirt accumulates in the fluid stream, it interacts with the surrounding filter fibre structure and over time causes modifications of the filter characteristics. The calculation of particle force interaction effects is necessary for an adequate simulation of this situation. A detailed discrete-phase Lagrange model was developed to take into account the two-way coupling of the fluid and accumulated particles. The simulation of large particles and the fluid-structure interaction is realised in a single finite volume flow solver on the basis of the open-source software OpenFOAM.
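
    The core of any Lagrangian particle model of this kind is integrating each particle's equation of motion under a drag force. A minimal one-dimensional sketch with Stokes-type drag and explicit Euler time stepping; the response time, velocities and step size are illustrative and not taken from the paper's solver:

```python
def advance_particle(v_p, v_f, dt, tau):
    """One explicit-Euler step for a drag-dominated (Stokes) particle:
    dv_p/dt = (v_f - v_p) / tau, with particle response time tau."""
    return v_p + dt * (v_f - v_p) / tau

# A particle released at rest in a 1 m/s stream relaxes toward the
# fluid velocity over a few response times (values illustrative).
v, v_fluid, tau, dt = 0.0, 1.0, 0.01, 0.001
for _ in range(100):  # integrate over 0.1 s = 10 * tau
    v = advance_particle(v, v_fluid, dt, tau)
print(v > 0.99)  # particle velocity nearly matches the flow
```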

  17. Optical methods to study the gas exchange processes in large diesel engines

    Energy Technology Data Exchange (ETDEWEB)

    Gros, S.; Hattar, C. [Wartsila Diesel International Oy, Vaasa (Finland); Hernberg, R.; Vattulainen, J. [Tampere Univ. of Technology, Tampere (Finland). Plasma Technology Lab.

    1996-12-01

    To study the gas exchange processes under realistic conditions in a single cylinder of a large production-line-type diesel engine, a fast optical absorption spectroscopic method was developed. With this method, line-of-sight UV absorption of SO₂ contained in the exhaust gas was measured as a function of time in the exhaust port area of a continuously fired medium-speed diesel engine of type Waertsilae 6L20. SO₂, formed during combustion from the sulphur contained in the fuel, was used as a tracer to study the gas exchange as a function of time in the exhaust channel. For this 4-stroke diesel engine, by assuming a known concentration of SO₂ in the exhaust gas after exhaust valve opening and before the inlet and exhaust valve overlap period, the measured optical absorption was used to determine the gas density and, further, the instantaneous exhaust gas temperature during the exhaust cycle. (author)
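
    Line-of-sight absorption measurements of this kind reduce to the Beer-Lambert law, I = I₀·exp(-σ·n·L); inverting it gives the absorber number density, from which the gas density and temperature follow once the SO₂ concentration is assumed known. A minimal sketch of the inversion (all input numbers illustrative, not from the engine measurements):

```python
import math

def number_density(i0, i, sigma_m2, path_m):
    """Beer-Lambert: I = I0 * exp(-sigma * n * L)
    =>  n = ln(I0 / I) / (sigma * L), in m^-3."""
    return math.log(i0 / i) / (sigma_m2 * path_m)

# Illustrative values: 40% attenuation over a 0.2 m path with an
# assumed absorption cross section of 1e-23 m^2.
n_so2 = number_density(i0=1.0, i=0.6, sigma_m2=1.0e-23, path_m=0.2)
```

    With an assumed SO₂ mole fraction and the cylinder pressure, the total number density and the ideal gas law then yield the instantaneous gas temperature, as described in the abstract.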

  18. A High Density Low Cost Digital Signal Processing Module for Large Scale Radiation Detectors

    International Nuclear Information System (INIS)

    Tan, Hui; Hennig, Wolfgang; Walby, Mark D.; Breus, Dimitry; Harris, Jackson T.; Grudberg, Peter M.; Warburton, William K.

    2013-06-01

A 32-channel digital spectrometer, PIXIE-32, is being developed for nuclear physics and other radiation detection applications requiring digital signal processing with a large number of channels at relatively low cost. A single PIXIE-32 provides spectrometry and waveform acquisition for 32 input signals per module, and multiple modules can be combined into larger systems. It is based on the PCI Express standard, which allows data transfer rates to the host computer of up to 800 MB/s. Each of the 32 channels in a PIXIE-32 module accepts signals directly from a detector preamplifier or photomultiplier. Digitally controlled offsets can be individually adjusted for each channel. Signals are digitized in 12-bit, 50 MHz multi-channel ADCs. Triggering, pile-up inspection and filtering of the data stream are performed in real time, and pulse heights and other event data are calculated on an event-by-event basis. The hardware architecture, internal and external triggering features, and the spectrometry and waveform acquisition capability of the PIXIE-32, as well as its capability to distribute clocks and triggers among multiple modules, are presented. (authors)
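Digitizers of this kind typically derive pulse heights from the data stream with a moving-window trapezoidal filter. The sketch below shows the idea in software; it is a generic textbook filter, not the PIXIE-32 firmware, and the rise/gap lengths and test pulse are hypothetical.

```python
def trapezoidal_filter(samples, rise, gap):
    """Moving-window trapezoidal filter:

    T[n] = sum of the last `rise` samples
         - sum of the `rise` samples ending `rise + gap` earlier.

    For a step input, the flat-top amplitude is rise * step_height,
    i.e. proportional to the pulse height.
    """
    out = []
    window = 2 * rise + gap
    for n in range(len(samples)):
        if n < window - 1:
            out.append(0.0)  # filter not yet fully primed
            continue
        lead = sum(samples[n - rise + 1 : n + 1])
        lag = sum(samples[n - window + 1 : n - rise - gap + 1])
        out.append(lead - lag)
    return out

# An ideal step of height 10 (preamp decay ignored for simplicity):
pulse = [0.0] * 20 + [10.0] * 40
filtered = trapezoidal_filter(pulse, rise=8, gap=4)
peak = max(filtered)  # flat-top value: rise * 10 = 80
```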

  19. ObspyDMT: a Python toolbox for retrieving and processing large seismological data sets

    Science.gov (United States)

    Hosseini, Kasra; Sigloch, Karin

    2017-10-01

    We present obspyDMT, a free, open-source software toolbox for the query, retrieval, processing and management of seismological data sets, including very large, heterogeneous and/or dynamically growing ones. ObspyDMT simplifies and speeds up user interaction with data centers, in more versatile ways than existing tools. The user is shielded from the complexities of interacting with different data centers and data exchange protocols and is provided with powerful diagnostic and plotting tools to check the retrieved data and metadata. While primarily a productivity tool for research seismologists and observatories, easy-to-use syntax and plotting functionality also make obspyDMT an effective teaching aid. Written in the Python programming language, it can be used as a stand-alone command-line tool (requiring no knowledge of Python) or can be integrated as a module with other Python codes. It facilitates data archiving, preprocessing, instrument correction and quality control - routine but nontrivial tasks that can consume much user time. We describe obspyDMT's functionality, design and technical implementation, accompanied by an overview of its use cases. As an example of a typical problem encountered in seismogram preprocessing, we show how to check for inconsistencies in response files of two example stations. We also demonstrate the fully automated request, remote computation and retrieval of synthetic seismograms from the Synthetics Engine (Syngine) web service of the Data Management Center (DMC) at the Incorporated Research Institutions for Seismology (IRIS).

  20. Cogeneration in large processing power stations; Cogeneracion en grandes centrales de proceso

    Energy Technology Data Exchange (ETDEWEB)

    Munoz, Jose Manuel [Observatorio Ciudadano de la Energia A. C., (Mexico)

    2004-06-15

This communication discusses cogeneration in large processing plants, with or without electricity surpluses, the characteristics of combined-cycle power plants, and a comparative analysis presented in a graph entitled 'Sale price of electricity in combined cycle and cogeneration power plants'. Industrial plants with steam requirements for their processes, such as refineries, petrochemical plants, breweries, and paper and cellulose mills, have the technical and economic conditions to cogenerate, that is, to produce steam and electricity simultaneously. In fact, many such facilities in any country already have cogeneration equipment that allows them to obtain their electricity at very low cost, taking advantage of existing steam generators that are in any case indispensable to satisfy their demand. In Mexico, given the existing legal framework, the public electricity service as well as the oil industry are activities reserved to the State. For these reasons, the subject should be part of the planning agenda of this power sector. The opportunities to which we refer are valid for small industries, but from the point of view of the national interest they are more important for large facilities, and in that range the most numerous are indeed in PEMEX. Cogeneration in refineries and petrochemical facilities would yield large energy surpluses and capacity of high value, precisely for the public electricity service, that is, for the Comision Federal de Electricidad (CFE).

  1. Pediatric Intubation by Paramedics in a Large Emergency Medical Services System: Process, Challenges, and Outcomes.

    Science.gov (United States)

    Prekker, Matthew E; Delgado, Fernanda; Shin, Jenny; Kwok, Heemun; Johnson, Nicholas J; Carlbom, David; Grabinsky, Andreas; Brogan, Thomas V; King, Mary A; Rea, Thomas D

    2016-01-01

    Pediatric intubation is a core paramedic skill in some emergency medical services (EMS) systems. The literature lacks a detailed examination of the challenges and subsequent adjustments made by paramedics when intubating children in the out-of-hospital setting. We undertake a descriptive evaluation of the process of out-of-hospital pediatric intubation, focusing on challenges, adjustments, and outcomes. We performed a retrospective analysis of EMS responses between 2006 and 2012 that involved attempted intubation of children younger than 13 years by paramedics in a large, metropolitan EMS system. We calculated the incidence rate of attempted pediatric intubation with EMS and county census data. To summarize the intubation process, we linked a detailed out-of-hospital airway registry with clinical records from EMS, hospital, or autopsy encounters for each child. The main outcome measures were procedural challenges, procedural success, complications, and patient disposition. Paramedics attempted intubation in 299 cases during 6.3 years, with an incidence of 1 pediatric intubation per 2,198 EMS responses. Less than half of intubations (44%) were for patients in cardiac arrest. Two thirds of patients were intubated on the first attempt (66%), and overall success was 97%. The most prevalent challenge was body fluids obscuring the laryngeal view (33%). After a failed first intubation attempt, corrective actions taken by paramedics included changing equipment (33%), suctioning (32%), and repositioning the patient (27%). Six patients (2%) experienced peri-intubation cardiac arrest and 1 patient had an iatrogenic tracheal injury. No esophageal intubations were observed. Of patients transported to the hospital, 86% were admitted to intensive care and hospital mortality was 27%. Pediatric intubation by paramedics was performed infrequently in this EMS system. Although overall intubation success was high, a detailed evaluation of the process of intubation revealed specific

  2. Water-Transfer Slows Aging in Saccharomyces cerevisiae.

    Directory of Open Access Journals (Sweden)

    Aviv Cohen

    Full Text Available Transferring Saccharomyces cerevisiae cells to water is known to extend their lifespan. However, it is unclear whether this lifespan extension is due to slowing the aging process or merely keeping old yeast alive. Here we show that in water-transferred yeast, the toxicity of polyQ proteins is decreased and the aging biomarker 47Q aggregates at a reduced rate and to a lesser extent. These beneficial effects of water-transfer could not be reproduced by diluting the growth medium and depended on de novo protein synthesis and proteasomes levels. Interestingly, we found that upon water-transfer 27 proteins are downregulated, 4 proteins are upregulated and 81 proteins change their intracellular localization, hinting at an active genetic program enabling the lifespan extension. Furthermore, the aging-related deterioration of the heat shock response (HSR, the unfolded protein response (UPR and the endoplasmic reticulum-associated protein degradation (ERAD, was largely prevented in water-transferred yeast, as the activities of these proteostatic network pathways remained nearly as robust as in young yeast. The characteristics of young yeast that are actively maintained upon water-transfer indicate that the extended lifespan is the outcome of slowing the rate of the aging process.

  3. Water-Transfer Slows Aging in Saccharomyces cerevisiae.

    Science.gov (United States)

    Cohen, Aviv; Weindling, Esther; Rabinovich, Efrat; Nachman, Iftach; Fuchs, Shai; Chuartzman, Silvia; Gal, Lihi; Schuldiner, Maya; Bar-Nun, Shoshana

    2016-01-01

Transferring Saccharomyces cerevisiae cells to water is known to extend their lifespan. However, it is unclear whether this lifespan extension is due to slowing the aging process or merely keeping old yeast alive. Here we show that in water-transferred yeast, the toxicity of polyQ proteins is decreased and the aging biomarker 47Q aggregates at a reduced rate and to a lesser extent. These beneficial effects of water-transfer could not be reproduced by diluting the growth medium and depended on de novo protein synthesis and proteasome levels. Interestingly, we found that upon water-transfer 27 proteins are downregulated, 4 proteins are upregulated and 81 proteins change their intracellular localization, hinting at an active genetic program enabling the lifespan extension. Furthermore, the aging-related deterioration of the heat shock response (HSR), the unfolded protein response (UPR) and the endoplasmic reticulum-associated protein degradation (ERAD) was largely prevented in water-transferred yeast, as the activities of these proteostatic network pathways remained nearly as robust as in young yeast. The characteristics of young yeast that are actively maintained upon water-transfer indicate that the extended lifespan is the outcome of slowing the rate of the aging process.

  4. The Potential of/for 'Slow': Slow Tourists and Slow Destinations

    Directory of Open Access Journals (Sweden)

    J. Guiver

    2016-05-01

Slow tourism practices are nothing new; in fact, they were once the norm and still are for millions of people whose annual holiday is spent camping, staying in caravans, rented accommodation, with friends and relations or perhaps in a second home, who immerse themselves in their holiday environment, eat local food, drink local wine and walk or cycle around the area. So why a special edition about slow tourism? Like many aspects of life once considered normal (such as organic farming or free-range eggs), the emergence of new practices has highlighted differences and prompted a re-evaluation of once accepted practices and values. In this way, the concept of 'slow tourism' has recently appeared as a type of tourism that contrasts with many contemporary mainstream tourism practices. It has also been associated with similar trends already 'branded' slow: slow food and cittaslow (slow towns), and concepts such as mindfulness, savouring and well-being.

  5. Large-scale grain growth in the solid-state process: From "Abnormal" to "Normal"

    Science.gov (United States)

    Jiang, Minhong; Han, Shengnan; Zhang, Jingwei; Song, Jiageng; Hao, Chongyan; Deng, Manjiao; Ge, Lingjing; Gu, Zhengfei; Liu, Xinyu

    2018-02-01

Abnormal grain growth (AGG) has been a common phenomenon in ceramic and metallurgical processing since prehistoric times. However, it has usually been very difficult to grow large single crystals (over centimeter scale) using the AGG method, due to its so-called occasionality. Based on AGG, a solid-state crystal growth (SSCG) method was developed. The greatest advantages of SSCG technology are the simplicity and cost-effectiveness of the technique, but traditional SSCG technology is still uncontrollable. This article first summarizes the history and current status of AGG, then reports recent technical developments from AGG to SSCG, and further introduces a new seed-free, solid-state crystal growth (SFSSCG) technology. The SFSSCG method allows us to repeatedly and controllably fabricate large-scale single crystals of appreciably high quality and relatively stable chemical composition at relatively low temperature, at least in the (K0.5Na0.5)NbO3 (KNN) and Cu-Al-Mn systems. In this sense, the exaggerated grain growth is no longer 'abnormal' but 'normal', since it can now be artificially controlled and repeated. This article also provides a crystal growth model to qualitatively explain the mechanism of SFSSCG for the KNN system. Compared with the traditional melt and high-temperature solution growth methods, the SFSSCG method has the advantages of low energy consumption, low investment, a simple technique, and composition homogeneity, overcoming the issues of incongruent melting and high volatility. SFSSCG could be helpful for improving the mechanical and physical properties of single crystals, which should be promising for industrial applications.

  6. Slowing down and straggling of protons and heavy ions in matter

    International Nuclear Information System (INIS)

    Aernsbergen, L.M. van.

    1986-01-01

The Doppler Shift Attenuation (DSA) method is widely used to measure lifetimes of nuclear states. However, many of the lifetimes resulting from DSA measurements display large variations, which are caused by insufficient knowledge of the slowing-down processes of recoiling nuclei. The measurement of 'ranges' is an often-used method to study these slowing-down processes. In this kind of measurement, the distributions of implanted ions are determined, for example, by Rutherford backscattering or from the yield curve of a resonant nuclear reaction. In this thesis, research on the energy-loss processes of protons and Si ions in aluminium is presented. The so-called Resonance Shift method, which has only been used occasionally before, has been improved for the measurements on protons. A new method, called the Transmission Doppler Shift Attenuation (TDSA) method, has been developed for the measurements on Si ions. (Auth.)

  7. 7 CFR 201.33 - Seed in bulk or large quantities; seed for cleaning or processing.

    Science.gov (United States)

    2010-01-01

    ... quantities; seed for cleaning or processing. (a) In the case of seed in bulk, the information required under... seeds. (b) Seed consigned to a seed cleaning or processing establishment, for cleaning or processing for... pertaining to such seed show that it is “Seed for processing,” or, if the seed is in containers and in...

  8. Sensorimotor and cognitive slowing in schizophrenia as measured by the Symbol Digit Substitution Test

    NARCIS (Netherlands)

    Morrens, M.; Hulstijn, W.; Hecke, J. van; Peuskens, J.; Sabbe, B.G.C.

    2006-01-01

    Objectives A vast amount of studies demonstrates the presence of psychomotor slowing in schizophrenia. The objective of the present study was to investigate whether this overall psychomotor slowing can be divided into distinct processes that differentially affect cognitive functioning in

  9. Forest landscape models, a tool for understanding the effect of the large-scale and long-term landscape processes

    Science.gov (United States)

    Hong S. He; Robert E. Keane; Louis R. Iverson

    2008-01-01

    Forest landscape models have become important tools for understanding large-scale and long-term landscape (spatial) processes such as climate change, fire, windthrow, seed dispersal, insect outbreak, disease propagation, forest harvest, and fuel treatment, because controlled field experiments designed to study the effects of these processes are often not possible (...

  10. Slow, stopped and stored light

    International Nuclear Information System (INIS)

    Welch, G.; Scully, M.

    2005-01-01

Light that can be slowed to walking pace could have applications in telecommunications, optical storage and quantum computing. Whether we use it to estimate how far away a thunderstorm is, or simply take it for granted that we can have a conversation with someone on the other side of the world, we all know that light travels extremely fast. Indeed, special relativity teaches us that nothing in the universe can ever move faster than the speed of light in a vacuum: 299 792 458 m s{sup -1}. However, there is no such limitation on how slowly light can travel. For the last few years, researchers have been routinely slowing light to just a few metres per second, and have recently even stopped it dead in its tracks so that it can be stored for future use. Slow light has considerable popular appeal, deriving perhaps from the importance of the speed of light in relativity and cosmology. If everyday objects such as cars or people can travel faster than 'slow' light, for example, then it might appear that relativistic effects could be observed at very low speeds. Although this is not the case, slow light nonetheless promises to play an important role in optical technology because it allows light to be delayed for any period of time desired. This could lead to all-optical routers that would increase the bandwidth of the Internet, and to applications in optical data storage, quantum information and even radar. (U.K.)

  11. Slow rupture of frictional interfaces

    Science.gov (United States)

    Bar Sinai, Yohai; Brener, Efim A.; Bouchbinder, Eran

    2012-02-01

    The failure of frictional interfaces and the spatiotemporal structures that accompany it are central to a wide range of geophysical, physical and engineering systems. Recent geophysical and laboratory observations indicated that interfacial failure can be mediated by slow slip rupture phenomena which are distinct from ordinary, earthquake-like, fast rupture. These discoveries have influenced the way we think about frictional motion, yet the nature and properties of slow rupture are not completely understood. We show that slow rupture is an intrinsic and robust property of simple non-monotonic rate-and-state friction laws. It is associated with a new velocity scale cmin, determined by the friction law, below which steady state rupture cannot propagate. We further show that rupture can occur in a continuum of states, spanning a wide range of velocities from cmin to elastic wave-speeds, and predict different properties for slow rupture and ordinary fast rupture. Our results are qualitatively consistent with recent high-resolution laboratory experiments and may provide a theoretical framework for understanding slow rupture phenomena along frictional interfaces.

  12. MURPHYS-HSFS-2014: 7th International Workshop on MUlti-Rate Processes and HYSteresis (MURPHYS) and the 2nd International Workshop on Hysteresis and Slow-Fast Systems (HSFS)

    International Nuclear Information System (INIS)

    2016-01-01

Foreword MURPHYS-HSFS-2014 was the 7th International Workshop on MUlti-Rate Processes and HYSteresis (MURPHYS) in conjunction with the 2nd International Workshop on Hysteresis and Slow-Fast Systems (HSFS). It took place at the Weierstrass Institute for Applied Analysis and Stochastics (WIAS), Berlin, Germany, from April 7 to April 11 in 2014. The international workshop on “Multi-Rate Processes and Hysteresis” continued a series of biennial conferences (Cork, Ireland, 2002-2008; Pecs, Hungary, 2010; Suceava, Romania, 2012) and the international workshop on “Hysteresis and Slow-Fast Systems” was the follow-up of the HSFS workshop that had taken place in Lutherstadt Wittenberg, Germany, in 2011. More than 60 scientists from nine European countries and from the USA participated in MURPHYS-HSFS-2014. The program of the workshop featured 49 talks, including 15 main lectures and 15 invited talks. Recent mathematical results for systems with hysteresis operators, multiple scale systems, rate-independent systems, systems with energetic solutions, singularly perturbed systems, and systems with stochastic effects were presented. The considered applications included magnetization dynamics, biological systems, smart materials, networks, ferroelectric and ferroelastic hysteresis, fatigue in materials, market models with hysteresis, biomedical applications, chemical reactions, noise-induced phenomena, partially saturated soils, colloidal films and evaporation of automotive fuel droplets. Statement of Peer Review: All papers published in this volume of Journal of Physics: Conference Series have been peer reviewed through processes administered by the Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing. International steering committee: E. Benoit (France), M. Brokate (Germany), R. Cross (UK), K. Dahmen (USA), M. Dimian (Romania), M. Eleuteri (Italy), G. Friedman (USA

  13. COPASutils: an R package for reading, processing, and visualizing data from COPAS large-particle flow cytometers.

    Directory of Open Access Journals (Sweden)

    Tyler C Shimko

The R package COPASutils provides a logical workflow for the reading, processing, and visualization of data obtained from the Union Biometrica Complex Object Parametric Analyzer and Sorter (COPAS) or BioSorter large-particle flow cytometers. Data obtained from these powerful experimental platforms can be unwieldy, leading to difficulties in processing and visualizing the data with existing tools. Researchers using these devices to study small organisms, such as Caenorhabditis elegans, Anopheles gambiae, and Danio rerio, will benefit from this streamlined and extensible R package. COPASutils offers a powerful suite of functions for the rapid processing and analysis of large high-throughput screening data sets.

  14. Development of slow pyrolysis business operations in Finland - Hidaspyro

    Energy Technology Data Exchange (ETDEWEB)

    Fagernas, L. [VTT Technical Research Centre of Finland, Espoo (Finland)], email: leena.fagernas@vtt.fi

    2012-07-01

Birch distillate, a by-product of the slow pyrolysis process used in charcoal production, was found to be a promising source of biological pesticides. However, product commercialization was problematic: EU registration is costly, and the composition, active ingredients and ecotoxicological properties of the distillate were not known. In addition, constant quality and process optimisation were needed, and more collaboration between SMEs and research institutes was required. The primary aim was to support and develop the slow pyrolysis business operations of SMEs in Finland by generating the knowledge that was needed.

  15. BPM-in-the-Large - Towards a higher level of abstraction in Business Process Management

    OpenAIRE

    Houy , Constantin; Fettke , Peter; Loos , Peter; Aalst , Wil M. P.; Krogstie , John

    2010-01-01

Business Process Management (BPM) has gained tremendous importance in recent years and BPM technologies and techniques are widely applied in practice. Furthermore there is a growing and very active research community looking at process modeling and analysis, reference models, workflow flexibility, process mining and process-centric Service-Oriented Architectures (SOA). However, it is clear that existing approaches have problems dealing with the enormous challenges real...

  16. Slow Images and Entangled Photons

    International Nuclear Information System (INIS)

    Swordy, Simon

    2007-01-01

I will discuss some recent experiments using slow light and entangled photons. We recently showed that it is possible to map a two-dimensional image onto very low light-level signals and slow it down in a hot atomic vapor while preserving the amplitude and phase of the image. If time remains, I will discuss some of our recent work with time-energy entangled photons for quantum cryptography. We were able to show a measurable state space of over 1000 states for a single pair of entangled photons in fiber.

  17. Pulsar slow-down epochs

    International Nuclear Information System (INIS)

    Heintzmann, H.; Novello, M.

    1981-01-01

The relative importance of magnetospheric currents and low-frequency waves for pulsar braking is assessed, and a model is developed which tries to account for the available pulsar timing data under the unifying assumption that all pulsars have equal masses and magnetic moments and are born as rapid rotators. Four epochs of slow-down are distinguished, dominated by different braking mechanisms. According to the model, no direct relationship exists between 'slow-down age' and the true age of a pulsar; the model leads to a pulsar birth-rate of one event per hundred years. (Author)

  18. Understanding process behaviours in a large insurance company in Australia : a case study

    NARCIS (Netherlands)

    Suriadi, S.; Wynn, M.T.; Ouyang, C.; Hofstede, ter A.H.M.; van Dijk, N.J.; Salinesi, C.; Norrie, M.C.; Pastor, O.

    2013-01-01

    Having a reliable understanding about the behaviours, problems, and performance of existing processes is important in enabling a targeted process improvement initiative. Recently, there has been an increase in the application of innovative process mining techniques to facilitate evidence-based

  19. Strategic alliances between SMEs and large firms: An exploration of the dynamic process

    OpenAIRE

    Rothkegel, Senad; Erakovic, Ljiljana; Shepherd, Deborah

    2006-01-01

    This paper explores the dynamics in strategic alliances between small and medium sized enterprises (SMEs) and large organisations (corporates). Despite the volumes written on this subject, few studies take into account this context of interorganisational relationships. The dynamics in strategic partnerships between small and large organisations are potentially multifaceted and fraught with complexities and contradictions. The partner organisations bring diverse interests and resources to the ...

  20. Exocytosis from chromaffin cells: hydrostatic pressure slows vesicle fusion

    Science.gov (United States)

    Stühmer, Walter

    2015-01-01

    Pressure affects reaction kinetics because chemical transitions involve changes in volume, and therefore pressure is a standard thermodynamic parameter to measure these volume changes. Many organisms live in environments at external pressures other than one atmosphere (0.1 MPa). Marine animals have adapted to live at depths of over 7000 m (at pressures over 70 MPa), and microorganisms living in trenches at over 110 MPa have been retrieved. Here, kinetic changes in secretion from chromaffin cells, measured as capacitance changes using the patch-clamp technique at pressures of up to 20 MPa are presented. It is known that these high pressures drastically slow down physiological functions. High hydrostatic pressure also affects the kinetics of ion channel gating and the amount of current carried by them, and it drastically slows down synaptic transmission. The results presented here indicate a similar change in volume (activation volume) of 390 ± 57 Å3 for large dense-core vesicles undergoing fusion in chromaffin cells and for degranulation of mast cells. It is significantly larger than activation volumes of voltage-gated ion channels in chromaffin cells. This information will be useful in finding possible protein conformational changes during the reactions involved in vesicle fusion and in testing possible molecular dynamic models of secretory processes. PMID:26009771
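The reported activation volume maps onto a rate reduction through the standard transition-state relation k(P)/k(0) = exp(-P dV / k_B T). A minimal sketch, assuming room temperature (the measurement temperature is not stated in this abstract):

```python
import math

def pressure_slowdown(delta_v_A3, pressure_Pa, temperature_K=295.0):
    """Transition-state estimate: k(P)/k(0) = exp(-P * dV / (k_B * T)).

    delta_v_A3 is the activation volume in cubic angstroms
    (1 A^3 = 1e-30 m^3); the temperature is an assumed value.
    """
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return math.exp(-pressure_Pa * delta_v_A3 * 1e-30 / (k_B * temperature_K))

# 390 A^3 at 20 MPa, the values reported above:
factor = pressure_slowdown(390.0, 20e6)  # ~0.15, i.e. ~7x slower fusion
```

Under these assumptions, 20 MPa slows fusion kinetics to roughly 15% of the ambient-pressure rate, consistent with the drastic slowing the abstract describes.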

  1. Comparative study of surface-lattice-site resolved neutralization of slow multicharged ions during large-angle quasi-binary collisions with Au(1 1 0): Simulation and experiment

    International Nuclear Information System (INIS)

    Meyer, F.W.; Morozov, V.A.

    2002-01-01

In this article we extend our earlier studies of the azimuthal dependences of low-energy projectiles scattered in large-angle quasi-binary collisions (BCs) from Au(1 1 0). Measurements are presented for 20 keV Ar{sup 9+} at normal incidence, which are compared with our earlier measurements for this ion at 5 keV and 10 deg. incidence angle. A deconvolution procedure based on MARLOWE simulation results carried out at both energies provides information about the energy dependence of projectile neutralization during interactions with just the atoms along the top ridge of the reconstructed Au(1 1 0) surface corrugation, in comparison to, e.g., interactions with atoms lying on the sidewalls. To test the sensitivity of the agreement between the MARLOWE results and the experimental measurements, we show simulation results obtained for a non-reconstructed Au(1 1 0) surface with 20 keV Ar projectiles, and for different scattering potentials intended to simulate the effects on the scattering trajectory of a projectile inner-shell vacancy surviving the BC. In addition, simulation results are shown for a number of different total scattering angles, to illustrate their utility in finding optimum values for this parameter prior to the actual measurements

  2. A slowing-down problem

    Energy Technology Data Exchange (ETDEWEB)

    Carlvik, I; Pershagen, B

    1958-06-15

An infinitely long circular cylinder of radius a is surrounded by an infinite moderator. Both media are non-capturing. The cylinder emits neutrons of age zero with a constant source density S. We assume that the ratios of the slowing-down powers and of the diffusion constants are independent of the neutron energy. The slowing-down density is calculated for two cases: (a) when the slowing-down power of the cylinder medium is very small, and (b) when the cylinder medium is identical with the moderator. The ratios of the slowing-down density at age {tau} to the source density in the two cases are called {psi}{sub V} and {psi}{sub M}, respectively. {psi}{sub V} and {psi}{sub M} are functions of y = a{sup 2}/4{tau}. These two functions are calculated and tabulated for y = 0-0.25.
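The report tabulates {psi}{sub V} and {psi}{sub M} rather than giving closed forms, but case (b) admits a simple consistency check on the cylinder axis: integrating the two-dimensional Fermi age kernel exp(-{rho}{sup 2}/4{tau})/(4{pi}{tau}) over the disk {rho} < a gives 1 - e{sup -y}. The sketch below verifies this numerically; evaluating on the axis is an assumption of this illustration, not necessarily the quantity tabulated in the report.

```python
import math

def psi_M_axis(y, steps=2000):
    """Slowing-down density over source density on the cylinder axis for
    case (b), where the cylinder medium is identical with the moderator.

    Integrates the 2-D Fermi age kernel exp(-rho^2/4tau)/(4*pi*tau) over
    the disk rho < a; substituting u = rho^2/(4*tau) reduces this to the
    integral of exp(-u) du over [0, y], with y = a^2/(4*tau).
    """
    du = y / steps
    total = 0.0
    for i in range(steps):  # midpoint rule
        total += math.exp(-(i + 0.5) * du) * du
    return total

# Agrees with the closed form 1 - exp(-y) across the tabulated range:
assert abs(psi_M_axis(0.25) - (1.0 - math.exp(-0.25))) < 1e-6
```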

  3. Numerical modeling of slow shocks

    International Nuclear Information System (INIS)

    Winske, D.

    1987-01-01

This paper reviews previous attempts and the present status of efforts to understand the structure of slow shocks by means of time-dependent numerical calculations. Studies carried out using MHD or hybrid-kinetic codes have demonstrated qualitative agreement with theory. A number of unresolved issues related to hybrid simulations of the internal shock structure are discussed in some detail. 43 refs., 8 figs

  4. The Role of the Process Organizational Structure in the Development of Intrapreneurship in Large Companies

    Directory of Open Access Journals (Sweden)

    Delić Adisa

    2016-12-01

Modern companies' business environments have become increasingly complex, dynamic, and uncertain as a consequence of globalization and the rapid development of information and communications technology. Companies are urged to increase their flexibility in order to maintain their competitiveness in the global market. The affirmation of intrapreneurship becomes one of the basic ways of achieving higher adaptability and competitiveness of large companies in the modern business environment. In this context, the choice of an organizational solution that improves the development of entrepreneurial orientation and increases employee entrepreneurship and innovativeness becomes an important task for large companies. Research studies and business practices have indicated that various types of modern organizational forms enable the development of intrapreneurship. Therefore, the main aim of this paper is to identify the dominant characteristics of organizational solutions and analyse their influence on the development of intrapreneurship in large companies in Bosnia and Herzegovina (BiH). The research results indicate that current organizational characteristics are not favourable for the development of intrapreneurship in large BiH companies and that improvement is necessary in order to create an enabling environment for intrapreneurship and innovativeness. Based on these findings, recommendations for appropriate organizational changes are presented that might result in a more intensive development of intrapreneurship in large BiH companies.

  5. Investigating Coastal Processes Responsible for Large-Scale Shoreline Responses to Human Shoreline Stabilization

    Science.gov (United States)

    Slott, J. M.; Murray, A. B.; Ashton, A. D.

    2006-12-01

    Human shoreline stabilization practices, such as beach nourishment (i.e. placing sand on an eroding beach), have become more prevalent as erosion threatens coastal communities. On sandy shorelines, recent experiments with a numerical model of shoreline change (Slott, et al., in press) indicate that moderate shifts in storminess patterns, one possible outcome of global warming, may accelerate the rate at which shorelines erode or accrete by altering the angular distribution of approaching waves (the 'wave climate'). Accelerated erosion would undoubtedly place greater demands on stabilization. Scientists and coastal engineers have typically considered only the site-specific consequences of shoreline stabilization; here we explore the coastal processes responsible for large-scale (tens of kilometers) and long-term (decades) effects using a numerical model developed by Ashton, et al. (2001). In this numerical model, waves breaking at oblique angles drive a flux of sediment along the shoreline, and gradients in this flux can shape the coastline into surprisingly complex forms (e.g. the cuspate capes found on the Carolina coast). Wave "shadowing" plays a major role in shoreline evolution, whereby coastline features may block incoming waves from reaching distant parts of the coast. In this work, we include beach nourishment in the Ashton, et al. (2001) model. Using a cuspate-cape shoreline as our initial model condition, we conducted pairs of experiments and varied the wave-climate forcing across each pair, each representing a different storminess scenario. Here we report on one scenario featuring increased extra-tropical storm influence. For each experiment pair we ran a control experiment with no shoreline stabilization and a second in which a beach nourishment project stabilized a cape tip. By comparing the results of these two parallel runs, we isolate the tendency of the shoreline to migrate landward or seaward along the domain due solely to beach nourishment. Significant effects from beach
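    The flux-gradient mechanism can be illustrated with a minimal sketch. In the small-wave-angle limit, gradients in alongshore sediment flux act diffusively on the shoreline position, and nourishment enters as a local source term; the diffusivity, grid, and function names below are illustrative assumptions, not the Ashton et al. (2001) model itself.

```python
def evolve_shoreline(y, n_steps, dt=0.1, dx=1.0, diffusivity=0.5, source=None):
    """Minimal one-line shoreline-change sketch (periodic boundaries).

    For small wave-approach angles, gradients in the alongshore sediment
    flux Q reduce to a diffusion term in the shoreline position y(x):
        dy/dt = K * d2y/dx2 + nourishment source.
    `source` maps a cell index to a local accretion rate.
    """
    y = list(y)
    n = len(y)
    for _ in range(n_steps):
        # second difference = discrete gradient of the (linearized) flux
        lap = [y[(i - 1) % n] - 2.0 * y[i] + y[(i + 1) % n] for i in range(n)]
        for i in range(n):
            y[i] += dt * (diffusivity * lap[i] / dx**2
                          + (source(i) if source else 0.0))
    return y

# A cape (bump in y) diffuses away; nourishing its tip (cell 10) offsets the loss.
cape = [1.0 if 8 <= i <= 12 else 0.0 for i in range(40)]
eroded = evolve_shoreline(cape, 200)
held = evolve_shoreline(cape, 200, source=lambda i: 0.02 if i == 10 else 0.0)
```

    Without the source, total sand is conserved and the cape smooths out; the nourishment run keeps the cape tip higher, mimicking the stabilization experiments contrasted in the abstract.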

  6. A percolation process on the square lattice where large finite clusters are frozen

    NARCIS (Netherlands)

    van den Berg, J.; de Lima, B.N.B.; Nolin, P.

    2012-01-01

    In (Aldous, Math. Proc. Cambridge Philos. Soc. 128 (2000), 465-477), Aldous constructed a growth process for the binary tree where clusters freeze as soon as they become infinite. It was pointed out by Benjamini and Schramm that such a process does not exist for the square lattice. This motivated us

  7. Large transverse momentum processes in a non-scaling parton model

    International Nuclear Information System (INIS)

    Stirling, W.J.

    1977-01-01

    The production of large transverse momentum mesons in hadronic collisions by the quark fusion mechanism is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the large transverse momentum structure function exhibit a simple scale breaking behaviour similar to the behaviour of the Drell-Yan and deep inelastic structure functions of the model. An estimate of corresponding experimental consequences is made and the extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. A simple set of rules is presented for incorporating the logarithmic corrections to scaling into all covariant parton model calculations. (Auth.)

  8. The method of arbitrarily large moments to calculate single scale processes in quantum field theory

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC)

    2017-01-15

    We devise a new method to calculate a large number of Mellin moments of single scale quantities using the systems of differential and/or difference equations obtained by integration-by-parts identities between the corresponding Feynman integrals of loop corrections to physical quantities. These scalar quantities have a much simpler mathematical structure than the complete quantity. A sufficiently large set of moments may even allow the analytic reconstruction of the whole quantity considered, holding in case of first order factorizing systems. In any case, one may derive highly precise numerical representations in general using this method, which is otherwise completely analytic.

  9. The method of arbitrarily large moments to calculate single scale processes in quantum field theory

    Directory of Open Access Journals (Sweden)

    Johannes Blümlein

    2017-08-01

    Full Text Available We devise a new method to calculate a large number of Mellin moments of single scale quantities using the systems of differential and/or difference equations obtained by integration-by-parts identities between the corresponding Feynman integrals of loop corrections to physical quantities. These scalar quantities have a much simpler mathematical structure than the complete quantity. A sufficiently large set of moments may even allow the analytic reconstruction of the whole quantity considered, holding in case of first order factorizing systems. In any case, one may derive highly precise numerical representations in general using this method, which is otherwise completely analytic.
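    As a toy illustration of the moment-recurrence idea (not the authors' actual integration-by-parts machinery): the Mellin moments ∫₀¹ (x^N − 1)/(x − 1) dx of a typical single-scale building block are the harmonic numbers, which obey a first-order difference equation, so arbitrarily many exact moments follow from a cheap recurrence instead of repeated integration.

```python
from fractions import Fraction

def moments_from_recurrence(n_max):
    """Mellin moments S1(N) = integral_0^1 (x^N - 1)/(x - 1) dx satisfy
    the first-order difference equation S1(N) = S1(N-1) + 1/N, so a
    recurrence yields exact values for arbitrarily large N."""
    s, out = Fraction(0), []
    for n in range(1, n_max + 1):
        s += Fraction(1, n)
        out.append(s)
    return out

def moment_by_integration(n):
    """Direct check: (x^N - 1)/(x - 1) = 1 + x + ... + x^(N-1), so the
    integral equals sum_{k=1}^{N} 1/k (the harmonic number)."""
    return sum(Fraction(1, k) for k in range(1, n + 1))
```

    For first-order factorizing systems, as the abstract notes, such recurrences can generate enough moments to reconstruct the full analytic answer.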

  10. Modeling of a Large-Scale High Temperature Regenerative Sulfur Removal Process

    DEFF Research Database (Denmark)

    Konttinen, Jukka T.; Johnsson, Jan Erik

    1999-01-01

    Regenerable mixed metal oxide sorbents are prime candidates for the removal of hydrogen sulfide from hot gasifier gas in the simplified integrated gasification combined cycle (IGCC) process. As part of the regenerative sulfur removal process development, reactor models are needed for scale-up: steady-state kinetic reactor models are needed for reactor sizing, and dynamic models can be used for process control design and operator training. The regenerative sulfur removal process studied in this paper consists of two side-by-side fluidized bed reactors operating at temperatures of 400… …a model that does not account for bed hydrodynamics. The pilot-scale test run results, obtained in the test runs of the sulfur removal process with real coal gasifier gas, have been used for parameter estimation. The validity of the reactor model for commercial-scale design applications is discussed.

  11. Dynamic analysis of the conditional oscillator underlying slow waves in thalamocortical neurons

    Directory of Open Access Journals (Sweden)

    Francois eDavid

    2016-02-01

    Full Text Available During non-REM sleep the EEG shows characteristic waves that are generated by the dynamic interactions between cortical and thalamic oscillators. In thalamic neurons, low-threshold T-type Ca2+ channels play a pivotal role in almost every type of neuronal oscillation, including slow (<1 Hz) waves, sleep spindles and delta waves. The transient opening of T channels gives rise to the low-threshold spikes (LTSs), and associated high-frequency bursts of action potentials, that are characteristically present during sleep spindles and delta waves, whereas the persistent opening of a small fraction of T channels (i.e. the IT window current) is responsible for the membrane potential bistability underlying sleep slow oscillations. Surprisingly, thalamocortical (TC) neurons express a very high density of T channels that largely exceeds the amount required to generate LTSs and therefore to support certain, if not all, sleep oscillations. Here, to clarify the relationship between T current density and sleep oscillations, we systematically investigated the impact of the T conductance level on the intrinsic rhythmic activities generated in TC neurons, combining in vitro experiments and TC neuron simulations. Using bifurcation analysis, we provide insights into the dynamical processes taking place at the transition between slow and delta oscillations. Our results show that although stable delta oscillations can be evoked with minimal T conductance, the full range of slow oscillation patterns, including groups of delta oscillations separated by Up states (grouped-delta slow waves), requires a high density of T channels. Moreover, high levels of T conductance ensure the robustness of different types of slow oscillations.

  12. Large-scale production of diesel-like biofuels - process design as an inherent part of microorganism development.

    Science.gov (United States)

    Cuellar, Maria C; Heijnen, Joseph J; van der Wielen, Luuk A M

    2013-06-01

    Industrial biotechnology is playing an important role in the transition to a bio-based economy. Currently, however, industrial implementation is still modest, despite the advances made in microorganism development. Given that the fuels and commodity chemicals sectors are characterized by tight economic margins, we propose to address overall process design and efficiency at the start of bioprocess development. While current microorganism development is targeted at product formation and product yield, addressing process design at the start of bioprocess development means that microorganism selection can also be extended to other critical targets for process technology and process scale implementation, such as enhancing cell separation or increasing cell robustness at operating conditions that favor the overall process. In this paper we follow this approach for the microbial production of diesel-like biofuels. We review current microbial routes with both oleaginous and engineered microorganisms. For the routes leading to extracellular production, we identify the process conditions for large scale operation. The process conditions identified are finally translated to microorganism development targets. We show that microorganism development should be directed at anaerobic production, increasing robustness at extreme process conditions and tailoring cell surface properties. At the same time, novel process configurations integrating fermentation and product recovery, cell reuse and low-cost technologies for product separation are mandatory. This review provides a state-of-the-art summary of the latest challenges in large-scale production of diesel-like biofuels. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Processing and Application of ICESat Large Footprint Full Waveform Laser Range Data

    NARCIS (Netherlands)

    Duong, V.H.

    2010-01-01

    In the last two decades, laser scanning systems made the transition from scientific research to the commercial market. Laser scanning has a large variety of applications such as digital elevation models, forest inventory and man-made object reconstruction, and became the most required input data for

  14. Large Aircraft Robotic Paint Stripping (LARPS) system and the high pressure water process

    Science.gov (United States)

    See, David W.; Hofacker, Scott A.; Stone, M. Anthony; Harbaugh, Darcy

    1993-03-01

    The aircraft maintenance industry is beset by new Environmental Protection Agency (EPA) guidelines on air emissions, Occupational Safety and Health Administration (OSHA) standards, dwindling labor markets, Federal Aviation Administration (FAA) safety guidelines, and increased operating costs. In light of these factors, the USAF's Wright Laboratory Manufacturing Technology Directorate and the Aircraft Division of the Oklahoma City Air Logistics Center initiated a MANTECH/REPTECH effort to automate an alternate paint removal method and eliminate the current manual methylene chloride chemical stripping methods. This paper presents some of the background and history of the LARPS program, describes the LARPS system, documents the projected operational flow, quantifies some of the projected system benefits, and describes the High Pressure Water Stripping Process. Certification of an alternative paint removal method to replace the current chemical process is being performed in two phases: Process Optimization and Process Validation. This paper also presents the results of the Process Optimization for metal substrates. Data on the coating removal rate, residual stresses, surface roughness, and preliminary process envelopes, as well as technical plans for Process Validation testing, will be discussed.

  15. Theory of neutron slowing down in nuclear reactors

    CERN Document Server

    Ferziger, Joel H; Dunworth, J V

    2013-01-01

    The Theory of Neutron Slowing Down in Nuclear Reactors focuses on one facet of nuclear reactor design: the slowing down (or moderation) of neutrons from the high energies with which they are born in fission to the energies at which they are ultimately absorbed. In conjunction with the study of neutron moderation, calculations of reactor criticality are presented. A mathematical description of the slowing-down process is given, with particular emphasis on the problems encountered in the design of thermal reactors. This volume is comprised of four chapters and begins by considering the problems

  16. Ion-implantation induced defects in ZnO studied by a slow positron beam

    International Nuclear Information System (INIS)

    Chen, Z.Q.; Maekawa, M.; Kawasuso, A.; Sekiguchi, T.; Suzuki, R.

    2004-01-01

    Introduction and annealing behavior of defects in Al⁺-implanted ZnO have been studied using an energy-variable slow positron beam. Vacancy clusters are produced after Al⁺ implantation. With increasing ion dose above 10¹⁴ Al⁺/cm², the implanted layer is amorphized. Heat treatment up to 600 C enhances the creation of large voids that allow positronium formation. The large voids disappear, accompanying the recrystallization process, upon further heat treatment above 600 C. Afterwards, the implanted Al impurities are completely activated and contribute to the n-type conduction. The ZnO crystal quality is also improved after recrystallization. (orig.)

  17. Ion-implantation induced defects in ZnO studied by a slow positron beam

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z.Q.; Maekawa, M.; Kawasuso, A. [Japan Atomic Energy Research Institute, Gunma (Japan); Sekiguchi, T. [National Inst. for Materials Science, Tsukuba, Ibaraki (Japan); Suzuki, R. [National Inst. of Advanced Industrial Science and Technology, Tsukuba, Ibaraki (Japan)

    2004-07-01

    Introduction and annealing behavior of defects in Al{sup +}-implanted ZnO have been studied using an energy-variable slow positron beam. Vacancy clusters are produced after Al{sup +} implantation. With increasing ion dose above 10{sup 14} Al{sup +}/cm{sup 2}, the implanted layer is amorphized. Heat treatment up to 600 C enhances the creation of large voids that allow positronium formation. The large voids disappear, accompanying the recrystallization process, upon further heat treatment above 600 C. Afterwards, the implanted Al impurities are completely activated and contribute to the n-type conduction. The ZnO crystal quality is also improved after recrystallization. (orig.)

  18. The unappreciated slowness of conventional tourism

    Directory of Open Access Journals (Sweden)

    G.R. Larsen

    2016-05-01

    Full Text Available Most tourists are not consciously engaging in ‘slow travel’, but a number of travel behaviours displayed by conventional tourists can be interpreted as slow travel behaviour. Based on Danish tourists’ engagement with the distances they travel across to reach their holiday destination, this paper explores unintended slow travel behaviours displayed by these tourists. None of the tourists participating in this research were consciously doing ‘slow travel’, and yet some of their most valued holiday memories are linked to slow travel behaviours. Based on the analysis of these unintended slow travel behaviours, this paper will discuss the potential this insight might hold for promotion of slow travel. If unappreciated and unintentional slow travel behaviours could be utilised in the deliberate effort of encouraging more people to travel slow, ‘slow travel’ will be in a better position to become integrated into conventional travel behaviour.

  19. Manager Experiences with the Return to Work Process in a Large, Publically Funded, Hospital Setting

    DEFF Research Database (Denmark)

    Stochkendahl, Mette Jensen; Myburgh, Corrie; Young, Amanda Ellen

    2015-01-01

    Purpose: Previous research on the role of managers in the return to work (RTW) process has primarily been conducted in contexts where the workplace has declared organizational responsibility for the process. While this is a common scenario, in some countries, including Denmark, there is no explicit… …organizational, and policy factors. Instances were observed where supervisors faced the dilemma of balancing ethical and managerial principles with the requirements of keeping within staffing budgets. Conclusion: Although it is not their legislative responsibility, Danish managers play a key role in the RTW process. As has been observed in other contexts, Danish supervisors struggle to balance considerations for the returning worker with those of their teams.

  20. Large-scale production of UO2 kernels by sol–gel process at INET

    International Nuclear Information System (INIS)

    Hao, Shaochang; Ma, Jingtao; Zhao, Xingyu; Wang, Yang; Zhou, Xiangwen; Deng, Changsheng

    2014-01-01

    In order to supply elements (300,000 elements per year) for the Chinese pebble bed modular high temperature gas cooled reactor (HTR-PM), it is necessary to scale up the production of UO₂ kernels to 3–6 kgU per batch. The sol–gel process for the preparation of UO₂ kernels has been improved and optimized at the Institute of Nuclear and New Energy Technology (INET), Tsinghua University, PR China, and a complete set of facilities was designed and constructed based on the process. This report briefly describes the main steps of the process, the key equipment, and the production capacity of every step. Six batches of kernels for scale-up verification and four batches of kernels for fuel elements for in-pile irradiation tests have been successfully produced. The quality of the produced kernels meets the design requirements. The production capacity of the process reaches 3–6 kgU per batch.

  1. Innovative Techniques for Large-Scale Collection, Processing, and Storage of Eelgrass (Zostera marina) Seeds

    National Research Council Canada - National Science Library

    Orth, Robert J; Marion, Scott R

    2007-01-01

    .... Although methods for hand-collecting, processing and storing eelgrass seeds have advanced to match the scale of collections, the number of seeds collected has limited the scale of restoration efforts...

  2. Boundary driven Kawasaki process with long-range interaction: dynamical large deviations and steady states

    International Nuclear Information System (INIS)

    Mourragui, Mustapha; Orlandi, Enza

    2013-01-01

    A particle system with a single locally-conserved field (density) in a bounded interval with different densities maintained at the two endpoints of the interval is under study here. The particles interact in the bulk through a long-range potential parametrized by β⩾0 and evolve according to an exclusion rule. It is shown that the empirical particle density under the diffusive scaling solves a quasilinear integro-differential evolution equation with Dirichlet boundary conditions. The associated dynamical large deviation principle is proved. Furthermore, when β is small enough, it is also demonstrated that the empirical particle density obeys a law of large numbers with respect to the stationary measures (hydrostatic). The macroscopic particle density solves a non-local, stationary, transport equation. (paper)
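    A minimal Monte Carlo sketch of the β = 0 (pure exclusion) case, with simple Bernoulli reservoir resampling at the two endpoints — the coupling, rates, and parameter values are illustrative assumptions, not the paper's construction — reproduces the expected stationary density profile interpolating between the two reservoir densities.

```python
import random

def ssep_profile(n_sites=20, rho_left=0.8, rho_right=0.2,
                 sweeps=40000, burn_in=4000, seed=7):
    """Boundary-driven symmetric simple exclusion process (beta = 0 case).

    Bulk update: swap the occupations of a random neighboring pair
    (a hop when exactly one site is occupied, a no-op otherwise).
    Boundary update: resample an edge site from its reservoir density.
    Returns the time-averaged occupation profile.
    """
    rng = random.Random(seed)
    eta = [rng.random() < 0.5 for _ in range(n_sites)]
    occ = [0.0] * n_sites
    kept = 0
    for sweep in range(sweeps):
        for _ in range(n_sites):
            i = rng.randrange(-1, n_sites)
            if i == -1:                      # left reservoir at rho_left
                eta[0] = rng.random() < rho_left
            elif i == n_sites - 1:           # right reservoir at rho_right
                eta[-1] = rng.random() < rho_right
            else:                            # symmetric bulk exchange
                eta[i], eta[i + 1] = eta[i + 1], eta[i]
        if sweep >= burn_in:
            kept += 1
            for j, e in enumerate(eta):
                occ[j] += e
    return [o / kept for o in occ]
```

    In the hydrodynamic limit the stationary profile of this dynamics is the linear solution of the stationary transport equation; the long-range (β > 0) interaction of the paper would replace the plain nearest-neighbor exchange with potential-biased rates.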

  3. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random accesses to slow memory, increase the efficiency of the I/O system, and hence reduce the required computing time. (orig.)
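    The sorting strategy described above can be sketched in a few lines; the helper names and the 1-D nearest-grid-point deposition below are illustrative assumptions, not the authors' code. Reordering particle arrays by grid-cell index makes the deposition loop touch grid (and particle) memory almost sequentially instead of randomly, which is what keeps a paged virtual-memory system from thrashing.

```python
def sort_particles_by_cell(x, q, cell_size):
    """Reorder particle positions x and charges q so that particles in
    the same grid cell are contiguous in memory, turning the random
    accesses of charge deposition into nearly sequential, page-friendly
    ones."""
    order = sorted(range(len(x)), key=lambda i: int(x[i] // cell_size))
    return [x[i] for i in order], [q[i] for i in order]

def deposit_charge(x, q, cell_size, n_cells):
    """Nearest-grid-point charge accumulation onto a 1-D grid."""
    rho = [0.0] * n_cells
    for xi, qi in zip(x, q):
        rho[int(xi // cell_size) % n_cells] += qi
    return rho
```

    The deposited charge is identical with or without sorting; only the memory-access pattern (and hence the paging behavior) changes.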

  4. Computer-Controlled Cylindrical Polishing Process for Large X-Ray Mirror Mandrels

    Science.gov (United States)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    We are developing high-energy grazing-incidence shell optics for hard-x-ray telescopes. The resolution of a mirror shell depends on the quality of the cylindrical mandrel from which it is replicated. Mid-spatial-frequency axial figure error is a dominant contributor to the error budget of the mandrel. This paper presents our efforts to develop a deterministic cylindrical polishing process that keeps mid-spatial-frequency axial figure errors to a minimum. Simulation software has been developed to model the residual surface figure errors of a mandrel due to the polishing process parameters and the tools used, as well as to compute the optical performance of the optics. The study carried out using the developed software focused on establishing a relationship between the polishing process parameters and the generation of mid-spatial-frequency errors. The process parameters modeled are the speeds of the lap and the mandrel, the tool's influence function, the contour path (dwell) of the tools, their shape, and the distribution of the tools on the polishing lap. Using inputs from the mathematical model, a mandrel having a conically approximated Wolter-1 geometry has been polished on a newly developed computer-controlled cylindrical polishing machine. The preliminary results of a series of polishing experiments demonstrate qualitative agreement with the developed model. We report our first experimental results and discuss plans for further improvements in the polishing process. The ability to simulate the polishing process is critical to optimizing it, improving mandrel quality, and significantly reducing the cost of mandrel production.
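    The core relation underlying such dwell-map simulations is the standard deterministic-polishing model: predicted removal is the convolution of the tool influence function (TIF) with the dwell time. A 1-D sketch follows; the TIF values and function names are illustrative assumptions, not the authors' software.

```python
def predicted_removal(dwell, tif):
    """Discrete 1-D convolution of dwell time with the tool influence
    function (TIF): the removal depth at each point is the TIF-weighted
    sum of the time the tool spent nearby."""
    n, m = len(dwell), len(tif)
    out = [0.0] * (n + m - 1)
    for i, d in enumerate(dwell):
        for j, t in enumerate(tif):
            out[i + j] += d * t
    return out

# Example: a narrow Gaussian-like TIF smears each second of dwell over
# neighboring points; uneven dwell therefore imprints mid-spatial-frequency
# ripple, which is what dwell optimization tries to suppress.
tif = [0.1, 0.4, 0.1]          # removal rate footprint of the tool (per unit time)
uniform = predicted_removal([1.0] * 5, tif)
```

    Because the model is linear in dwell, dwell maps can be solved for by deconvolution against a target removal profile, which is the basis of computer-controlled dwell optimization.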

  5. Managing Active Learning Processes in Large First Year Physics Classes: The Advantages of an Integrated Approach

    Directory of Open Access Journals (Sweden)

    Michael J. Drinkwater

    2014-09-01

    Full Text Available Turning lectures into interactive, student-led question and answer sessions is known to increase learning, but enabling interaction in a large class seems an insurmountable task. This can discourage adoption of this new approach – who has time to individualize responses, address questions from over 200 students and encourage active participation in class? An approach adopted by a teaching team in large first-year classes at a research-intensive university appears to provide a means to do so. We describe the implementation of active learning strategies in a large first-year undergraduate physics unit of study, replacing traditional, content-heavy lectures with an integrated approach to question-driven learning. A key feature of our approach is that it facilitates intensive in-class discussions by requiring students to engage in preparatory reading and answer short written quizzes before every class. The lecturer uses software to rapidly analyze the student responses and identify the main issues faced by the students before the start of each class. We report the success of the integration of student preparation with this analysis and feedback framework, and the impact on the in-class discussions. We also address some of the difficulties commonly experienced by staff preparing for active learning classes.

  6. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  7. Solution processed large area fabrication of Ag patterns as electrodes for flexible heaters, electrochromics and organic solar cells

    DEFF Research Database (Denmark)

    Gupta, Ritu; Walia, Sunil; Hösel, Markus

    2014-01-01

    …the process takes only a few minutes without any expensive instrumentation. The electrodes exhibited excellent adhesion and mechanical properties, important for flexible device applications. Using Ag patterned electrodes, heaters operating at low voltages, pixelated electrochromic displays, as well as organic solar cells have been demonstrated. The method is extendable to produce defect-free patterns over large areas, as demonstrated by roll coating.

  8. Automation of Survey Data Processing, Documentation and Dissemination: An Application to Large-Scale Self-Reported Educational Survey.

    Science.gov (United States)

    Shim, Eunjae; Shim, Minsuk K.; Felner, Robert D.

    Automation of the survey process has proved successful in many industries, yet it is still underused in educational research. This is largely due to two facts: (1) number crunching is usually carried out using software that was developed before information technology existed, and (2) educational research is to a great extent trapped…

  9. Perceptions of the Slow Food Cultural Trend among the Youth

    OpenAIRE

    Lelia Voinea; Anca Atanase; Ion Schileru

    2016-01-01

    As they become increasingly aware of the importance of healthy eating and of the serious dietary imbalance caused by the overconsumption of industrial, ultra-processed and super-organoleptic food, consumers are now beginning to turn their attention to food choices that guarantee the health of both the individual and the environment. Thus, in recent years we are witnessing the rise of a cultural trend ‒ Slow Food. Slow Food has become an international movement that advocates for satisfying culinary pl...

  10. Large critical current density improvement in Bi-2212 wires through the groove-rolling process

    International Nuclear Information System (INIS)

    Malagoli, A; Bernini, C; Braccini, V; Romano, G; Putti, M; Chaud, X; Debray, F

    2013-01-01

    Recently there has been a growing interest in Bi-2212 superconductor round wire for high magnetic field use despite the fact that an increase of the critical current is still needed to boost its successful use in such applications. Recent studies have demonstrated that the main obstacle to current flow, especially in long wires, is the residual porosity inside these powder-in-tube processed conductors that develops from bubble agglomeration when the Bi-2212 melts. In this work we tried to overcome this issue affecting the wire densification by changing the deformation process. Here we show the effects of groove rolling versus the drawing process on the critical current density J C and on the microstructure. In particular, groove-rolled multifilamentary wires show a J C increased by a factor of about 3 with respect to drawn wires prepared with the same Bi-2212 powder and architecture. We think that this approach in the deformation process is able to produce the required improvements both because the superconducting properties are enhanced and because it makes the fabrication process faster and cheaper. (paper)

  11. E-health, phase two: the imperative to integrate process automation with communication automation for large clinical reference laboratories.

    Science.gov (United States)

    White, L; Terner, C

    2001-01-01

    The initial efforts of e-health have fallen far short of expectations. They were buoyed by the hype and excitement of the Internet craze but limited by their lack of understanding of important market and environmental factors. E-health now recognizes that legacy systems and processes are important, that there is a technology adoption process that needs to be followed, and that demonstrable value drives adoption. Initial e-health transaction solutions have targeted mostly low-cost problems. These solutions invariably are difficult to integrate into existing systems, typically requiring manual interfacing to supported processes. This limitation in particular makes them unworkable for large volume providers. To meet the needs of these providers, e-health companies must rethink their approaches, appropriately applying technology to seamlessly integrate all steps into existing business functions. E-automation is a transaction technology that automates steps, integration of steps, and information communication demands, resulting in comprehensive automation of entire business functions. We applied e-automation to create a billing management solution for clinical reference laboratories. Large volume, onerous regulations, small margins, and only indirect access to patients challenge large laboratories' billing departments. Couple these problems with outmoded, largely manual systems and it becomes apparent why most laboratory billing departments are in crisis. Our approach has been to focus on the most significant and costly problems in billing: errors, compliance, and system maintenance and management. The core of the design relies on conditional processing, a "universal" communications interface, and ASP technologies. The result is comprehensive automation of all routine processes, driving out errors and costs. Additionally, compliance management and billing system support and management costs are dramatically reduced. 
The implications of e-automated processes can extend

  12. A Primer to Slow Light

    OpenAIRE

    Leonhardt, U.

    2001-01-01

    Laboratory-based optical analogs of astronomical objects such as black holes rely on the creation of light with an extremely low or even vanishing group velocity (slow light). These brief notes represent a pedagogical attempt towards elucidating this extraordinary form of light. This paper is a contribution to the book Artificial Black Holes edited by Mario Novello, Matt Visser and Grigori Volovik. The paper is intended as a primer, an introduction to the subject for non-experts, not as a det...

  13. Capillary waves in slow motion

    International Nuclear Information System (INIS)

    Seydel, Tilo; Tolan, Metin; Press, Werner; Madsen, Anders; Gruebel, Gerhard

    2001-01-01

    Capillary wave dynamics on glycerol surfaces has been investigated by means of x-ray photon correlation spectroscopy performed at grazing angles. The measurements show that thermally activated capillary wave motion is slowed down exponentially when the sample is cooled below 273 K. This finding directly reflects the freezing of the surface waves. The wave-number dependence of the measured time constants is in quantitative agreement with theoretical predictions for overdamped capillary waves

  14. The fast slow TDPAC spectrometer

    International Nuclear Information System (INIS)

    Cekic, B.; Koicki, S.; Manasijevic, M.; Ivanovic, N.; Koteski, V.; Milosevic, Z.; Radisavljevic, I.; Cavor, J.; Novakovic, N.; Marjanovic, D.

    2001-01-01

A fast-slow time spectrometer with two BaF2 detectors for time differential perturbed angular correlation (TDPAC) experiments is described. This apparatus was developed by the Group for Hyperfine Interactions at the Institute for Nuclear Sciences in VINCA. The excellent time resolution combined with the high efficiency offered by these detectors enables high counting-rate performance, and the spectrometer operates over the wide temperature range 78-1200 K. (author)

  15. Hidden slow pulsars in binaries

    Science.gov (United States)

    Tavani, Marco; Brookshaw, Leigh

    1993-01-01

The recent discovery of the binary containing the slow pulsar PSR 1718-19 orbiting around a low-mass companion star sheds new light on the characteristics of binary pulsars. The properties of the radio eclipses of PSR 1718-19 are the most striking observational characteristics of this system. The surface of the companion star produces a mass outflow which leaves only a small 'window' in orbital phase for the detection of PSR 1718-19 around 400 MHz. At this observing frequency, PSR 1718-19 is clearly observable only for about 1 hr out of the total 6.2 hr orbital period. The aim of this Letter is twofold: (1) to model the hydrodynamical behavior of the eclipsing material from the companion star of PSR 1718-19 and (2) to argue that a population of binary slow pulsars might have escaped detection in pulsar surveys carried out at 400 MHz. The possible existence of a population of partially or totally hidden slow pulsars in binaries will have a strong impact on current theories of binary evolution of neutron stars.

  16. Prediction of process induced shape distortions and residual stresses in large fibre reinforced composite laminates

    DEFF Research Database (Denmark)

    Nielsen, Michael Wenani

    to their accuracy in predicting process induced strain and stress development in thick section laminates during curing, and more precisely regarding the evolution of the composite thermoset polymer matrix mechanical behaviour during the phase transitions experienced during curing. The different constitutive...

  17. Dissolving decision making? : Models and their roles in decision-making processes and policy at large

    NARCIS (Netherlands)

    Zeiss, Ragna; van Egmond, S.

    2014-01-01

    This article studies the roles three science-based models play in Dutch policy and decision making processes. Key is the interaction between model construction and environment. Their political and scientific environments form contexts that shape the roles of models in policy decision making.

  18. Index Compression and Efficient Query Processing in Large Web Search Engines

    Science.gov (United States)

    Ding, Shuai

    2013-01-01

    The inverted index is the main data structure used by all the major search engines. Search engines build an inverted index on their collection to speed up query processing. As the size of the web grows, the length of the inverted list structures, which can easily grow to hundreds of MBs or even GBs for common terms (roughly linear in the size of…
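A minimal sketch of the data structure described above (a generic illustration, not any particular engine's implementation): each term maps to a sorted postings list of document IDs, and a conjunctive query intersects the lists.

```python
from collections import defaultdict

# Toy document collection keyed by document ID.
docs = {
    0: "slow slip events on plate boundaries",
    1: "slow light in optical media",
    2: "plate boundaries and subduction",
}

# Build the inverted index: term -> sorted list of doc IDs containing it.
index = defaultdict(list)
for doc_id in sorted(docs):
    for term in set(docs[doc_id].split()):
        index[term].append(doc_id)   # postings stay sorted by doc_id

def query_and(*terms):
    """Answer a conjunctive (AND) query by intersecting postings lists."""
    postings = [set(index[t]) for t in terms]
    return sorted(set.intersection(*postings)) if postings else []

print(query_and("plate", "boundaries"))   # -> [0, 2]
```

Real engines compress these postings lists (the thrust of the thesis above), since for common terms they dominate the index size.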

  19. Validating the extract, transform, load process used to populate a large clinical research database.

    Science.gov (United States)

    Denney, Michael J; Long, Dustin M; Armistead, Matthew G; Anderson, Jamie L; Conway, Baqiyyah N

    2016-10-01

Informaticians at any institution that is developing clinical research support infrastructure are tasked with populating research databases with data extracted and transformed from their institution's operational databases, such as electronic health records (EHRs). These data must be properly extracted from the source systems, transformed into a standard data structure, and then loaded into the data warehouse while maintaining their integrity. We validated the correctness of the extract, transform, and load (ETL) process applied to the extracted data of West Virginia Clinical and Translational Science Institute's Integrated Data Repository, a clinical data warehouse that includes data extracted from two EHR systems. Four hundred ninety-eight observations were randomly selected from the integrated data repository and compared with the two source EHR systems. Of the 498 observations, there were 479 concordant and 19 discordant observations. The discordant observations fell into three general categories: a) design decision differences between the IDR and source EHRs, b) timing differences, and c) user interface settings. After resolving apparent discordances, our integrated data repository was found to be 100% accurate relative to its source EHR systems. Any institution that uses a clinical data warehouse populated via extraction from operational databases, such as EHRs, employs some form of an ETL process. As secondary use of EHR data begins to transform the research landscape, the importance of basic validation of the extracted EHR data cannot be overstated and should start with validation of the extraction process itself. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
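The sample-and-compare validation described above can be sketched as follows (all names and data are hypothetical, not the WVCTSI schema): draw a random sample of warehouse records and compare each against the source system, collecting discordant IDs for manual classification.

```python
import random

# Illustrative sketch of ETL validation by random sampling (hypothetical
# record layout, not any real EHR schema).
def validate_etl(warehouse, source, sample_size, seed=0):
    """Compare a random sample of warehouse rows to the source of truth.

    Returns (concordant, discordant) lists of record IDs.
    """
    rng = random.Random(seed)
    ids = rng.sample(sorted(warehouse), min(sample_size, len(warehouse)))
    concordant, discordant = [], []
    for rec_id in ids:
        if warehouse[rec_id] == source.get(rec_id):
            concordant.append(rec_id)
        else:
            discordant.append(rec_id)
    return concordant, discordant

# Toy data: one deliberately discordant observation (e.g. a timing difference).
source    = {i: {"glucose_mg_dl": 90 + i} for i in range(10)}
warehouse = dict(source)
warehouse[3] = {"glucose_mg_dl": 999}

ok, bad = validate_etl(warehouse, source, sample_size=10)
print(len(ok), bad)    # -> 9 [3]
```

The discordant IDs would then be reviewed by hand, as in the study, to decide whether each mismatch reflects a true ETL error or an explainable design difference.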

  20. Keeping a large-pupilled eye on high-level visual processing.

    Science.gov (United States)

    Binda, Paola; Murray, Scott O

    2015-01-01

    The pupillary light response has long been considered an elementary reflex. However, evidence now shows that it integrates information from such complex phenomena as attention, contextual processing, and imagery. These discoveries make pupillometry a promising tool for an entirely new application: the study of high-level vision. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Non-destructive measurement methods for large scale gaseous diffusion process equipment

    International Nuclear Information System (INIS)

    Mayer, R.L.; Hagenauer, R.C.; McGinnis, B.R.

    1994-01-01

Two measurement methods have been developed to non-destructively measure uranium hold-up in gaseous diffusion plants. These methods include passive neutron and passive γ-ray measurements. An additional method, high resolution γ-ray spectroscopy, provides supplementary information about additional γ-ray emitting isotopes, γ-ray correction factors, 235U/234U ratios and 235U enrichment. Many of these methods can be used as general purpose measurement techniques for large containers of uranium. Measurement applications for these methods include uranium hold-up, waste measurements, criticality safety and nuclear accountability.

  2. Investigation of Low-Cost Surface Processing Techniques for Large-Size Multicrystalline Silicon Solar Cells

    OpenAIRE

    Cheng, Yuang-Tung; Ho, Jyh-Jier; Lee, William J.; Tsai, Song-Yeu; Lu, Yung-An; Liou, Jia-Jhe; Chang, Shun-Hsyung; Wang, Kang L.

    2010-01-01

The subject of the present work is to develop a simple and effective method of enhancing conversion efficiency in large-size solar cells using multicrystalline silicon (mc-Si) wafers. In this work, industrial-type mc-Si solar cells with an area of 125×125 mm² were acid etched; POCl3 emitters were then formed and silicon nitride was deposited by plasma-enhanced chemical vapor deposition (PECVD). The study of surface morphology and reflectivity of different mc-Si etched surfaces has also been d...

3. High-energy, large-momentum-transfer processes: Ladder diagrams in φ^3 theory. Pt. 2

    International Nuclear Information System (INIS)

    Osland, P.; Wu, T.T.; Harvard Univ., Cambridge, MA

    1987-01-01

The scattering amplitude for the four-rung ladder diagram in φ^3 theory is evaluated at high energies and for large momentum transfers. The result takes the form of s^-1 |t|^-3 multiplied by a homogeneous sixth-order polynomial in ln s and ln|t|. The novel and unexpected feature is that this polynomial is different depending on whether ln|t| is larger or less than (1/2) ln s. Thus the asymptotic formula is not analytic at ln|t| = (1/2) ln s, although the first five derivatives are continuous. (orig.)

  4. Large deviation estimates for exceedance times of perpetuity sequences and their dual processes

    DEFF Research Database (Denmark)

    Buraczewski, Dariusz; Collamore, Jeffrey F.; Damek, Ewa

    2016-01-01

In a variety of problems in pure and applied probability, it is relevant to study the large exceedance probabilities of the perpetuity sequence $Y_n := B_1 + A_1 B_2 + \cdots + (A_1 \cdots A_{n-1}) B_n$, where $(A_i,B_i) \subset (0,\infty) \times \reals$. Estimates for the stationary tail distribution of $\{ Y_n \}$ have been developed in the seminal papers of Kesten (1973) and Goldie (1991). Specifically, it is well known that if $M := \sup_n Y_n$, then ${\mathbb P} \left\{ M > u \right\} \sim {\cal C}_M u^{-\xi}$ as $u \to \infty$. While much attention has been focused on extending ...-time exceedance probabilities of $\{ M_n^\ast \}$, yielding a new result concerning the convergence of $\{ M_n^\ast \}$ to its stationary distribution.
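The object under study can be made concrete with a Monte Carlo sketch (illustrative only, under assumed distributions for $(A_i, B_i)$ that satisfy the contractive condition $E[\log A] < 0$; this is not the paper's analysis):

```python
import random

# Simulate the perpetuity Y_n = B_1 + A_1 B_2 + ... + (A_1...A_{n-1}) B_n
# and estimate the exceedance probability P(M > u) for M = sup_n Y_n,
# truncating the supremum at n_max steps.
def simulate_max(n_max, rng):
    y = 0.0       # running value of Y_n
    prod = 1.0    # running product A_1 ... A_{n-1}
    m = 0.0
    for _ in range(n_max):
        a = rng.lognormvariate(-0.5, 0.5)   # E[log A] = -0.5 < 0: contractive
        b = rng.random()                    # B_i uniform on (0, 1)
        y += prod * b
        prod *= a
        m = max(m, y)
    return m

rng = random.Random(42)
samples = [simulate_max(200, rng) for _ in range(2000)]
u = 5.0
tail = sum(m > u for m in samples) / len(samples)
print(f"estimated P(M > {u}) = {tail:.3f}")
```

With these lognormal multipliers, the Kesten-Goldie exponent solves $E[A^\xi] = 1$ (here $\xi = 4$), so the empirical tail should decay roughly like $u^{-4}$.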

  5. Large-scale gas dynamical processes affecting the origin and evolution of gaseous galactic halos

    Science.gov (United States)

    Shapiro, Paul R.

    1991-01-01

Observations of galactic halo gas are consistent with an interpretation in terms of the galactic fountain model, in which supernova-heated gas in the galactic disk escapes into the halo, radiatively cools and forms clouds which fall back to the disk. The results of a new study of several large-scale gas dynamical effects which are expected to occur in such a model for the origin and evolution of galactic halo gas will be summarized, including the following: (1) nonequilibrium absorption line and emission spectrum diagnostics for radiatively cooling halo gas in our own galaxy, as well as the implications of such absorption line diagnostics for the origin of quasar absorption lines in galactic halo clouds of high redshift galaxies; (2) numerical MHD simulations and analytical analysis of large-scale explosions and superbubbles in the galactic disk and halo; (3) numerical MHD simulations of halo cloud formation by thermal instability, with and without magnetic field; and (4) the effect of the galactic fountain on the galactic dynamo.

  6. Compact PCI/Linux platform in FTU slow control system

    International Nuclear Information System (INIS)

    Iannone, F.; Centioli, C.; Panella, M.; Mazza, G.; Vitale, V.; Wang, L.

    2004-01-01

In large fusion experiments, such as tokamak devices, there is a common trend for slow control systems. Because of the complexity of the plants, the so-called 'Standard Model' (SM) of slow control has been adopted on several tokamak machines. This model is based on a three-level hierarchical control: 1) High-Level Control (HLC) with a supervisory function; 2) Medium-Level Control (MLC) to interface and concentrate I/O field equipment; 3) Low-Level Control (LLC) with hard real-time I/O functions, often managed by PLCs. The FTU (Frascati Tokamak Upgrade) control system, designed with SM concepts, has undergone several stages of development over its fifteen years of operation. The latest evolution was inevitable, due to the obsolescence of the MLC CPUs, based on VME MOTOROLA 68030 boards with the OS9 operating system. A large amount of C code was developed for that platform to route the data flow from the LLC, which consists of 24 Westinghouse Numalogic PC-700 PLCs with about 8000 field-points, to the HLC, based on a commercial object-oriented real-time database on an Alpha/Compaq Tru64 platform. The authors therefore looked for cost-effective solutions, and finally a CompactPCI Intel x86 platform with the Linux operating system was chosen. A software port has been done, taking into account the differences between the OS9 and Linux operating systems in terms of inter-process/network communications and the multi-port serial I/O driver. This paper describes the hardware/software architecture of the new MLC system, emphasizing the reliability and low cost of the open source solutions. Moreover, the huge number of software packages available in the open source environment will assure less painful maintenance and will open the way to further improvements of the system itself. (authors)

  7. Large area SiC coating technology of RBSC for semiconductor processing component

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ji Yeon; Kim, Weon Ju

    2001-06-01

As the semiconductor process is developed for larger area wafers and larger-scale integration, the processing fixtures are required to have excellent mechanical and high temperature properties. This highlights the importance of silicon carbide-based materials as a substitute for quartz-based susceptors. In this study, SiC coating technology on reaction sintered (RS) SiC, with a thickness variation of ±10% within a diameter of 8 inches, by low pressure chemical vapor deposition has been developed for making plate type SiC fixtures, such as heaters and baffles, with a diameter of 12 inches. Additionally, the state of the art of fabrication technology and products for current commercial SiC fixtures is described.

  8. Large area SiC coating technology of RBSC for semiconductor processing component

    International Nuclear Information System (INIS)

    Park, Ji Yeon; Kim, Weon Ju

    2001-06-01

As the semiconductor process is developed for larger area wafers and larger-scale integration, the processing fixtures are required to have excellent mechanical and high temperature properties. This highlights the importance of silicon carbide-based materials as a substitute for quartz-based susceptors. In this study, SiC coating technology on reaction sintered (RS) SiC, with a thickness variation of ±10% within a diameter of 8 inches, by low pressure chemical vapor deposition has been developed for making plate type SiC fixtures, such as heaters and baffles, with a diameter of 12 inches. Additionally, the state of the art of fabrication technology and products for current commercial SiC fixtures is described.

  9. Distributed Random Process for a Large-Scale Peer-to-Peer Lottery

    OpenAIRE

    Grumbach, Stéphane; Riemann, Robert

    2017-01-01

Most online lotteries today fail to ensure the verifiability of the random process and rely on a trusted third party. This issue has received little attention since the emergence of distributed protocols like Bitcoin that demonstrated the potential of protocols with no trusted third party. We argue that the security requirements of online lotteries are similar to those of online voting, and propose a novel distributed online lottery protocol that applies techniques dev...
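One standard building block for removing the trusted third party is a commit-reveal scheme; the sketch below illustrates the idea (a generic construction, not necessarily the protocol proposed in the paper):

```python
import hashlib
import secrets

# Commit-reveal randomness sketch: each participant binds itself to a secret
# contribution before any contribution is revealed, so no one can bias the
# final draw after seeing the others' values.
def commit(value: bytes, nonce: bytes) -> str:
    return hashlib.sha256(nonce + value).hexdigest()

# 1. Each participant picks a secret contribution and publishes a commitment.
participants = []
for _ in range(3):
    value, nonce = secrets.token_bytes(16), secrets.token_bytes(16)
    participants.append((value, nonce, commit(value, nonce)))

# 2. After all commitments are public, everyone reveals; peers verify that
#    each revealed (value, nonce) matches the earlier commitment.
assert all(commit(v, n) == c for v, n, c in participants)

# 3. The winning index combines every contribution.
combined = hashlib.sha256(b"".join(v for v, _, _ in participants)).digest()
winner = int.from_bytes(combined, "big") % len(participants)
print("winner index:", winner)
```

Production protocols must additionally handle participants who refuse to reveal (e.g. via deposits or verifiable secret sharing), which is where the analogy to online voting bites.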

  10. Perspectives of intellectual processing of large volumes of astronomical data using neural networks

    Science.gov (United States)

    Gorbunov, A. A.; Isaev, E. A.; Samodurov, V. A.

    2018-01-01

In the process of astronomical observations, vast amounts of data are collected. The BSA (Big Scanning Antenna) of LPI, used in the study of impulse phenomena, logs 87.5 GB of data daily (32 TB per year). These data are important for both short- and long-term monitoring of various classes of radio sources (including radio transients of different natures), monitoring of the Earth's ionosphere and the interplanetary and interstellar plasma, and the search for and monitoring of different classes of radio sources. Within these studies, 83096 individual pulse events were discovered (in the study interval July 2012 - October 2013), which may correspond to pulsars, scintillating sources, and fast radio transients. The detected impulse events are intended to be used to filter subsequent observations. The study suggests an approach based on a multilayered artificial neural network: the network processes the raw input data and, after processing by the hidden layer, the output layer produces a class for each impulsive phenomenon.
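The proposed architecture can be sketched as follows (a toy illustration with synthetic data, not the BSA processing code; the pulse model, network sizes, and training settings are all assumptions):

```python
import numpy as np

# Toy pulse/no-pulse classifier: class 1 traces carry a pulse of amplitude 3
# across four adjacent channels; class 0 traces are pure noise.
rng = np.random.default_rng(0)
n, width = 400, 16
X = rng.normal(size=(n, width))
y = np.zeros(n)
y[: n // 2] = 1.0
X[: n // 2, 0:4] += 3.0                      # inject the pulse

# One hidden layer: 16 inputs -> 8 tanh units -> 1 sigmoid output.
W1 = rng.normal(0, 0.2, (width, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.2, (8, 1));     b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, p.ravel()

lr = 0.5
for _ in range(500):                         # full-batch gradient descent
    h, p = forward(X)
    g_out = ((p - y) / n)[:, None]           # d(mean BCE)/d(output logit)
    g_hid = (g_out @ W2.T) * (1.0 - h**2)    # backprop through tanh
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(0)
    W1 -= lr * (X.T @ g_hid); b1 -= lr * g_hid.sum(0)

acc = ((forward(X)[1] > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

A real pipeline would of course train on labelled events from the catalog, hold out a validation set, and use more output classes (pulsar, scintillating source, fast transient), but the forward/backward structure is the same.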

  11. Assessing the stretch-blow moulding FE simulation of PET over a large process window

    Science.gov (United States)

    Nixon, J.; Menary, G. H.; Yan, S.

    2017-10-01

    Injection stretch blow moulding has been extensively researched for numerous years and is a well-established method of forming thin-walled containers. This paper is concerned with validating the finite element analysis of the stretch-blow-moulding (SBM) process in an effort to progress the development of injection stretch blow moulding of poly(ethylene terephthalate). Extensive data was obtained experimentally over a wide process window accounting for material temperature, air flow rate and stretch-rod speed while capturing cavity pressure, stretch-rod reaction force, in-mould contact timing and material thickness distribution. This data was then used to assess the accuracy of the correlating FE simulation constructed using ABAQUS/Explicit solver and an appropriate user-defined viscoelastic material subroutine. Results reveal that the simulation was able to pick up the general trends of how the pressure, reaction force and in-mould contact timings vary with the variation in preform temperature and air flow rate. Trends in material thickness were also accurately predicted over the length of the bottle relative to the process conditions. The knowledge gained from these analyses provides insight into the mechanisms of bottle formation, subsequently improving the blow moulding simulation and potentially providing a reduction in production costs.

  12. Combining Vertex-centric Graph Processing with SPARQL for Large-scale RDF Data Analytics

    KAUST Repository

    Abdelaziz, Ibrahim

    2017-06-27

Modern applications, such as drug repositioning, require sophisticated analytics on RDF graphs that combine structural queries with generic graph computations. Existing systems support either declarative SPARQL queries or generic graph processing, but not both. We bridge the gap by introducing Spartex, a versatile framework for complex RDF analytics. Spartex extends SPARQL to support programs that seamlessly combine generic graph algorithms (e.g., PageRank, Shortest Paths, etc.) with SPARQL queries. Spartex builds on existing vertex-centric graph processing frameworks, such as GraphLab or Pregel. It implements a generic SPARQL operator as a vertex-centric program that interprets SPARQL queries and executes them efficiently using a built-in optimizer. In addition, any graph algorithm implemented in the underlying vertex-centric framework can be executed in Spartex. We present various scenarios where our framework significantly simplifies the implementation of complex RDF data analytics programs. We demonstrate that Spartex scales to datasets with billions of edges, and show that our core SPARQL engine is at least as fast as the state-of-the-art specialized RDF engines. For complex analytical tasks that combine generic graph processing with SPARQL, Spartex is at least an order of magnitude faster than existing alternatives.
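For readers unfamiliar with the vertex-centric model Spartex builds on, here is a minimal Pregel-style PageRank in plain Python (an illustration of the programming model, not Spartex's actual API):

```python
# Vertex-centric (Pregel-style) PageRank sketch: in each superstep, every
# vertex sends rank/out_degree to its neighbours, then updates its own rank
# from the messages it received.
def pagerank_vertex_centric(adj, iters=30, d=0.85):
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        inbox = {v: 0.0 for v in adj}        # messages received this superstep
        for v, nbrs in adj.items():
            share = rank[v] / len(nbrs) if nbrs else 0.0
            for u in nbrs:
                inbox[u] += share
        rank = {v: (1 - d) / n + d * inbox[v] for v in adj}
    return rank

graph = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank_vertex_centric(graph)
print(max(ranks, key=ranks.get))   # "a" collects the most rank
```

Frameworks like Pregel or GraphLab distribute exactly this send/receive/update loop across machines, which is what lets Spartex interleave such algorithms with SPARQL query evaluation.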

  13. High-Temperature-Short-Time Annealing Process for High-Performance Large-Area Perovskite Solar Cells.

    Science.gov (United States)

    Kim, Minjin; Kim, Gi-Hwan; Oh, Kyoung Suk; Jo, Yimhyun; Yoon, Hyun; Kim, Ka-Hyun; Lee, Heon; Kim, Jin Young; Kim, Dong Suk

    2017-06-27

Organic-inorganic hybrid metal halide perovskite solar cells (PSCs) are attracting tremendous research interest due to their high solar-to-electric power conversion efficiency and the high possibility of cost-effective fabrication, with certified power conversion efficiency now exceeding 22%. Although many effective methods for their application have been developed over the past decade, their practical transition to large-size devices has been restricted by difficulties in achieving high performance. Here we report on the development of a simple and cost-effective production method with high-temperature and short-time annealing processing to obtain uniform, smooth, and large-size grain domains of perovskite films over large areas. With high-temperature short-time annealing at 400 °C for 4 s, which resulted in fast solvent evaporation, perovskite films with an average domain size of 1 μm were obtained. Solar cells fabricated using this processing technique had a maximum power conversion efficiency exceeding 20% over a 0.1 cm² active area and 18% over a 1 cm² active area. We believe our approach will enable the realization of highly efficient large-area PSCs for practical development with a very simple and short-time procedure. This simple method should lead the field toward the fabrication of uniform large-scale perovskite films, which are necessary for the production of high-efficiency solar cells and may also be applicable to several other material systems for more widespread practical deployment.

  14. Large-Scale Reactive Atomistic Simulation of Shock-induced Initiation Processes in Energetic Materials

    Science.gov (United States)

    Thompson, Aidan

    2013-06-01

    Initiation in energetic materials is fundamentally dependent on the interaction between a host of complex chemical and mechanical processes, occurring on scales ranging from intramolecular vibrations through molecular crystal plasticity up to hydrodynamic phenomena at the mesoscale. A variety of methods (e.g. quantum electronic structure methods (QM), non-reactive classical molecular dynamics (MD), mesoscopic continuum mechanics) exist to study processes occurring on each of these scales in isolation, but cannot describe how these processes interact with each other. In contrast, the ReaxFF reactive force field, implemented in the LAMMPS parallel MD code, allows us to routinely perform multimillion-atom reactive MD simulations of shock-induced initiation in a variety of energetic materials. This is done either by explicitly driving a shock-wave through the structure (NEMD) or by imposing thermodynamic constraints on the collective dynamics of the simulation cell e.g. using the Multiscale Shock Technique (MSST). These MD simulations allow us to directly observe how energy is transferred from the shockwave into other processes, including intramolecular vibrational modes, plastic deformation of the crystal, and hydrodynamic jetting at interfaces. These processes in turn cause thermal excitation of chemical bonds leading to initial chemical reactions, and ultimately to exothermic formation of product species. Results will be presented on the application of this approach to several important energetic materials, including pentaerythritol tetranitrate (PETN) and ammonium nitrate/fuel oil (ANFO). In both cases, we validate the ReaxFF parameterizations against QM and experimental data. For PETN, we observe initiation occurring via different chemical pathways, depending on the shock direction. For PETN containing spherical voids, we observe enhanced sensitivity due to jetting, void collapse, and hotspot formation, with sensitivity increasing with void size. For ANFO, we

  15. Low-impedance internal linear inductive antenna for large-area flat panel display plasma processing

    International Nuclear Information System (INIS)

    Kim, K.N.; Jung, S.J.; Lee, Y.J.; Yeom, G.Y.; Lee, S.H.; Lee, J.K.

    2005-01-01

An internal-type linear inductive antenna, that is, a double-comb-type antenna, was developed for a large-area plasma source having the size of 1020 mm × 830 mm, and high density plasmas on the order of 2.3×10^11 cm^-3 were obtained with 15 mTorr Ar at 5000 W of inductive power with good plasma stability. This is higher than that for the conventional serpentine-type antenna, possibly due to the low impedance, resulting in high efficiency of power transfer for the double-comb antenna type. In addition, due to the remarkable reduction of the antenna length, a plasma uniformity of less than 8% was obtained within the substrate area of 880 mm × 660 mm at 5000 W without having a standing-wave effect.

  16. Processing, microstructure, and mechanical properties of large-grained zirconium diboride ceramics

    Energy Technology Data Exchange (ETDEWEB)

    Neuman, Eric W.; Hilmas, Gregory E., E-mail: ghilmas@mst.edu; Fahrenholtz, William G.

    2016-07-18

    Zirconium diboride ceramics produced using commercial ZrB{sub 2} powders, and milled with zirconium diboride grinding media, were fabricated by hot-pressing at temperatures of 2100–2200 °C with hold times of 30–120 min. This ZrB{sub 2} exhibits no additional impurities typically introduced by milling with grinding media of differing composition. Microstructure analysis revealed grain sizes ranging from ~25 to ~50 µm along with ~3 vol% porosity. Flexure strength ranged from 335 to 400 MPa, elastic modulus from 490 to 510 GPa, fracture toughness from 2.7 to 3.2 MPa m{sup ½}, and hardness from 13.0 to 14.4 GPa. Strength limiting flaws were identified as surface grain pullout induced by machining. Elastic modulus and hardness were found to increase with decreasing porosity. Compared to the fine grained ceramics typically reported, large grain zirconium diboride ceramics exhibit higher than expected room temperature strengths.

  17. Process for producing curved surface of membrane rings for large containers, particulary for prestressed concrete pressure vessels of nuclear reactors

    International Nuclear Information System (INIS)

    Kumpf, H.

    1977-01-01

Membrane rings for large pressure vessels, particularly for prestressed-concrete pressure vessels, often have curved surfaces. The invention describes a process for producing these on site, which is particularly advantageous as the forming and installation of the vessel component coincide. According to the invention, the originally flat membrane ring is set in a predetermined position, then pressed in sections by a forming tool (with a preformed support ring as the opposite tool) and shaped. After this, the shaped parts are welded to the ring-shaped wall parts of the large vessel. The manufacture of single and double membrane ring arrangements is described. (HP) [de]

18. Operational experience with large-scale biogas production at the Promest manure processing plant in Helmond, the Netherlands

    International Nuclear Information System (INIS)

    Schomaker, A.H.H.M.

    1992-01-01

In The Netherlands a surplus of 15 million tons of liquid pig manure is produced yearly on intensive pig breeding farms. The Dutch government has set a three-way policy to reduce this manure excess: 1. conversion of animal fodder into a product with fewer and more digestible nutrients; 2. distribution of the surplus to regions with a shortage of animal manure; 3. processing of the remainder of the surplus in large scale processing plants. The first large scale plant for the processing of liquid pig manure was put into operation in 1988 as a demonstration plant at Promest in Helmond. The design capacity of this plant is 100,000 tons of pig manure per year. The plant was initiated by the Manure Steering Committee of the province of Noord-Brabant in order to prove at short notice whether large scale manure processing might contribute to the solution of the manure surplus problem in The Netherlands. This steering committee is a cooperative venture of the national and provincial governments and the agricultural industry. (au)

  19. Printing Outside the Box: Additive Manufacturing Processes for Fabrication of Large Aerospace Structures

    Science.gov (United States)

    Babai, Majid; Peters, Warren

    2015-01-01

To achieve NASA's mission of space exploration, innovative manufacturing processes are being applied to the fabrication of propulsion elements. Liquid rocket engines (LREs) consist of a thrust chamber and nozzle extension, as illustrated in figure 1 for the J2X upper stage engine. Development of the J2X engine, designed for the Ares I launch vehicle, is currently being incorporated into the Space Launch System. A nozzle extension is attached to the combustion chamber to obtain the expansion ratio needed to increase specific impulse. If the nozzle extension could be printed as one piece using free-form additive manufacturing (AM) processes, rather than the current method of forming welded parts, a considerable time savings could be realized. Not only would this provide a more homogeneous microstructure than a welded structure, but it could also greatly shorten the overall fabrication time. The main objective of this study is to fabricate test specimens using a pulsed arc source and solid wire, as shown in figure 2. The mechanical properties of these specimens will be compared with those fabricated using the powder bed, selective laser melting technology at NASA Marshall Space Flight Center. As printed components become larger, maintaining a constant temperature during the build process becomes critical. This will require a predictive capability based on modeling of the moving heat source, as illustrated in figure 3. Predictive understanding of the heat profile will allow a constant temperature to be maintained as a function of height from the substrate while printing complex shapes. In addition, to avoid slumping, this will also allow better control of the microstructural development and hence the properties. Figure 4 shows a preliminary comparison of the mechanical properties obtained.
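A common starting point for the moving-heat-source modelling mentioned above is the quasi-steady Rosenthal point-source solution (a textbook result, quoted for orientation rather than taken from this work):

```latex
T(x,y,z) - T_0 \;=\; \frac{Q}{2\pi k R}\,
\exp\!\left(-\frac{v\,(\xi + R)}{2\alpha}\right),
\qquad \xi = x - vt,\quad R = \sqrt{\xi^2 + y^2 + z^2},
```

where $Q$ is the absorbed power, $k$ the thermal conductivity, $\alpha$ the thermal diffusivity, and $v$ the traverse speed of the source. Finite element models refine this idealization with temperature-dependent properties and a distributed heat input, but the exponential asymmetry between the leading and trailing edges of the melt pool is already visible here.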

  20. Asymptotics of the spectral gap for the interchange process on large hypercubes

    International Nuclear Information System (INIS)

    Starr, Shannon; Conomos, Matthew P

    2011-01-01

    We consider the interchange process (IP) on the d-dimensional, discrete hypercube of side-length n. Specifically, we compare the spectral gap of the IP to the spectral gap of the random walk (RW) on the same graph. We prove that the two spectral gaps are asymptotically equivalent, in the limit n→∞. This result gives further supporting evidence for a conjecture of Aldous, that the spectral gap of the IP equals the spectral gap of the RW on all finite graphs. Our proof is based on an argument invented by Handjani and Jungreis, who proved Aldous's conjecture for all trees
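Aldous's conjecture mentioned above is easy to check numerically on a tiny example. The sketch below (illustrative, not from the paper) compares the spectral gaps of the random walk and interchange process generators on the 3-vertex path, a tree, for which the Handjani-Jungreis result already guarantees equality:

```python
import itertools
import numpy as np

edges = [(0, 1), (1, 2)]          # path graph on 3 vertices (a tree)
n = 3

# Random walk generator: the particle crosses each edge at rate 1.
Q = np.zeros((n, n))
for u, v in edges:
    Q[u, v] = Q[v, u] = 1.0
np.fill_diagonal(Q, -Q.sum(axis=1))
rw_gap = sorted(np.linalg.eigvalsh(-Q))[1]   # smallest nonzero eigenvalue

# Interchange process generator: states are permutations of the labels;
# each edge swaps the labels at its endpoints at rate 1.
states = list(itertools.permutations(range(n)))
index = {s: i for i, s in enumerate(states)}
L = np.zeros((len(states), len(states)))
for s in states:
    for u, v in edges:
        t = list(s)
        t[u], t[v] = t[v], t[u]
        L[index[s], index[tuple(t)]] = 1.0
np.fill_diagonal(L, -L.sum(axis=1))
ip_gap = sorted(np.linalg.eigvalsh(-L))[1]

print(rw_gap, ip_gap)   # both gaps equal 1 on this graph
```

The paper's contribution is the asymptotic version of this equality on hypercubes as the side-length grows, where the IP state space is far too large to diagonalize directly.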

  1. Large-scale environmental effects and ecological processes in the Baltic Sea

    International Nuclear Information System (INIS)

    Wulff, F.

    1990-11-01

    A Swedish research programme concerning the Baltic Sea is initiated by the SNV to produce budgets and models of eutrophying substances (nitrogen, phosphorus, silicate, some organic substances) and toxic substances (PCB, lindane and PAH). A description of the distribution and turnover of these substances including their transformation will be necessary in the evaluation of critical processes controlling concentrations in relation to external load. A geographical information system will be made available as a database and analytical tool for all participants (BED, Baltic Ecosystem Data). This project is designed around cooperation between the Baltic Sea countries. (au)

  2. Low cost silicon solar array project large area silicon sheet task: Silicon web process development

    Science.gov (United States)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.

    1977-01-01

    Growth configurations were developed which produced crystals having low residual stress levels. The properties of a 106 mm diameter round crucible were evaluated and it was found that this design had greatly enhanced temperature fluctuations arising from convection in the melt. Thermal modeling efforts were directed to developing finite element models of the 106 mm round crucible and an elongated susceptor/crucible configuration. Also, the thermal model for the heat loss modes from the dendritic web was examined for guidance in reducing the thermal stress in the web. An economic analysis was prepared to evaluate the silicon web process in relation to price goals.

  3. Large-Scale Sentinel-1 Processing for Solid Earth Science and Urgent Response using Cloud Computing and Machine Learning

    Science.gov (United States)

    Hua, H.; Owen, S. E.; Yun, S. H.; Agram, P. S.; Manipon, G.; Starch, M.; Sacco, G. F.; Bue, B. D.; Dang, L. B.; Linick, J. P.; Malarout, N.; Rosen, P. A.; Fielding, E. J.; Lundgren, P.; Moore, A. W.; Liu, Z.; Farr, T.; Webb, F.; Simons, M.; Gurrola, E. M.

    2017-12-01

    With the increased availability of open SAR data (e.g. Sentinel-1 A/B), new challenges are being faced in processing and analyzing the voluminous SAR datasets to make geodetic measurements. Upcoming SAR missions such as NISAR are expected to generate close to 100TB per day. The Advanced Rapid Imaging and Analysis (ARIA) project can now generate geocoded unwrapped phase and coherence products from Sentinel-1 TOPS mode data in an automated fashion, using the ISCE software. This capability is currently being exercised on various study sites across the United States and around the globe, including Hawaii, Central California, Iceland and South America. The automated and large-scale SAR data processing and analysis capabilities use cloud computing techniques to speed up the computations and provide scalable processing power and storage. Aspects being explored include how to process these voluminous SLCs and interferograms at global scale, how to keep up with the large daily SAR data volumes, and how to handle the high data rates. Scene-partitioning approaches in the processing pipeline help in handling global-scale processing up to unwrapped interferograms, with stitching done at a late stage. We have built an advanced science data system with rapid search functions to enable access to the derived data products. Rapid image processing of Sentinel-1 data to interferograms and time series is already being applied to natural hazards including earthquakes, floods, volcanic eruptions, and land subsidence due to fluid withdrawal. We will present the status of the ARIA science data system for generating science-ready data products, and the challenges that arise in processing SAR datasets to derived time-series data products at large scales. For example, how do we perform large-scale data quality screening on interferograms? What approaches can be used to minimize compute, storage, and data movement costs for time series analysis in the cloud? We will also

  4. Computational Modelling of Large Scale Phage Production Using a Two-Stage Batch Process

    Directory of Open Access Journals (Sweden)

    Konrad Krysiak-Baltyn

    2018-04-01

    Cost effective and scalable methods for phage production are required to meet an increasing demand for phage, as an alternative to antibiotics. Computational models can assist the optimization of such production processes. A model is developed here that can simulate the dynamics of phage population growth and production in a two-stage, self-cycling process. The model incorporates variable infection parameters as a function of bacterial growth rate and employs ordinary differential equations, allowing application to a setup with multiple reactors. The model provides simple cost estimates as a function of key operational parameters including substrate concentration, feed volume and cycling times. For the phage and bacteria pairing examined, costs and productivity varied by three orders of magnitude, with the lowest cost found to be most sensitive to the influent substrate concentration and low level setting in the first vessel. An example case study of phage production is also presented, showing how parameter values affect the production costs and estimating production times. The approach presented is flexible and can be used to optimize phage production at laboratory or factory scale by minimizing costs or maximizing productivity.
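The record above describes coupled ordinary differential equations for substrate, host bacteria, infected cells, and free phage, with infection parameters tied to the bacterial growth rate. As a rough illustration of that structure only (this is not the authors' model; every parameter name and value below is invented for the sketch), a single-vessel batch cycle can be integrated with forward Euler:

```python
def phage_batch(S0=5.0, X0=0.1, P0=1e-3, hours=10.0, dt=1e-3):
    """Euler integration of a minimal batch phage-infection model.

    S: substrate (g/L), X: uninfected bacteria, I: infected bacteria,
    P: free phage (arbitrary units). All parameters are illustrative
    placeholders, not values from the cited study.
    """
    mu_max, Ks, Y = 0.7, 0.5, 0.5        # Monod growth parameters
    k_ads, tau, burst = 1e-2, 0.5, 100   # adsorption rate, latent period, burst size
    S, X, I, P = S0, X0, 0.0, P0
    for _ in range(int(hours / dt)):
        mu = mu_max * S / (Ks + S)       # substrate-dependent growth rate
        infect = k_ads * X * P           # adsorption converts hosts to infected cells
        lyse = I / tau                   # infected cells lyse after ~tau hours
        S = max(S - (mu * X / Y) * dt, 0.0)
        X += (mu * X - infect) * dt
        I += (infect - lyse) * dt
        P += (burst * lyse - infect) * dt
    return S, X, I, P
```

In a two-stage self-cycling setup, the end state of the host-growth vessel would seed the infection vessel at each cycle; the sketch covers only one vessel and one cycle.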

  5. Ocean acidification induces biochemical and morphological changes in the calcification process of large benthic foraminifera.

    Science.gov (United States)

    Prazeres, Martina; Uthicke, Sven; Pandolfi, John M

    2015-03-22

    Large benthic foraminifera are significant contributors to sediment formation on coral reefs, yet they are vulnerable to ocean acidification. Here, we assessed the biochemical and morphological impacts of acidification on the calcification of Amphistegina lessonii and Marginopora vertebralis exposed to different pH conditions. We measured growth rates (surface area and buoyant weight) and Ca-ATPase and Mg-ATPase activities and calculated shell density using micro-computed tomography images. In A. lessonii, we detected a significant decrease in buoyant weight, a reduction in the density of inner skeletal chambers, and an increase of Ca-ATPase and Mg-ATPase activities at pH 7.6 when compared with ambient conditions of pH 8.1. By contrast, M. vertebralis showed an inhibition in Mg-ATPase activity under lowered pH, with growth rate and skeletal density remaining constant. While M. vertebralis is considered to be more sensitive than A. lessonii owing to its high-Mg-calcite skeleton, it appears to be less affected by changes in pH, based on the parameters assessed in this study. We suggest difference in biochemical pathways of calcification as the main factor influencing response to changes in pH levels, and that A. lessonii and M. vertebralis have the ability to regulate biochemical functions to cope with short-term increases in acidity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  6. Large-eddy simulations of mechanical and thermal processes within boundary layer of the Graciosa Island

    Science.gov (United States)

    Sever, G.; Collis, S. M.; Ghate, V. P.

    2017-12-01

    Three-dimensional numerical experiments are performed to explore the mechanical and thermal impacts of Graciosa Island on the sampling of oceanic airflow and cloud evolution. Ideal and real configurations of flow and terrain are simulated at high, large-eddy-resolving resolution. The simulations show cold-pool formation upstream of an ideal two-kilometer island, with von Kármán-like vortices propagating downstream. Although the peak height of Graciosa is less than half a kilometer, the Azores island chain has a mountain over 2 km, which may lead to more complex flow patterns when simulations are extended to a larger domain. Preliminary idealized low-resolution moist simulations indicate that the cloud field is impacted by the presence of the island. Longer simulations performed to capture the diurnal evolution of the island boundary layer show distinct land/sea-breeze formation under quiescent flow conditions. Further numerical experiments are planned to extend the moist simulations to include realistic atmospheric profiles and observations of surface fluxes coupled with radiative effects. This work is intended to produce a useful simulation framework, coupled with instruments, to guide airborne and ground sampling strategies during the ACE-ENA field campaign, which aims to better characterize marine boundary layer clouds.

  7. Mizan: A system for dynamic load balancing in large-scale graph processing

    KAUST Repository

    Khayyat, Zuhair

    2013-01-01

    Pregel [23] was recently introduced as a scalable graph mining system that can provide significant performance improvements over traditional MapReduce implementations. Existing implementations focus primarily on graph partitioning as a preprocessing step to balance computation across compute nodes. In this paper, we examine the runtime characteristics of a Pregel system. We show that graph partitioning alone is insufficient for minimizing end-to-end computation. Especially where data is very large or the runtime behavior of the algorithm is unknown, an adaptive approach is needed. To this end, we introduce Mizan, a Pregel system that achieves efficient load balancing to better adapt to changes in computing needs. Unlike known implementations of Pregel, Mizan does not assume any a priori knowledge of the structure of the graph or behavior of the algorithm. Instead, it monitors the runtime characteristics of the system. Mizan then performs efficient fine-grained vertex migration to balance computation and communication. We have fully implemented Mizan; using extensive evaluation we show that - especially for highly-dynamic workloads - Mizan provides up to 84% improvement over techniques leveraging static graph pre-partitioning. © 2013 ACM.
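Mizan's core idea, monitoring runtime load and migrating vertices to rebalance it, can be caricatured in a few lines. The toy below is not Mizan's actual algorithm (it ignores communication cost and migration overhead, and treats load as divisible units); it only conveys the greedy move-from-busiest-to-idlest intuition behind fine-grained migration:

```python
def rebalance(loads, tol=1.05):
    """Greedy sketch of migration-based load balancing: repeatedly move one
    unit of work from the busiest worker to the idlest one until the busiest
    is within `tol` of the mean load. Real Mizan migrates individual vertices
    chosen from monitored compute/communication statistics."""
    loads = list(loads)
    mean = sum(loads) / len(loads)
    moves = 0
    while max(loads) > tol * mean:
        src = loads.index(max(loads))   # most overloaded worker
        dst = loads.index(min(loads))   # most underloaded worker
        loads[src] -= 1
        loads[dst] += 1
        moves += 1
    return loads, moves
```

For example, `rebalance([10, 2, 3, 1])` equalizes four workers in six unit moves; in a Pregel system each "move" would be a vertex migration between supersteps.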

  8. Investigation of Low-Cost Surface Processing Techniques for Large-Size Multicrystalline Silicon Solar Cells

    Directory of Open Access Journals (Sweden)

    Yuang-Tung Cheng

    2010-01-01

    The subject of the present work is to develop a simple and effective method of enhancing conversion efficiency in large-size solar cells using multicrystalline silicon (mc-Si) wafers. In this work, industrial-type mc-Si solar cells with an area of 125×125 mm² were acid etched, followed by POCl3 emitter formation and silicon nitride deposition by plasma-enhanced chemical vapor deposition (PECVD). The surface morphology and reflectivity of the different mc-Si etched surfaces are also discussed. Using our optimal acid-etching solution ratio, we are able to fabricate mc-Si solar cells of 16.34% conversion efficiency with a double-layer silicon nitride (Si3N4) coating. From our experiments, we find that depositing a double-layer silicon nitride coating on mc-Si solar cells gives the best performance parameters: open-circuit voltage (Voc) is 616 mV, short-circuit current density (Jsc) is 34.1 mA/cm², and the minority carrier diffusion length is 474.16 μm. The isotropic texturing and silicon nitride coating approach contributes to lowering cost and achieving high efficiency in mass production.

  9. The transverse momentum of partons in large pT processes

    International Nuclear Information System (INIS)

    Chase, M.K.

    1977-11-01

    An approximate method is used to investigate the effects of parton transverse momentum in large-pT particle production within the framework of hard scattering models. An approximate expression is derived for the mean bias towards the trigger of each of the two participating partons, and it is found that event by event one of the partons is biased more than the other, even with a 90° trigger. The transverse momentum of partons and their closely related off-mass-shell behaviour are treated as a perturbation in the equation for the single-particle inclusive cross-section, which is then expanded in a Taylor series. The first non-zero correction term is calculated and it is found that, to this order, the cross-section is increased by parton transverse momentum effects by typically a factor of 2 for pT = 2 to 3 GeV/c, and that the correction decreases rapidly with increasing pT. (author)

  10. Segmentation of the hippocampus by transferring algorithmic knowledge for large cohort processing.

    Science.gov (United States)

    Thyreau, Benjamin; Sato, Kazunori; Fukuda, Hiroshi; Taki, Yasuyuki

    2018-01-01

    The hippocampus is a particularly interesting target for neuroscience research studies due to its essential role within the human brain. In large human cohort studies, bilateral hippocampal structures are frequently identified and measured to gain insight into human behaviour or genomic variability in neuropsychiatric disorders of interest. Automatic segmentation is performed using various algorithms, with FreeSurfer being a popular option. In this manuscript, we present a method to segment the bilateral hippocampus using a deep-learned appearance model. Deep convolutional neural networks (ConvNets) have shown great success in recent years, due to their ability to learn meaningful features from a mass of training data. Our method relies on the following key novelties: (i) we use a wide and variable training set coming from multiple cohorts, (ii) our training labels come in part from the output of the FreeSurfer algorithm, and (iii) we include synthetic data and use a powerful data augmentation scheme. Our method proves to be robust and has fast inference. Deep neural-network methods can easily encode, and even improve, existing anatomical knowledge, even when this knowledge exists in algorithmic form. Copyright © 2017 Elsevier B.V. All rights reserved.
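The "powerful data augmentation scheme" mentioned above can be illustrated with a minimal label-preserving jitter for a 3D volume. This sketch is our own, not the paper's pipeline (which also mixes in synthetic data); note that for hippocampus segmentation a left/right flip swaps the bilateral labels, so flips must respect the label convention:

```python
import numpy as np

def augment(volume, rng):
    """Toy augmentation for a 3D image patch: random axis flips plus a small
    integer translation. Both operations permute voxels, so intensity
    statistics are preserved while the network sees new spatial variants."""
    vol = volume
    for axis in range(3):
        if rng.random() < 0.5:           # flip each axis with probability 1/2
            vol = np.flip(vol, axis=axis)
    shift = rng.integers(-2, 3, size=3)  # shift by -2..2 voxels per axis
    vol = np.roll(vol, shift, axis=(0, 1, 2))
    return vol
```

In practice the same random transform would be applied jointly to the image and its label map so the pair stays consistent.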

  11. Absolute quantitative profiling of the key metabolic pathways in slow and fast skeletal muscle

    DEFF Research Database (Denmark)

    Rakus, Dariusz; Gizak, Agnieszka; Deshmukh, Atul

    2015-01-01

    Slow and fast skeletal muscles are composed of, respectively, mainly oxidative and glycolytic muscle fibers, which are the basic cellular motor units of the motility apparatus. They largely differ in excitability, contraction mechanism, and metabolism. Because of their pivotal role in body motion and homeostasis, the skeletal muscles have been extensively studied using biochemical and molecular biology approaches. Here we describe a simple analytical and computational approach to estimate titers of enzymes of basic metabolic pathways and proteins of the contractile machinery in the skeletal muscles. Proteomic analysis of mouse slow and fast muscles allowed estimation of the titers of enzymes involved in carbohydrate, lipid, and energy metabolism. Notably, we observed that differences between the two muscle types occur simultaneously for all proteins involved in a specific process.

  12. Process Simulation and Characterization of Substrate Engineered Silicon Thin Film Transistor for Display Sensors and Large Area Electronics

    International Nuclear Information System (INIS)

    Hashmi, S M; Ahmed, S

    2013-01-01

    Design, simulation, fabrication and post-process qualification of substrate-engineered Thin Film Transistors (TFTs) are carried out to suggest an alternate manufacturing process step focused on display sensors and large area electronics applications. Damage created by ion implantation of Helium and Silicon ions into single-crystalline n-type silicon substrate provides an alternate route to create an amorphized region responsible for the fabrication of TFT structures with controllable and application-specific output parameters. The post-process qualification of starting material and full-cycle devices using Rutherford Backscattering Spectrometry (RBS) and Proton or Particle induced X-ray Emission (PIXE) techniques also provide an insight to optimize the process protocols as well as their applicability in the manufacturing cycle

  13. Towards large-scale production of solution-processed organic tandem modules based on ternary composites: Design of the intermediate layer, device optimization and laser based module processing

    DEFF Research Database (Denmark)

    Li, Ning; Kubis, Peter; Forberich, Karen

    2014-01-01

    …based on commercially available materials, which enhances the absorption of poly(3-hexylthiophene) (P3HT) and as a result increases the PCE of the P3HT-based large-scale OPV devices; 3. laser-based module processing, which provides an excellent processing resolution and as a result can bring the power conversion efficiency (PCE) of mass-produced organic photovoltaic (OPV) devices close to the highest PCE values achieved for lab-scale solar cells, through a significant increase in the geometrical fill factor. We believe that the combination of the above-mentioned concepts provides a clear roadmap to push OPV towards…

  14. Large deviations in stochastic heat-conduction processes provide a gradient-flow structure for heat conduction

    International Nuclear Information System (INIS)

    Peletier, Mark A.; Redig, Frank; Vafayi, Kiamars

    2014-01-01

    We consider three one-dimensional continuous-time Markov processes on a lattice, each of which models the conduction of heat: the family of Brownian Energy Processes with parameter m (BEP(m)), a Generalized Brownian Energy Process, and the Kipnis-Marchioro-Presutti (KMP) process. The hydrodynamic limit of each of these three processes is a parabolic equation: the linear heat equation in the case of the BEP(m) and the KMP, and a nonlinear heat equation for the Generalized Brownian Energy Process with parameter a (GBEP(a)). We prove the hydrodynamic limit rigorously for the BEP(m), and give a formal derivation for the GBEP(a). We then formally derive the pathwise large-deviation rate functional for the empirical measure of the three processes. These rate functionals imply gradient-flow structures for the limiting linear and nonlinear heat equations. We contrast these gradient-flow structures with those for processes describing the diffusion of mass, most importantly the class of Wasserstein gradient-flow systems. The linear and nonlinear heat-equation gradient-flow structures are each driven by entropy terms of the form −log ρ; they involve dissipation or mobility terms of order ρ² for the linear heat equation, and a nonlinear function of ρ for the nonlinear heat equation.
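The stated structure for the linear case can be checked in a few lines. The following is a standard formal computation (our sketch, not taken from the paper): with the entropy-type functional built from −log ρ and mobility of order ρ², the induced gradient flow is exactly the linear heat equation, along which the driving functional decreases.

```latex
% Gradient flow  \partial_t\rho = \nabla\cdot\big(m(\rho)\,\nabla\tfrac{\delta S}{\delta\rho}\big)
% with  S(\rho) = -\int \log\rho \,\mathrm{d}x  and mobility  m(\rho) = \rho^{2}:
\frac{\delta S}{\delta \rho} = -\frac{1}{\rho},
\qquad
\rho^{2}\,\nabla\!\left(-\frac{1}{\rho}\right)
  = \rho^{2}\cdot\frac{\nabla\rho}{\rho^{2}} = \nabla\rho,
\qquad\Longrightarrow\qquad
\partial_t \rho = \nabla\cdot(\nabla\rho) = \Delta\rho,
\qquad
\frac{\mathrm{d}S}{\mathrm{d}t} = -\int \frac{|\nabla\rho|^{2}}{\rho^{2}}\,\mathrm{d}x \le 0 .
```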

  15. Superconducting properties of single-crystal Nb sphere formed by large-undercooling solidification process

    Energy Technology Data Exchange (ETDEWEB)

    Takeya, H.; Sung, Y.S.; Hirata, K.; Togano, K

    2003-10-15

    An electrostatic levitation (ESL) system has been used for investigating undercooling effects on superconducting materials. In this report, preliminary experiments on Nb (melting temperature Tm = 2477 °C) have been performed by melting Nb in levitation using 150 and 250 W Nd-YAG lasers. Since molten Nb solidifies without any contact under high vacuum, a significantly undercooled state of up to 400 °C is maintained before recalescence followed by solidification. Spherical single crystals of Nb are formed by the ESL process due to the suppression of heterogeneous nucleation. The field dependence of the magnetization of Nb shows the reversible behavior of an ideal type II superconductor, implying that it contains almost no flux-pinning centers.

  16. Cost-effective large-scale fabrication of diffractive optical elements by using conventional semiconducting processes.

    Science.gov (United States)

    Yoo, Seunghwan; Song, Ho Young; Lee, Junghoon; Jang, Cheol-Yong; Jeong, Hakgeun

    2012-11-20

    In this article, we introduce a simple fabrication method for SiO2-based thin diffractive optical elements (DOEs) that uses conventional processes widely used in the semiconductor industry. Photolithography and inductively coupled plasma etching are easy and cost-effective methods for fabricating subnanometer-scale, thin DOEs with a refractive index of 1.45, based on SiO2. After fabricating the DOEs, we confirmed the shape of the output light emitted from a laser diode light source and applied the elements to a light-emitting diode (LED) module. The results represent a new approach to mass-producing DOEs and realizing a high-brightness LED module.

  17. Commissioning of the SPACAL calorimeter of H1 and analysis of large transverse energy processes

    International Nuclear Information System (INIS)

    Zini, P.

    1998-01-01

    Lepton-nucleon scattering experiments play a prominent role in the study of the structure of matter. The H1 experiment at HERA has progressed steadily, and the accumulated luminosity now allows exploration of the limits of the phase space at high energies. The H1 detector has been upgraded: SPACAL, the new calorimeter installed in 1995, is the main addition. This thesis has two parts: the first deals with the study of the calorimeter's performance, the second with events of very high transverse energy. The most sparsely populated regions of the phase space are explored. For these regions it was necessary to improve the theoretical predictions; analytical calculations have been used. These studies of extreme processes will be very useful for the high-luminosity period of HERA, between 2000 and 2005, and provide a solid basis for searches for new phenomena pointing to physics beyond the standard model. (A.L.B.)

  18. On the self-organizing process of large scale shear flows

    Energy Technology Data Exchange (ETDEWEB)

    Newton, Andrew P. L. [Department of Applied Maths, University of Sheffield, Sheffield, Yorkshire S3 7RH (United Kingdom); Kim, Eun-jin [School of Mathematics and Statistics, University of Sheffield, Sheffield, Yorkshire S3 7RH (United Kingdom); Liu, Han-Li [High Altitude Observatory, National Centre for Atmospheric Research, P. O. BOX 3000, Boulder, Colorado 80303-3000 (United States)

    2013-09-15

    Self-organization is invoked as a paradigm to explore the processes governing the evolution of shear flows. By examining the probability density function (PDF) of the local flow gradient (shear), we show that shear flows reach a quasi-equilibrium state as the growth of shear is balanced by shear relaxation. Specifically, the PDFs of the local shear are calculated numerically and analytically in reduced 1D and 0D models, where the PDFs are shown to converge to a bimodal distribution in the case of temporally correlated forcing with finite correlation time. This bimodal PDF is then shown to be reproduced in nonlinear simulations of 2D hydrodynamic turbulence. Furthermore, the bimodal PDF is demonstrated to result from a self-organizing shear flow with a linear profile. A similar bimodal structure and linear profile of the shear flow are observed in the Gulf Stream, suggesting self-organization.

  19. Slowness and sparseness have diverging effects on complex cell learning.

    Directory of Open Access Journals (Sweden)

    Jörn-Philipp Lies

    2014-03-01

    Following earlier studies which showed that a sparse coding principle may explain the receptive field properties of complex cells in primary visual cortex, it has been concluded that the same properties may be equally derived from a slowness principle. In contrast to this claim, we here show that slowness and sparsity drive the representations towards substantially different receptive field properties. To do so, we present complete sets of basis functions learned with slow subspace analysis (SSA) in case of natural movies as well as translations, rotations, and scalings of natural images. SSA directly parallels independent subspace analysis (ISA) with the only difference that SSA maximizes slowness instead of sparsity. We find a large discrepancy between the filter shapes learned with SSA and ISA. We argue that SSA can be understood as a generalization of the Fourier transform where the power spectrum corresponds to the maximally slow subspace energies in SSA. Finally, we investigate the trade-off between slowness and sparseness when combined in one objective function.
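The slowness objective that SSA maximizes can be made concrete with the usual measure from slow feature analysis: the variance of a feature's temporal derivative, normalized by its overall variance (smaller means slower). This toy (our illustration, not the paper's subspace formulation) shows the measure separating a smooth feature from a noise-like one:

```python
import numpy as np

def slowness(y):
    """Normalized mean squared temporal difference of a feature time series.
    Smaller values = slower features; slowness-based learning minimizes this."""
    return np.mean(np.diff(y) ** 2) / np.var(y)

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 1000)
slow_signal = np.sin(t)                  # smoothly varying feature
fast_signal = rng.standard_normal(1000)  # white noise: maximally non-slow
```

A sparsity objective would instead score the same two signals by how heavy-tailed their value distributions are, which is why the two principles can pull learned filters in different directions.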

  20. Slow electrons kill the ozone

    International Nuclear Information System (INIS)

    Maerk, T.

    2001-01-01

    A new method and apparatus (the trochoidal electron monochromator) for studying the interactions of electrons with atoms, molecules and clusters was developed. Two applications are briefly reported: (a) ozone destruction in the atmosphere has several causes; a new mechanism is proposed in which slow thermal electrons attach with high probability to the ozone molecule (O3), which is then destroyed (O3 + e- → O- + O2); (b) another application is the study of the binding energy of the football-shaped molecule C60. (nevyjel)

  1. The CUORE slow monitoring systems

    Science.gov (United States)

    Gladstone, L.; Biare, D.; Cappelli, L.; Cushman, J. S.; Del Corso, F.; Fujikawa, B. K.; Hickerson, K. P.; Moggi, N.; Pagliarone, C. E.; Schmidt, B.; Wagaarachchi, S. L.; Welliver, B.; Winslow, L. A.

    2017-09-01

    CUORE is a cryogenic experiment searching primarily for neutrinoless double beta decay in 130Te. It will begin data-taking operations in 2016. To monitor the cryostat and detector during commissioning and data taking, we have designed and developed Slow Monitoring systems. In addition to real-time systems using LabVIEW, we have an alarm, analysis, and archiving website that uses MongoDB, AngularJS, and Bootstrap software. These modern, state-of-the-art software packages make the monitoring system transparent, easily maintainable, and accessible on many platforms, including mobile devices.

  2. Blowup for flat slow manifolds

    DEFF Research Database (Denmark)

    Kristiansen, Kristian Uldall

    2017-01-01

    In this paper, we present a way of extending the blowup method, in the formulation of Krupa and Szmolyan, to flat slow manifolds that lose hyperbolicity beyond any algebraic order. Although these manifolds have infinite co-dimensions, they do appear naturally in certain settings; for example, in (a......) the regularization of piecewise smooth systems by tanh, (b) a particular aircraft landing dynamics model, and finally (c) in a model of earthquake faulting. We demonstrate the approach using a simple model system and the examples (a) and (b)....

  3. Blowup for flat slow manifolds

    Science.gov (United States)

    Kristiansen, K. U.

    2017-05-01

    In this paper, we present a way of extending the blowup method, in the formulation of Krupa and Szmolyan, to flat slow manifolds that lose hyperbolicity beyond any algebraic order. Although these manifolds have infinite co-dimensions, they do appear naturally in certain settings; for example, in (a) the regularization of piecewise smooth systems by tanh, (b) a particular aircraft landing dynamics model, and finally (c) in a model of earthquake faulting. We demonstrate the approach using a simple model system and the examples (a) and (b).

  4. On the possibility of the multiple inductively coupled plasma and helicon plasma sources for large-area processes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jin-Won; Lee, Yun-Seong, E-mail: leeeeys@kaist.ac.kr; Chang, Hong-Young [Low-temperature Plasma Laboratory, Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of); An, Sang-Hyuk [Agency of Defense Development, Yuseong-gu, Daejeon 305-151 (Korea, Republic of)

    2014-08-15

    In this study, we attempted to determine the feasibility of multiple inductively coupled plasma (ICP) and helicon plasma sources for large-area processes. Experiments were performed with one and two coils to measure plasma and electrical parameters, and a circuit simulation was performed to determine the current at each coil in the 2-coil experiment. Based on the results, we could establish the feasibility of multiple ICP sources, owing to the direct change of impedance with current and the saturation of impedance due to the skin-depth effect. However, a helicon plasma source is difficult to adapt to multiple sources, owing to the continual change of real impedance with mode transitions and the low uniformity of the B-field confinement. As a result, it is expected that ICP can be adapted to multiple sources for large-area processes.

  5. Design of an RF Antenna for a Large-Bore, High Power, Steady State Plasma Processing Chamber for Material Separation

    International Nuclear Information System (INIS)

    Rasmussen, D.A.; Freeman, R.L.

    2001-01-01

    The purpose of this Cooperative Research and Development Agreement (CRADA) between UT-Battelle, LLC (Contractor) and Archimedes Technology Group (Participant) is to evaluate the design of an RF antenna for a large-bore, high power, steady state plasma processing chamber for material separation. Criteria for optimization will be to maximize the power deposition in the plasma while operating at acceptable voltages and currents in the antenna structure.

  6. Some Examples of Residence-Time Distribution Studies in Large-Scale Chemical Processes by Using Radiotracer Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bullock, R. M.; Johnson, P.; Whiston, J. [Imperial Chemical Industries Ltd., Billingham, Co., Durham (United Kingdom)

    1967-06-15

    The application of radiotracers to determine flow patterns in chemical processes is discussed with particular reference to the derivation of design data from model reactors for translation to large-scale units, the study of operating efficiency and design attainment in established plant, and the rapid identification of various types of process malfunction. The requirements governing the selection of tracers for various types of media are considered, and an example is given of the testing of the behaviour of a typical tracer before use in a particular large-scale process operating at 250 atm and 200°C. Information which may be derived from flow patterns is discussed, including the determination of mixing parameters, gas hold-up in gas/liquid reactions and the detection of channelling and stagnant regions. Practical results and their interpretation are given in relation to an olefin hydroformylation reaction system, a process for the conversion of propylene to isopropanol, a moving-bed catalyst system for the isomerization of xylenes and a three-stage gas-liquid reaction system. The use of mean residence-time data for the detection of leakage between reaction vessels and a heat interchanger system is given as an example of the identification of process malfunction. (author)
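The mean residence time used in such tracer studies is the first moment of the measured tracer response curve, τ = ∫ t·C(t) dt / ∫ C(t) dt. A minimal sketch (the function name is ours; uniform sampling of the response curve is assumed):

```python
import numpy as np

def mean_residence_time(t, c):
    """First moment of a tracer response curve C(t) on a uniform time grid:
    tau = sum(t*C) / sum(C), a rectangle-rule estimate of the two integrals
    (the common grid spacing cancels)."""
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    return float(np.sum(t * c) / np.sum(c))
```

As a sanity check, a perfectly mixed vessel gives an exponential washout C(t) = exp(-t/τ), whose first moment recovers τ; this is also the property exploited when a shortened mean residence time reveals leakage or bypassing.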

  7. Formation of Large-scale Coronal Loops Interconnecting Two Active Regions through Gradual Magnetic Reconnection and an Associated Heating Process

    Science.gov (United States)

    Du, Guohui; Chen, Yao; Zhu, Chunming; Liu, Chang; Ge, Lili; Wang, Bing; Li, Chuanyang; Wang, Haimin

    2018-06-01

    Coronal loops interconnecting two active regions (ARs), called interconnecting loops (ILs), are prominent large-scale structures in the solar atmosphere. They carry a significant amount of magnetic flux and therefore are considered to be an important element of the solar dynamo process. Earlier observations showed that eruptions of ILs are an important source of CMEs. It is generally believed that ILs are formed through magnetic reconnection in the high corona (>150″–200″), and several scenarios have been proposed to explain their brightening in soft X-rays (SXRs). However, the detailed IL formation process has not been fully explored, and the associated energy release in the corona still remains unresolved. Here, we report the complete formation process of a set of ILs connecting two nearby ARs, with successive observations by STEREO-A on the far side of the Sun and by SDO and Hinode on the Earth side. We conclude that ILs are formed by gradual reconnection high in the corona, in line with earlier postulations. In addition, we show evidence that ILs brighten in SXRs and EUVs through heating at or close to the reconnection site in the corona (i.e., through the direct heating process of reconnection), a process that has been largely overlooked in earlier studies of ILs.

  8. Large 3D resistivity and induced polarization acquisition using the Fullwaver system: towards an adapted processing methodology

    Science.gov (United States)

    Truffert, Catherine; Leite, Orlando; Gance, Julien; Texier, Benoît; Bernard, Jean

    2017-04-01

    Driven by the mineral exploration market's need for ever faster and easier set-up of large 3D resistivity and induced polarization surveys, autonomous, cableless recording systems have come to the forefront. In contrast to traditional centralized acquisition, this new system permits a completely random distribution of receivers over the survey area, allowing true 3D imaging. This work presents the results of a 3 km2 experiment, imaging up to 600 m depth, performed with a new type of autonomous distributed receiver: the I&V-Fullwaver. With such a system, all the usual drawbacks of long cable set-ups over large 3D areas - time consumption, lack of accessibility, heavy weight, electromagnetic induction, etc. - disappear. The V-Fullwavers record the entire voltage time series on two perpendicular axes, allowing good assessment of data quality, while the I-Fullwaver simultaneously records the injected current. For this survey, despite good assessment of each individual signal's quality on each channel of the Fullwaver systems, a significant number of negative apparent resistivities and chargeabilities (around 15%) remain in the dataset. Such values are commonly excluded by inversion software, even though they may be due to complex geological structures of interest (e.g. linked to the presence of sulfides in the earth). Given that such a distributed recording system aims to deliver the best 3D resistivity and IP tomography, how can the 3D inversion be improved? In this work, we present the dataset, the processing chain and the quality control of a large 3D survey. We show that the quality of the selected data is good enough to include them in the inversion. We propose a second processing approach, based on the modulus of the apparent resistivity, that stabilizes the inversion. We then discuss the results of both processing approaches.
We conclude that an effort could be made to include negative apparent resistivities in the inversion
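    The two data-handling options contrasted above (dropping negative readings versus inverting on the modulus of the apparent resistivity) can be sketched as follows; the function and values are illustrative placeholders, not the survey's actual processing chain or data format.

```python
# Hypothetical sketch: handling negative apparent resistivities before inversion.
def preprocess(apparent_resistivities, drop_negative=False):
    """Either drop negative readings (conventional filtering) or take the
    modulus, as the modulus-based processing proposes."""
    if drop_negative:
        return [r for r in apparent_resistivities if r > 0]
    return [abs(r) for r in apparent_resistivities]

data = [120.0, -35.0, 80.5, -12.2, 300.0]   # ohm-m, invented readings
print(preprocess(data, drop_negative=True))  # conventional: 2 of 5 points lost
print(preprocess(data))                      # modulus: all 5 points kept
```

Keeping the modulus preserves roughly the 15% of readings that filtering would discard, which is the motivation stated in the abstract.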

  9. Neutron slowing-down time in finite water systems

    International Nuclear Information System (INIS)

    Hirschberg, S.

    1981-11-01

    The influence of the size of a moderator system on the neutron slowing-down time has been investigated. The experimental part of the study was performed on six cubes of water with side lengths from 8 to 30 cm. Neutrons generated in pulses of about 1 ns width were slowed down from 14 MeV to 1.457 eV. The detection method used was based on registration of gamma radiation from the main capture resonance of indium. The most probable slowing-down times were found to be 778 +- 23 ns and 898 +- 25 ns for the smallest and for the largest cubes, respectively. The corresponding mean slowing-down times were 1205 +- 42 ns and 1311 +- 42 ns. In a separate measurement series the space dependence of the slowing-down time close to the source was studied. These experiments were supplemented by a theoretical calculation which gave an indication of the space dependence of the slowing-down time in finite systems. The experimental results were compared to the slowing-down times obtained from various theoretical approaches and from Monte Carlo calculations. All the methods show a decrease of the slowing-down time with decreasing size of the moderator. This effect was least pronounced in the experimental results, which can be explained by the fact that the measurements are spatially dependent. The agreement between the Monte Carlo results and those obtained using the diffusion approximation or the age-diffusion theory is surprisingly good, especially for large systems. The P1 approximation, on the other hand, leads to an overestimation of the effect of the finite size on the slowing-down time. (author)
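    As a rough consistency check on the energy range involved, the average number of elastic collisions needed to slow a neutron from 14 MeV to the 1.457 eV indium resonance can be estimated from the mean logarithmic energy decrement; the value ξ ≈ 0.92 for water is a standard handbook figure, not a result of the experiment described above.

```python
import math

# Back-of-envelope collision count for slowing down in water.
xi_water = 0.92                    # mean logarithmic energy decrement (handbook value)
E0, E_res = 14e6, 1.457            # eV: source energy and indium resonance
n_collisions = math.log(E0 / E_res) / xi_water
print(round(n_collisions, 1))      # about 17-18 elastic collisions
```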

  10. Comparison of Different Approach of Back Projection Method in Retrieving the Rupture Process of Large Earthquakes

    Science.gov (United States)

    Tan, F.; Wang, G.; Chen, C.; Ge, Z.

    2016-12-01

    Back-projection of teleseismic P waves [Ishii et al., 2005] has been widely used to image earthquake ruptures. Besides conventional narrowband beamforming in the time domain, approaches in the frequency domain, such as MUSIC back projection (Meng 2011) and compressive sensing (Yao et al., 2011), have been proposed to improve resolution. Each method has its advantages and disadvantages and should be used appropriately in different cases, so a thorough study comparing and testing these methods is needed. We wrote a GUI program that combines the three methods, so that users can conveniently process the same data with each method and compare the results. We then applied all the methods to data from several earthquakes, including the 2008 Wenchuan Mw7.9 earthquake and the 2011 Tohoku-Oki Mw9.0 earthquake, as well as theoretical seismograms of both simple sources and complex ruptures. Our results show differences in efficiency, accuracy and stability among the methods. Quantitative and qualitative analyses were applied to measure their dependence on data and parameters, such as station number, station distribution, grid size, calculation window length and so on. In general, back projection makes it possible to obtain a good result in a very short time using fewer than 20 lines of high-quality data with a proper station distribution, but the swimming artifact can be significant; some measures, for instance combining global seismic data, can help ameliorate this. MUSIC back projection needs relatively more data to obtain a better and more stable result, which also means more time, since its runtime grows noticeably faster than that of back projection as the station number increases. Compressive sensing deals more effectively with multiple sources in the same time window, but costs the most time because it repeatedly solves matrix problems. The resolution of all the methods is complicated and depends on many factors.
An important one is the grid size, which in turn influences
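    The conventional time-domain variant referenced above amounts to delay-and-sum beamforming over a grid of candidate source points; the following is a minimal synthetic sketch (the station geometry, travel times and data are invented placeholders, not the study's actual configuration).

```python
import numpy as np

def back_project(traces, dt, travel_times, window):
    """Delay-and-sum beam power for each candidate grid point.
    traces: (n_stations, n_samples); travel_times: (n_grid, n_stations) in s."""
    power = np.zeros(travel_times.shape[0])
    for g in range(travel_times.shape[0]):
        shifts = np.round(travel_times[g] / dt).astype(int)
        stack = np.zeros(window)
        for s in range(traces.shape[0]):
            seg = traces[s, shifts[s]:shifts[s] + window]
            if seg.shape[0] == window:
                stack += seg       # align each trace on its predicted arrival
        power[g] = np.sum(stack ** 2)   # energy of the aligned stack
    return power

# Synthetic check: a pulse arrives at each station at the travel time predicted
# for grid point 0, so grid point 0 should dominate the beam power.
dt, window = 1.0, 5
travel_times = np.array([[10.0, 12.0, 14.0], [20.0, 22.0, 24.0]])
traces = np.zeros((3, 40))
for s in range(3):
    traces[s, int(travel_times[0, s])] = 1.0
power = back_project(traces, dt, travel_times, window)
```

The "swimming artifact" mentioned in the abstract shows up in such stacks as a systematic drift of the power maximum with window position.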

  11. Formation and fate of marine snow: small-scale processes with large- scale implications

    Directory of Open Access Journals (Sweden)

    Thomas Kiørboe

    2001-12-01

    Full Text Available Marine snow aggregates are believed to be the main vehicles for vertical material transport in the ocean. However, aggregates are also sites of elevated heterotrophic activity, which may rather cause enhanced retention of aggregated material in the upper ocean. Small-scale biological-physical interactions govern the formation and fate of marine snow. Aggregates may form by physical coagulation: fluid motion causes collisions between small primary particles (e.g. phytoplankton that may then stick together to form aggregates with enhanced sinking velocities. Bacteria may subsequently solubilise and remineralise aggregated particles. Because the solubilization rate exceeds the remineralization rate, organic solutes leak out of sinking aggregates. The leaking solutes spread by diffusion and advection and form a chemical trail in the wake of the sinking aggregate that may guide small zooplankters to the aggregate. Also, suspended bacteria may enjoy the elevated concentration of organic solutes in the plume. I explore these small-scale formation and degradation processes by means of models, experiments and field observations. The larger scale implications for the structure and functioning of pelagic food chains of export vs. retention of material will be discussed.
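    The physical coagulation step described above is commonly modeled with an encounter-rate kernel; one textbook choice for turbulent shear is sketched below (the shear rate and particle values are generic illustrations, not figures from this article).

```python
def shear_kernel(r1_m, r2_m, gamma_s=1.0):
    """Rectilinear turbulent-shear coagulation kernel, beta = 1.3*G*(r1+r2)^3.
    gamma_s: fluid shear rate (1/s); radii in metres; beta in m^3/s."""
    return 1.3 * gamma_s * (r1_m + r2_m) ** 3

def encounter_rate(n1, n2, r1_m, r2_m, gamma_s=1.0):
    """Pairwise particle encounters per m^3 per second between two populations
    with number concentrations n1, n2 (per m^3)."""
    return shear_kernel(r1_m, r2_m, gamma_s) * n1 * n2
```

The cubic dependence on combined radius is why fluid motion aggregates large particles far more efficiently than small ones.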

  12. Deep-inelastic processes: a workbench for large scale motion in nuclear matter

    International Nuclear Information System (INIS)

    Moretto, L.G.; Schmitt, R.P.

    1978-07-01

    The most prominent collective modes excited in deep-inelastic reactions are reviewed, and the natural hierarchy provided by their characteristic relaxation times is described. A model is presented which treats the relaxation of the mass asymmetry mode in terms of a diffusion process. Charge distributions and angular distributions as a function of Z calculated with this model are in good agreement with experimental data. An extension of this diffusion model which treats the transfer of energy and angular momentum in terms of particle transfer is described, and is successfully compared with experimental γ-ray multiplicities as a function of both Q-value and mass asymmetry. The problem of angular momentum transfer is again considered in connection with the sequential fission of heavy, deep-inelastic fragments and the excitation of collective modes in the exit channel is suggested. Lastly, the role of the giant E1 mode in the equilibration of the neutron-to-proton ratio is discussed. 14 figures, 39 references

  13. Deep-inelastic processes: a workbench for large scale motion in nuclear matter

    Energy Technology Data Exchange (ETDEWEB)

    Moretto, L.G.; Schmitt, R.P.

    1978-07-01

    The most prominent collective modes excited in deep-inelastic reactions are reviewed, and the natural hierarchy provided by their characteristic relaxation times is described. A model is presented which treats the relaxation of the mass asymmetry mode in terms of a diffusion process. Charge distributions and angular distributions as a function of Z calculated with this model are in good agreement with experimental data. An extension of this diffusion model which treats the transfer of energy and angular momentum in terms of particle transfer is described, and is successfully compared with experimental γ-ray multiplicities as a function of both Q-value and mass asymmetry. The problem of angular momentum transfer is again considered in connection with the sequential fission of heavy, deep-inelastic fragments and the excitation of collective modes in the exit channel is suggested. Lastly, the role of the giant E1 mode in the equilibration of the neutron-to-proton ratio is discussed. 14 figures, 39 references.

  14. The relationship marketing in the process of customer loyalty. Case large construction of Manizales

    Directory of Open Access Journals (Sweden)

    María Cristina Torres Camacho

    2015-06-01

    Full Text Available This paper is based on Lindgreen's (2001) model, which holds that relationship marketing should be approached in three dimensions: objectives, definition of constructs, and tools, which together enable better customer management within organizations. The objective was to determine the characteristics of relationship marketing as a key factor in the customer loyalty process in the large construction firms of Manizales, Colombia. From a mixed perspective, the methodology relies on qualitative and quantitative instruments and cross-sectional analysis. The results tend to confirm that developers recognize the importance of relationship marketing but have not framed it as a policy or defined it in their strategic plans; additionally, they report a lack of customer-retention strategies. Nevertheless, customers remain loyal because the construction firms work to meet their needs on the basis of trust, commitment and communication. In conclusion, loyal customers perceive that the firms do not periodically evaluate their satisfaction with the purchased product, and also claim that the firms show little interest in understanding their perceptions, holding personal meetings, maintaining constant communication through phone calls, and catering to their tastes and preferences.

  15. Connecting slow earthquakes to huge earthquakes

    OpenAIRE

    Obara, Kazushige; Kato, Aitaro

    2016-01-01

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of th...

  16. Integrated Photonics Enabled by Slow Light

    DEFF Research Database (Denmark)

    Mørk, Jesper; Chen, Yuntian; Ek, Sara

    2012-01-01

    In this talk we will discuss the physics of slow light in semiconductor materials and in particular the possibilities offered for integrated photonics. This includes ultra-compact slow light enabled optical amplifiers, lasers and pulse sources.

  17. Slowing down bubbles with sound

    Science.gov (United States)

    Poulain, Cedric; Dangla, Remie; Guinard, Marion

    2009-11-01

    We present experimental evidence that a bubble moving in a fluid on which a well-chosen acoustic noise is superimposed can be significantly slowed down, even at moderate acoustic pressure. Through mean velocity measurements, we show that a condition for this effect to occur is for the acoustic noise spectrum to match or overlap the bubble's fundamental resonant mode. We capture the bubble's oscillations and translational movement using high-speed video. We show that radial oscillations (Rayleigh-Plesset type) have no effect on the mean velocity, while above a critical pressure a parametric-type instability (Faraday waves) is triggered and gives rise to nonlinear surface oscillations. We show that these surface waves are subharmonic and responsible for the bubble's drag increase. When the acoustic intensity is increased, Faraday modes interact and the strongly nonlinear oscillations behave randomly, leading to random behavior of the bubble's trajectory and consequently to a stronger slowdown. Our observations may suggest new strategies for bubbly flow control, or two-phase microfluidic devices. They might also be applicable to other elastic objects, such as globules, cells or vesicles, for medical applications such as elasticity-based sorting.
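    The bubble's fundamental resonant mode referred to above is the Minnaert breathing mode; a standard estimate of its frequency, using generic air/water parameter values rather than the experiment's, is:

```python
import math

def minnaert_frequency(radius_m, p0=101325.0, rho=1000.0, kappa=1.4):
    """Fundamental volume-oscillation (breathing) frequency of a gas bubble:
    f0 = sqrt(3*kappa*p0/rho) / (2*pi*R)."""
    return math.sqrt(3 * kappa * p0 / rho) / (2 * math.pi * radius_m)

f0 = minnaert_frequency(1e-3)   # a 1 mm air bubble in water: roughly 3.3 kHz
```

The noise spectrum must overlap this frequency for the slowdown effect to appear.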

  18. Effect of Heat Treatment Process on Mechanical Properties and Microstructure of a 9% Ni Steel for Large LNG Storage Tanks

    Science.gov (United States)

    Zhang, J. M.; Li, H.; Yang, F.; Chi, Q.; Ji, L. K.; Feng, Y. R.

    2013-12-01

    In this paper, two different heat treatment processes of a 9% Ni steel for large liquefied natural gas storage tanks were performed in an industrial heating furnace. The former was a special heat treatment process consisting of quenching, intercritical quenching, and tempering (Q-IQ-T). The latter was a heat treatment process consisting only of quenching and tempering. Mechanical properties were measured by tensile testing and Charpy impact testing, and the microstructure was analyzed by optical microscopy, transmission electron microscopy, and X-ray diffraction. The results showed that outstanding mechanical properties were obtained from the Q-IQ-T process in comparison with the Q-T process, and a cryogenic toughness with a Charpy impact energy of 201 J was achieved at 77 K. Microstructure analysis revealed that samples of the Q-IQ-T process had about 9.8% of austenite in needle-like martensite, while samples of the Q-T process had only about 0.9% of austenite retained in tempered martensite.

  19. NOBLE - Flexible concept recognition for large-scale biomedical natural language processing.

    Science.gov (United States)

    Tseytlin, Eugene; Mitchell, Kevin; Legowski, Elizabeth; Corrigan, Julia; Chavan, Girish; Jacobson, Rebecca S

    2016-01-14

    Natural language processing (NLP) applications are increasingly important in biomedical data analysis, knowledge engineering, and decision support. Concept recognition is an important component task for NLP pipelines, and can be either general-purpose or domain-specific. We describe a novel, flexible, and general-purpose concept recognition component for NLP pipelines, and compare its speed and accuracy against five commonly used alternatives on both a biological and clinical corpus. NOBLE Coder implements a general algorithm for matching terms to concepts from an arbitrary vocabulary set. The system's matching options can be configured individually or in combination to yield specific system behavior for a variety of NLP tasks. The software is open source, freely available, and easily integrated into UIMA or GATE. We benchmarked speed and accuracy of the system against the CRAFT and ShARe corpora as reference standards and compared it to MMTx, MGrep, Concept Mapper, cTAKES Dictionary Lookup Annotator, and cTAKES Fast Dictionary Lookup Annotator. We describe key advantages of the NOBLE Coder system and associated tools, including its greedy algorithm, configurable matching strategies, and multiple terminology input formats. These features provide unique functionality when compared with existing alternatives, including state-of-the-art systems. On two benchmarking tasks, NOBLE's performance exceeded commonly used alternatives, performing almost as well as the most advanced systems. Error analysis revealed differences in error profiles among systems. NOBLE Coder is comparable to other widely used concept recognition systems in terms of accuracy and speed. Advantages of NOBLE Coder include its interactive terminology builder tool, ease of configuration, and adaptability to various domains and tasks. NOBLE provides a term-to-concept matching system suitable for general concept recognition in biomedical NLP pipelines.
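    The greedy term-to-concept matching idea can be illustrated with a toy longest-match scan; the vocabulary and concept identifiers below are illustrative only, and NOBLE Coder's actual algorithm and matching options are considerably richer.

```python
# Toy greedy longest-match concept recognition over a tiny vocabulary.
# Concept IDs are illustrative placeholders in UMLS-like format.
vocab = {"heart attack": "C0027051", "heart": "C0018787", "attack": "C0277793"}

def greedy_match(tokens, vocab, max_len=3):
    """Scan left to right, always taking the longest phrase found in vocab."""
    i, found = 0, []
    while i < len(tokens):
        for L in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + L])
            if phrase in vocab:
                found.append((phrase, vocab[phrase]))
                i += L          # consume the matched phrase
                break
        else:
            i += 1              # no match starting here; advance one token
    return found

print(greedy_match("patient had a heart attack".split(), vocab))
```

The greedy policy prefers "heart attack" over the separate matches "heart" and "attack", which is the behavior the abstract highlights as an advantage.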

  20. The slowing down of the corrosion of elements of the equipment of heavy metals at elevated temperatures

    OpenAIRE

    Носачова, Юлія Вікторівна; Ярошенко, М. М.; Корзун, А. О.; КОРОВЧЕНКО, К. С.

    2017-01-01

    This article examines heavy metal ions and their ability to slow down the corrosion process, as well as the impact of ambient temperature on their effectiveness. Solving the corrosion problem will reduce the impact of large industrial enterprises on the environment and minimize economic costs. To do this, plants should create systems without discharge of waste water, that is, closed recycling systems, which result in a significant reduction in the intake of fresh water from natural sourc...

  1. Redesigning a Large Lecture Course for Student Engagement: Process and Outcomes

    Directory of Open Access Journals (Sweden)

    Leslie F. Reid

    2012-12-01

    Full Text Available Using an action-research approach, a large-lecture science course (240 students) was redesigned to improve student engagement in the areas of active and collaborative learning, faculty-student interaction and level of academic challenge. This was mainly achieved through the addition of a half-semester-long group project, which replaced half of the lectures and the final exam. The course redesign did not result in more hours spent by the instructor on teaching and teaching-related activities (grading, assessment preparation, lecturing, lecture preparation), although the redesigned course requires the support of teaching assistants for the project component. Data on students' perceptions of the modified course and the frequency with which they participated in the engagement activities were collected using the Classroom Survey of Student Engagement (CLASSE). The majority of students reported high levels of engagement in most of the intended areas and were comfortable with the new class design. The CLASSE data also helped identify areas where intended engagement levels were not met. These areas are the focus for future course development and action-research questions.

  2. Optimizing detection and analysis of slow waves in sleep EEG.

    Science.gov (United States)

    Mensen, Armand; Riedner, Brady; Tononi, Giulio

    2016-12-01

    Analysis of individual slow waves in EEG recordings during sleep provides both greater sensitivity and specificity compared to spectral power measures. However, parameters for detection and analysis have not been widely explored and validated. We present a new, open-source, Matlab-based toolbox for the automatic detection and analysis of slow waves, with adjustable parameter settings, as well as manual correction and exploration of the results using a multi-faceted visualization tool. We explore a large search space of parameter settings for slow wave detection and measure their effects on a selection of outcome parameters. Every choice of parameter setting had some effect on at least one outcome parameter. In general, the largest effect sizes were found when choosing the EEG reference, the type of canonical waveform, and the amplitude threshold. Previously published methods accurately detect large, global waves but are conservative and miss smaller-amplitude, local slow waves. The toolbox has additional benefits in terms of speed, user interface, and visualization options to compare and contrast slow waves. The exploration of parameter settings in the toolbox highlights the importance of careful selection of detection methods. The sensitivity and specificity of the automated detection can be improved by manually adding or deleting entire waves and/or specific channels using the toolbox visualization functions. The toolbox standardizes the detection procedure, sets the stage for reliable results and comparisons, and is easy to use without previous programming experience. Copyright © 2016 Elsevier B.V. All rights reserved.
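    The amplitude-thresholding step highlighted above can be illustrated on an already band-pass filtered (~0.5-4 Hz) trace; the threshold and the negative half-wave criterion below are generic choices for illustration, not the toolbox's actual defaults.

```python
import numpy as np

def detect_slow_waves(trace_uv, neg_thresh_uv=-40.0):
    """Return (start, end) sample pairs of negative half-waves (between two
    consecutive zero crossings) whose trough is deeper than the threshold."""
    crossings = np.where(np.diff(np.signbit(trace_uv)))[0]
    waves = []
    for a, b in zip(crossings[:-1], crossings[1:]):
        if trace_uv[a:b + 1].min() < neg_thresh_uv:
            waves.append((a, b))
    return waves

# Synthetic trace: a 1 Hz, 60 uV sine sampled at 100 Hz for 2 s.
# (The leading negative half-wave before the first zero crossing is skipped.)
fs = 100
t = np.arange(0, 2, 1 / fs)
eeg = -60 * np.sin(2 * np.pi * 1.0 * t)
waves = detect_slow_waves(eeg)
```

Raising or lowering `neg_thresh_uv` reproduces the trade-off the abstract describes: conservative thresholds keep only large global waves and miss smaller local ones.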

  3. An evaluation of touchscreen versus keyboard/mouse interaction for large screen process control displays.

    Science.gov (United States)

    Noah, Benjamin; Li, Jingwen; Rothrock, Ling

    2017-10-01

    The objective of this study was to test the effect of interaction device on performance in a process control task (managing a tank farm). The study compared two conditions: (a) a 4K-resolution 55-inch screen with a 21-inch touchscreen, versus (b) a 4K-resolution 55-inch screen with keyboard/mouse. The touchscreen acted both as an interaction device for data entry and navigation and as an additional source of information. A within-subject experiment was conducted among 20 college engineering students. A primary task of preventing tanks from overfilling and a secondary task of manual logging with situation-awareness questions were designed for the study. The measures were primary-task performance (tank level at discharge, number of tanks discharged, and performance score), secondary-task performance (tank log count and performance score), system interaction times, subjective workload, a situation-awareness questionnaire, and a user-experience survey on usability and condition comparison. Parametric data yielded statistically different means between the two conditions on two metrics: the keyboard condition resulted in faster detection-plus-navigation times than the touchscreen condition, by about 2 s, while participants in the touchscreen condition were about 2 s faster at data entry than in the keyboard condition. No significant results were found for performance on the secondary task, situation awareness, or workload, and no clear significant differences were found in the non-parametric data analysis. However, in subjective comparisons of the conditions, participants showed a slight preference for the touchscreen condition over the keyboard condition. Introducing the touchscreen as an additional/alternative input device was shown to affect interaction times, which suggests that proper design considerations need to be made. While having values shown on the interaction device

  4. The diversity of atomic hydrogen in slow rotator early-type galaxies

    Science.gov (United States)

    Young, Lisa M.; Serra, Paolo; Krajnović, Davor; Duc, Pierre-Alain

    2018-06-01

    We present interferometric observations of H I in nine slow rotator early-type galaxies of the Atlas3D sample. With these data, we now have sensitive H I searches in 34 of the 36 slow rotators. The aggregate detection rate is 32 per cent ± 8 per cent, consistent with the previous work; however, we find two detections with extremely high H I masses, whose gas kinematics are substantially different from what was previously known about H I in slow rotators. These two cases (NGC 1222 and NGC 4191) broaden the known diversity of H I properties in slow rotators. NGC 1222 is a merger remnant with prolate-like rotation and, if it is indeed prolate in shape, an equatorial gas disc; NGC 4191 has two counter-rotating stellar discs and an unusually large H I disc. We comment on the implications of this disc for the formation of 2σ galaxies. In general, the H I detection rate, the incidence of relaxed H I discs, and the H I/stellar mass ratios of slow rotators are indistinguishable from those of fast rotators. These broad similarities suggest that the H I we are detecting now is unrelated to the galaxies' formation processes and was often acquired after their stars were mostly in place. We also discuss the H I non-detections; some of these galaxies that are undetected in H I or CO are detected in other tracers (e.g. FIR fine structure lines and dust). The question of whether there is cold gas in massive galaxies' scoured nuclear cores still needs work. Finally, we discuss an unusual isolated H I cloud with a surprisingly faint (undetected) optical counterpart.

  5. The Diversity of Atomic Hydrogen in Slow Rotator Early-type Galaxies

    Science.gov (United States)

    Young, Lisa M.; Serra, Paolo; Krajnović, Davor; Duc, Pierre-Alain

    2018-02-01

    We present interferometric observations of H I in nine slow rotator early-type galaxies of the ATLAS3D sample. With these data, we now have sensitive H I searches in 34 of the 36 slow rotators. The aggregate detection rate is 32% ± 8%, consistent with previous work; however, we find two detections with extremely high H I masses, whose gas kinematics are substantially different from what was previously known about H I in slow rotators. These two cases (NGC 1222 and NGC 4191) broaden the known diversity of H I properties in slow rotators. NGC 1222 is a merger remnant with prolate-like rotation and, if it is indeed prolate in shape, an equatorial gas disc; NGC 4191 has two counterrotating stellar discs and an unusually large H I disc. We comment on the implications of this disc for the formation of 2σ galaxies. In general, the H I detection rate, the incidence of relaxed H I discs, and the H I/stellar mass ratios of slow rotators are indistinguishable from those of fast rotators. These broad similarities suggest that the H I we are detecting now is unrelated to the galaxies' formation processes and was often acquired after their stars were mostly in place. We also discuss the H I nondetections; some of these galaxies that are undetected in H I or CO are detected in other tracers (e.g. FIR fine structure lines and dust). The question of whether there is cold gas in massive galaxies' scoured nuclear cores still needs work. Finally, we discuss an unusual isolated H I cloud with a surprisingly faint (undetected) optical counterpart.

  6. Statistical process control charts for attribute data involving very large sample sizes: a review of problems and solutions.

    Science.gov (United States)

    Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard

    2013-04-01

    The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
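    Laney's approach rescales the within-subgroup (binomial) limits by an estimate of between-subgroup variation, so very large denominators no longer force artificially tight limits. A minimal sketch, with invented counts, is:

```python
import math

def laney_p_prime_limits(counts, sizes):
    """Control limits for Laney's p'-chart: classic p-chart limits inflated by
    sigma_z, a moving-range estimate of between-subgroup variation."""
    p = [x / n for x, n in zip(counts, sizes)]
    pbar = sum(counts) / sum(sizes)
    sigma_p = [math.sqrt(pbar * (1 - pbar) / n) for n in sizes]   # binomial SD per point
    z = [(pi - s_pbar_diff) / s for pi, s_pbar_diff, s in
         zip(p, [pbar] * len(p), sigma_p)]                        # standardized proportions
    mrbar = sum(abs(z[i] - z[i - 1]) for i in range(1, len(z))) / (len(z) - 1)
    sigma_z = mrbar / 1.128                                       # d2 constant for n = 2
    limits = [(pbar - 3 * s * sigma_z, pbar + 3 * s * sigma_z) for s in sigma_p]
    return pbar, sigma_z, limits

# Invented daily event counts out of large denominators:
pbar, sigma_z, limits = laney_p_prime_limits([90, 110, 95, 105], [1000] * 4)
```

When sigma_z is near 1 the chart reduces to the traditional p-chart; values well above 1 signal the overdispersion that makes traditional limits misleading for large samples.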

  7. Consultancy on Large-Scale Submerged Aerobic Cultivation Process Design - Final Technical Report: February 1, 2016 -- June 30, 2016

    Energy Technology Data Exchange (ETDEWEB)

    Crater, Jason [Genomatica, Inc., San Diego, CA (United States); Galleher, Connor [Genomatica, Inc., San Diego, CA (United States); Lievense, Jeff [Genomatica, Inc., San Diego, CA (United States)

    2017-05-12

    NREL is developing an advanced aerobic bubble column model using Aspen Custom Modeler (ACM). The objective of this work is to integrate the new fermentor model with existing techno-economic models in Aspen Plus and Excel to establish a new methodology for guiding process design. To assist this effort, NREL has contracted Genomatica to critique and make recommendations for improving NREL's bioreactor model and large-scale aerobic bioreactor design for biologically producing lipids at commercial scale. Genomatica has highlighted a few areas for improving the functionality and effectiveness of the model. Genomatica recommends using a compartment model approach with an integrated black-box kinetic model of the production microbe. We also suggest including calculations for stirred tank reactors to extend the model's functionality and adaptability for future process designs. Genomatica also suggests making several modifications to NREL's large-scale lipid production process design. The recommended process modifications are based on Genomatica's internal techno-economic assessment experience and are focused primarily on minimizing capital and operating costs. These recommendations include selecting/engineering a thermotolerant yeast strain with lipid excretion; using bubble column fermentors; increasing the size of production fermentors; reducing the number of vessels; employing semi-continuous operation; and recycling cell mass.

  8. Slow creep in soft granular packings.

    Science.gov (United States)

    Srivastava, Ishan; Fisher, Timothy S

    2017-05-14

    Transient creep mechanisms in soft granular packings are studied numerically using a constant pressure and constant stress simulation method. Rapid compression followed by slow dilation is predicted on the basis of a logarithmic creep phenomenon. Characteristic scales of creep strain and time exhibit a power-law dependence on jamming pressure, and they diverge at the jamming point. Microscopic analysis indicates the existence of a correlation between rheology and nonaffine fluctuations. Localized regions of large strain appear during creep and grow in magnitude and size at short times. At long times, the spatial structure of highly correlated local deformation becomes time-invariant. Finally, a microscale connection between local rheology and local fluctuations is demonstrated in the form of a linear scaling between granular fluidity and nonaffine velocity.
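    The logarithmic creep phenomenon described above, with pressure-dependent characteristic scales that diverge at jamming, can be written compactly as a functional form; the exponents below are placeholders for illustration, not the paper's fitted values.

```python
import math

def creep_strain(t, p, a=0.5, b=1.0):
    """Logarithmic creep, strain(t) = eps_c * log(1 + t/tau), with characteristic
    scales eps_c ~ p**-a and tau ~ p**-b that diverge as p -> 0 (jamming)."""
    eps_c = p ** -a          # characteristic creep strain (placeholder exponent)
    tau = p ** -b            # characteristic creep time (placeholder exponent)
    return eps_c * math.log(1 + t / tau)
```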

  9. Using value stream mapping technique through the lean production transformation process: An implementation in a large-scaled tractor company

    Directory of Open Access Journals (Sweden)

    Mehmet Rıza Adalı

    2017-04-01

    Full Text Available In today's world, manufacturing industries must sustain their development and continuity in an increasingly competitive environment by decreasing their costs. The first step in the lean production transformation is to analyze value-adding and non-value-adding activities. This study applies the concepts of Value Stream Mapping (VSM) in a large-scale tractor company in Sakarya. Waste and process times were identified by mapping the current state of the platform production line. A future state was proposed, with improvements for the elimination of waste and the reduction of lead time, which went from 13.08 to 4.35 days. Analyses of the current and future states support the suggested improvements, and the cycle time of the platform production line improved by 8%. The results showed that VSM is a good alternative for decision-making on production process change.
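    The headline lead-time figures imply the following reduction (simple arithmetic on the numbers reported above):

```python
# Lead time for the platform line: 13.08 days (current state)
# down to 4.35 days (future state).
current, future = 13.08, 4.35
reduction = (current - future) / current
print(f"{reduction:.1%}")   # prints 66.7% - about a two-thirds reduction
```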

  10. Signal Formation Processes in Micromegas Detectors and Quality Control for large size Detector Construction for the ATLAS New Small Wheel

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00387450; Rembser, Christoph

    2017-08-04

    The Micromegas technology is one of the most successful modern gaseous detector concepts and is widely utilized in nuclear and particle physics experiments. Twenty years of R&D rendered the technology sufficiently mature to be selected as the precision tracking detector for the New Small Wheel (NSW) upgrade of the ATLAS Muon spectrometer. This will be the first large-scale application of Micromegas in one of the major LHC experiments. However, many of the fundamental microscopic processes in these gaseous detectors are still not fully understood, and studies on several detector aspects, like the micromesh geometry, have never been addressed systematically. The studies on signal formation in Micromegas, presented in the first part of this thesis, focus on the microscopic signal-electron loss mechanisms and the amplification processes in electron-gas interaction. Based on a detailed model of detector parameter dependencies, these processes are scrutinized in an iterating comparison between experimental result...

  11. Understanding Nearshore Processes Of a Large Arctic Delta Using Combined Seabed Mapping, In Situ Observations, Remote Sensing and Modeling

    Science.gov (United States)

    Solomon, S. M.; Couture, N. J.; Forbes, D. L.; Hoque, A.; Jenner, K. A.; Lintern, G.; Mulligan, R. P.; Perrie, W. A.; Stevens, C. W.; Toulany, B.; Whalen, D.

    2009-12-01

    The Mackenzie River Delta and the adjacent continental shelf in the southeastern Beaufort Sea are known to host significant quantities of hydrocarbons. Recent environmental reviews of proposed hydrocarbon development have highlighted the need for a better understanding of the processes that control sediment transport and coastal stability. Over the past several years field surveys have been undertaken in winter, spring and summer to acquire data on seabed morphology, sediment properties, sea ice, river-ocean interaction and nearshore oceanography. These data are being used to improve conceptual models of nearshore processes and to develop and validate numerical models of waves, circulation and sediment transport. The timing and location of sediment erosion, transport and deposition are complex, driven by a combination of open water season storms and spring floods. Unlike its temperate counterparts, the interaction between the Mackenzie River and the Beaufort Sea during spring freshet is mediated by the presence of ice cover. Increasing discharge exceeds the under-ice flow capacity, leading to flooding of the ice surface, followed by vortex drainage through the ice and scour of the seabed below ("strudel" drainage and scour). During winter months, nearshore circulation slows beneath a thickening ice canopy. Recent surveys have shown that the low gradient inner shelf is composed of extensive shoals where ice freezes to the seabed and intervening zones which are slightly deeper than the ice is thick. The duration of ice contact with the bed determines the thermal characteristics of the seabed. Analysis of cores shows that the silts comprising the shoals are up to 6 m thick. The predominantly well sorted and cross-laminated nature of the silts at the top of the cores suggests an active delta front environment. Measurements of waves, currents, conductivity, temperature and sediment concentration during spring and late summer have been acquired. During moderate August

  12. Deciding about fast and slow decisions.

    Science.gov (United States)

    Croskerry, Pat; Petrie, David A; Reilly, James B; Tait, Gordon

    2014-02-01

    Two reports in this issue address the important topic of clinical decision making. Dual process theory has emerged as the dominant model for understanding the complex processes that underlie human decision making. This theory distinguishes between the reflexive, autonomous processes that characterize intuitive decision making and the deliberate reasoning of an analytical approach. In this commentary, the authors address the polarization of viewpoints that has developed around the relative merits of the two systems. Although intuitive processes are typically fast and analytical processes slow, speed alone does not distinguish them. In any event, the majority of decisions in clinical medicine are not dependent on very short response times. What does appear relevant to diagnostic ease and accuracy is the degree to which the symptoms of the disease being diagnosed are characteristic ones. There are also concerns around some methodological issues related to research design in this area of enquiry. Reductionist approaches that attempt to isolate dependent variables may create such artificial experimental conditions that both external and ecological validity are sacrificed. Clinical decision making is a complex process with many independent (and interdependent) variables that need to be separated out in a discrete fashion and then reflected on in real time to preserve the fidelity of clinical practice. With these caveats in mind, the authors believe that research in this area should promote a better understanding of clinical practice and teaching by focusing less on the deficiencies of intuitive and analytical systems and more on their adaptive strengths.

  13. A fast-slow logic system

    International Nuclear Information System (INIS)

    Kawashima, Hideo.

    1977-01-01

    A fast-slow logic system has been made for use in multi-detector experiments in nuclear physics such as particle-gamma and particle-particle coincidence experiments. The system consists of a fast logic system and a slow logic system. The fast logic system has a function of fast coincidences and provides timing signals for the slow logic system. The slow logic system has a function of slow coincidences and a routing control of input analog signals to the ADCs. (auth.)

  14. Slowing down of alpha particles in ICF DT plasmas

    Science.gov (United States)

    He, Bin; Wang, Zhi-Gang; Wang, Jian-Guo

    2018-01-01

    With the effects of the projectile recoil and plasma polarization considered, the slowing down of 3.54 MeV alpha particles is studied in inertial confinement fusion DT plasmas within the plasma density range from 10²⁴ to 10²⁶ cm⁻³ and the temperature range from 100 eV to 200 keV. The study includes the rate of the energy change and the range of the projectile, and the partition fraction of its energy deposition to the deuteron and triton. A comparison with other models is made and the reasons for their differences are explored. It is found that the plasmas will not be heated by the alpha particle during its slowing-down process once the projectile energy becomes close to or less than the temperature of the electron or the deuteron and triton in the plasmas. This leads to less energy deposition to the deuteron and triton than would result if the recoil of the projectile were neglected, when the temperature is close to or higher than 100 keV. Our model is found to provide relevant, reliable data in the large range of density and temperature mentioned above, even if the density is around 10²⁶ cm⁻³ while the deuteron and triton temperature is below 500 eV. Meanwhile, the two important models [Phys. Rev. 126, 1 (1962) and Phys. Rev. E 86, 016406 (2012)] are found not to work in this case. Some unreliable data are found in the last model, including the range of alpha particles and the electron-ion energy partition fraction when the electron is much hotter than the deuteron and triton in the plasmas.

  15. Process parameter impact on properties of sputtered large-area Mo bilayers for CIGS thin film solar cell applications

    Energy Technology Data Exchange (ETDEWEB)

    Badgujar, Amol C.; Dhage, Sanjay R., E-mail: dhage@arci.res.in; Joshi, Shrikant V.

    2015-08-31

    Copper indium gallium selenide (CIGS) has emerged as a promising candidate for thin film solar cells, with efficiencies approaching those of silicon-based solar cells. To achieve optimum performance in CIGS solar cells, uniform, conductive, stress-free, well-adherent, reflective, crystalline molybdenum (Mo) thin films with preferred orientation (110) are desirable as a back contact on large area glass substrates. The present study focuses on cylindrical rotating DC magnetron sputtered bilayer Mo thin films on 300 mm × 300 mm soda lime glass (SLG) substrates. Key sputtering variables, namely power and Ar gas flow rates, were optimized to achieve the best structural, electrical and optical properties. The Mo films were comprehensively characterized and found to possess a high degree of thickness uniformity over the large area. The best crystallinity, reflectance and sheet resistance were obtained at high sputtering powers and low argon gas flow rates, while mechanical properties like adhesion and residual stress were found to be best at low sputtering power and high argon gas flow rate, thereby indicating the need for a suitable trade-off during processing. - Highlights: • Sputtering of bilayer molybdenum thin films on soda lime glass • Large area deposition using rotating cylindrical direct current magnetron • Trade-off between the sputter process parameters power and pressure • High uniformity of thickness and best electrical properties obtained • Suitable mechanical and optical properties of molybdenum are achieved for CIGS application.

  16. Methods for Prediction of Steel Temperature Curve in the Whole Process of a Localized Fire in Large Spaces

    Directory of Open Access Journals (Sweden)

    Zhang Guowei

    2014-01-01

    Full Text Available Based on a full-scale bookcase fire experiment, a fire development model is proposed for the whole process of localized fires in large-space buildings. We found that for localized fires in large-space buildings full of wooden combustible materials, the fire growth phase can be simplified into a t² fire with a 0.0346 kW/s² fire growth coefficient. FDS technology is applied to study the smoke temperature curve for a 2 MW to 25 MW fire occurring within a large space with a height of 6 m to 12 m and a building area of 1,500 m² to 10,000 m², based on the proposed fire development model. Through the analysis of smoke temperature in various fire scenarios, a new approach is proposed to predict the smoke temperature curve. Meanwhile, a modified model of steel temperature development in localized fires is built. In the modified model, the localized fire source is treated as a point fire source to evaluate the net flame heat flux to the steel. The steel temperature curve in the whole process of a localized fire can be accurately predicted from the above findings. These conclusions could provide a valuable reference for fire simulation, hazard assessment, and fire protection design.
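
The t² design fire referenced above is simply Q(t) = αt², with α = 0.0346 kW/s² as reported in the abstract. A minimal sketch of what that growth curve implies (the function names are illustrative, not from the paper):

```python
import math

ALPHA = 0.0346  # fire growth coefficient reported in the abstract, in kW/s^2

def heat_release_rate(t_seconds: float) -> float:
    """Heat release rate Q(t) = alpha * t^2 of a t-squared design fire, in kW."""
    return ALPHA * t_seconds ** 2

def time_to_reach(q_kw: float) -> float:
    """Time in seconds for the growing fire to reach a target heat release rate."""
    return math.sqrt(q_kw / ALPHA)

# The smallest fire scenario studied (2 MW = 2000 kW) is reached in about four minutes:
t_2mw = time_to_reach(2000.0)
print(f"2 MW reached at t = {t_2mw:.0f} s")
```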

  17. Slow pyrolysis of pistachio shell

    Energy Technology Data Exchange (ETDEWEB)

    Apaydin-Varol, Esin; Putun, Ersan; Putun, Ayse E [Anadolu University, Eskisehir (Turkey). Department of Chemical Engineering

    2007-08-15

    In this study, pistachio shell is taken as the biomass sample to investigate the effects of pyrolysis temperature on product yields and composition when slow pyrolysis is applied in a fixed-bed reactor at atmospheric pressure at temperatures of 300, 400, 500, 550 and 700 °C. The maximum liquid yield, 20.5%, was attained at about 500-550 °C. The liquid product obtained at this optimum temperature and the solid products obtained at all temperatures were characterized. While proximate and elemental analyses of the products were the basic characterization steps, column chromatography, FT-IR, GC/MS and SEM were used for further characterization. The results showed that the liquid and solid products from pistachio shells show similarities with high-value conventional fuels. 31 refs., 9 figs., 1 tab.

  18. The TTI slowness surface approximation

    KAUST Repository

    Stovas, A.

    2011-01-01

    The relation between the vertical and horizontal slownesses, better known as the dispersion relation, for a transversely isotropic medium with tilted symmetry axis (TTI) requires solving a quartic polynomial, which does not admit a practical explicit solution to be used, for example, in downward continuation. Using a combination of perturbation theory with respect to the anelliptic parameter and the Shanks transform to improve the accuracy of the expansion, we develop an explicit formula for the dispersion relation that is highly accurate for all practical purposes. It also reveals some insights into the anisotropy-parameter dependency of the dispersion relation, including the low impact that the anelliptic parameter has on the vertical placement of reflectors for small tilt of the symmetry axis. © 2011 Society of Exploration Geophysicists.
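
The Shanks transform invoked in this abstract is a standard sequence-acceleration device: S(A_n) = (A_{n+1}A_{n-1} - A_n²)/(A_{n+1} + A_{n-1} - 2A_n). As a generic illustration of how it sharpens a slowly converging expansion (applied here to an alternating series, not to the paper's actual TTI perturbation series):

```python
import math

def shanks(a_prev: float, a_curr: float, a_next: float) -> float:
    """One Shanks transform step, which removes the dominant geometric
    transient from three consecutive terms of a converging sequence."""
    return (a_next * a_prev - a_curr ** 2) / (a_next + a_prev - 2.0 * a_curr)

# Partial sums of ln(2) = 1 - 1/2 + 1/3 - ... converge very slowly.
partial = [sum((-1) ** (k + 1) / k for k in range(1, n + 1)) for n in range(1, 6)]

raw_error = abs(partial[2] - math.log(2))               # 3-term partial sum
accelerated = shanks(partial[0], partial[1], partial[2])
acc_error = abs(accelerated - math.log(2))              # after one transform step

print(raw_error, acc_error)  # the accelerated estimate is ~20x closer
```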

  19. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    Directory of Open Access Journals (Sweden)

    Qianghui Zhang

    2016-07-01

    Full Text Available Free of the constraints of orbital mechanics, weather conditions and minimum antenna area, synthetic aperture radar (SAR) on a near-space platform is better suited to sustained large-scene imaging than its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), a novel wide-swath imaging mode that allows the SAR beam to scan along the azimuth, can reduce the echo acquisition time for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, two-step processing (TSP) is first adopted to eliminate the Doppler aliasing of the echo. The data is then focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications.

  20. Low, slow, small target recognition based on spatial vision network

    Science.gov (United States)

    Cheng, Zhao; Guo, Pei; Qi, Xin

    2018-03-01

    Traditional photoelectric monitoring relies on a large number of identical cameras. To ensure full coverage of the monitored area, this method deploys many cameras, which leads to overlapping coverage, higher costs and more waste. To reduce the monitoring cost and address the difficulty of finding, identifying and tracking low-altitude, slow-speed, small targets, this paper presents a spatial vision network for low-slow-small target recognition. Based on the camera imaging principle and a monitoring model, the spatial vision network is modeled and optimized. Simulation results demonstrate that the proposed method has good performance.

  1. Investigation of deep inelastic scattering processes involving large p$_{t}$ direct photons in the final state

    CERN Multimedia

    2002-01-01

    This experiment will investigate various aspects of photon-parton scattering and will be performed in the H2 beam of the SPS North Area with high-intensity hadron beams up to 350 GeV/c. a) The directly produced photon yield in deep inelastic hadron-hadron collisions. Large p$_{t}$ direct photons from hadronic interactions are presumably the result of a simple annihilation process of quarks and antiquarks or of a QCD-Compton process. The relative contribution of the two processes can be studied by using various incident beam projectiles $\pi^{+}, \pi^{-}, p$ and, in the future, $\bar{p}$. b) The correlations between directly produced photons and their accompanying hadronic jets. We will examine events with a large p$_{t}$ direct photon for away-side jets. If jets are recognised, their properties will be investigated. Differences between a gluon and a quark jet may become observable by comparing reactions where valence quark annihilations (away-side jet originates from a gluon) dominate over the QCD-Compton...

  2. Multi-format all-optical processing based on a large-scale, hybridly integrated photonic circuit.

    Science.gov (United States)

    Bougioukos, M; Kouloumentas, Ch; Spyropoulou, M; Giannoulis, G; Kalavrouziotis, D; Maziotis, A; Bakopoulos, P; Harmon, R; Rogers, D; Harrison, J; Poustie, A; Maxwell, G; Avramopoulos, H

    2011-06-06

    We investigate through numerical studies and experiments the performance of a large scale, silica-on-silicon photonic integrated circuit for multi-format regeneration and wavelength-conversion. The circuit encompasses a monolithically integrated array of four SOAs inside two parallel Mach-Zehnder structures, four delay interferometers and a large number of silica waveguides and couplers. Exploiting phase-incoherent techniques, the circuit is capable of processing OOK signals at variable bit rates, DPSK signals at 22 or 44 Gb/s and DQPSK signals at 44 Gbaud. Simulation studies reveal the wavelength-conversion potential of the circuit with enhanced regenerative capabilities for OOK and DPSK modulation formats and acceptable quality degradation for DQPSK format. Regeneration of 22 Gb/s OOK signals with amplified spontaneous emission (ASE) noise and DPSK data signals degraded with amplitude, phase and ASE noise is experimentally validated demonstrating a power penalty improvement up to 1.5 dB.

  3. Research Update: Large-area deposition, coating, printing, and processing techniques for the upscaling of perovskite solar cell technology

    Directory of Open Access Journals (Sweden)

    Stefano Razza

    2016-09-01

    Full Text Available To bring perovskite solar cells to the industrial world, performance must be maintained at the photovoltaic module scale. Here we present large-area manufacturing and processing options applicable to large-area cells and modules. Printing and coating techniques such as blade coating, slot-die coating, spray coating, screen printing, inkjet printing, and gravure printing (as alternatives to spin coating), as well as vacuum- or vapor-based deposition and laser patterning techniques, are being developed for an effective scale-up of the technology. The latter also enables the manufacture of solar modules on flexible substrates, an option beneficial for many applications and for roll-to-roll production.

  4. EBSD-based techniques for characterization of microstructural restoration processes during annealing of metals deformed to large plastic strains

    DEFF Research Database (Denmark)

    Godfrey, A.; Mishin, Oleg; Yu, Tianbo

    2012-01-01

    Some methods for quantitative characterization of microstructures deformed to large plastic strains, both before and after annealing, are discussed and illustrated using examples of samples after equal channel angular extrusion and cold-rolling. It is emphasized that the microstructures in such deformed samples exhibit a heterogeneity in the microstructural refinement by high angle boundaries. Based on this, a new parameter describing the fraction of regions containing predominantly low angle boundaries is introduced. This parameter has some advantages over the simpler high angle boundary ... on the mode of the distribution of dislocation cell sizes is outlined, and it is demonstrated how this parameter can be used to investigate the uniformity, or otherwise, of the restoration processes occurring during annealing of metals deformed to large plastic strains. © (2012) Trans Tech Publications...

  5. Slow decay of magnetic fields in open Friedmann universes

    International Nuclear Information System (INIS)

    Barrow, John D.; Tsagas, Christos G.

    2008-01-01

    Magnetic fields in Friedmann universes can experience superadiabatic growth without departing from conventional electromagnetism. The reason is the relativistic coupling between vector fields and spacetime geometry, which slows down the decay of large-scale magnetic fields in open universes, compared to that seen in perfectly flat models. The result is a large relative gain in magnetic strength that can lead to astrophysically interesting B fields, even if our Universe is only marginally open today.

  6. Signal formation processes in Micromegas detectors and quality control for large size detector construction for the ATLAS new small wheel

    Energy Technology Data Exchange (ETDEWEB)

    Kuger, Fabian

    2017-07-31

    The Micromegas technology is one of the most successful modern gaseous detector concepts, widely utilized in nuclear and particle physics experiments. Twenty years of R and D rendered the technology sufficiently mature to be selected as the precision tracking detector for the New Small Wheel (NSW) upgrade of the ATLAS Muon spectrometer. This will be the first large-scale application of Micromegas in one of the major LHC experiments. However, many of the fundamental microscopic processes in these gaseous detectors are still not fully understood, and studies on several detector aspects, like the micromesh geometry, have never been addressed systematically. The studies on signal formation in Micromegas, presented in the first part of this thesis, focus on the microscopic signal electron loss mechanisms and the amplification processes in electron-gas interaction. Based on a detailed model of detector parameter dependencies, these processes are scrutinized in an iterative comparison between experimental results, theory predictions of the macroscopic observables and process simulation on the microscopic level. Utilizing the specialized detectors developed in the scope of this thesis as well as refined simulation algorithms, an unprecedented level of accuracy in the description of the microscopic processes is reached, deepening the understanding of the fundamental processes in gaseous detectors. The second part is dedicated to the challenges arising with the large-scale Micromegas production for the ATLAS NSW. A selection of technological choices, partially influenced or determined by the studies presented herein, is discussed alongside a final report on two production-related tasks addressing the detectors' core components: for the industrial production of resistive anode PCBs, a detailed quality control (QC) and quality assurance (QA) scheme, as well as the required testing tools, have been developed. In parallel the study on micromesh parameter optimization

  7. Signal formation processes in Micromegas detectors and quality control for large size detector construction for the ATLAS new small wheel

    International Nuclear Information System (INIS)

    Kuger, Fabian

    2017-01-01

    The Micromegas technology is one of the most successful modern gaseous detector concepts, widely utilized in nuclear and particle physics experiments. Twenty years of R and D rendered the technology sufficiently mature to be selected as the precision tracking detector for the New Small Wheel (NSW) upgrade of the ATLAS Muon spectrometer. This will be the first large-scale application of Micromegas in one of the major LHC experiments. However, many of the fundamental microscopic processes in these gaseous detectors are still not fully understood, and studies on several detector aspects, like the micromesh geometry, have never been addressed systematically. The studies on signal formation in Micromegas, presented in the first part of this thesis, focus on the microscopic signal electron loss mechanisms and the amplification processes in electron-gas interaction. Based on a detailed model of detector parameter dependencies, these processes are scrutinized in an iterative comparison between experimental results, theory predictions of the macroscopic observables and process simulation on the microscopic level. Utilizing the specialized detectors developed in the scope of this thesis as well as refined simulation algorithms, an unprecedented level of accuracy in the description of the microscopic processes is reached, deepening the understanding of the fundamental processes in gaseous detectors. The second part is dedicated to the challenges arising with the large-scale Micromegas production for the ATLAS NSW. A selection of technological choices, partially influenced or determined by the studies presented herein, is discussed alongside a final report on two production-related tasks addressing the detectors' core components: for the industrial production of resistive anode PCBs, a detailed quality control (QC) and quality assurance (QA) scheme, as well as the required testing tools, have been developed. In parallel the study on micromesh parameter optimization

  8. Massive Cloud Computing Processing of P-SBAS Time Series for Displacement Analyses at Large Spatial Scale

    Science.gov (United States)

    Casu, F.; de Luca, C.; Lanari, R.; Manunta, M.; Zinno, I.

    2016-12-01

    A methodology for computing surface deformation time series and mean velocity maps of large areas is presented. Our approach relies on the availability of a multi-temporal set of Synthetic Aperture Radar (SAR) data collected from ascending and descending orbits over an area of interest, and also permits estimation of the vertical and horizontal (East-West) displacement components of the Earth's surface. The adopted methodology is based on an advanced Cloud Computing implementation of the Differential SAR Interferometry (DInSAR) Parallel Small Baseline Subset (P-SBAS) processing chain, which allows the unsupervised processing of large SAR data volumes, from the raw (level-0) imagery up to the generation of DInSAR time series and maps. The presented solution, which is highly scalable, has been tested on the ascending and descending ENVISAT SAR archives acquired over a large area of Southern California (US) that extends for about 90,000 km². This input dataset has been processed in parallel by exploiting 280 computing nodes of the Amazon Web Services Cloud environment. Moreover, to produce the final mean deformation velocity maps of the vertical and East-West displacement components of the whole investigated area, we also took advantage of the information available from external GPS measurements, which makes it possible to account for regional trends not easily detectable by DInSAR and to refer the P-SBAS measurements to an external geodetic datum. The presented results clearly demonstrate the effectiveness of the proposed approach, which paves the way to the extensive use of the available ERS and ENVISAT SAR data archives. Furthermore, the proposed methodology is particularly suitable for dealing with the very large data flow provided by the Sentinel-1 constellation, thus permitting the DInSAR analyses to be extended to a nearly global scale. This work is partially supported by: the DPC-CNR agreement, the EPOS-IP project and the ESA GEP project.

  9. Performance of the front-end signal processing electronics for the drift chambers of the Stanford Large Detector

    International Nuclear Information System (INIS)

    Honma, A.; Haller, G.M.; Usher, T.; Shypit, R.

    1990-10-01

    This paper reports on the performance of the front-end analog and digital signal-processing electronics for the drift chambers of the Stanford Large Detector (SLD) at the Stanford Linear Collider. The electronics, mounted on printed circuit boards, include up to 64 channels of transimpedance amplification, analog sampling, A/D conversion, and associated control circuitry. Measurements of the time resolution, gain, noise, linearity, crosstalk, and stability of the readout electronics are described and presented. The expected contribution of the electronics to the relevant drift chamber measurement resolutions (i.e., timing and charge division) is given.

  10. How engineering data management and system support the main process[-oriented] functions of a large-scale project

    CERN Document Server

    Hameri, A P

    1999-01-01

    By dividing the development process into successive functional operations, this paper studies the benefits of establishing configuration management procedures and of using an engineering data management system (EDMS) to execute the tasks. The underlying environment is that of CERN and the ongoing, decade-long Large Hadron Collider (LHC) project. By identifying the main functional groups who will use the EDMS, the paper outlines the basic motivations and services provided by such a system to each process function. The implications of strict configuration management for the daily operation of each functional user group are also discussed. The main argument of the paper is that each and every user of the EDMS must act in compliance with the configuration management procedures to guarantee the overall benefits of the system. The pilot EDMS being developed at CERN, which serves as a test-bed to discover the organisation's real functional needs for an EDMS, supports the conclusions. The preliminary ...

  11. Slow Money for Soft Energy: Lessons for Energy Finance from the Slow Money Movement

    Energy Technology Data Exchange (ETDEWEB)

    Kock, Beaudry E. [Environmental Change Institute, University of Oxford, Oxford (United Kingdom)], e-mail: beaudry.kock@ouce.ox.ac.uk

    2012-12-15

    Energy infrastructure is decarbonizing, shifting from dirty coal to cleaner gas- and emissions-free renewables. This is an important and necessary change that unfortunately risks preserving many problematic technical and institutional properties of the old energy system: in particular, the large scales, high aggregation, and excessive centralization of renewable energy infrastructure and, importantly, its financing. Large-scale renewables carry environmental, social and political risks that cannot be ignored, and more importantly they may not alone accomplish the necessary decarbonization of the power sector. We need to revive a different approach to clean energy infrastructure: a 'softer' (Lovins 1978), more distributed, decentralized, local-scale strategy. To achieve this, we need a fundamentally different approach to the financing of clean energy infrastructure. I propose we learn from the 'Slow Money' approach being pioneered in sustainable agriculture (Tasch 2010), emphasizing a better connection to place, smaller scales, and a focus on quality over quantity. This 'slow money, soft energy' vision is not a repudiation of big-scale renewables, since there are some societal needs, which can only be met by big, centralized power. But we do not need the level of concentration in control and finance epitomized by the current trends in the global renewables sector: this can and must change.

  12. Inverse problem to constrain the controlling parameters of large-scale heat transport processes: The Tiberias Basin example

    Science.gov (United States)

    Goretzki, Nora; Inbar, Nimrod; Siebert, Christian; Möller, Peter; Rosenthal, Eliyahu; Schneider, Michael; Magri, Fabien

    2015-04-01

    Salty and thermal springs exist along the lakeshore of the Sea of Galilee, which covers most of the Tiberias Basin (TB) in the northern Jordan-Dead Sea Transform, Israel/Jordan. As it is the only freshwater reservoir of the entire area, it is important to study the salinisation processes that pollute the lake. Simulations of thermohaline flow along a 35 km NW-SE profile show that meteoric and relic brines are flushed by the regional flow from the surrounding heights and by thermally induced groundwater flow within the faults (Magri et al., 2015). Several model runs with trial and error were necessary to calibrate the hydraulic conductivity of both the faults and the major aquifers in order to fit temperature logs and spring salinity. It turned out that the hydraulic conductivity of the faults ranges between 30 and 140 m/yr, whereas the hydraulic conductivity of the Upper Cenomanian aquifer is as high as 200 m/yr. However, large-scale transport processes also depend on other physical parameters, such as thermal conductivity, porosity and the fluid thermal expansion coefficient, which are hardly known. Here, inverse problems (IP) are solved along the NW-SE profile to better constrain the physical parameters: (a) hydraulic conductivity, (b) thermal conductivity and (c) thermal expansion coefficient. The PEST code (Doherty, 2010) is applied via the graphical interface FePEST in FEFLOW (Diersch, 2014). The results show that both thermal and hydraulic conductivity are consistent with the values determined by the trial-and-error calibrations. Besides being an automatic approach that speeds up the calibration process, the IP allows covering a wide range of parameter values, providing additional solutions not found with the trial-and-error method. Our study shows that geothermal systems like the TB are more comprehensively understood when inverse models are applied to constrain coupled fluid flow processes over large spatial scales. References Diersch, H.-J.G., 2014. FEFLOW Finite
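
The calibration idea described above, adjusting a conductivity until model output matches observations, can be sketched with a toy one-dimensional inverse problem. All numbers and the grid-search routine below are illustrative assumptions, not PEST's algorithm or values from the Tiberias Basin model:

```python
def darcy_flux(k: float, gradient: float) -> float:
    """Forward model: Darcy flux q = K * i (m/yr) for conductivity K (m/yr)
    under hydraulic gradient i."""
    return k * gradient

def calibrate(observed_q: float, gradient: float, k_trials) -> float:
    """Inverse step: grid search for the conductivity that minimizes the
    squared misfit between modeled and observed flux."""
    return min(k_trials, key=lambda k: (darcy_flux(k, gradient) - observed_q) ** 2)

# Synthetic "observation" generated with K = 90 m/yr, inside the 30-140 m/yr
# fault-conductivity range quoted in the abstract:
observed = darcy_flux(90.0, 0.01)
best_k = calibrate(observed, 0.01, [0.5 * n for n in range(1, 401)])  # 0.5-200 m/yr
print(f"best-fit K = {best_k} m/yr")
```

A real calibration replaces the grid search with gradient-based parameter estimation (as PEST does) and a numerical forward model, but the misfit-minimization structure is the same.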

  13. USING THE BUSINESS ENGINEERING APPROACH IN THE DEVELOPMENT OF A STRATEGIC MANAGEMENT PROCESS FOR A LARGE CORPORATION: A CASE STUDY

    Directory of Open Access Journals (Sweden)

    C.M. Moll

    2012-01-01

    Full Text Available Most South African organisations were historically part of a closed competitive system with little global competition and a relatively stable economy (Manning: 18, Sunter: 32). Since the political transformation, the globalisation of the world economy, the decline of world economic fundamentals and specific challenges in the South African scenario such as GEAR and employment equity, the whole playing field has changed. With these changes, new challenges appear. A significant challenge for organisations within this scenario is to think, plan and manage strategically. In order to do so, the organisation must understand its relationship with its environment and establish innovative new strategies to manipulate, interact with, and ultimately survive in the environment. The legacy of the past has, in many organisations, implanted an operational short-term focus because the planning horizon was stable. It was sufficient to construct annual plans rather than strategies. These plans were typically internally focused rather than driven by the external environment. Strategic planning in this environment tended to be a form of team building through which the various members of the organisation's management team discussed and documented the problems of the day. A case study is presented of the development of a strategic management process for a large South African mining company. The authors believe that the approach is a new and different way of addressing a problem that exists in many organisations - the establishment of a process of strategic thinking, whilst at the same time ensuring that a formal process of strategic planning is followed in order to prompt the management of the organisation for strategic action. The lessons drawn from this process are applicable to a larger audience due to the homogeneous nature of the management style of a large number of South African organisations.

  14. Human skeletal muscle: transition between fast and slow fibre types.

    Science.gov (United States)

    Neunhäuserer, Daniel; Zebedin, Michaela; Obermoser, Magdalena; Moser, Gerhard; Tauber, Mark; Niebauer, Josef; Resch, Herbert; Galler, Stefan

    2011-05-01

    Human skeletal muscles consist of different fibre types: slow fibres (slow twitch or type I) containing the myosin heavy chain isoform (MHC)-I and fast fibres (fast twitch or type II) containing MHC-IIa (type IIA) or MHC-IId (type IID). The following order of decreasing kinetics is known: type IID > type IIA > type I. This order is based especially on the kinetics of stretch activation, which is the most discriminative property among fibre types. In this study we tested whether hybrid fibres containing both MHC-IIa and MHC-I (type C fibres) provide a transition in kinetics between fast (type IIA) and slow fibres (type I). Our data on stretch activation kinetics suggest that type C fibres, with different ratios of MHC-IIa and MHC-I, do not provide a continuous transition. Instead, a specialized group of slow fibres, which we called "transition fibres", seems to provide a transition. Apart from their kinetics of stretch activation, which is closest to that of type IIA, the transition fibres are characterized by large cross-sectional areas and low maximal tensions. The molecular cause of the mechanical properties of the transition fibres is unknown. It is possible that the transition fibres contain an unknown slow MHC isoform that cannot be separated by biochemical methods. Alternatively, or in addition, isoforms of myofibrillar proteins other than MHC, and posttranslational modifications of myofibrillar proteins, could play a role in the characteristics of the transition fibres.

  15. Survey of high-voltage pulse technology suitable for large-scale plasma source ion implantation processes

    International Nuclear Information System (INIS)

    Reass, W.A.

    1994-01-01

    Many new plasma process ideas are finding their way from the research lab to the manufacturing plant floor. These require high voltage (HV) pulse power equipment, which must be optimized for application, system efficiency, and reliability. Although no single HV pulse technology is suitable for all plasma processes, various classes of high voltage pulsers may offer greater versatility and economy to the manufacturer. Technology developed for existing radar and particle accelerator modulator power systems can be utilized to develop a modern large scale plasma source ion implantation (PSII) system. The HV pulse networks can be broadly divided into two classes of systems: those that generate the voltage directly, and those that use some type of pulse forming network and step-up transformer. This article will examine these HV pulse technologies and discuss their applicability to the specific PSII process. Typical systems that will be reviewed include high power solid state, hard tube systems such as crossed-field ''hollow beam'' switch tubes and planar tetrodes, and ''soft'' tube systems with crossatrons and thyratrons. Results will be tabulated and suggestions provided for a particular PSII process.

  16. The Adaptive Organization and Fast-slow Systems

    DEFF Research Database (Denmark)

    Andersen, Torben Juul; Hallin, Carina Antonia

    2016-01-01

    Contemporary organizations operate under turbulent business conditions and must adapt their strategies to ongoing changes. This article argues that sustainable organizational performance is achieved when top management directs and coordinates interactive processes anchored in emerging...... organizational opportunities and forward-looking analytics. The fast and emergent processes performed by local managers at the frontline observe and respond to environmental stimuli and the slow processes initiated by decision makers interpret events and reasons about updated strategic actions. Current...

  17. Large scale synthesis of α-Si3N4 nanowires through a kinetically favored chemical vapour deposition process

    Science.gov (United States)

    Liu, Haitao; Huang, Zhaohui; Zhang, Xiaoguang; Fang, Minghao; Liu, Yan-gai; Wu, Xiaowen; Min, Xin

    2018-01-01

    Understanding the kinetic barrier and driving force for crystal nucleation and growth is decisive for the synthesis of nanowires with controllable yield and morphology. In this research, we developed an effective reaction system to synthesize very large scale α-Si3N4 nanowires (hundreds of milligrams) and carried out a comparative study to characterize the kinetic influence of gas precursor supersaturation and liquid metal catalyst. The phase composition, morphology, microstructure and photoluminescence properties of the as-synthesized products were characterized by X-ray diffraction, Fourier-transform infrared spectroscopy, field emission scanning electron microscopy, transmission electron microscopy and room temperature photoluminescence measurement. The yield of the products relates not only to the reaction temperature (thermodynamic condition) but also to the distribution of gas precursors (kinetic condition). As revealed in this research, by controlling the gas diffusion process, the yield of the nanowire products could be greatly improved. The experimental results indicate that the supersaturation is the dominant factor in the as-designed system rather than the catalyst. With excellent non-flammability and high thermal stability, the large scale α-Si3N4 products have potential applications in improving the strength of high temperature ceramic composites. The photoluminescence spectrum of the α-Si3N4 shows a blue shift which could be valuable for future applications in blue-green emitting devices. There is no doubt that the large scale products are the basis of these applications.

  18. Model reduction for the dynamics and control of large structural systems via neural network processing direct numerical optimization

    Science.gov (United States)

    Becus, Georges A.; Chan, Alistair K.

    1993-01-01

    Three neural network processing approaches in a direct numerical optimization model reduction scheme are proposed and investigated. Large structural systems, such as large space structures, offer new challenges to both structural dynamicists and control engineers. One such challenge is that of dimensionality. Indeed these distributed parameter systems can be modeled either by infinite dimensional mathematical models (typically partial differential equations) or by high dimensional discrete models (typically finite element models) often exhibiting thousands of vibrational modes usually closely spaced and with little, if any, damping. Clearly, some form of model reduction is in order, especially for the control engineer who can actively control but a few of the modes using system identification based on a limited number of sensors. Inasmuch as the amount of 'control spillover' (in which the control inputs excite the neglected dynamics) and/or 'observation spillover' (where neglected dynamics affect system identification) is to a large extent determined by the choice of particular reduced model (RM), the way in which this model reduction is carried out is often critical.

  19. Plant domestication slows pest evolution.

    Science.gov (United States)

    Turcotte, Martin M; Lochab, Amaneet K; Turley, Nash E; Johnson, Marc T J

    2015-09-01

    Agricultural practices such as breeding resistant varieties and pesticide use can cause rapid evolution of pest species, but it remains unknown how plant domestication itself impacts pest contemporary evolution. Using experimental evolution on a comparative phylogenetic scale, we compared the evolutionary dynamics of a globally important economic pest - the green peach aphid (Myzus persicae) - growing on 34 plant taxa, represented by 17 crop species and their wild relatives. Domestication slowed aphid evolution by 13.5%, maintained 10.4% greater aphid genotypic diversity and 5.6% higher genotypic richness. The direction of evolution (i.e. which genotypes increased in frequency) differed among independent domestication events but was correlated with specific plant traits. Individual-based simulation models suggested that domestication affects aphid evolution directly by reducing the strength of selection and indirectly by increasing aphid density and thus weakening genetic drift. Our results suggest that phenotypic changes during domestication can alter pest evolutionary dynamics. © 2015 John Wiley & Sons Ltd/CNRS.
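    The two mechanisms suggested by the individual-based models (weaker selection, and higher aphid density hence weaker genetic drift) can be illustrated with a toy Wright-Fisher simulation. Population sizes, selection coefficients and genotype counts below are invented, not fitted to the aphid data:

```python
# Toy Wright-Fisher simulation of the two effects suggested by the
# individual-based models: weaker selection and larger population size
# (weaker drift) both slow evolution and preserve genotypic diversity.
import random

def simulate(n_pop, s, n_gen=100, n_genotypes=10, seed=1):
    """Return (genotypic richness, frequency of the favoured genotype)
    after n_gen generations; genotype 0 has fitness 1 + s, the rest 1."""
    rng = random.Random(seed)
    pop = [g % n_genotypes for g in range(n_pop)]        # even initial mix
    for _ in range(n_gen):
        weights = [1.0 + s if g == 0 else 1.0 for g in pop]
        pop = rng.choices(pop, weights=weights, k=n_pop)  # resample offspring
    return len(set(pop)), pop.count(0) / n_pop

# Wild relative: strong selection, smaller population.
rich_wild, freq_wild = simulate(n_pop=500, s=0.10)
# Domesticated host: weak selection, larger population.
rich_crop, freq_crop = simulate(n_pop=2000, s=0.02)
print(rich_wild, freq_wild, rich_crop, freq_crop)
```

    With these (arbitrary) settings the favoured genotype sweeps on the "wild" host but changes frequency far more slowly on the "domesticated" one, which also retains more genotypes.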

  20. Schrodinger cat state generation using a slow light

    International Nuclear Information System (INIS)

    Ham, B. S.; Kim, M. S.

    2003-01-01

    We show a practical application of giant Kerr nonlinearity to quantum information processing based on superposition of two distinct macroscopic states- Schrodinger cat state. The giant Kerr nonlinearity can be achieved by using electromagnetically induced transparency, in which light propagation should be slowed down so that a pi-phase shift can be easily obtained owing to increased interaction time.

  1. Human Growth Hormone (HGH): Does It Slow Aging?

    Science.gov (United States)

    Healthy Lifestyle Healthy aging Human growth hormone is described by some as the key to slowing the aging process. Before you sign up, get the ... slowdown has triggered an interest in using synthetic human growth hormone (HGH) as a way to stave ...

  2. Biochar production from freshwater algae by slow pyrolysis

    Directory of Open Access Journals (Sweden)

    Tanongkiat Kiatsiriroat

    2012-05-01

    Full Text Available A study on the feasibility of biochar production from 3 kinds of freshwater algae, viz. Spirulina, Spirogyra and Cladophora, was undertaken. Using a slow pyrolysis process in a specially designed reactor, biochar could be generated at 550 °C under a nitrogen atmosphere. The yields of biochar were between 28 and 31% of the dry algae.

  3. IAEA Conference on Large Radiation Sources in Industry (Warsaw 1959): Which technologies of radiation processing survived and why?

    International Nuclear Information System (INIS)

    Zagorski, Z.P.

    1999-01-01

    The IAEA organized an International Conference on Large Radiation Sources in Industry in Warsaw from 8 to 12 September 1959. The Proceedings of the Conference were published in two volumes totalling 925 pages. This report analyses which technologies presented at the Conference have survived, and why. The analysis is interesting because practically the full range of possibilities of radiation processing was already explored, and partially implemented, in the fifties. Not many new technologies were presented at the subsequent IAEA Conferences on the same theme. Already at the time of the Warsaw Conference, the important role of the economics of a technology was recognized. The present report sorts the achievements of the Conference into two groups: the first concerns technologies which were not implemented in the following decades, and the second those which became the basis of highly profitable, unsubsidized commercial production. The criterion for assigning a technology to the second group is the value of the quotient of the cost of the finished, saleable product, diminished by the cost of the raw material before processing, to the expense of radiation processing, the latter being the sum of the irradiation cost and of such operations as transportation of the object to and from the irradiation facility. A low value of this quotient, compared with that of successful technologies, bodes ill for the future of a commercial proposal. A special position among objects of radiation processing is occupied by technologies directed towards protecting or improving the environment. Market economy does not apply here, and implementation has to be subsidized. (author)

  4. Applications of Slow Light in Telecommunications

    National Research Council Canada - National Science Library

    Boyd, Robert W; Gauthier, Daniel J; Gaeta, Alexander L

    2006-01-01

    .... Now, optical scientists are turning their attention toward developing useful applications of slow light, including controllable optical delay lines, optical buffers and true time delay methods...

  5. Single ion induced surface nanostructures: a comparison between slow highly charged and swift heavy ions.

    Science.gov (United States)

    Aumayr, Friedrich; Facsko, Stefan; El-Said, Ayman S; Trautmann, Christina; Schleberger, Marika

    2011-10-05

    This topical review focuses on recent advances in the understanding of the formation of surface nanostructures, an intriguing phenomenon in ion-surface interaction due to the impact of individual ions. In many solid targets, swift heavy ions produce narrow cylindrical tracks accompanied by the formation of a surface nanostructure. More recently, a similar nanometric surface effect has been revealed for the impact of individual, very slow but highly charged ions. While swift ions transfer their large kinetic energy to the target via ionization and electronic excitation processes (electronic stopping), slow highly charged ions produce surface structures due to potential energy deposited at the top surface layers. Despite the differences in primary excitation, the similarity between the nanostructures is striking and strongly points to a common mechanism related to the energy transfer from the electronic to the lattice system of the target. A comparison of surface structures induced by swift heavy ions and slow highly charged ions provides valuable insight into the formation mechanisms. © 2011 IOP Publishing Ltd

  6. Systematic dependence on the slowing down environment, of nuclear lifetime measurements by DSAM

    International Nuclear Information System (INIS)

    Toulemonde, M.; Haas, F.

    1976-01-01

    The mean life of the 22Ne 3.34 MeV level measured by DSAM (Doppler Shift Attenuation Method) at an average velocity of 0.009 c shows large fluctuations with different slowing down materials ranging from Li to Pb. These fluctuations are correlated with a linear dependence of the 'apparent' mean life tau on the electronic slowing down time.

  7. Spatiotemporal complexity of 2-D rupture nucleation process observed by direct monitoring during large-scale biaxial rock friction experiments

    Science.gov (United States)

    Fukuyama, Eiichi; Tsuchida, Kotoyo; Kawakata, Hironori; Yamashita, Futoshi; Mizoguchi, Kazuo; Xu, Shiqing

    2018-05-01

    We successfully captured rupture nucleation processes on a 2-D fault surface during large-scale biaxial friction experiments using metagabbro rock specimens. Several rupture nucleation patterns were detected by a strain gauge array embedded inside the rock specimens as well as by one installed along the edge walls of the fault. In most cases, the unstable rupture started just after the rupture front touched both ends of the rock specimen (i.e., when the rupture front extended across the entire width of the fault). In some cases, rupture initiated at multiple locations and the rupture fronts coalesced to generate unstable ruptures, which could only be detected from the observations inside the rock specimen. Therefore, we need to carefully examine the 2-D nucleation process of the rupture, especially when analyzing data measured only outside the rock specimen. At a minimum, measurements should be made on both sides of the fault to identify asymmetric rupture propagation on the fault surface, although even this is not sufficient. In the present experiment, we observed three typical types of 2-D rupture propagation patterns, two of which initiated at a single location either close to the fault edge or inside the fault. This initiation could be accelerated by the free surface effect at the fault edge. The third type initiated at multiple locations and involved rupture coalescence in the middle of the fault. These geometrically complicated rupture initiation patterns are important for understanding the earthquake nucleation process in nature.

  8. Utilization of Workflow Process Maps to Analyze Gaps in Critical Event Notification at a Large, Urban Hospital.

    Science.gov (United States)

    Bowen, Meredith; Prater, Adam; Safdar, Nabile M; Dehkharghani, Seena; Fountain, Jack A

    2016-08-01

    Stroke care is a time-sensitive workflow involving multiple specialties acting in unison, often relying on one-way paging systems to alert care providers. The goal of this study was to map and quantitatively evaluate such a system and address communication gaps with system improvements. A workflow process map of the stroke notification system at a large, urban hospital was created via observation and interviews with hospital staff. We recorded pager communication regarding 45 patients in the emergency department (ED), neuroradiology reading room (NRR), and a clinician residence (CR), categorizing transmissions as successful or unsuccessful (dropped or unintelligible). Data analysis and consultation with information technology staff and the vendor informed a quality intervention: replacing one paging antenna and adding another. Data from a 1-month post-intervention period were collected. Error rates before and after were compared using a chi-squared test. Seventy-five pages regarding 45 patients were recorded pre-intervention; 88 pages regarding 86 patients were recorded post-intervention. Initial transmission error rates in the ED, NRR, and CR were 40.0, 22.7, and 12.0 %. Post-intervention, error rates were 5.1, 18.8, and 1.1 %, a statistically significant improvement in the ED. The workflow process map effectively defined communication failure parameters, allowing for systematic testing and intervention to improve communication in essential clinical locations.
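    The before/after comparison amounts to a 2x2 chi-squared test on transmission counts. The counts below are illustrative reconstructions (the per-location denominators are not given in the abstract), not the study's raw data:

```python
# 2x2 chi-squared test on transmission error counts, as used for the
# before/after comparison.  Counts are illustrative, not the study's data.
def chi2_2x2(fail_a, ok_a, fail_b, ok_b):
    """Pearson chi-squared statistic for a 2x2 contingency table."""
    n = fail_a + ok_a + fail_b + ok_b
    row_a, row_b = fail_a + ok_a, fail_b + ok_b
    col_fail, col_ok = fail_a + fail_b, ok_a + ok_b
    stat = 0.0
    for obs, row, col in [(fail_a, row_a, col_fail), (ok_a, row_a, col_ok),
                          (fail_b, row_b, col_fail), (ok_b, row_b, col_ok)]:
        expected = row * col / n          # expected count under independence
        stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical ED counts: ~40% of 75 pages failed before, ~5% of 88 after.
stat = chi2_2x2(fail_a=30, ok_a=45, fail_b=4, ok_b=84)
print(f"chi-squared = {stat:.1f}")  # well above 3.84 (alpha = 0.05, 1 dof)
```

    Comparing the statistic against the critical value for one degree of freedom is what justifies calling the ED improvement statistically significant.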

  9. Slow extraction control system of HIRFL-CSR

    International Nuclear Information System (INIS)

    Liu Wufeng; Qiao Weimin; Yuan Youjin; Mao Ruishi; Zhao Tiecheng

    2013-01-01

    For heavy-ion radiotherapy, HIRFL-CSR (Heavy Ion Research Facility in Lanzhou-Cooler Storage Ring) needs long-term uniform ion beam extraction from the HIRFL-CSR main ring to the high energy beam transport line, to meet the beam requirements of heavy-ion radiotherapy. The slow extraction control system uses the synchronous signals of the HIRFL-CSR timing system to realize process control. When the synchronous event data trigger updates of the control data (frequency, tune and voltage values), the waveform generator produces the corresponding waveform, which is amplified by a power amplifier and fed to the electrostatic deflector to achieve RF-knockout (RF-KO) slow extraction. The synchronous event receiver of the slow extraction system is designed with an FPGA and an optical fiber interface to maintain high transmission speed and noise immunity. Operation of HIRFL-CSR for heavy-ion radiotherapy and long-duration (ten-thousand-second) slow extraction experiments show that the slow extraction control system works and meets the beam requirements of heavy-ion radiotherapy. (authors)
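    As a rough illustration of what such a waveform generator computes (not the HIRFL-CSR implementation, whose parameters are not given here), an RF-knockout drive is commonly a frequency sweep around a betatron tune sideband, with amplitude set by the requested voltage. All numbers below are invented:

```python
# Illustrative RF-knockout drive waveform (not the HIRFL-CSR code):
# a linear frequency sweep centred on a betatron tune sideband, with
# amplitude given by the requested voltage.  All numbers are invented.
import numpy as np

def rfko_waveform(f_rev, tune, voltage, sweep=0.005, t_end=1e-3, fs=50e6):
    """Return (t, v): a +/-0.5% linear FM sweep around the sideband."""
    f_c = f_rev * (1.0 - (tune % 1.0))          # knockout sideband [Hz]
    n = int(round(t_end * fs))
    t = np.arange(n) / fs
    f_inst = f_c * (1.0 + sweep * (2.0 * t / t_end - 1.0))  # instantaneous freq
    phase = 2.0 * np.pi * np.cumsum(f_inst) / fs            # integrate freq
    return t, voltage * np.sin(phase)

t, v = rfko_waveform(f_rev=1.6e6, tune=2.63, voltage=100.0)
```

    In a real system the sweep would repeat continuously and the voltage would be modulated by the spill feedback to keep the extracted intensity uniform.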

  10. Neurogenetics of slow axonal transport: from cells to animals.

    Science.gov (United States)

    Sadananda, Aparna; Ray, Krishanu

    2012-09-01

    Slow axonal transport is a multivariate phenomenon implicated in several neurodegenerative disorders. Recent reports have unraveled the molecular basis of the transport of certain slow component proteins, such as the neurofilament subunits, tubulin, and certain soluble enzymes such as Ca(2+)/calmodulin-dependent protein kinase IIa (CaM kinase IIa), etc., in tissue cultured neurons. In addition, genetic analyses also implicate microtubule-dependent motors and other housekeeping proteins in this process. However, the biological relevance of this phenomenon is not so well understood. Here, the authors have discussed the possibility of adopting neurogenetic analyses in multiple model organisms to correlate molecular level measurements of the slow transport phenomenon to animal behavior, thus facilitating the investigation of its biological efficacy.

  11. A progress report for the large block test of the coupled thermal-mechanical-hydrological-chemical processes

    International Nuclear Information System (INIS)

    Lin, W.; Wilder, D.G.; Blink, J.

    1994-10-01

    This is a progress report on the Large Block Test (LBT) project. The purpose of the LBT is to study some of the coupled thermal-mechanical-hydrological-chemical (TMHC) processes in the near field of a nuclear waste repository under controlled boundary conditions. To do so, a large block of Topopah Spring tuff will be heated from within for about 4 to 6 months, then cooled down for about the same duration. Instruments to measure temperature, moisture content, stress, displacement, and chemical changes will be installed in three directions in the block. Meanwhile, laboratory tests will be conducted on small blocks to investigate individual thermal-mechanical, thermal-hydrological, and thermal-chemical processes. The fractures in the large block will be characterized from five exposed surfaces. The minerals on fracture surfaces will be studied before and after the test. The results from the LBT will be useful for testing and building confidence in models that will be used to predict TMHC processes in a repository. The boundary conditions to be controlled on the block include zero moisture flux and zero heat flux on the sides, constant temperature on the top, and constant stress on the outside surfaces of the block. To control these boundary conditions, a load-retaining frame is required. A 3 x 3 x 4.5 m block of Topopah Spring tuff has been isolated on the outcrop at Fran Ridge, Nevada Test Site. Pre-test model calculations indicate that a permeability of at least 10^-15 m^2 is required so that a dryout zone can be created within a practical time frame when the block is heated from within. Neutron logging was conducted in some of the vertical holes to estimate the initial moisture content of the block. It was found that about 60 to 80% of the pore volume of the block is saturated with water. Cores from the vertical holes have been used to map the fractures and to determine the properties of the rock. A current schedule is included in the report.

  12. Accelerating solidification process simulation for large-sized system of liquid metal atoms using GPU with CUDA

    Energy Technology Data Exchange (ETDEWEB)

    Jie, Liang [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China); Li, KenLi, E-mail: lkl@hnu.edu.cn [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China); National Supercomputing Center in Changsha, 410082 (China); Shi, Lin [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China); Liu, RangSu [School of Physics and Micro Electronic, Hunan University, Changshang, 410082 (China); Mei, Jing [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China)

    2014-01-15

    Molecular dynamics simulation is a powerful tool to simulate and analyze complex physical processes and phenomena at the atomic scale, predicting the natural time-evolution of a system of atoms. Precise simulation of physical processes has strong requirements both on the simulation size and on the computing timescale. Therefore, finding available computing resources is crucial to accelerate computation. General-purpose GPUs (GPGPUs) are increasingly being utilized for general purpose computing due to their high floating-point performance, wide memory bandwidth and enhanced programmability. Targeting the most time-consuming components of MD simulations of liquid metal solidification processes, this paper presents a fine-grained spatial decomposition method to accelerate the update of neighbor lists and the calculation of interaction forces by taking advantage of modern graphics processing units (GPUs), enlarging the scale of the simulation to a system involving 10 000 000 atoms. In addition, a number of evaluations and tests are discussed, ranging from executions on different precision-enabled CUDA versions, over various types of GPU (NVIDIA 480GTX, 580GTX and M2050), to CPU clusters with different numbers of CPU cores. The experimental results demonstrate that GPU-based calculations are typically 9∼11 times faster than the corresponding sequential execution and approximately 1.5∼2 times faster than implementations on 16-core CPU clusters. On the basis of the simulated results, comparisons between the theoretical results and the experimental ones are carried out, and good agreement between the two, as well as more complete and larger cluster structures as in the actual macroscopic materials, are observed. Moreover, different nucleation and evolution mechanisms of nano-clusters and nano-crystals formed in the processes of metal solidification are observed with large
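    The neighbor-list update that the paper offloads to the GPU rests on a spatial decomposition into cells. A plain-Python CPU sketch of that cell-list idea (bin atoms into cells at least one cutoff wide, then search only the 27 surrounding cells, with periodic boundaries) looks like this; it is a generic illustration, not the paper's CUDA kernel:

```python
# CPU sketch of the cell-list spatial decomposition used to build
# neighbour lists: bin atoms into cells at least one cutoff wide, then
# search only the 27 surrounding cells (periodic boundaries).
import itertools
import random

def build_neighbor_lists(positions, box, cutoff):
    """Neighbour lists for a cubic periodic box; assumes box/cutoff >= 3."""
    ncell = int(box // cutoff)
    size = box / ncell
    cells = {}
    for i, p in enumerate(positions):                 # bin atoms into cells
        key = tuple(int(c / size) % ncell for c in p)
        cells.setdefault(key, []).append(i)
    cut2 = cutoff * cutoff
    neigh = {i: [] for i in range(len(positions))}
    for (cx, cy, cz), members in cells.items():
        for dx, dy, dz in itertools.product((-1, 0, 1), repeat=3):
            other = ((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
            for i in members:
                for j in cells.get(other, ()):
                    if i == j:
                        continue
                    # minimum-image squared distance
                    d2 = sum(min(abs(a - b), box - abs(a - b)) ** 2
                             for a, b in zip(positions[i], positions[j]))
                    if d2 < cut2:
                        neigh[i].append(j)
    return neigh

random.seed(0)
pos = [tuple(random.uniform(0.0, 10.0) for _ in range(3)) for _ in range(200)]
nl = build_neighbor_lists(pos, box=10.0, cutoff=2.0)
```

    The cost drops from O(N^2) pair checks to roughly O(N) for uniform density, and because each cell is independent, the per-cell work maps naturally onto GPU thread blocks.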

  13. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    Science.gov (United States)

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research - GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  14. Accelerating Best Care in Pennsylvania: adapting a large academic system's quality improvement process to rural community hospitals.

    Science.gov (United States)

    Haydar, Ziad; Gunderson, Julie; Ballard, David J; Skoufalos, Alexis; Berman, Bettina; Nash, David B

    2008-01-01

    Industrial quality improvement (QI) methods such as continuous quality improvement (CQI) may help bridge the gap between evidence-based "best care" and the quality of care provided. In 2006, Baylor Health Care System collaborated with Jefferson Medical College of Thomas Jefferson University to conduct a QI demonstration project in select Pennsylvania hospitals using CQI techniques developed by Baylor. The training was provided over a 6-month period and focused on methods for rapid-cycle improvement; data system design; data management; tools to improve patient outcomes, processes of care, and cost-effectiveness; use of clinical guidelines and protocols; leadership skills; and customer service skills. Participants successfully implemented a variety of QI projects. QI education programs developed and pioneered within large health care systems can be adapted and applied successfully to other settings, providing needed tools to smaller rural and community hospitals that lack the necessary resources to establish such programs independently.

  15. The front-end analog and digital signal processing electronics for the drift chambers of the Stanford Large Detector

    International Nuclear Information System (INIS)

    Haller, G.M.; Freytag, D.R.; Fox, J.; Olsen, J.; Paffrath, L.; Yim, A.; Honma, A.

    1990-10-01

    The front-end signal processing electronics for the drift-chambers of the Stanford Large Detector (SLD) at the Stanford Linear Collider is described. The system is implemented with printed-circuit boards which are shaped for direct mounting on the detector. Typically, a motherboard comprises 64 channels of transimpedance amplification and analog waveform sampling, A/D conversion, and associated control and readout circuitry. The loaded motherboard thus forms a processor which records low-level wave forms from 64 detector channels and transforms the information into a 64 k-byte serial data stream. In addition, the package performs calibration functions, measures leakage currents on the wires, and generates wire hit patterns for triggering purposes. The construction and operation of the electronic circuits utilizing monolithic, hybridized, and programmable components are discussed

  16. Brackish and seawater desalination for process and demineralised water production for large power plants in the North Sea region

    Energy Technology Data Exchange (ETDEWEB)

    Nagel, Rolf [Hager + Elsaesser GmbH, Stuttgart (Germany); Brinkmann, Juergen [RWE Technology GmbH, Essen (Germany)

    2010-06-15

    Large power plants for power generation from fossil fuels are constantly being optimised in order to improve their efficiency. One element of the overall considerations is once-through cooling with brackish or seawater on sites near the sea. In addition to the higher overall efficiency, such sites - thanks to their connection to ocean shipping - also offer infrastructural advantages regarding fuel supply and residual material disposal compared to inland sites. Because the cooling water intake and discharge structures have to be built anyway, they lend themselves to also producing the process and demineralised water from the brackish or seawater. In this case, the use of fresh or drinking water as resources can be minimised. In the following report, we present a pilot study using ultrafiltration and reverse osmosis on a North Sea site with raw water intake from a seaport basin. (orig.)

  17. Observing and modeling the spectrum of a slow slip event: Constraints on the scaling of slow slip and tremor

    Science.gov (United States)

    Hawthorne, J. C.; Bartlow, N. M.; Ghosh, A.

    2017-12-01

    We estimate the normalized moment rate spectrum of a slow slip event in Cascadia and then attempt to reproduce it. Our goal is to further assess whether a single physical mechanism could govern slow slip and tremor events, with durations that span 6 orders of magnitude, so we construct the spectrum by parameterizing a large slow slip event as the sum of a number of subevents with various durations. The spectrum estimate uses data from three sources: the GPS-based slip inversion of Bartlow et al. (2011), PBO borehole strain measurements, and beamforming-based tremor moment estimates of Ghosh et al. (2009). We find that at periods shorter than 1 day, the moment rate power spectrum decays as frequency^(-n), where n is between 0.7 and 1.4 when measured from strain and between 1.2 and 1.4 when inferred from tremor. The spectrum appears roughly flat at periods of 1 to 10 days, as both the 1-day-period strain and tremor data and the 6-day-period slip inversion data imply a moment rate power of 0.02 times the total moment squared. We demonstrate one way to reproduce this spectrum: by constructing the large-scale slow slip event as the sum of a series of subevents. The shortest of these subevents could be interpreted as VLFEs or even LFEs, while longer subevents might represent the aseismic slip that drives rapid tremor reversals, streaks, or rapid tremor migrations. We pick the subevent magnitudes from a Gutenberg-Richter distribution and place the events randomly throughout a 30-day interval. Then we assign each subevent a duration that scales with its moment to a specified power. Finally, we create a moment rate function for each subevent and sum all of the moment rates. We compute the summed slow slip moment rate spectra with two approaches: a time-domain numerical computation and a frequency-domain analytical summation. Several sets of subevent parameters can allow the constructed slow slip event to match the observed spectrum. One allowable set of parameters is of
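    The subevent-summation recipe described above (Gutenberg-Richter moments, moment-dependent durations, random onset times, summed moment rate functions) can be sketched in the time domain as follows. All numerical values here, including the power-law exponents, duration prefactor and window length, are illustrative assumptions and not the parameters fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Time grid: a 30-day window sampled every 0.01 day
dt = 0.01
t = np.arange(0.0, 30.0, dt)

# Subevent moments from a truncated power-law (Gutenberg-Richter-like) law,
# sampled by inverting the truncated-Pareto CDF
n_sub = 1000
m_min, m_max = 1.0, 1.0e3          # moment bounds, arbitrary units
beta = 2.0 / 3.0                   # assumed distribution exponent
u = rng.random(n_sub)
moments = (m_min**-beta - u * (m_min**-beta - m_max**-beta)) ** (-1.0 / beta)

# Duration scales with moment to an assumed power
alpha = 1.0 / 3.0
durations = 0.05 * moments**alpha  # days; prefactor is arbitrary

# Random onset times, each subevent fitting inside the window
onsets = rng.random(n_sub) * (t[-1] - durations)

# Sum triangular moment-rate functions; each triangle integrates to its moment
rate = np.zeros_like(t)
for m0, t0, T in zip(moments, onsets, durations):
    x = (t - t0) / T
    tri = np.where((x > 0.0) & (x < 1.0), 1.0 - np.abs(2.0 * x - 1.0), 0.0)
    rate += m0 * tri / (T / 2.0)

# Moment-rate power spectrum, normalized by total moment squared
total_moment = rate.sum() * dt
freqs = np.fft.rfftfreq(t.size, d=dt)              # cycles per day
power = np.abs(np.fft.rfft(rate) * dt) ** 2 / total_moment**2
```

    With this normalization the zero-frequency power is exactly 1, and the shape of the high-frequency falloff can be compared against the observed frequency^(-n) decay as the exponents are varied.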

  18. Perceptions of the Slow Food Cultural Trend among the Youth

    Directory of Open Access Journals (Sweden)

    Lelia Voinea

    2016-11-01

    Full Text Available As they become increasingly aware of the importance of healthy eating and of the serious dietary imbalance caused by the overconsumption of industrial, ultra-processed and super-organoleptic food, consumers are now beginning to turn their attention to food choices that guarantee the health of both the individual and the environment. Thus, in recent years we are witnessing the rise of a cultural trend: Slow Food. Slow Food has become an international movement that advocates for satisfying culinary pleasure, protects biological and cultural diversity, spreads taste education, links "green" producers to consumers and believes that gastronomy intersects with politics, agriculture and ecology. Slow Food proposes a holistic approach to the food problem, in which the economic, sociocultural and environmental aspects are interlinked and pursued as part of an overall strategy. In order to highlight the manner in which the principles of this cultural trend are perceived by representatives of the new generation of consumers in Romania, exploratory marketing research was conducted among second-year students of the master's program Quality Management, Expertise and Consumer Protection at the Faculty of Business and Tourism of the Bucharest University of Economic Studies. The results of this research have shown an insufficient knowledge of the Slow Food phenomenon and, especially, of the Slow Food network activity in Romania. To show that the Slow Food type of diet is a healthier option towards which future consumer demand should be guided, especially among the younger generation, an antithetical comparative analysis of the nutritional value of two menus was performed: one suggestive of the Slow Food eating style and the other specific to the fast food style. The Slow Food style was considered antithetical to fast food because many previous studies have shown a preference of the young for fast-food type products, despite the

  19. Response of electret dosemeter to slow neutrons

    International Nuclear Information System (INIS)

    Ghilardi, A.J.P.; Pela, C.A.; Zimmerman, R.L.

    1987-01-01

    The response of the electret dosemeter to slow neutron exposure is presented, covering the preparation of the dosemeter and its irradiation with an Am-Be source. Some theoretical considerations about the response of the electret dosemeter to slow and fast neutrons are also presented. (C.G.C.) [pt

  20. Tandem queue with server slow-down

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2007-01-01

    We study how rare events happen in the standard two-node tandem Jackson queue and in a generalization, the so-called slow-down network, see [2]. In the latter model the service rate of the first server depends on the number of jobs in the second queue: the first server slows down if the amount of

  1. Slow-light pulses in moving media

    International Nuclear Information System (INIS)

    Fiurasek, J.; Leonhardt, U.; Parentani, R.

    2002-01-01

    Slow light in moving media reaches a counterintuitive regime when the flow speed of the medium approaches the group velocity of light. Pulses can penetrate a region where a counterpropagating flow exceeds the group velocity. When the counterflow slows down, pulses are reflected

  2. Can fast and slow intelligence be differentiated?

    NARCIS (Netherlands)

    Partchev, I.; de Boeck, P.

    2012-01-01

    Responses to items from an intelligence test may be fast or slow. The research issue dealt with in this paper is whether the intelligence involved in fast correct responses differs in nature from the intelligence involved in slow correct responses. There are two questions related to this issue: 1.

  3. Slow Movement/Slow University: Critical Engagements. Introduction to the Thematic Section

    Directory of Open Access Journals (Sweden)

    Maggie O'Neill

    2014-09-01

    Full Text Available This thematic section emerged from two seminars that took place at Durham University in England in November 2013 and March 2014 on the possibilities for thinking through what a movement towards slow might mean for the university. Slow movements have emerged in relation to a number of topics: Slow Food, Cittaslow and, more recently, slow science. What motivated us in the seminars was to explore how far these movements could help us address the acceleration and intensification of work within our own and other universities, and indeed, what new learning, research, philosophies, practices, structures and governance might emerge. This editorial introduction presents the concept of the "slow university" and introduces our critical engagements with slow. The articles presented here interrogate the potentialities, challenges, problems and pitfalls of the slow university in an era of corporate culture and management rationality. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1403166

  4. Slow, Wet and Catalytic Pyrolysis of Fowl Manure

    OpenAIRE

    Renzo Carta; Mario Cruccu; Francesco Desogus

    2012-01-01

    This work presents the experimental results obtained at a pilot plant which operates a slow, wet and catalytic pyrolysis process on dry fowl manure. This kind of process mainly consists of the cracking of the organic matrix and the subsequent reaction of carbon with water, which is either already contained in the organic feed or added, to produce carbon monoxide and hydrogen. Reactions are conducted in a rotating reactor maintained at a temperature of 500°C; the requi...

  5. Connecting slow earthquakes to huge earthquakes.

    Science.gov (United States)

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  6. Technical basis and programmatic requirements for large block testing of coupled thermal-mechanical-hydrological-chemical processes

    International Nuclear Information System (INIS)

    Lin, Wunan.

    1993-09-01

    This document contains the technical basis and programmatic requirements for a scientific investigation plan that governs tests on a large block of tuff for understanding the coupled thermal-mechanical-hydrological-chemical (TMHC) processes. This study is part of the field testing described in Section 8.3.4.2.4.4.1 of the Site Characterization Plan (SCP) for the Yucca Mountain Project. The first, and most important, objective is to understand the coupled TMHC processes in order to develop models that will predict the performance of a nuclear waste repository. The block and fracture properties (including hydrology and geochemistry) can be well characterized from at least five exposed surfaces, and the block can be dismantled for post-test examinations. The second objective is to provide preliminary data for development of models that will predict the quality and quantity of water in the near-field environment of a repository over the current 10,000-year regulatory period of radioactive decay. The third objective is to develop and evaluate the various measurement systems and techniques that will later be employed in the Engineered Barrier System Field Tests (EBSFT)

  7. Large-scale analyses of synonymous substitution rates can be sensitive to assumptions about the process of mutation.

    Science.gov (United States)

    Aris-Brosou, Stéphane; Bielawski, Joseph P

    2006-08-15

    A popular approach to examine the roles of mutation and selection in the evolution of genomes has been to consider the relationship between codon bias and synonymous rates of molecular evolution. A significant relationship between these two quantities is taken to indicate the action of weak selection on substitutions among synonymous codons. The neutral theory predicts that the rate of evolution is inversely related to the level of functional constraint. Therefore, selection against the use of non-preferred codons among those coding for the same amino acid should result in lower rates of synonymous substitution as compared with sites not subject to such selection pressures. However, reliably measuring the extent of such a relationship is problematic, as estimates of synonymous rates are sensitive to our assumptions about the process of molecular evolution. Previous studies showed the importance of accounting for unequal codon frequencies, in particular when synonymous codon usage is highly biased. Yet, unequal codon frequencies can be modeled in different ways, making different assumptions about the mutation process. Here we conduct a simulation study to evaluate two different ways of modeling uneven codon frequencies and show that both model parameterizations can have a dramatic impact on rate estimates and affect biological conclusions about genome evolution. We reanalyze three large data sets to demonstrate the relevance of our results to empirical data analysis.
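    Two common ways of parameterizing unequal codon frequencies, which differ in their implicit assumptions about the mutation process, are F1x4 (one set of nucleotide frequencies shared by all codon positions) and F3x4 (position-specific nucleotide frequencies). The sketch below computes equilibrium codon frequencies under both, assuming the universal genetic code with three stop codons; these are the standard parameterizations from codon substitution models, not necessarily the exact ones compared in the study.

```python
from itertools import product

NUC = "TCAG"
STOPS = {"TAA", "TAG", "TGA"}
CODONS = ["".join(c) for c in product(NUC, repeat=3) if "".join(c) not in STOPS]

def f1x4(pi):
    """Codon frequencies from a single set of nucleotide frequencies (F1x4)."""
    raw = {c: pi[c[0]] * pi[c[1]] * pi[c[2]] for c in CODONS}
    z = sum(raw.values())              # renormalize over the 61 sense codons
    return {c: f / z for c, f in raw.items()}

def f3x4(pi_pos):
    """Codon frequencies from position-specific nucleotide frequencies (F3x4)."""
    raw = {c: pi_pos[0][c[0]] * pi_pos[1][c[1]] * pi_pos[2][c[2]] for c in CODONS}
    z = sum(raw.values())
    return {c: f / z for c, f in raw.items()}

# Illustrative base compositions: the two parameterizations disagree whenever
# base composition varies by codon position, as it does in real genes.
pi_avg = {"T": 0.20, "C": 0.30, "A": 0.30, "G": 0.20}
pi_by_pos = [
    {"T": 0.15, "C": 0.35, "A": 0.30, "G": 0.20},  # 1st positions
    {"T": 0.25, "C": 0.25, "A": 0.35, "G": 0.15},  # 2nd positions
    {"T": 0.20, "C": 0.30, "A": 0.25, "G": 0.25},  # 3rd positions
]
freqs_1x4 = f1x4(pi_avg)
freqs_3x4 = f3x4(pi_by_pos)
```

    Because the stationary codon frequencies enter the substitution-rate matrix directly, a choice between these two parameterizations propagates into the synonymous rate estimates that the abstract describes.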

  8. Large-Scale Consumption and Zero-Waste Recycling Method of Red Mud in Steel Making Process

    Directory of Open Access Journals (Sweden)

    Guoshan Ning

    2018-03-01

    Full Text Available To relieve the environmental pressure from the massive discharge of bauxite residue (red mud), a novel recycling method of red mud in the steel making process was investigated through high-temperature experiments and thermodynamic analysis. The results showed that after reduction roasting of the carbon-bearing red mud pellets at 1100–1200 °C for 12–20 min, metallic pellets were obtained with a metallization ratio of ≥88%. Then, separation of slag and iron was achieved from the metallic pellets at 1550 °C, after composition adjustment targeting the primary crystal region of the 12CaO·7Al2O3 phase. After iron removal and composition adjustment, the smelting-separation slag had good smelting performance and desulfurization capability, which meets the demand for sulfurization flux in the steel making process. The pig iron quality meets the requirements of a high-quality raw material for steel making. By virtue of the huge scale and output of the steel industry, a large-scale consumption and zero-waste recycling method of red mud was proposed, comprising roasting of the carbon-bearing red mud pellets in a rotary hearth furnace and smelting separation in an electric arc furnace after composition adjustment.

  9. Beyond single syllables: large-scale modeling of reading aloud with the Connectionist Dual Process (CDP++) model.

    Science.gov (United States)

    Perry, Conrad; Ziegler, Johannes C; Zorzi, Marco

    2010-09-01

    Most words in English have more than one syllable, yet the most influential computational models of reading aloud are restricted to processing monosyllabic words. Here, we present CDP++, a new version of the Connectionist Dual Process model (Perry, Ziegler, & Zorzi, 2007). CDP++ is able to simulate the reading aloud of mono- and disyllabic words and nonwords, and learns to assign stress in exactly the same way as it learns to associate graphemes with phonemes. CDP++ is able to simulate the monosyllabic benchmark effects its predecessor could, and therefore shows full backwards compatibility. CDP++ also accounts for a number of novel effects specific to disyllabic words, including the effects of stress regularity and syllable number. In terms of database performance, CDP++ accounts for over 49% of the reaction time variance on items selected from the English Lexicon Project, a very large database of several thousand words. With its lexicon of over 32,000 words, CDP++ is therefore a notable example of the successful scaling-up of a connectionist model to a size that more realistically approximates the human lexical system. Copyright © 2010 Elsevier Inc. All rights reserved.

  10. The medicine selection process in four large university hospitals in Brazil: Does the DTC have a role?

    Directory of Open Access Journals (Sweden)

    Elisangela da Costa Lima-Dellamora

    2015-03-01

    Full Text Available Knowledge about evidence-based medicine selection and the role of the Drug and Therapeutics Committee (DTC is an important topic in the literature but is scarcely discussed in Brazil. Our objective, using a qualitative design, was to analyze the medicine selection process performed in four large university hospitals in the state of Rio de Janeiro. Information was collected from documents, interviews with key informants and direct observations. Two dimensions were analyzed: the structural and organizational aspects of the selection process and the criteria and methods used in medicine selection. The findings showed that the DTC was active in two hospitals. The structure for decision-making was weak. DTC members had little experience in evidence-based selection, and their everyday functions did not influence their participation in DTC activities. The methods used to evaluate evidence were inadequate. The uncritical adoption of new medicines in these complex hospital facilities may be hampering pharmaceutical services, with consequences for the entire health system. Although the qualitative approach considerably limits the extent to which the results can be extrapolated, we believe that our findings may be relevant to other university hospitals in the country.

  11. Large-scale experiments for the vulnerability analysis of buildings impacted and intruded by fluviatile torrential hazard processes

    Science.gov (United States)

    Sturm, Michael; Gems, Bernhard; Fuchs, Sven; Mazzorana, Bruno; Papathoma-Köhle, Maria; Aufleger, Markus

    2016-04-01

    In European mountain regions, losses due to torrential hazards remain considerable, despite the ongoing debate on an overall increasing or decreasing trend. Recent events in Austria clearly revealed that, for technical and economic reasons, an overall protection of settlements in the alpine environment against torrential hazards is not feasible. On the side of the hazard process, events with unpredictable intensities may represent overload scenarios for existing protection structures in the torrent catchments. They bear a particular risk of significant losses in the living space. Although the importance of vulnerability is widely recognised, there is still a research gap concerning its assessment. Currently, potential losses at buildings due to torrential hazards and their comparison with reinstatement costs are determined by the use of empirical functions. Hence, relations between process intensities and the extent of losses, gathered through the analysis of historic hazard events and information on object-specific restoration values, are used. This approach does not represent a physics-based and integral concept, since relevant and often crucial processes, such as the intrusion of the fluid-sediment mixture into elements at risk, are not considered. Based on these findings, our work is targeted at extending the findings and models of present risk research in the context of an integral, more physics-based vulnerability analysis concept. Fluviatile torrential hazard processes and their impacts on the building envelope are experimentally modelled. Material intrusion processes are thereby explicitly considered. Dynamic impacts are gathered quantitatively and spatially distributed by the use of a large set of force transducers. The experimental tests are accomplished with artificial, vertical and skewed plates, including also openings for material intrusion.
Further, the impacts on specific buildings within the test site of the work, the fan apex of the Schnannerbach

  12. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    Science.gov (United States)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we

  13. On the theory of inelastic scattering of slow electrons by surface excitations: 1. Half-space formalism

    International Nuclear Information System (INIS)

    Nkoma, J.S.

    1982-08-01

    A quantum-mechanical theory for the inelastic scattering of slow electrons (ISSE) by surface excitations is developed within the half-space model. The process of transmission of incident electrons into the crystal is described by the homogeneous Schroedinger equation, while the scattering process inside the crystal is described by an inhomogeneous Schroedinger equation. The scattering cross-section for ISSE by surface excitations is derived and is found to be small since it is dependent on an inverse sum of wavevectors which is large. It is also dependent on the fluctuations in the scattering potential. (author)

  14. Molten fuel behaviour during slow overpower transients

    International Nuclear Information System (INIS)

    Guerin, Y.; Boidron, M.

    1985-01-01

    In large commercial reactors such as Super-Phénix, if we take into account all the uncertainties on the pins and on the core, it is no longer possible to guarantee the absence of fuel melting during incidental events such as slow overpower transients. We then have to explain what happens in the pins when fuel melting occurs and to demonstrate that a limited amount of molten fuel generates no risk of clad failure. For that purpose, we may use the results of a great number of experiments (about 40) that have been performed at C.E.A., most of them in thermal reactors, but some experiments have also been performed in Rapsodie, especially during the last run of this reactor. In a large part of these experiments, fuel melting occurred at beginning of life, but we also have some results at different burnups up to 5 at %. It is not the aim of this paper to describe all these experiments and the results of their post-irradiation examination, but to summarize the main conclusions that have been drawn from them and that have enabled us to determine the main characteristics of fuel element behaviour when fuel melting occurs

  15. Ponderomotive force effects on slow-wave coupling

    International Nuclear Information System (INIS)

    Wilson, J.R.; Wong, K.L.

    1982-01-01

    Localized plasma density depressions are observed to form near a multi-ring slow-wave structure when the value of the nonlinearity parameter, s = ω_pe²|E_z|²/(8πω²nκT), is of order unity. Consequent changes in the wave propagation and coupling efficiency are reported. For large enough values of s, the coupling efficiency may be reduced by 50% from the linear value
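    In Gaussian (CGS) units, where ω_pe² = 4πn e²/m_e, the density cancels and s reduces to e²|E_z|²/(2 m_e ω² κT), i.e. the ratio of the electron quiver energy to the thermal energy. A minimal sketch of the evaluation follows; the numerical inputs are illustrative assumptions, not the experiment's parameters.

```python
import math

# CGS physical constants
E_CHARGE = 4.8032e-10   # electron charge [statC]
M_E = 9.1094e-28        # electron mass [g]
K_B = 1.3807e-16        # Boltzmann constant [erg/K]

def nonlinearity_parameter(n_e, T_e, E_z, omega):
    """s = omega_pe^2 |E_z|^2 / (8 pi omega^2 n kB T), all in CGS units.

    n_e   : electron density [cm^-3]
    T_e   : electron temperature [K]
    E_z   : axial slow-wave field amplitude [statV/cm]
    omega : wave angular frequency [rad/s]
    """
    omega_pe_sq = 4.0 * math.pi * n_e * E_CHARGE**2 / M_E
    return omega_pe_sq * E_z**2 / (8.0 * math.pi * omega**2 * n_e * K_B * T_e)

# Because omega_pe^2 is proportional to n, the density cancels and s scales
# as |E_z|^2 / (omega^2 T): quiver energy over thermal energy.
s = nonlinearity_parameter(n_e=1e9, T_e=2.0e4, E_z=0.05,
                           omega=2.0 * math.pi * 1e8)
```

    The quadratic dependence on the field amplitude means that halving the launched wave field reduces s by a factor of four, which is why the density depressions appear only near the structure where the field is strongest.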

  16. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time
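    The sorting idea described above, keeping physically adjacent particles near particles with neighbouring array indices so that charge accumulation streams through memory instead of jumping between pages, can be sketched for a toy one-dimensional particle-in-cell step; the grid and array sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D particle setup: positions on [0, L) with n_grid cells
L, n_grid, n_part = 64.0, 64, 100_000
x = rng.random(n_part) * L
v = rng.standard_normal(n_part)

# Sort particles by cell index so that neighbours in the arrays are also
# neighbours in space; the accumulation below then walks the grid (and the
# backing virtual-memory pages) in order instead of accessing them randomly.
cell = (x * n_grid / L).astype(np.int64)
order = np.argsort(cell, kind="stable")
x, v, cell = x[order], v[order], cell[order]

# Nearest-grid-point charge accumulation now streams through memory
rho = np.zeros(n_grid)
np.add.at(rho, cell, 1.0)
```

    As the abstract notes, the sort need not be exact or frequent: a nominal amount of re-sorting every few steps is enough to keep most accesses local, since particles move only a fraction of a cell per step.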

  17. Etching of semiconductor cubic crystals: Determination of the dissolution slowness surfaces

    Science.gov (United States)

    Tellier, C. R.

    1990-03-01

    Equations of the representative surface of dissolution slowness for cubic crystals are determined in the framework of a tensorial approach of the orientation-dependent etching process. The independent dissolution constants are deduced from symmetry considerations. Using previous data on the chemical etching of germanium and gallium arsenide crystals, some possible polar diagrams of the dissolution slowness are proposed. A numerical and graphical simulation method is used to obtain the derived dissolution shapes. The influence of extrema in the dissolution slowness on the successive dissolution shapes is also examined. A graphical construction of limiting shapes of etched crystals appears possible using the tensorial representation of the dissolution slowness.

  18. Efficient Processing of Multiple DTW Queries in Time Series Databases

    DEFF Research Database (Denmark)

    Kremer, Hardy; Günnemann, Stephan; Ivanescu, Anca-Maria

    2011-01-01

    ... In many of today’s applications, however, large numbers of queries arise at any given time. Existing DTW techniques do not process multiple DTW queries simultaneously, a serious limitation which slows down overall processing. In this paper, we propose an efficient processing approach for multiple DTW queries...
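    For context, a single DTW query is the classic O(nm) dynamic program below; naive multi-query processing simply repeats it once per (query, series) pair, which is the redundancy that batch approaches aim to eliminate. This is a generic textbook sketch, not the paper's algorithm.

```python
import numpy as np

def dtw_distance(a, b):
    """Textbook O(n*m) dynamic-programming DTW with squared-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Naive multi-query processing: one independent DTW per (query, series) pair
series = [np.sin(np.linspace(0.0, 6.28, 50) + p) for p in (0.0, 0.1, 3.0)]
queries = [np.sin(np.linspace(0.0, 6.28, 50))]
dists = [[dtw_distance(q, s) for s in series] for q in queries]
```

    With Q queries against S series of length n, the naive loop costs O(Q·S·n²); simultaneous processing amortizes work that the independent runs repeat.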

  19. Large eddy simulation of the low temperature ignition and combustion processes on spray flame with the linear eddy model

    Science.gov (United States)

    Wei, Haiqiao; Zhao, Wanhui; Zhou, Lei; Chen, Ceyuan; Shu, Gequn

    2018-03-01

    Large eddy simulation coupled with the linear eddy model (LEM) is employed for the simulation of n-heptane spray flames to investigate the low-temperature ignition and combustion process in a constant-volume combustion vessel under diesel-engine-relevant conditions. Parametric studies are performed to give a comprehensive understanding of the ignition processes. A non-reacting case is first carried out to validate the present model by comparing the predicted results with the experimental data from the Engine Combustion Network (ECN). Good agreement is observed in terms of liquid and vapour penetration length, as well as the mixture fraction distributions at different times and different axial locations. For the reacting cases, the flame index is introduced to distinguish between premixed and non-premixed combustion. A reaction region (RR) parameter is used to investigate the ignition and combustion characteristics and to distinguish the different combustion stages. Results show that a two-stage combustion process can be identified in spray flames, and different ignition positions in the mixture fraction versus RR space are well described at low and high initial ambient temperatures. At an initial condition of 850 K, the first-stage ignition is initiated in the fuel-lean region, followed by reactions in fuel-rich regions. High-temperature reaction then occurs mainly at places with mixture concentration around the stoichiometric mixture fraction. At an initial temperature of 1000 K, in contrast, the first-stage ignition occurs in the fuel-rich region first and then moves towards even richer regions. Afterwards, the high-temperature reactions move back to the stoichiometric mixture fraction region. For all of the initial temperatures considered, high-temperature ignition kernels are initiated in regions richer than the stoichiometric mixture fraction. By increasing the initial ambient temperature, the high-temperature ignition kernels move towards richer

  20. KEK-IMSS Slow Positron Facility

    Energy Technology Data Exchange (ETDEWEB)

    Hyodo, T; Wada, K; Yagishita, A; Kosuge, T; Saito, Y; Kurihara, T; Kikuchi, T; Shirakawa, A; Sanami, T; Ikeda, M; Ohsawa, S; Kakihara, K; Shidara, T, E-mail: toshio.hyodo@kek.jp [High Energy Accelerator Research Organization (KEK) 1-1 Oho, Tsukuba, Ibaraki, 305-0801 (Japan)

    2011-12-01

    The Slow Positron Facility at the Institute of Materials Structure Science (IMSS) of the High Energy Accelerator Research Organization (KEK) is a user-dedicated facility with an energy-tunable (0.1 - 35 keV) slow positron beam produced by a dedicated 55 MeV linac. The present beam line branches have been used for positronium time-of-flight (Ps-TOF) measurements, the transmission positron microscope (TPM) and the photo-detachment of Ps negative ions (Ps⁻). During the year 2010, a reflection high-energy positron diffraction (RHEPD) measurement station is going to be installed. The slow positron generator (converter/moderator) system will be modified to obtain a higher slow positron intensity, and a new user-friendly beam line power-supply control and vacuum monitoring system is being developed. Another plan for this year is the transfer of a ²²Na-based slow positron beam from RIKEN. This machine will be used for continuous slow positron beam applications and for the orientation training of those who are interested in beginning research with a slow positron beam.