WorldWideScience

Sample records for macroscale unit processes

  1. Investigation of Micro- and Macro-Scale Transport Processes for Improved Fuel Cell Performance

    Energy Technology Data Exchange (ETDEWEB)

    Gu, Wenbin [General Motors LLC, Pontiac, MI (United States)

    2014-08-29

    This report documents the work performed by General Motors (GM) under Cooperative Agreement No. DE-EE0000470, “Investigation of Micro- and Macro-Scale Transport Processes for Improved Fuel Cell Performance,” in collaboration with Penn State University (PSU), the University of Tennessee Knoxville (UTK), the Rochester Institute of Technology (RIT), and the University of Rochester (UR) via subcontracts. The overall objectives of the project are to investigate and synthesize fundamental understanding of transport phenomena at both the macro- and micro-scales for the development of a down-the-channel model that accounts for all transport domains in a broad operating space. GM, as the prime contractor, focused on cell-level experiments and modeling, and the universities, as subcontractors, worked toward a fundamental understanding of each component and associated interface.

  2. Micro- and macro-scale petrophysical characterization of potential reservoir units from northern Israel

    Science.gov (United States)

    Haruzi, Peleg; Halisch, Matthias; Katsman, Regina; Waldmann, Nicolas

    2016-04-01

    Lower Cretaceous sandstone serves as a hydrocarbon reservoir in several places around the world, and potentially in the Hatira Formation in the Golan Heights, northern Israel. The purpose of the current research is to characterize the petrophysical properties of these sandstone units. The study is carried out by two alternative methods: conventional macroscopic lab measurements, and CT scanning, image processing, and subsequent fluid-mechanics simulations at the microscale, followed by upscaling to the conventional macroscopic rock parameters (porosity and permeability). A comparison between the upscaled properties and those measured in the lab will then be conducted. The best way to upscale the microscopic rock characteristics will be analyzed based on the models suggested in the literature. Proper characterization of the potential reservoir will provide the analytical parameters necessary for future experiments on, and modeling of, macroscopic fluid flow behavior in the Lower Cretaceous sandstone.
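
    To make the upscaling step concrete, here is a minimal Python sketch (not the authors' workflow) that reduces a segmented micro-CT volume to the two macroscopic parameters named in the abstract. The synthetic volume, the assumed grain diameter, and the use of the Kozeny-Carman relation in place of a pore-scale flow simulation are all illustrative assumptions.

        # Sketch: upscale a segmented micro-CT volume to macroscopic porosity
        # and a rough permeability estimate via Kozeny-Carman. Synthetic data.
        import numpy as np

        rng = np.random.default_rng(0)
        volume = rng.random((100, 100, 100)) < 0.25   # True = pore voxel

        porosity = volume.mean()                      # upscaled porosity

        # Kozeny-Carman with an assumed mean grain diameter; a real study
        # would instead run pore-scale flow simulations and average fluxes.
        d_grain = 150e-6                              # m, assumed
        k = (porosity**3 * d_grain**2) / (180.0 * (1.0 - porosity)**2)  # m^2

        print(f"porosity = {porosity:.3f}, permeability ~ {k:.3e} m^2")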

  3. Implementation of a phenomenological DNB prediction model based on macroscale boiling flow processes in PWR fuel bundles

    International Nuclear Information System (INIS)

    Mohitpour, Maryam; Jahanfarnia, Gholamreza; Shams, Mehrzad

    2014-01-01

    Highlights: • A numerical framework was developed to mechanistically predict DNB in PWR bundles. • The DNB evaluation module was incorporated into the two-phase flow solver module. • A three-dimensional two-fluid model was the basis of the two-phase flow solver module. • The liquid sublayer dryout model was adapted as the CHF-triggering mechanism in the DNB module. • The ability of the DNB modeling approach was studied based on PSBT DNB tests in a rod bundle. - Abstract: In this study, a numerical framework, comprising a two-phase flow subchannel solver module and a Departure from Nucleate Boiling (DNB) evaluation module, was developed to mechanistically predict DNB in rod bundles of Pressurized Water Reactors (PWRs). In this regard, the liquid sublayer dryout model was adapted as the Critical Heat Flux (CHF) triggering mechanism to reduce the dependency of the model on empirical correlations in the DNB evaluation module. To predict local flow boiling processes, a three-dimensional two-fluid formalism coupled with heat conduction was selected as the basic tool for the development of the two-phase flow subchannel analysis solver. Evaluation of the DNB modeling approach was performed against the OECD/NRC NUPEC PWR Bundle tests (PSBT Benchmark), which supply an extensive database for the development of truly mechanistic and consistent models for boiling transition and CHF. The results of the analyses demonstrated the need for additional assessment of the subcooled boiling model and the bulk condensation model implemented in the two-phase flow solver module. The proposed model slightly under-predicts the DNB power in comparison with the values obtained from steady-state benchmark measurements; however, this prediction is acceptable compared with other codes. Another point about the DNB prediction model is that its behavior is conservative. Examination of the axial and radial position of the first detected DNB using code-to-code comparisons on the basis of PSBT data indicated that…

  4. Micro- to macroscale perspectives on space plasmas

    International Nuclear Information System (INIS)

    Eastman, T.E.

    1993-01-01

    The Earth's magnetosphere is the most accessible of natural collisionless plasma environments: an astrophysical plasma "laboratory." Magnetospheric physics has been in an exploration phase since its origin 35 years ago, but new coordinated, multipoint observations, theory, modeling, and simulations are moving this highly interdisciplinary field of plasma science into a new phase of synthesis and understanding. Plasma systems are ones in which binary collisions are relatively negligible and collective behavior beyond the microscale emerges. Most readily accessible natural plasma systems are collisional, and nearest-neighbor classical interactions compete with longer-range plasma effects. Except for stars, however, most space plasmas are collisionless, and the effects of electrodynamic coupling dominate. Basic physical processes in such collisionless plasmas occur at micro-, meso-, and macroscales that are not merely reducible to each other in certain crucial ways, as illustrated for the global coupling of the Earth's magnetosphere and for the nonlinear dynamics of charged particle motion in the magnetotail. Such global coupling and coherence makes the geospace environment, the domain of solar-terrestrial science, the most highly coupled of all physical geospheres.

  5. Multi-unit Integration in Microfluidic Processes: Current Status and Future Horizons

    Directory of Open Access Journals (Sweden)

    Pratap R. Patnaik

    2011-07-01

    Microfluidic processes, mainly for biological and chemical applications, have expanded rapidly in recent years. While the initial focus was on single units, principally microreactors, technological and economic considerations have caused a shift to integrated microchips in which a number of microdevices function coherently. These integrated devices have many advantages over conventional macro-scale processes. However, the small scale of operation, complexities in the underlying physics and chemistry, and differences in the time constants of the participating units, in the interactions among them and in the outputs of interest make it difficult to design and optimize integrated microprocesses. These aspects are discussed here, current research and applications are reviewed, and possible future directions are considered.

  6. Macroscale tribological properties of fluorinated graphene

    Science.gov (United States)

    Matsumura, Kento; Chiashi, Shohei; Maruyama, Shigeo; Choi, Junho

    2018-02-01

    Because graphene is a carbon material with excellent mechanical characteristics, it holds great promise as an ultrathin lubricating protective film for machine elements. The durability of graphene strongly depends on the number of layers and on the load scale. For use in ultrathin lubricating protective films for machine elements, it is also necessary to maintain low friction and high durability under macroscale loads in the atmosphere. In this study, we modified the surfaces of both monolayer and multilayer graphene by fluorine plasma treatment and examined the friction properties and durability of the fluorinated graphene under macroscale loads. The durability of both monolayer and multilayer graphene was improved by the surface fluorination, owing to the reduction of adhesion forces between the friction interfaces. This occurs because the fluorine-containing carbon film is transferred to the friction-mating material, so that friction acts between two fluorine-containing carbon films. On the other hand, the friction coefficient decreased from 0.20 to 0.15 with the fluorine plasma treatment in the multilayer graphene, whereas it increased from 0.21 to 0.27 in the monolayer graphene. It is considered that, in the monolayer graphene, the change in surface structure had a stronger influence on the friction coefficient than in the multilayer graphene, and the friction coefficient increased mainly because of the increase in defects on the graphene surface caused by the fluorine plasma treatment.

  7. Cortical chemoarchitecture shapes macroscale effective functional connectivity patterns in macaque cerebral cortex

    NARCIS (Netherlands)

    Turk, Elise; Scholtens, Lianne H.; van den Heuvel, Martijn P.

    The mammalian cortex is a complex system of interconnected neurons (at the microscale) and interconnected areas (at the macroscale), forming the infrastructure for local and global neural processing and information integration. While the effects of regional chemoarchitecture on local…

  8. Micro- and macroscale coefficients of friction of cementitious materials

    International Nuclear Information System (INIS)

    Lomboy, Gilson; Sundararajan, Sriram; Wang, Kejin

    2013-01-01

    Millions of metric tons of cementitious materials are produced, transported and used in construction each year. The ease or difficulty of handling cementitious materials is greatly influenced by the materials' friction properties. In the present study, the coefficients of friction of cementitious materials were measured at the microscale and macroscale. The materials tested were commercially available Portland cement, Class C fly ash, and ground granulated blast furnace slag. At the microscale, the coefficient of friction was determined from the interaction forces between cementitious particles using an Atomic Force Microscope. At the macroscale, the coefficient of friction was determined from stresses on bulk cementitious materials under direct shear. The study indicated that the microscale coefficient of friction ranged from 0.020 to 0.059, and the macroscale coefficient of friction ranged from 0.56 to 0.75. The fly ash studied had the highest microscale coefficient of friction and the lowest macroscale coefficient of friction. -- Highlights: •Microscale (interparticle) coefficient of friction (COF) was determined with AFM. •Macroscale (bulk) COF was measured under direct shear. •Fly ash had the highest microscale COF and the lowest macroscale COF. •Portland cement against GGBFS had the lowest microscale COF. •Portland cement against Portland cement had the highest macroscale COF.
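
    As a concrete reading of the two measurements, the hedged Python sketch below fits a coefficient of friction as the slope of friction force versus normal load, once with AFM-style nanonewton forces and once with direct-shear stresses. All numbers are synthetic placeholders, not the paper's data.

        # Sketch: COF as the slope of friction force (or shear stress) vs.
        # normal load (or normal stress). Synthetic data for illustration.
        import numpy as np

        # AFM: lateral (friction) force at several applied normal loads
        normal_nN  = np.array([10.0, 20.0, 40.0, 80.0])      # nN
        lateral_nN = np.array([0.45, 0.95, 1.70, 3.40])      # nN
        mu_micro = np.polyfit(normal_nN, lateral_nN, 1)[0]   # slope = COF

        # Direct shear: peak shear stress at several normal stresses
        sigma_kPa = np.array([50.0, 100.0, 200.0])           # kPa
        tau_kPa   = np.array([31.0, 63.0, 121.0])            # kPa
        mu_macro = np.polyfit(sigma_kPa, tau_kPa, 1)[0]

        print(f"microscale COF ~ {mu_micro:.3f}, macroscale COF ~ {mu_macro:.2f}")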

  9. The correlation between gelatin macroscale differences and nanoparticle properties: providing insight into biopolymer variability.

    Science.gov (United States)

    Stevenson, André T; Jankus, Danny J; Tarshis, Max A; Whittington, Abby R

    2018-05-21

    From therapeutic delivery to sustainable packaging, manipulation of biopolymers into nanostructures imparts biocompatibility to numerous materials with minimal environmental pollution during processing. While biopolymers are appealing natural-based materials, the lack of nanoparticle (NP) physicochemical consistency has slowed their nanoscale translation into actual products. Insights regarding the macroscale and nanoscale property variation of gelatin, one of the most common biopolymers already utilized in its bulk form, are presented. Novel correlations between macroscale and nanoscale properties were made by characterizing similar gelatin rigidities obtained from different manufacturers. Samples with significant differences in clarity, an indicator of sample purity, showed the largest deviations in NP diameter. Furthermore, a statistically significant positive correlation between macroscale molecular weight dispersity and NP diameter was determined. New theoretical calculations proposing the limited number of gelatin chains that can aggregate and subsequently be crosslinked for NP formation are presented as one possible explanation for the correlation. NP charge and crosslinking extent were also related to diameter. Lower gelatin sample molecular weight dispersities produced statistically smaller average diameters (<75 nm), higher average electrostatic charges (∼30 mV), and higher crosslinking extents (∼95%), all independent of gelatin rigidity, conclusions not previously reported in the literature. This study demonstrates that the molecular weight composition of the starting material is one significant factor affecting gelatin nanoscale properties and must be characterized prior to NP preparation. Identifying gelatin macroscale and nanoscale correlations offers a route toward greater physicochemical property control and reproducibility of new NP formulations for translation to industry.

  10. Judicial Process, Grade Eight. Resource Unit (Unit V).

    Science.gov (United States)

    Minnesota Univ., Minneapolis. Project Social Studies Curriculum Center.

    This resource unit, developed by the University of Minnesota's Project Social Studies, introduces eighth graders to the judicial process. The unit was designed with two major purposes in mind. First, it helps pupils understand judicial decision-making, and second, it provides for the study of the rights guaranteed by the federal Constitution. Both…

  11. The Executive Process, Grade Eight. Resource Unit (Unit III).

    Science.gov (United States)

    Minnesota Univ., Minneapolis. Project Social Studies Curriculum Center.

    This resource unit, developed by the University of Minnesota's Project Social Studies, introduces eighth graders to the executive process. The unit uses case studies of presidential decision making such as the decision to drop the atomic bomb on Hiroshima, the Cuba Bay of Pigs and quarantine decisions, and the Little Rock decision. A case study of…

  12. Image processing unit with fall-back.

    NARCIS (Netherlands)

    2011-01-01

    An image processing unit ( 100,200,300 ) for computing a sequence of output images on basis of a sequence of input images, comprises: a motion estimation unit ( 102 ) for computing a motion vector field on basis of the input images; a quality measurement unit ( 104 ) for computing a value of a

  13. Macroscale hydrologic modeling of ecologically relevant flow metrics

    Science.gov (United States)

    Wenger, Seth J.; Luce, Charles H.; Hamlet, Alan F.; Isaak, Daniel J.; Neville, Helen M.

    2010-09-01

    Stream hydrology strongly affects the structure of aquatic communities. Changes to air temperature and precipitation driven by increased greenhouse gas concentrations are shifting the timing and volume of streamflows, potentially affecting these communities. The variable infiltration capacity (VIC) macroscale hydrologic model has been employed at regional scales to describe and forecast hydrologic changes but has been calibrated and applied mainly to large rivers. An important question is how well VIC runoff simulations serve to answer questions about hydrologic changes in smaller streams, which are important habitat for many fish species. To answer this question, we aggregated gridded VIC outputs within the drainage basins of 55 streamflow gages in the Pacific Northwest United States and compared modeled hydrographs and summary metrics to observations. For most streams, several ecologically relevant aspects of the hydrologic regime were accurately modeled, including center of flow timing, mean annual and summer flows, and frequency of winter floods. Frequencies of high and low flows in the summer were not well predicted, however. Predictions were worse for sites with strong groundwater influence, and some sites showed errors that may result from limitations in the forcing climate data. Higher-resolution (1/16th degree) modeling provided small improvements over lower-resolution (1/8th degree) modeling. Despite some limitations, the VIC model appears capable of representing several ecologically relevant hydrologic characteristics in streams, making it a useful tool for understanding the effects of hydrology in delimiting species distributions and predicting the potential effects of climate shifts on aquatic organisms.
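
    The flow metrics named above are simple functions of a daily hydrograph. The Python sketch below computes three of them (center of flow timing, mean annual flow, and a winter flood count) from a synthetic year of daily flows; the 2x-mean-flow flood threshold and the calendar windows are assumptions, not the paper's definitions.

        # Sketch: ecologically relevant flow metrics from a daily hydrograph.
        import numpy as np

        daily_flow = np.random.default_rng(1).gamma(2.0, 5.0, size=365)  # m^3/s

        # Center of flow timing: day by which half the annual flow has passed
        cumulative = np.cumsum(daily_flow)
        center_of_timing = int(np.searchsorted(cumulative, 0.5 * cumulative[-1]))

        mean_annual_flow = daily_flow.mean()

        # Winter flood frequency: Dec-Feb days exceeding an assumed threshold
        winter = np.r_[daily_flow[:59], daily_flow[334:]]
        winter_floods = int((winter > 2.0 * mean_annual_flow).sum())

        print(center_of_timing, mean_annual_flow.round(2), winter_floods)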

  14. Portable brine evaporator unit, process, and system

    Science.gov (United States)

    Hart, Paul John; Miller, Bruce G.; Wincek, Ronald T.; Decker, Glenn E.; Johnson, David K.

    2009-04-07

    The present invention discloses a comprehensive, efficient, and cost-effective portable evaporator unit, method, and system for the treatment of brine. The evaporator unit, method, and system require a pretreatment process that removes heavy metals, crude oil, and other contaminants in preparation for the evaporator unit. The pretreatment and the evaporator unit, method, and system process metals and brine at the site where they are generated (the well site), saving significant money for producers, who can avoid present and future increases in transportation costs.

  15. Scaling up: Assessing social impacts at the macro-scale

    International Nuclear Information System (INIS)

    Schirmer, Jacki

    2011-01-01

    Social impacts occur at various scales, from the micro-scale of the individual to the macro-scale of the community. Identifying the macro-scale social changes that result from an impacting event is a common goal of social impact assessment (SIA), but it is challenging because multiple factors simultaneously influence social trends at any given time and there are usually only a small number of cases available for examination. While some methods have been proposed for establishing the contribution of an impacting event to macro-scale social change, they remain relatively untested. This paper critically reviews methods recommended to assess macro-scale social impacts, and proposes and demonstrates a new approach. The 'scaling up' method involves developing a chain of logic linking change at the individual/site scale to the community scale. It enables a more problematised assessment of the likely contribution of an impacting event to macro-scale social change than previous approaches. The use of this approach in a recent study of change in dairy farming in south-east Australia is described.

  16. Characteristics of soil water retention curve at macro-scale

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Scale-adaptable hydrological models have attracted more and more attention in the hydrological modeling research community, and the constitutive relationship at the macro-scale is one of the most important issues, yet one on which there has been little research so far. Taking a constitutive relationship of soil water movement, the soil water retention curve (SWRC), as an example, this study extends the definition of the SWRC at the micro-scale to the macro-scale, and, aided by the Monte Carlo method, we demonstrate that soil properties and the spatial distribution of soil moisture greatly affect the features of the SWRC. Furthermore, we assume that the spatial distribution of soil moisture is the result of self-organization of climate, soil, groundwater, and soil water movement under specific boundary conditions, and we also carry out numerical experiments of soil water movement in the vertical direction in order to explore the relationship between the macro-scale SWRC and combinations of climate, soil, and groundwater. The results show that SWRCs at the macro-scale and micro-scale present totally different features, e.g., an essential hysteresis phenomenon that is exaggerated with increasing aridity index and rising groundwater table. Soil properties play an important role in the shape of the SWRC, which can even become rectangular under drier conditions, and the power-function form of the SWRC widely adopted in hydrological models may need to be revised for most macro-scale situations.
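
    A minimal Python illustration of the idea, under stated assumptions: a microscale van Genuchten retention curve is averaged, Monte Carlo style, over a lognormal spatial distribution of suction to produce one macroscale moisture value per mean suction. The parameter values and the lognormal spread are placeholders, not the paper's configuration.

        # Sketch: macroscale SWRC point by Monte Carlo averaging of a
        # microscale (van Genuchten) curve over spatial variability.
        import numpy as np

        def theta_vg(psi, alpha=0.05, n=2.0, theta_r=0.05, theta_s=0.45):
            """van Genuchten retention curve; psi is suction head (cm, > 0)."""
            m = 1.0 - 1.0 / n
            return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi)**n)**m

        rng = np.random.default_rng(2)
        for psi_mean in (10.0, 100.0, 1000.0):
            # Lognormal spread mimics spatial variability within the block
            psi_field = rng.lognormal(np.log(psi_mean), 0.8, size=10_000)
            theta_macro = theta_vg(psi_field).mean()   # volume-averaged moisture
            print(f"<psi> ~ {psi_mean:7.1f} cm -> <theta> = {theta_macro:.3f}")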

  17. Data Sorting Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2012-06-01

    Graphics processing units (GPUs) have been increasingly used for general-purpose computation in recent years. GPU-accelerated applications are found in both scientific and commercial domains. Sorting is considered one of the very important operations in many applications, so its efficient implementation is essential for overall application performance. This paper represents an effort to analyze and evaluate implementations of representative sorting algorithms on graphics processing units. Three sorting algorithms (Quicksort, Merge sort, and Radix sort) were evaluated on the Compute Unified Device Architecture (CUDA) platform that is used to execute applications on NVIDIA graphics processing units. The algorithms were tested and evaluated using an automated test environment with input datasets of different characteristics. Finally, the results of this analysis are briefly discussed.
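
    For reference, a plain-Python (CPU) version of one of the three benchmarked algorithms, LSD radix sort, is sketched below; the CUDA implementations evaluated in the paper parallelize the per-digit counting and scattering steps, which this sequential sketch performs with Python lists.

        # Sequential LSD radix sort -- a CPU reference, not the paper's code.
        def radix_sort(values, base=256):
            """Sort non-negative integers digit by digit (least significant first)."""
            arr = list(values)
            shift = 0
            while max(arr, default=0) >> shift:
                buckets = [[] for _ in range(base)]
                for v in arr:
                    buckets[(v >> shift) % base].append(v)
                arr = [v for bucket in buckets for v in bucket]
                shift += base.bit_length() - 1   # advance by one digit width
            return arr

        print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))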

  18. Semi-automatic film processing unit

    International Nuclear Information System (INIS)

    Mohamad Annuar Assadat Husain; Abdul Aziz Bin Ramli; Mohd Khalid Matori

    2005-01-01

    The design concept applied in the development of a semi-automatic film processing unit requires creativity and user support in channelling the information needed to select materials and an operating system suited to the design produced. Low cost and efficient operation are challenges that must be met while keeping abreast of fast technological advancement. In producing this processing unit, a few elements need to be considered in order to produce a high-quality image. Consistent movement and correct time coordination for developing and drying are among the elements that need to be controlled. Other elements that need serious attention are temperature, liquid density, and the reaction time of the chemical liquids. The subsequent chemical reactions that take place cause the liquid chemicals to age, and this adversely affects the quality of the image produced. The unit is also equipped with a liquid chemical drainage system and a disposal chemical tank. It would be useful in GP clinics, especially in rural areas that practice manual developing and require low operational cost. (Author)

  19. Instruction Set Architectures for Quantum Processing Units

    OpenAIRE

    Britt, Keith A.; Humble, Travis S.

    2017-01-01

    Progress in quantum computing hardware raises questions about how these devices can be controlled, programmed, and integrated with existing computational workflows. We briefly describe several prominent quantum computational models, their associated quantum processing units (QPUs), and the adoption of these devices as accelerators within high-performance computing systems. Emphasizing the interface to the QPU, we analyze instruction set architectures based on reduced and complex instruction sets...

  20. Micromagnetic simulations using Graphics Processing Units

    International Nuclear Information System (INIS)

    Lopez-Diaz, L; Aurelio, D; Torres, L; Martinez, E; Hernandez-Lopez, M A; Gomez, J; Alejos, O; Carpentieri, M; Finocchio, G; Consolo, G

    2012-01-01

    The methodology for adapting a standard micromagnetic code to run on graphics processing units (GPUs) and exploit the potential for parallel calculations of this platform is discussed. GPMagnet, a general purpose finite-difference GPU-based micromagnetic tool, is used as an example. Speed-up factors of two orders of magnitude can be achieved with GPMagnet with respect to a serial code. This allows for running extensive simulations, nearly inaccessible with a standard micromagnetic solver, at reasonable computational times. (topical review)

  1. Micro and Macroscale Drivers of Nutrient Concentrations in Urban Streams in South, Central and North America.

    Science.gov (United States)

    Loiselle, Steven A; Gasparini Fernandes Cunha, Davi; Shupe, Scott; Valiente, Elsa; Rocha, Luciana; Heasley, Eleanore; Belmont, Patricia Pérez; Baruch, Avinoam

    Global metrics of land cover and land use provide a fundamental basis to examine the spatial variability of human-induced impacts on freshwater ecosystems. However, microscale processes and site-specific conditions related to bank vegetation, pollution sources, adjacent land use, and water uses can have important influences on ecosystem conditions, in particular in smaller tributary rivers. Compared to larger order rivers, these low-order streams and rivers are more numerous, yet often under-monitored. The present study explored the relationship of nutrient concentrations in 150 streams in 57 hydrological basins in South, Central and North America (Buenos Aires, Curitiba, São Paulo, Rio de Janeiro, Mexico City and Vancouver) with macroscale information available from global datasets and microscale data acquired by trained citizen scientists. Average sub-basin phosphate (P-PO4) concentrations were found to be well correlated with sub-basin attributes on both macro and microscales, while the relationships between sub-basin attributes and nitrate (N-NO3) concentrations were limited. A phosphate threshold for eutrophic conditions (>0.1 mg L-1 P-PO4) was exceeded in basins where microscale point-source discharge points (e.g., residential, industrial, urban/road) were identified in more than 86% of stream reaches monitored by citizen scientists. The presence of bankside vegetation covaried (rho = -0.53) with lower phosphate concentrations in the ecosystems studied. Macroscale information on nutrient loading allowed for a strong separation between basins with and without eutrophic conditions. Most importantly, the combination of macroscale and microscale information increased our ability to explain sub-basin variability of P-PO4 concentrations. The identification of microscale point sources and bank vegetation conditions by citizen scientists provided important information that local authorities could use to improve their management of lower order river ecosystems.
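
    The two headline analyses, threshold exceedance and a rank correlation with bank vegetation, can be expressed in a few lines. The Python sketch below uses synthetic data and assumed variable names; only the 0.1 mg L-1 P-PO4 threshold comes from the abstract.

        # Sketch: eutrophic-threshold exceedance and Spearman correlation.
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(3)
        bank_vegetation = rng.random(150)                     # fraction of reach
        p_po4 = np.clip(0.25 - 0.2 * bank_vegetation
                        + rng.normal(0, 0.05, 150), 0, None)  # mg/L P-PO4

        eutrophic = p_po4 > 0.1                               # threshold from text
        rho, p_value = spearmanr(bank_vegetation, p_po4)

        print(f"{eutrophic.mean():.0%} of reaches eutrophic; rho = {rho:.2f}")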

  2. Macroscale particle simulation of externally driven magnetic reconnection

    International Nuclear Information System (INIS)

    Murakami, Sadayoshi; Sato, Tetsuya.

    1991-09-01

    Externally driven reconnection, assuming an anomalous particle collision model, is numerically studied by means of a 2.5D macroscale particle simulation code in which the field and particle motions are solved self-consistently. Explosive magnetic reconnection and energy conversion are observed as a result of slow shock formation. Electron and ion distribution functions exhibit large bulk acceleration and heating of the plasma. Simulation runs with different collision parameters suggest that the development of reconnection, particle acceleration and heating do not significantly depend on the parameters of the collision model. (author)

  3. Partial wave analysis using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Berger, Niklaus; Liu Beijiang; Wang Jike, E-mail: nberger@ihep.ac.c [Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Lu, Shijingshan, 100049 Beijing (China)

    2010-04-01

    Partial wave analysis is an important tool for determining resonance properties in hadron spectroscopy. For large data samples, however, the unbinned likelihood fits employed are computationally very expensive. At the Beijing Spectrometer (BES) III experiment, an increase in statistics compared to earlier experiments of up to two orders of magnitude is expected. In order to allow for a timely analysis of these datasets, additional computing power with short turnover times has to be made available. It turns out that graphics processing units (GPUs) originally developed for 3D computer games have an architecture of massively parallel single-instruction multiple-data floating point units that is almost ideally suited for the algorithms employed in partial wave analysis. We have implemented a framework for tensor manipulation and partial wave fits called GPUPWA. The user writes a program in pure C++ whilst the GPUPWA classes handle computations on the GPU, memory transfers, caching and other technical details. In conjunction with a recent graphics processor, the framework provides a speed-up of the partial wave fit by more than two orders of magnitude compared to legacy FORTRAN code.

  4. Graphics Processing Units for HEP trigger systems

    International Nuclear Information System (INIS)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.

    2016-01-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerator in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPU for synchronous low level trigger, focusing on CERN NA62 experiment trigger system. The use of GPU in higher level trigger system is also briefly considered.

  5. Graphics Processing Units for HEP trigger systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R. [INFN Sezione di Roma “Tor Vergata”, Via della Ricerca Scientifica 1, 00133 Roma (Italy); Bauce, M. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Biagioni, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Chiozzi, S.; Cotta Ramusino, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Fantechi, R. [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); CERN, Geneve (Switzerland); Fiorini, M. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Giagu, S. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Gianoli, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Lamanna, G., E-mail: gianluca.lamanna@cern.ch [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN Laboratori Nazionali di Frascati, Via Enrico Fermi 40, 00044 Frascati (Roma) (Italy); Lonardo, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Messina, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); and others

    2016-07-11

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerator in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPU for synchronous low level trigger, focusing on CERN NA62 experiment trigger system. The use of GPU in higher level trigger system is also briefly considered.

  6. Integration Process for the Habitat Demonstration Unit

    Science.gov (United States)

    Gill, Tracy; Merbitz, Jerad; Kennedy, Kriss; Tri, Terry; Howe, A. Scott

    2010-01-01

    The Habitat Demonstration Unit (HDU) is an experimental exploration habitat technology and architecture test platform designed for analog demonstration activities. The HDU project has required a team to integrate a variety of contributions from NASA centers and outside collaborators, and poses a challenge in integrating these disparate efforts into a cohesive architecture. To complete the development of the HDU from conception in June 2009 to rollout for operations in July 2010, a cohesive integration strategy has been developed to integrate the various systems of HDU and the payloads, such as the Geology Lab, that those systems will support. The utilization of interface design standards and uniquely tailored reviews has allowed for an accelerated design process. Scheduled activities include early fit-checks and the utilization of a habitat avionics test bed prior to equipment installation into HDU. A coordinated effort to utilize modeling and simulation systems has aided in design and integration concept development. Modeling tools have been effective in hardware systems layout, cable routing and length estimation, and human factors analysis. Decision processes on the shell development, including the assembly sequence and the transportation, have been fleshed out early on HDU to maximize the efficiency of both integration and field operations. Incremental test operations leading up to an integrated systems test allow for an orderly systems test program. The HDU will begin its journey as an emulation of a Pressurized Excursion Module (PEM) for 2010 field testing and then may evolve to a Pressurized Core Module (PCM) for 2011 and later field tests, depending on agency architecture decisions. The HDU deployment will vary slightly from current lunar architecture plans to include developmental hardware and software items and additional systems called opportunities for technology demonstration. One of the HDU challenges has been designing to be prepared for the integration of…

  7. Macroscale implicit electromagnetic particle simulation of magnetized plasmas

    International Nuclear Information System (INIS)

    Tanaka, Motohiko.

    1988-01-01

    An electromagnetic, multi-dimensional macroscale particle simulation code (MACROS) is presented which enables large time- and spatial-scale kinetic simulations of magnetized plasmas. Particle ions, finite-mass electrons with the guiding-center approximation, and a complete set of Maxwell equations are employed. Implicit field-particle coupled equations are derived in which a time-decentered (slightly backward) finite-difference scheme is used to achieve stability for large time and spatial scales. It is shown analytically that the present simulation scheme suppresses high-frequency electromagnetic waves and that it accurately reproduces low-frequency waves in the plasma. These properties are verified by numerical examination of eigenmodes in a 2-D thermal equilibrium plasma and of the kinetic Alfven wave. (author)
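
    The damping property claimed for the time-decentered scheme can be demonstrated on a single oscillating mode. The Python sketch below is not the MACROS code: it applies a theta-weighted (theta > 0.5, i.e. slightly backward) update to dx/dt = i*omega*x and shows that poorly resolved high-frequency modes decay while well-resolved low-frequency modes are nearly preserved.

        # Sketch: time-decentered (theta > 0.5) scheme damps high frequencies.
        def theta_scheme_amplitude(omega, dt, theta=0.6, steps=200):
            """|x| after stepping dx/dt = i*omega*x with
            x^{n+1} = x^n + dt*i*omega*(theta*x^{n+1} + (1-theta)*x^n)."""
            x = 1.0 + 0.0j
            gain = (1 + 1j * omega * dt * (1 - theta)) / (1 - 1j * omega * dt * theta)
            for _ in range(steps):
                x *= gain
            return abs(x)

        dt = 0.1
        for omega in (0.5, 5.0, 50.0):   # omega*dt = 0.05 (resolved) .. 5 (not)
            print(f"omega*dt = {omega * dt:5.2f} -> |x| after 200 steps: "
                  f"{theta_scheme_amplitude(omega, dt):.3f}")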

  8. Effect of fiber geometry on macroscale friction of ordered low-density polyethylene nanofiber arrays.

    Science.gov (United States)

    Lee, Dae Ho; Kim, Yongkwan; Fearing, Ronald S; Maboudian, Roya

    2011-09-06

    Ordered low-density polyethylene (LDPE) nanofiber arrays are fabricated from silicon nanowire (SiNW) templates synthesized by a simple wet-chemical process based on metal-assisted electroless etching combined with colloidal lithography. The geometrical effect of nanofibrillar structures on their macroscale friction is investigated over a wide range of diameters and lengths under the same fiber density. The optimum geometry for contacting a smooth glass surface is presented with discussions on the compromise between fiber tip-contact area and fiber compliance. A friction design map is developed, which shows that the theoretical optimum design condition agrees well with the LDPE nanofiber geometries exhibiting high measured friction. © 2011 American Chemical Society

  9. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occur within the porous adsorbent. The theoret…

  10. Creating multithemed ecological regions for macroscale ecology: Testing a flexible, repeatable, and accessible clustering method

    Science.gov (United States)

    Cheruvelil, Kendra Spence; Yuan, Shuai; Webster, Katherine E.; Tan, Pang-Ning; Lapierre, Jean-Francois; Collins, Sarah M.; Fergus, C. Emi; Scott, Caren E.; Norton Henry, Emily; Soranno, Patricia A.; Filstrup, Christopher T.; Wagner, Tyler

    2017-01-01

    Understanding broad-scale ecological patterns and processes often involves accounting for regional-scale heterogeneity. A common way to do so is to include ecological regions in sampling schemes and empirical models. However, most existing ecological regions were developed for specific purposes, using a limited set of geospatial features and irreproducible methods. Our study purpose was to: (1) describe a method that takes advantage of recent computational advances and increased availability of regional and global data sets to create customizable and reproducible ecological regions, (2) make this algorithm available for use and modification by others studying different ecosystems, variables of interest, study extents, and macroscale ecology research questions, and (3) demonstrate the power of this approach for the research question—How well do these regions capture regional-scale variation in lake water quality? To achieve our purpose we: (1) used a spatially constrained spectral clustering algorithm that balances geospatial homogeneity and region contiguity to create ecological regions using multiple terrestrial, climatic, and freshwater geospatial data for 17 northeastern U.S. states (~1,800,000 km2); (2) identified which of the 52 geospatial features were most influential in creating the resulting 100 regions; and (3) tested the ability of these ecological regions to capture regional variation in water nutrients and clarity for ~6,000 lakes. We found that: (1) a combination of terrestrial, climatic, and freshwater geospatial features influenced region creation, suggesting that the oft-ignored freshwater landscape provides novel information on landscape variability not captured by traditionally used climate and terrestrial metrics; and (2) the delineated regions captured macroscale heterogeneity in ecosystem properties not included in region delineation—approximately 40% of the variation in total phosphorus and water clarity among lakes was at the regional scale.
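
    The paper's spatially constrained spectral clustering algorithm is available from the authors; as a rough stand-in, the Python sketch below uses scikit-learn's agglomerative clustering with a k-nearest-neighbour spatial connectivity graph, which likewise trades feature homogeneity against region contiguity. The data, feature set, and cluster count are synthetic assumptions.

        # Sketch: contiguity-constrained regionalization (stand-in method).
        import numpy as np
        from sklearn.cluster import AgglomerativeClustering
        from sklearn.neighbors import kneighbors_graph

        rng = np.random.default_rng(4)
        xy = rng.random((500, 2)) * 100                    # site coordinates (km)
        features = np.c_[xy / 100, rng.random((500, 3))]   # geospatial features

        # Spatial connectivity graph enforces (approximate) region contiguity
        connectivity = kneighbors_graph(xy, n_neighbors=8, include_self=False)
        regions = AgglomerativeClustering(
            n_clusters=20, connectivity=connectivity).fit_predict(features)

        print(np.bincount(regions))                        # sites per region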

  11. On Tour... Primary Hardwood Processing, Products and Recycling Unit

    Science.gov (United States)

    Philip A. Araman; Daniel L. Schmoldt

    1995-01-01

    Housed within the Department of Wood Science and Forest Products at Virginia Polytechnic Institute is a three-person USDA Forest Service research work unit (with one vacancy) devoted to hardwood processing and recycling research. Phil Araman is the project leader of this truly unique and productive unit, titled "Primary Hardwood Processing, Products and Recycling." The...

  12. Proton Testing of Advanced Stellar Compass Digital Processing Unit

    DEFF Research Database (Denmark)

    Thuesen, Gøsta; Denver, Troelz; Jørgensen, Finn E

    1999-01-01

    The Advanced Stellar Compass Digital Processing Unit was radiation tested with 300 MeV protons at the Proton Irradiation Facility (PIF), Paul Scherrer Institute, Switzerland.

  13. 15 CFR 971.209 - Processing outside the United States.

    Science.gov (United States)

    2010-01-01

    15 CFR 971.209 (Commerce and Foreign Trade, 2010-01-01): Deep Seabed Mining Regulations for Commercial Recovery Permits, Applications, Contents. § 971.209 Processing outside the United States. (a) Except as provided in this section…

  14. Macro-scale turbulence modelling for flows in porous media

    International Nuclear Information System (INIS)

    Pinson, F.

    2006-03-01

    This work deals with the macroscopic modeling of turbulence in porous media. It concerns heat exchangers and nuclear reactors as well as urban flows, etc. The objective of this study is to describe, in a homogenized way by means of a spatial average operator, turbulent flows in a solid matrix. In addition to this first operator, the use of a statistical average operator makes it possible to handle the pseudo-random character of turbulence. The successive application of both operators allows us to derive the balance equations for the kind of flows under study. Two major issues are then highlighted: the modeling of dispersion induced by the solid matrix, and turbulence modeling at the macroscopic scale (Reynolds tensor and turbulent dispersion). To this aim, we build on the local modeling of turbulence, and more precisely on k-ε RANS models. The methodology of dispersion study, derived from the volume averaging theory, is extended to turbulent flows. Its application includes the simulation, at the microscopic scale, of turbulent flows within a representative elementary volume of the porous medium. Applied to channel flows, this analysis shows that, even in the turbulent regime, dispersion remains one of the dominant phenomena within the macro-scale modeling framework. A two-scale analysis of the flow allows us to understand the dominant role of the drag force in the kinetic energy transfers between scales. Transfers between the mean part and the turbulent part of the flow are formally derived. This description significantly improves our understanding of the issue of macroscopic modeling of turbulence and leads us to define the sub-filter production and the wake dissipation. A macroscale turbulence model is derived, based on three balance equations: for the turbulent kinetic energy, the viscous dissipation, and the wake dissipation. Furthermore, a dynamical predictor for the friction coefficient is proposed. This model is then successfully applied to the study of…
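
    For readers new to the two-operator formalism summarized above, the standard double-averaging decomposition (written here in commonly used notation, which may differ from the thesis's own) splits any field phi into three parts:

        \phi \;=\; \langle \overline{\phi} \rangle^{f} \;+\; \overline{\phi}'' \;+\; \phi'

    where the overbar is the statistical (Reynolds) average, \langle \cdot \rangle^{f} the fluid-phase volume average, \overline{\phi}'' the spatial deviation of the time-averaged field, and \phi' the turbulent fluctuation. The macroscale balance equations mentioned above follow from applying both operators to the Navier-Stokes equations and modeling the unclosed terms that this decomposition generates.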

  15. Advancing Tissue Engineering: A Tale of Nano-, Micro-, and Macroscale Integration

    NARCIS (Netherlands)

    Leijten, Jeroen Christianus Hermanus; Rouwkema, Jeroen; Zhang, Y.S.; Nasajpour, A.; Dokmeci, M.R.; Khademhosseini, A.

    2016-01-01

    Tissue engineering has the potential to revolutionize the health care industry. Delivering on this promise requires the generation of efficient, controllable and predictable implants. The integration of nano- and microtechnologies into macroscale regenerative biomaterials plays an essential role in

  16. Radiation processing in the United States

    International Nuclear Information System (INIS)

    Brynjolfsson, A.

    1986-01-01

    Animal feeding studies, including the very large feeding studies on radiation-sterilized poultry products irradiated with a sterilizing dose of 58 kGy, revealed no harmful effects. This finding is corroborated by very extensive analysis of the radiolytic products, which indicated that the radiolytic products, in the quantities found in the food, could not be expected to produce any toxic effect. It thus appears proven with reasonable certainty that no harm will result from the proposed use of the process. Accordingly, the FDA is moving forward with approvals while allowing the required time for hearings and objections. On July 5, 1983, the FDA permitted gamma irradiation for control of microbial contamination in dried spices and dehydrated vegetable seasonings at doses up to 10 kGy; on June 19, 1984, the approval was expanded to cover insect infestation; and additional seasonings and the irradiation of dry or dehydrated enzyme preparations were approved on February 12 and June 4, 1985, respectively. In addition, in July 1985, the FDA cleared irradiation of pork products with doses of 0.3 to 1 kGy for eliminating trichinosis. Approvals of other agencies, including the Food and Drug Administration, the Department of Agriculture, the Nuclear Regulatory Commission, the Occupational Safety and Health Administration, the Department of Transportation, the Environmental Protection Agency, and state and local authorities, are usually of a technological nature and can be obtained if the process is technologically feasible. (Namekawa, K.)

  17. [The nursing process at a burns unit: an ethnographic study].

    Science.gov (United States)

    Rossi, L A; Casagrande, L D

    2001-01-01

    This ethnographic study aimed at understanding the cultural meaning that nursing professionals working at a Burns Unit attribute to the nursing process as well as at identifying the factors affecting the implementation of this methodology. Data were collected through participant observation and semi-structured interviews. The findings indicate that, to the nurses from the investigated unit, the nursing process seems to be identified as bureaucratic management. Some factors determining this perception are: the way in which the nursing process has been taught and interpreted, routine as a guideline for nursing activity, and knowledge and power in the life-world of the Burns Unit.

  18. Business Process Compliance through Reusable Units of Compliant Processes

    NARCIS (Netherlands)

    D. Shumm; O. Turetken; N. Kokash (Natallia); A. Elgammal; F. Leymann; J. van den Heuvel

    2010-01-01

    Compliance management is essential for ensuring that organizational business processes and supporting information systems are in accordance with a set of prescribed requirements originating from laws, regulations, and various legislative or technical documents such as the Sarbanes-Oxley Act…

  19. Macroscale and Nanoscale Morphology Evolution during in Situ Spray Coating of Titania Films for Perovskite Solar Cells.

    Science.gov (United States)

    Su, Bo; Caller-Guzman, Herbert A; Körstgens, Volker; Rui, Yichuan; Yao, Yuan; Saxena, Nitin; Santoro, Gonzalo; Roth, Stephan V; Müller-Buschbaum, Peter

    2017-12-20

    Mesoporous titania is a cheap and widely used material for photovoltaic applications. To enable a large-scale fabrication and a controllable pore size, we combined a block copolymer-assisted sol-gel route with spray coating to fabricate titania films, in which the block copolymer polystyrene-block-poly(ethylene oxide) (PS-b-PEO) is used as a structure-directing template. Both the macroscale and nanoscale are studied. The kinetics and thermodynamics of the spray deposition processes are simulated on a macroscale, which shows a good agreement with the large-scale morphology of the spray-coated films obtained in practice. On the nanoscale, the structure evolution of the titania films is probed with in situ grazing incidence small-angle X-ray scattering (GISAXS) during the spray process. The changes of the PS domain size depend not only on micellization but also on solvent evaporation during the spray coating. Perovskite (CH 3 NH 3 PbI 3 ) solar cells (PSCs) based on sprayed titania film are fabricated, which showcases the suitability of spray-deposited titania films for PSCs.

  20. The statistical power to detect cross-scale interactions at macroscales

    Science.gov (United States)

    Wagner, Tyler; Fergus, C. Emi; Stow, Craig A.; Cheruvelil, Kendra S.; Soranno, Patricia A.

    2016-01-01

    Macroscale studies of ecological phenomena are increasingly common because stressors such as climate and land-use change operate at large spatial and temporal scales. Cross-scale interactions (CSIs), where ecological processes operating at one spatial or temporal scale interact with processes operating at another scale, have been documented in a variety of ecosystems and contribute to complex system dynamics. However, studies investigating CSIs are often dependent on compiling multiple data sets from different sources to create multithematic, multiscaled data sets, which results in structurally complex, and sometimes incomplete, data sets. The statistical power to detect CSIs needs to be evaluated because of their importance and the challenge of quantifying CSIs using data sets with complex structures and missing observations. We studied this problem using a spatially hierarchical model that measures CSIs between regional agriculture and its effects on the relationship between lake nutrients and lake productivity. We used an existing large multithematic, multiscaled database, the LAke multi-scaled GeOSpatial and temporal database (LAGOS), to parameterize the power analysis simulations. We found that the power to detect CSIs was more strongly related to the number of regions in the study than to the number of lakes nested within each region. CSI power analyses will not only help ecologists design large-scale studies aimed at detecting CSIs, but will also focus attention on CSI effect sizes and the degree to which they are ecologically relevant and detectable with large data sets.
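
    A toy version of such a power analysis is sketched below in Python: lakes are nested in regions, a region-scale driver modifies the lake-scale nutrient-productivity slope (the CSI), and power is the fraction of simulated data sets in which the interaction term is significant. For brevity it fits ordinary least squares rather than the paper's hierarchical model, and all effect sizes and names are assumptions rather than LAGOS estimates.

        # Sketch: simulation-based power to detect a cross-scale interaction.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)

        def one_rep(n_regions=30, lakes_per_region=20, csi=0.3):
            region = np.repeat(np.arange(n_regions), lakes_per_region)
            agri = rng.random(n_regions)[region]          # region-scale driver
            nutrients = rng.normal(size=region.size)      # lake-scale driver
            y = (1.0 + 0.5 * nutrients + csi * agri * nutrients
                 + rng.normal(0, 1, region.size))         # lake productivity
            df = pd.DataFrame(dict(y=y, nutrients=nutrients, agri=agri))
            fit = smf.ols("y ~ nutrients * agri", data=df).fit()
            return fit.pvalues["nutrients:agri"] < 0.05   # CSI detected?

        power = np.mean([one_rep() for _ in range(200)])
        print(f"estimated power to detect the CSI: {power:.2f}")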

  1. High Input Voltage, Silicon Carbide Power Processing Unit Performance Demonstration

    Science.gov (United States)

    Bozak, Karin E.; Pinero, Luis R.; Scheidegger, Robert J.; Aulisio, Michael V.; Gonzalez, Marcelo C.; Birchenough, Arthur G.

    2015-01-01

    A silicon carbide brassboard power processing unit has been developed by the NASA Glenn Research Center in Cleveland, Ohio. The power processing unit operates from two sources: a nominal 300 Volt high-voltage input bus and a nominal 28 Volt low-voltage input bus. The design of the power processing unit includes four low-voltage, low-power auxiliary supplies and two parallel 7.5 kilowatt (kW) discharge power supplies that are capable of providing up to 15 kilowatts of total power at 300 to 500 Volts (V) to the thruster. Additionally, the unit contains a housekeeping supply, high-voltage input filter, low-voltage input filter, and master control board, such that the complete brassboard unit is capable of operating a 12.5 kilowatt Hall effect thruster. The performance of the unit was characterized under both ambient and thermal vacuum test conditions, and the results demonstrate exceptional performance, with full-power efficiencies exceeding 97%. The unit was also tested with a 12.5 kW Hall effect thruster to verify compatibility and output filter specifications. With space-qualified silicon carbide or similar high-voltage, high-efficiency power devices, this design would provide a solution to address the need for high-power electric propulsion systems.

  2. The process of implementation of emergency care units in Brazil.

    Science.gov (United States)

    O'Dwyer, Gisele; Konder, Mariana Teixeira; Reciputti, Luciano Pereira; Lopes, Mônica Guimarães Macau; Agostinho, Danielle Fernandes; Alves, Gabriel Farias

    2017-12-11

    To analyze the process of implementation of emergency care units in Brazil. We carried out a documentary analysis, with interviews with twenty-four state urgency coordinators and a panel of experts. We analyzed issues related to policy background and trajectory, players involved in the implementation, expansion process, advances, limits, and implementation difficulties, and state coordination capacity. We used the theoretical framework of the analysis of strategic conduct from the Giddens theory of structuration. Emergency care units began to be implemented after 2007, initially in the Southeast region, and 446 emergency care units were present in all Brazilian regions in 2016. Currently, 620 emergency care units are under construction, which indicates an expectation of expansion. Federal funding was a strong driver of the implementation. The states have planned their emergency care units, but direct negotiation between municipalities and the Union has contributed to the significant number of emergency care units that have been built but do not operate. In relation to the urgency network, there is tension with hospitals because of the lack of beds in the country, which generates hospitalizations in the emergency care units. The management of emergency care units is predominantly municipal, and most of the emergency care units are located outside the capitals and classified as Size III. The main challenges identified were under-funding and difficulty in recruiting physicians. The emergency care unit has the merit of having technological resources and being architecturally differentiated, but it will only succeed within an urgency network. Federal induction has generated contradictory responses, since not all states consider the emergency care unit a priority. The strengthening of state management has been identified as a challenge for the implementation of the urgency network.

  3. The process of implementation of emergency care units in Brazil

    Directory of Open Access Journals (Sweden)

    Gisele O'Dwyer

    2017-12-01

    ABSTRACT. OBJECTIVE: To analyze the process of implementation of emergency care units in Brazil. METHODS: We carried out a documentary analysis, with interviews with twenty-four state urgency coordinators and a panel of experts. We analyzed issues related to policy background and trajectory, players involved in the implementation, expansion process, advances, limits, and implementation difficulties, and state coordination capacity. We used the theoretical framework of the analysis of strategic conduct from the Giddens theory of structuration. RESULTS: Emergency care units began to be implemented after 2007, initially in the Southeast region, and 446 emergency care units were present in all Brazilian regions in 2016. Currently, 620 emergency care units are under construction, which indicates an expectation of expansion. Federal funding was a strong driver of the implementation. The states have planned their emergency care units, but direct negotiation between municipalities and the Union has contributed to the significant number of emergency care units that have been built but do not operate. In relation to the urgency network, there is tension with hospitals because of the lack of beds in the country, which generates hospitalizations in the emergency care units. The management of emergency care units is predominantly municipal, and most of the emergency care units are located outside the capitals and classified as Size III. The main challenges identified were under-funding and difficulty in recruiting physicians. CONCLUSIONS: The emergency care unit has the merit of having technological resources and being architecturally differentiated, but it will only succeed within an urgency network. Federal induction has generated contradictory responses, since not all states consider the emergency care unit a priority. The strengthening of state management has been identified as a challenge for the implementation of the…

  4. 15 CFR 971.427 - Processing outside the United States.

    Science.gov (United States)

    2010-01-01

    15 CFR 971.427 (Commerce and Foreign Trade, 2010-01-01): Deep Seabed Mining Regulations for Commercial Recovery Permits, Issuance/Transfer: Terms, Conditions and Restrictions. § 971.427 Processing outside the United States…

  5. Formalizing the Process of Constructing Chains of Lexical Units

    Directory of Open Access Journals (Sweden)

    Grigorij Chetverikov

    2015-06-01

    The paper investigates mathematical aspects of describing the construction of chains of lexical units on the basis of finite-predicate algebra. The peculiarities of the construction are analyzed, and an application of the method of finding the power of a linear logical transformation for removing the characteristic words of a dictionary entry is given. An analysis of the results of the study and their perspectives are provided.

  6. CALCULATION PECULIARITIES OF RE-PROCESSED ROAD COVERING UNIT COST

    Directory of Open Access Journals (Sweden)

    Dilyara Kyazymovna Izmaylova

    2017-09-01

    Full Text Available The article considers the economic expediency of applying non-waste technology to the repair and restoration of road coverings. The conditions for processing asphalt concrete at plants are determined. A cost-change analysis of asphalt granulate is carried out, taking into account the conditions of transportation and preproduction processing. An example is given of calculating the cost of preparing one conventional unit of asphalt-concrete mixture volume with and without processing.

  7. Transport phenomena in fuel cells : from microscale to macroscale

    Energy Technology Data Exchange (ETDEWEB)

    Djilali, N. [Victoria Univ., BC (Canada). Dept. of Mechanical Engineering]|[Victoria Univ., BC (Canada). Inst. for Integrated Energy Systems

    2006-07-01

    Proton Exchange Membrane (PEM) fuel cells rely on an array of thermofluid transport processes for the regulated supply of reactant gases and the removal of by-product heat and water. Flows are characterized by a broad range of length and time scales that take place in conjunction with reaction kinetics in a variety of regimes and structures. This paper examined some of the challenges related to computational fluid dynamics (CFD) modelling of PEM fuel cell transport phenomena. An overview of the main features, components and operation of PEM fuel cells was followed by a discussion of the various strategies used for component modelling of the electrolyte membrane; the gas diffusion layer; microporous layer; and flow channels. A review of integrated CFD models for PEM fuel cells included the coupling of electrochemical, thermal, and fluid transport with 3-D unit cell simulations; air-breathing micro-structured fuel cells; and stack level modelling. Physical models for modelling of transport at the micro-scale were also discussed. Results of the review indicated that the treatment of electrochemical reactions in a PEM fuel cell currently combines classical reaction kinetics with solution procedures to resolve charged species transport, which may lead to thermodynamically inconsistent solutions for more complex systems. Proper representation of the surface coverage of all the chemical species at all reaction sites is needed, and secondary reactions such as platinum (Pt) dissolution and oxidation must be accounted for in order to model and understand degradation mechanisms in fuel cells. While progress has been made in CFD-based modelling of fuel cells, functional and predictive capabilities remain a challenge because of fundamental modelling and material characterization deficiencies in ionic and water transport in polymer membranes; 2-phase transport in porous gas diffusion electrodes and gas flow channels; inadequate macroscopic modelling and resolution of catalyst…

  8. Tomography system having an ultrahigh-speed processing unit

    International Nuclear Information System (INIS)

    Brunnett, C.J.; Gerth, V.W. Jr.

    1977-01-01

    A transverse section tomography system has an ultrahigh-speed data processing unit for performing back projection and updating. An x-ray scanner directs x-ray beams through a planar section of a subject from a sequence of orientations and positions. The data processing unit includes a scan storage section for retrievably storing a set of filtered scan signals in scan storage locations corresponding to predetermined beam orientations. An array storage section is provided for storing image signals as they are generated
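
    Back projection parallelizes naturally, one output pixel per thread, which is how such hardware units are typically re-expressed on modern GPUs. As a hedged sketch only (illustrative names, not the patent's circuitry), a CUDA kernel accumulating filtered scan signals over all beam orientations might look like this:

        // Minimal parallel-beam back-projection sketch: one thread per pixel
        // accumulates filtered projection samples over all angles.
        // 'sino' is the filtered sinogram (nAngles rows x nBins columns).
        __global__ void backproject(const float* sino, float* image,
                                    int nx, int ny, int nAngles, int nBins,
                                    float dAngle, float binWidth)
        {
            int ix = blockIdx.x * blockDim.x + threadIdx.x;
            int iy = blockIdx.y * blockDim.y + threadIdx.y;
            if (ix >= nx || iy >= ny) return;

            float x = ix - 0.5f * nx;   // pixel centre, unit spacing
            float y = iy - 0.5f * ny;
            float acc = 0.0f;
            for (int a = 0; a < nAngles; ++a) {
                float theta = a * dAngle;
                float t = x * cosf(theta) + y * sinf(theta);  // detector coordinate
                int bin = (int)floorf(t / binWidth) + nBins / 2;
                if (bin >= 0 && bin < nBins)
                    acc += sino[a * nBins + bin];   // nearest-neighbour lookup
            }
            image[iy * nx + ix] = acc * dAngle;     // weight by angular step
        }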

  9. Iterative Methods for MPC on Graphical Processing Units

    DEFF Research Database (Denmark)

    Gade-Nielsen, Nicolai Fog; Jørgensen, John Bagterp; Dammann, Bernd

    2012-01-01

    The high floating point performance and memory bandwidth of Graphical Processing Units (GPUs) makes them ideal for a large number of computations which often arise in scientific computing, such as matrix operations. GPUs achieve this performance by utilizing massive parallelism, which requires ree… as to avoid the use of dense matrices, which may be too large for the limited memory capacity of current graphics cards.

  10. Reflector antenna analysis using physical optics on Graphics Processing Units

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd

    2014-01-01

    The Physical Optics approximation is a widely used asymptotic method for calculating the scattering from electrically large bodies. It requires significant computational work and little memory, and is thus well suited for application on a Graphics Processing Unit. Here, we investigate the perform…

  11. Derivation of a macroscale formulation for a class of nonlinear partial differential equations

    International Nuclear Information System (INIS)

    Pantelis, G.

    1995-05-01

    A macroscale formulation is constructed from a system of partial differential equations which govern the microscale dependent variables. The construction is based upon the requirement that the solutions of the macroscale partial differential equations satisfy, in some approximate sense, the system of partial differential equations associated with the microscale. These results are restricted to the class of nonlinear partial differential equations which can be expressed as polynomials of the dependent variables and their partial derivatives up to second order. A linear approximation of transformations of second order contact manifolds is employed. 6 refs

  12. Bridging micro to macroscale fracture properties in highly heterogeneous brittle solids: weak pinning versus fingering

    Science.gov (United States)

    Vasoya, Manish; Lazarus, Véronique; Ponson, Laurent

    2016-10-01

    The effect of strong toughness heterogeneities on the macroscopic failure properties of brittle solids is investigated in the context of planar crack propagation. The basic mechanism at play is that the crack is locally slowed down or even trapped when encountering tougher material. The induced front deformation results in a selection of local toughness values that reflect at larger scale on the material resistance. To unravel this complexity and bridge micro to macroscale in failure of strongly heterogeneous media, we propose a homogenization procedure based on the introduction of two complementary macroscopic properties: An apparent toughness defined from the loading required to make the crack propagate and an effective fracture energy defined from the rate of energy released per unit area of crack advance. The relationship between these homogenized properties and the features of the local toughness map is computed using an iterative perturbation method. This approach is applied to a circular crack pinned by a periodic array of obstacles invariant in the radial direction, which gives rise to two distinct propagation regimes: A weak pinning regime where the crack maintains a stationary shape after reaching an equilibrium position and a fingering regime characterized by the continuous growth of localized regions of the fronts while the other parts remain trapped. Our approach successfully bridges micro to macroscopic failure properties in both cases and illustrates how small scale heterogeneities can drastically affect the overall failure response of brittle solids. On a broader perspective, we believe that our approach can be used as a powerful tool for the rational design of heterogeneous brittle solids and interfaces with tailored failure properties.
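
    Stated schematically in symbols (notation illustrative, not the authors' exact definitions), the two homogenized quantities are the critical remote load at which the pinned front advances, and the energy released per unit area of crack advance; only in the homogeneous limit do they collapse onto Irwin's relation:

        % Apparent toughness from the critical loading; effective fracture
        % energy from the energy balance over a cycle of front advance.
        K_{\mathrm{app}} = K^{c}_{\infty},
        \qquad
        G_{\mathrm{eff}} = \frac{\Delta E_{\mathrm{released}}}{\Delta A},
        \qquad
        G = \frac{K^{2}}{E'} \ \ \text{(homogeneous limit only)}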

  13. Quantum manifestation of systems on the macro-scale – the concept ...

    Indian Academy of Sciences (India)

    Keywords: transition amplitude; inelastic scattering; macro-scale quantum effects. [Only fragments of the abstract survive extraction: "…ingly large wavelength of ∼5 cm for typical parameters (electron energy ε ∼ 1 keV…" and "…and hence as the generator of the transition amplitude wave at its position."]

  14. Line-scan macro-scale Raman chemical imaging for authentication of powdered foods and ingredients

    Science.gov (United States)

    Adulteration and fraud for powdered foods and ingredients are rising food safety risks that threaten consumers’ health. In this study, a newly developed line-scan macro-scale Raman imaging system using a 5 W 785 nm line laser as excitation source was used to authenticate the food powders. The system...

  15. Macroscale patterns in body size of intertidal crustaceans provide insights on climate change effects

    Science.gov (United States)

    Dugan, Jenifer E.; Hubbard, David M.; Contreras, Heraldo; Duarte, Cristian; Acuña, Emilio; Schoeman, David S.

    2017-01-01

    Predicting responses of coastal ecosystems to altered sea surface temperatures (SST) associated with global climate change requires knowledge of demographic responses of individual species. Body size is an excellent metric because it scales strongly with growth and fecundity for many ectotherms. These attributes can underpin demographic as well as community and ecosystem level processes, providing valuable insights for responses of vulnerable coastal ecosystems to changing climate. We investigated contemporary macroscale patterns in body size among widely distributed crustaceans that comprise the majority of intertidal abundance and biomass of sandy beach ecosystems of the eastern Pacific coasts of Chile and California, USA. We focused on ecologically important species representing different tidal zones, trophic guilds and developmental modes, including a high-shore macroalga-consuming talitrid amphipod (Orchestoidea tuberculata), two mid-shore scavenging cirolanid isopods (Excirolana braziliensis and E. hirsuticauda), and a low-shore suspension-feeding hippid crab (Emerita analoga) with an amphitropical distribution. Significant latitudinal patterns in body sizes were observed for all species in Chile (21°-42°S), with similar but steeper patterns in Emerita analoga in California (32°-41°N). Sea surface temperature was a strong predictor of body size (-4% to -35% °C⁻¹) in all species. Beach characteristics were subsidiary predictors of body size. Alterations in ocean temperatures of even a few degrees associated with global climate change are likely to affect body sizes of important intertidal ectotherms, with consequences for population demography, life history, community structure, trophic interactions, food-webs, and indirect effects such as ecosystem function. The consistency of results for body size and temperature across species with different life histories, feeding modes, ecological roles, and microhabitats inhabiting a single widespread coastal…

  16. Molecular and macro-scale analysis of enzyme-crosslinked silk hydrogels for rational biomaterial design.

    Science.gov (United States)

    McGill, Meghan; Coburn, Jeannine M; Partlow, Benjamin P; Mu, Xuan; Kaplan, David L

    2017-11-01

    Silk fibroin-based hydrogels have exciting applications in tissue engineering and therapeutic molecule delivery; however, their utility is dependent on their diffusive properties. The present study describes a molecular and macro-scale investigation of enzymatically-crosslinked silk fibroin hydrogels, and demonstrates that these systems have tunable crosslink density and diffusivity. We developed a liquid chromatography tandem mass spectroscopy (LC-MS/MS) method to assess the quantity and order of covalent tyrosine crosslinks in the hydrogels. This analysis revealed between 28 and 56% conversion of tyrosine to dityrosine, which was dependent on the silk concentration and reactant concentration. The crosslink density was then correlated with storage modulus, revealing that both crosslinking and protein concentration influenced the mechanical properties of the hydrogels. The diffusive properties of the bulk material were studied by fluorescence recovery after photobleaching (FRAP), which revealed a non-linear relationship between silk concentration and diffusivity. As a result of this work, a model for synthesizing hydrogels with known crosslink densities and diffusive properties has been established, enabling the rational design of silk hydrogels for biomedical applications. Hydrogels from naturally-derived silk polymers offer versatile opportunities in the biomedical field; however, their design has largely been an empirical process. We present a fundamental study of the crosslink density, storage modulus, and diffusion behavior of enzymatically-crosslinked silk hydrogels to better inform scaffold design. These studies revealed unexpected non-linear trends in the crosslink density and diffusivity of silk hydrogels with respect to protein concentration and crosslink reagent concentration. This work demonstrates the tunable diffusivity and crosslinking in silk fibroin hydrogels, and enables the rational design of biomaterials. Further, the characterization methods…

  17. Scale up risk of developing oil shale processing units

    International Nuclear Information System (INIS)

    Oepik, I.

    1991-01-01

    The experiences in oil shale processing in three large countries, China, the U.S.A. and the U.S.S.R., have demonstrated that the relative scale up risk of developing oil shale processing units is related to the scale up factor. On the background of large programmes for developing the oil shale industry branch, i.e. the $30 billion investments in Colorado and Utah or 50 million t/year oil shale processing in Estonia and Leningrad Region planned in the late seventies, the absolute scope of the scale up risk of developing single retorting plants seems to be justified. But under the conditions of low crude oil prices, when the large-scale development of oil shale processing industry is stopped, the absolute scope of the scale up risk is to be divided between a small number of units. Therefore, it is reasonable to build the new commercial oil shale processing plants with a minimum scale up risk. For example, in Estonia a new oil shale processing plant with gas combustion retorts projected to start in the early nineties will be equipped with four units of 1500 t/day enriched oil shale throughput each, designed with scale up factor M=1.5 and with a minimum scale up risk, only r=2.5-4.5%. The oil shale retorting unit for the PAMA plant in Israel [1] is planned to develop in three steps, also with minimum scale up risk: feasibility studies in Colorado with Israel's shale at Paraho 250 t/day retort and other tests, demonstration retort of 700 t/day and M=2.8 in Israel, and commercial retorts in the early nineties with the capacity of about 1000 t/day with M=1.4. The scale up risk of the PAMA project r=2-4% is approximately the same as that in Estonia. The knowledge of the scope of the scale up risk of developing oil shale processing retorts assists in the calculation of production costs in erecting new units. (author). 9 refs., 2 tabs

  18. Ising Processing Units: Potential and Challenges for Discrete Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Coffrin, Carleton James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nagarajan, Harsha [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bent, Russell Whitford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-05

    The recent emergence of novel computational devices, such as adiabatic quantum computers, CMOS annealers, and optical parametric oscillators, presents new opportunities for hybrid-optimization algorithms that leverage these kinds of specialized hardware. In this work, we propose the idea of an Ising processing unit as a computational abstraction for these emerging tools. Challenges involved in using and benchmarking these devices are presented, and open-source software tools are proposed to address some of these challenges. The proposed benchmarking tools and methodology are demonstrated by conducting a baseline study of established solution methods to a D-Wave 2X adiabatic quantum computer, one example of a commercially available Ising processing unit.
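
    The abstraction rests on one objective function. In the usual convention (signs and normalizations vary between devices), an Ising processing unit returns low-energy configurations of binary spins under a problem-specific coupling matrix and field vector:

        % Ising objective over spins sigma_i in {-1,+1}; J_ij are the
        % pairwise couplings on graph edges E, h_i the local fields.
        \min_{\sigma \in \{-1,+1\}^{N}} \; H(\sigma)
          = \sum_{(i,j) \in E} J_{ij}\,\sigma_i \sigma_j
          + \sum_{i=1}^{N} h_i\,\sigma_i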

  19. Product- and Process Units in the CRITT Translation Process Research Database

    DEFF Research Database (Denmark)

    Carl, Michael

    The first version of the "Translation Process Research Database" (TPR DB v1.0) was released in August 2012, containing logging data of more than 400 translation and text production sessions. The current version of the TPR DB (v1.4) contains data from more than 940 sessions, which represents more than 300 hours of text production. The database provides the raw logging data, as well as tables of pre-processed product- and processing units. The TPR-DB includes various types of simple and composed product and process units that are intended to support the analysis and modelling of human text…

  20. Tomography system having an ultrahigh speed processing unit

    International Nuclear Information System (INIS)

    Cox, J.P. Jr.; Gerth, V.W. Jr.

    1977-01-01

    A transverse section tomography system has an ultrahigh-speed data processing unit for performing back projection and updating. An x-ray scanner directs x-ray beams through a planar section of a subject from a sequence of orientations and positions. The scanner includes a movably supported radiation detector for detecting the intensity of the beams of radiation after they pass through the subject

  1. A Block-Asynchronous Relaxation Method for Graphics Processing Units

    OpenAIRE

    Anzt, H.; Dongarra, J.; Heuveline, Vincent; Tomov, S.

    2011-01-01

    In this paper, we analyze the potential of asynchronous relaxation methods on Graphics Processing Units (GPUs). For this purpose, we developed a set of asynchronous iteration algorithms in CUDA and compared them with a parallel implementation of synchronous relaxation methods on CPU-based systems. For a set of test matrices taken from the University of Florida Matrix Collection we monitor the convergence behavior, the average iteration time and the total time-to-solution time. Analyzing the r...
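
    As a hedged sketch of the idea (illustrative names, not the authors' CUDA code), an asynchronous relaxation sweep updates each row of x in place from whatever neighbour values happen to be visible, trading the synchronization of classical Jacobi for tolerance of stale reads:

        // One Jacobi-type relaxation sweep for A x = b, matrix in CSR form.
        // In a block-asynchronous scheme this update runs in place: threads
        // read whatever neighbour values are currently visible, so some
        // reads are stale, which these methods tolerate by design.
        __global__ void asyncJacobiSweep(const int* rowPtr, const int* colIdx,
                                         const float* val, const float* diag,
                                         const float* b, float* x, int nRows)
        {
            int row = blockIdx.x * blockDim.x + threadIdx.x;
            if (row >= nRows) return;
            float sigma = 0.0f;
            for (int k = rowPtr[row]; k < rowPtr[row + 1]; ++k)
                if (colIdx[k] != row)
                    sigma += val[k] * x[colIdx[k]];   // possibly stale values
            x[row] = (b[row] - sigma) / diag[row];    // in-place update
        }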

  2. Accelerating Malware Detection via a Graphics Processing Unit

    Science.gov (United States)

    2010-09-01

    [Only fragments of this record's text survive extraction:] …operating systems for the future [Szo05]. The PE format is an updated version of the common object file format (COFF) [Mic06]. Microsoft released a new… …These alerts can be costly in terms of time and resources for individuals and organizations to investigate each misidentified file [YWL07] [Vak10…

  3. Accelerating Molecular Dynamic Simulation on Graphics Processing Units

    Science.gov (United States)

    Friedrichs, Mark S.; Eastman, Peter; Vaidyanathan, Vishal; Houston, Mike; Legrand, Scott; Beberg, Adam L.; Ensign, Daniel L.; Bruns, Christopher M.; Pande, Vijay S.

    2009-01-01

    We describe a complete implementation of all-atom protein molecular dynamics running entirely on a graphics processing unit (GPU), including all standard force field terms, integration, constraints, and implicit solvent. We discuss the design of our algorithms and important optimizations needed to fully take advantage of a GPU. We evaluate its performance, and show that it can be more than 700 times faster than a conventional implementation running on a single CPU core. PMID:19191337

  4. Graphics processing unit based computation for NDE applications

    Science.gov (United States)

    Nahas, C. A.; Rajagopal, Prabhu; Balasubramaniam, Krishnan; Krishnamurthy, C. V.

    2012-05-01

    Advances in parallel processing in recent years are helping to improve the cost of numerical simulation. Breakthroughs in Graphical Processing Unit (GPU) based computation now offer the prospect of further drastic improvements. The introduction of 'compute unified device architecture' (CUDA) by NVIDIA (the global technology company based in Santa Clara, California, USA) has made programming GPUs for general purpose computing accessible to the average programmer. Here we use CUDA to develop parallel finite difference schemes as applicable to two problems of interest to the NDE community, namely heat diffusion and elastic wave propagation. The implementations are two-dimensional. Performance improvement of the GPU implementation against serial CPU implementation is then discussed.
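
    For the heat diffusion problem the structure of such a scheme is compact enough to sketch (a minimal illustration under assumed names, not the authors' code): one thread per interior grid node applies the explicit 5-point update every time step.

        // One explicit finite-difference step of the 2-D heat equation
        // u_t = alpha * (u_xx + u_yy) on a uniform grid of spacing h.
        __global__ void heatStep(const float* u, float* uNew,
                                 int nx, int ny, float alpha, float dt, float h)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            int j = blockIdx.y * blockDim.y + threadIdx.y;
            if (i < 1 || i > nx - 2 || j < 1 || j > ny - 2) return;  // interior only
            int idx = j * nx + i;
            float lap = (u[idx - 1] + u[idx + 1] + u[idx - nx] + u[idx + nx]
                         - 4.0f * u[idx]) / (h * h);
            uNew[idx] = u[idx] + alpha * dt * lap;  // stable if alpha*dt/h^2 <= 1/4
        }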

  5. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphic Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the different features of different memories, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases the efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
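
    A minimal global-memory form of the kernel conveys the parallel decomposition (a sketch only; the paper's improved scheme additionally stages tiles in shared memory): each thread forms the 4-neighbour Laplacian at one pixel and subtracts a weighted copy of it from the original value.

        // Per-pixel Laplacian sharpening: out = in - w * Laplacian(in),
        // using the 4-neighbour stencil; one thread per interior pixel.
        __global__ void laplacianSharpen(const unsigned char* in, unsigned char* out,
                                         int width, int height, float w)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x < 1 || x > width - 2 || y < 1 || y > height - 2) return;
            int idx = y * width + x;
            float lap = in[idx - 1] + in[idx + 1]
                      + in[idx - width] + in[idx + width]
                      - 4.0f * in[idx];
            float v = in[idx] - w * lap;              // subtracting sharpens edges
            out[idx] = (unsigned char)fminf(fmaxf(v, 0.0f), 255.0f);  // clamp to 8 bit
        }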

  6. Controllable unit concept as applied to a hypothetical tritium process

    International Nuclear Information System (INIS)

    Seabaugh, P.W.; Sellers, D.E.; Woltermann, H.A.; Boh, D.R.; Miles, J.C.; Fushimi, F.C.

    1976-01-01

    A methodology (controllable unit accountability) is described that identifies controlling errors for corrective action, locates areas and time frames of suspected diversions, defines time and sensitivity limits of diversion flags, defines the time frame in which pass-through quantities of accountable material and by inference SNM remain controllable and provides a basis for identification of incremental cost associated with purely safeguards considerations. The concept provides a rationale from which measurement variability and specific safeguard criteria can be converted into a numerical value that represents the degree of control or improvement attainable with a specific measurement system or combination of systems. Currently the methodology is being applied to a high-throughput, mixed-oxide fuel fabrication process. The process described is merely used to illustrate a procedure that can be applied to other more pertinent processes

  7. Development of interface technology between unit processes in E-Refining process

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S. H.; Lee, H. S.; Kim, J. G. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    The pyroprocess is composed mainly of four subprocesses: an electrolytic reduction, an electrorefining, an electrowinning, and a waste salt regeneration/solidification process. The electrorefining process, one of the main processes composing the pyroprocess and used to recover the useful elements from spent fuel, is under development by the Korea Atomic Energy Research Institute as a subprocess of the pyrochemical treatment of spent PWR fuel. The CERS (Continuous ElectroRefining System) is composed of unit processes such as an electrorefiner, a salt distiller, a melting furnace for the U-ingot, and a U-chlorinator (UCl{sub 3} making equipment), as shown in Fig. 1. In this study, the interface technology between unit processes in the E-Refining system is investigated and developed for the establishment of an integrated E-Refining operation system as a part of integrated pyroprocessing.

  8. Use of general purpose graphics processing units with MODFLOW

    Science.gov (United States)

    Hughes, Joseph D.; White, Jeremy T.

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
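
    With compressed sparse row storage, the recurring cost inside each conjugate gradient iteration is the sparse matrix-vector product. A minimal CUDA sketch of that operation (illustrative, not the UPCG solver's actual kernel) assigns one thread per matrix row:

        // CSR sparse matrix-vector product y = A*x, one thread per row.
        // rowPtr has nRows+1 entries; colIdx/val hold the nonzeros.
        __global__ void csrSpMV(const int* rowPtr, const int* colIdx,
                                const double* val, const double* x, double* y,
                                int nRows)
        {
            int row = blockIdx.x * blockDim.x + threadIdx.x;
            if (row >= nRows) return;
            double sum = 0.0;
            for (int k = rowPtr[row]; k < rowPtr[row + 1]; ++k)
                sum += val[k] * x[colIdx[k]];
            y[row] = sum;
        }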

  9. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.

  10. Graphics Processing Unit Accelerated Hirsch-Fye Quantum Monte Carlo

    Science.gov (United States)

    Moore, Conrad; Abu Asal, Sameer; Rajagoplan, Kaushik; Poliakoff, David; Caprino, Joseph; Tomko, Karen; Thakur, Bhupender; Yang, Shuxiang; Moreno, Juana; Jarrell, Mark

    2012-02-01

    In Dynamical Mean Field Theory and its cluster extensions, such as the Dynamic Cluster Algorithm, the bottleneck of the algorithm is solving the self-consistency equations with an impurity solver. Hirsch-Fye Quantum Monte Carlo is one of the most commonly used impurity and cluster solvers. This work implements optimizations of the algorithm, such as enabling large data re-use, suitable for the Graphics Processing Unit (GPU) architecture. The GPU's sheer number of concurrent parallel computations and large bandwidth to many shared memories take advantage of the inherent parallelism in the Green function update and measurement routines, and can substantially improve the efficiency of the Hirsch-Fye impurity solver.
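
    The Green function update in question is, in the textbook formulation of Hirsch-Fye, a rank-1 Dyson update after each accepted auxiliary-spin flip, which is what exposes the parallelism. A hedged sketch (conventions and names illustrative, and possibly different from the authors' code) that snapshots row and column l beforehand to avoid in-place read-write races:

        // Rank-1 Dyson update after an accepted flip at time slice l:
        //   G_ij <- G_ij + (a/R) * (G_il - delta_il) * G_lj
        // colL and rowL are snapshots of column l and row l of G taken
        // before the launch, so the in-place update cannot race with reads.
        __global__ void hfRank1Update(double* G, const double* colL,
                                      const double* rowL, int n, int l,
                                      double aOverR)
        {
            int j = blockIdx.x * blockDim.x + threadIdx.x;
            int i = blockIdx.y * blockDim.y + threadIdx.y;
            if (i >= n || j >= n) return;
            double gil = colL[i] - (i == l ? 1.0 : 0.0);
            G[i * n + j] += aOverR * gil * rowL[j];
        }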

  11. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Full Text Available Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware display a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  12. Congestion estimation technique in the optical network unit registration process.

    Science.gov (United States)

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.

  13. Macroscale and microscale fracture toughness of microporous sintered Ag for applications in power electronic devices

    International Nuclear Information System (INIS)

    Chen, Chuantong; Nagao, Shijo; Suganuma, Katsuaki; Jiu, Jinting; Sugahara, Tohru; Zhang, Hao; Iwashige, Tomohito; Sugiura, Kazuhiko; Tsuruta, Kazuhiro

    2017-01-01

    The application of microporous sintered silver (Ag) as a bonding material to replace conventional die-bonding materials in power electronic devices has attracted considerable interest. Characterization of the mechanical properties of microporous Ag will enable its use in applications such as lead-free solder electronics and provide a fundamental understanding of its design principles. However, the material typically suffers from thermal and mechanical stress during its production, fabrication, and service. In this work, we have studied the effect of microporous Ag specimen size on fracture toughness from the microscale to the macroscale. A focused ion beam was used to fabricate 20-, 10- and 5-μm-wide microscale specimens, which were of the same order of magnitude as the pore networks in the microporous Ag. Micro-cantilever bending tests revealed that fracture toughness decreased as the specimen size decreased. Conventional middle-cracked tensile tests were performed to determine the fracture toughness of the macroscale specimens. The microscale and macroscale fracture toughness results showed a clear size effect, which is discussed in terms of both the deformation behavior of the crack tip and the influence of pore networks within Ag with different specimen sizes. Finite element model simulations showed that stress at the crack tip increased as the specimen size increased, which led to larger plastic deformation and more energy being consumed when the specimen fractured.

  14. Coal conversion process by the United Power Plants of Westphalia

    Energy Technology Data Exchange (ETDEWEB)

    1974-08-01

    The coal conversion process used by the United Power Plants of Westphalia and its possible applications are described. In this process, the crushed and predried coal is degassed and partly gasified in a gas generator, during which time the sulfur present in the coal is converted into hydrogen sulfide, which together with the carbon dioxide is subsequently washed out and possibly utilized or marketed. The residual coke together with the ashes and tar is then sent to the melting chamber of the steam generator where the ashes are removed. After desulfurization, the purified gas is fed into an external circuit and/or to a gas turbine for electricity generation. The raw gas from the gas generator can be directly used as fuel in a conventional power plant. The calorific value of the purified gas varies from 3200 to 3500 kcal/cu m. The purified gas can be used as reducing agent, heating gas, as raw material for various chemical processes, or be conveyed via pipelines to remote areas for electricity generation. The conversion process has the advantages of increased economy of electricity generation with desulfurization, of additional gas generation, and, in long-term prospects, of the use of the waste heat from high-temperature nuclear reactors for this process.

  15. Conceptual Design for the Pilot-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J.; Meier, David E.; Tingey, Joel M.; Casella, Amanda J.; Delegard, Calvin H.; Edwards, Matthew K.; Jones, Susan A.; Rapko, Brian M.

    2014-08-05

    This report describes a conceptual design for a pilot-scale capability to produce plutonium oxide for use as exercise and reference materials, and for use in identifying and validating nuclear forensics signatures associated with plutonium production. This capability is referred to as the Pilot-scale Plutonium oxide Processing Unit (P3U), and it will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including plutonium dioxide (PuO2) dissolution, purification of the Pu by ion exchange, precipitation, and conversion to oxide by calcination.

  16. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
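
    The running example is easy to make concrete. A minimal CUDA kernel for the all-pairs Euclidean distance, written here as a sketch in the article's spirit (without the coalescing and shared-memory tiling it goes on to discuss), assigns one thread to each (i, j) pair:

        // All-pairs Euclidean distance: n instances of dimension d stored
        // row-major in 'data'; dist is the n x n output matrix.
        __global__ void allPairs(const float* data, float* dist, int n, int d)
        {
            int i = blockIdx.y * blockDim.y + threadIdx.y;
            int j = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n || j >= n) return;
            float s = 0.0f;
            for (int k = 0; k < d; ++k) {
                float diff = data[i * d + k] - data[j * d + k];
                s += diff * diff;
            }
            dist[i * n + j] = sqrtf(s);
        }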

  17. Graphics processing units in bioinformatics, computational biology and systems biology.

    Science.gov (United States)

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2017-09-01

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.

  18. The First Prototype for the FastTracker Processing Unit

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in the ATLAS events up to Phase II instantaneous luminosities (5×10³⁴ cm⁻² s⁻¹) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low consumption units for the far future, essential to increase efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generall…

  19. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for general phased array radar on NVIDIA GPUs (Graphical Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open source libraries and optimized for the NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance benchmarks in computation time with various input data cube sizes are compared across GPUs and CPUs. Through the analysis, it will be demonstrated that GPGPU (general-purpose GPU) real-time processing of the array radar data is possible with relatively low-cost commercial GPUs.
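
    Much of such a chain reduces to batched transforms. A hedged sketch of the cuFFT usage pattern involved (sizes and the wrapper name are illustrative): a single plan executes one FFT per pulse across the whole data cube.

        // Batched complex-to-complex FFT across range gates, the building
        // block of pulse compression / Doppler processing on the GPU.
        #include <cufft.h>

        void batchedFFT(cufftComplex* d_data, int nfft, int batch)
        {
            cufftHandle plan;
            cufftPlan1d(&plan, nfft, CUFFT_C2C, batch);          // one plan, many pulses
            cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);   // in-place transform
            cufftDestroy(plan);
        }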

  20. Minimization of entropy production in separate and connected process units

    Energy Technology Data Exchange (ETDEWEB)

    Roesjorde, Audun

    2004-08-01

    The objective of this thesis was to further develop a methodology for minimizing the entropy production of single and connected chemical process units. When chemical process equipment is designed and operated at the lowest entropy production possible, the energy efficiency of the equipment is enhanced. We have found for single process units that the entropy production could be reduced by up to 20-40%, given the degrees of freedom in the optimization. In processes, our results indicated that even bigger reductions were possible. The states of minimum entropy production were studied and important parameters for obtaining significant reductions in the entropy production were identified. From both sustainability and economic viewpoints, knowledge of energy-efficient design and operation is important. In some of the systems we studied, nonequilibrium thermodynamics was used to model the entropy production. In Chapter 2, we gave a brief introduction to different industrial applications of nonequilibrium thermodynamics. The link between local transport phenomena and overall system description makes nonequilibrium thermodynamics a useful tool for understanding design of chemical process units. We developed the methodology of minimization of entropy production in several steps. First, we analyzed and optimized the entropy production of single units: two alternative concepts of adiabatic distillation, diabatic and heat-integrated distillation, were analyzed and optimized in Chapters 3 to 5. In diabatic distillation, heat exchange is allowed along the column, and it is this feature that increases the energy efficiency of the distillation column. In Chapter 3, we found how a given area of heat transfer should be optimally distributed among the trays in a column separating a mixture of propylene and propane. The results showed that heat exchange was most important on the trays close to the reboiler and condenser. In Chapters 4 and 5, we studied how the entropy…
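
    The objective in these optimizations can be written schematically (symbols illustrative rather than the thesis's exact notation): the total entropy production of a diabatic column is a sum over trays of flux-force products, minimized over the tray heat duties at a fixed separation:

        % Total entropy production as a sum over trays n of the heat flux
        % J_{q,n} times its conjugate force, the difference in inverse
        % temperature across the heat-transfer film; minimized subject to
        % a fixed product specification.
        \frac{\mathrm{d}S_{\mathrm{irr}}}{\mathrm{d}t}
          \;=\; \sum_{n=1}^{N} J_{q,n}\,
                \Delta_{n}\!\left(\frac{1}{T}\right)
          \;\to\; \min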

  1. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000×1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from the norm when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
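
    The divisible-by-16 behaviour is what one expects from 16×16 shared-memory tiling. A generic tiled multiply (a sketch, not the authors' kernel) shows the fast path together with the boundary guards that cost extra work whenever a dimension is not a multiple of the tile size:

        // Classic shared-memory tiled matrix multiply C = A*B for n x n
        // matrices; the bounds checks below only do real work when n is
        // not a multiple of TILE, which is where the slowdown comes from.
        #define TILE 16
        __global__ void matMulTiled(const float* A, const float* B, float* C, int n)
        {
            __shared__ float As[TILE][TILE];
            __shared__ float Bs[TILE][TILE];
            int row = blockIdx.y * TILE + threadIdx.y;
            int col = blockIdx.x * TILE + threadIdx.x;
            float acc = 0.0f;
            for (int t = 0; t < (n + TILE - 1) / TILE; ++t) {
                int ak = t * TILE + threadIdx.x;   // column of A to load
                int bk = t * TILE + threadIdx.y;   // row of B to load
                As[threadIdx.y][threadIdx.x] = (row < n && ak < n) ? A[row * n + ak] : 0.0f;
                Bs[threadIdx.y][threadIdx.x] = (bk < n && col < n) ? B[bk * n + col] : 0.0f;
                __syncthreads();
                for (int k = 0; k < TILE; ++k)
                    acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
                __syncthreads();
            }
            if (row < n && col < n) C[row * n + col] = acc;
        }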

  2. Gas-centrifuge unit and centrifugal process for isotope separation

    International Nuclear Information System (INIS)

    Stark, T.M.

    1979-01-01

    An invention involving a process and apparatus for isotope-separation applications such as uranium-isotope enrichment is disclosed which employs cascades of gas centrifuges. A preferred apparatus relates to an isotope-enrichment unit which includes a first group of cascades of gas centrifuges and an auxiliary cascade. Each cascade has an input, a light-fraction output, and a heavy-fraction output for separating a gaseous-mixture feed including a compound of a light nuclear isotope and a compound of a heavy nuclear isotope into light and heavy fractions respectively enriched and depleted in the light isotope. The cascades of the first group have at least one enriching stage and at least one stripping stage. The unit further includes means for introducing a gaseous-mixture feedstock into each input of the first group of cascades, means for withdrawing at least a portion of a product fraction from the light-fraction outputs of the first group of cascades, and means for withdrawing at least a portion of a waste fraction from the heavy-fraction outputs of the first group of cascades. The isotope-enrichment unit also includes a means for conveying a gaseous-mixture from a light-fraction output of a first cascade included in the first group to the input of the auxiliary cascade so that at least a portion of a light gaseous-mixture fraction produced by the first group of cascades is further separated into a light and a heavy fraction by the auxiliary cascade. At least a portion of a product fraction is withdrawn from the light fraction output of the auxiliary cascade. If the light-fraction output of the first cascade and the heavy-fraction output of the auxiliary cascade are reciprocal outputs, the concentration of the light isotope in the heavy fraction produced by the auxiliary cascade essentially equals the concentration of the light isotope in the gaseous-mixture feedstock.

  3. Dynamic wavefront creation for processing units using a hybrid compactor

    Energy Technology Data Exchange (ETDEWEB)

    Puthoor, Sooraj; Beckmann, Bradford M.; Yudanov, Dmitri

    2018-02-20

    A method, a non-transitory computer readable medium, and a processor for repacking dynamic wavefronts during program code execution on a processing unit, each dynamic wavefront including multiple threads are presented. If a branch instruction is detected, a determination is made whether all wavefronts following a same control path in the program code have reached a compaction point, which is the branch instruction. If no branch instruction is detected in executing the program code, a determination is made whether all wavefronts following the same control path have reached a reconvergence point, which is a beginning of a program code segment to be executed by both a taken branch and a not taken branch from a previous branch instruction. The dynamic wavefronts are repacked with all threads that follow the same control path, if all wavefronts following the same control path have reached the branch instruction or the reconvergence point.

  4. Integrating post-Newtonian equations on graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Herrmann, Frank; Tiglio, Manuel [Department of Physics, Center for Fundamental Physics, and Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Silberholz, John [Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Bellone, Matias [Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Cordoba 5000 (Argentina); Guerberoff, Gustavo, E-mail: tiglio@umd.ed [Facultad de Ingenieria, Instituto de Matematica y Estadistica ' Prof. Ing. Rafael Laguardia' , Universidad de la Republica, Montevideo (Uruguay)

    2010-02-07

    We report on early results of a numerical and statistical study of binary black hole inspirals. The two black holes are evolved using post-Newtonian approximations starting with initially randomly distributed spin vectors. We characterize certain aspects of the distribution shortly before merger. In particular we note the uniform distribution of black hole spin vector dot products shortly before merger and a high correlation between the initial and final black hole spin vector dot products in the equal-mass, maximally spinning case. More than 300 million simulations were performed on graphics processing units, and we demonstrate a speed-up of a factor 50 over a more conventional CPU implementation. (fast track communication)
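
    The highlighted statistic, the dot product of the two spin vectors, is a natural one-thread-per-binary kernel. A minimal sketch under an assumed layout (xyz triplets per black hole; names illustrative):

        // Dot product of the two black-hole spin vectors for each of nSim
        // independent binaries; s1 and s2 hold xyz triplets per binary.
        __global__ void spinDots(const float* s1, const float* s2,
                                 float* dots, int nSim)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= nSim) return;
            dots[i] = s1[3*i]     * s2[3*i]
                    + s1[3*i + 1] * s2[3*i + 1]
                    + s1[3*i + 2] * s2[3*i + 2];
        }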

  5. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tamascelli, Dario; Dambrosio, Francesco Saverio [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, 20133 Milano (Italy); Conte, Riccardo [Department of Chemistry and Cherry L. Emerson Center for Scientific Computation, Emory University, Atlanta, Georgia 30322 (United States); Ceotto, Michele, E-mail: michele.ceotto@unimi.it [Dipartimento di Chimica, Università degli Studi di Milano, via Golgi 19, 20133 Milano (Italy)

    2014-05-07

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20), respectively, versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant and semiclassical GPU calculations are shown to be environment friendly.

  6. Graphics Processing Unit Enhanced Parallel Document Flocking Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; ST Charles, Jesse Lee [ORNL

    2010-01-01

    Analyzing and clustering documents is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. One limitation of this method of document clustering is its complexity O(n²). As the number of documents grows, it becomes increasingly difficult to generate results in a reasonable amount of time. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. In this paper, we have conducted research to exploit this architecture and apply its strengths to the flocking based document clustering problem. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GEFORCE GPU. Performance gains ranged from thirty-six to nearly sixty times improvement of the GPU over the CPU implementation.

  7. From bentonite powder to engineered barrier units - an industrial process

    International Nuclear Information System (INIS)

    Gatabin, Claude; Guyot, Jean-Luc; Resnikow, Serge; Bosgiraud, Jean-Michel; Londe, Louis; Seidler, Wolf

    2008-01-01

    In the framework of the ESDRED Project, a consortium, called GME, dealt with the study and development of all required industrial processes for the fabrication of scale-1 buffer rings and discs, as well as all related means for transporting and handling the rings, the assembly in 4-unit sets, the packaging of buffer-ring assemblies, and all associated procedures. In 2006, a 100-t mould was built in order to compact in a few hours 12 rings and two discs measuring 2.3 m in diameter and 0.5 m in height, and weighing 4 t each. The ring-handling, assembly and transport means were tested successfully in 2007. (author)

  8. Spatial variation in nutrient and water color effects on lake chlorophyll at macroscales

    Science.gov (United States)

    Fergus, C. Emi; Finley, Andrew O.; Soranno, Patricia A.; Wagner, Tyler

    2016-01-01

    …positive effect such that a unit increase in water color resulted in a 2 μg/L increase in CHL and other locations where it had a negative effect such that a unit increase in water color resulted in a 2 μg/L decrease in CHL. In addition, the spatial scales that captured variation in TP and water color effects were different for our study lakes. Variation in TP–CHL relationships was observed at intermediate distances (~20 km) compared to variation in water color–CHL relationships that was observed at regional distances (~200 km). These results demonstrate that there are lake-to-lake differences in the effects of TP and water color on lake CHL and that this variation is spatially structured. Quantifying spatial structure in these relationships furthers our understanding of the variability in these relationships at macroscales and would improve model prediction of chlorophyll a to better meet lake management goals.
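
    A spatially varying coefficient model of the kind described can be written schematically as follows (symbols illustrative; the study's exact specification is not reproduced here), with the TP and water-color slopes allowed to drift across space at their own characteristic ranges:

        % Chlorophyll at lake location s as a function of total phosphorus
        % and water color, with location-specific intercept and slopes
        % modelled as spatial random processes (e.g., Gaussian processes).
        \mathrm{CHL}(s) \;=\; \beta_{0}(s)
          \;+\; \beta_{1}(s)\,\mathrm{TP}(s)
          \;+\; \beta_{2}(s)\,\mathrm{color}(s)
          \;+\; \varepsilon(s)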

  9. Implementation and adaptation of a macro-scale methodology to calculate direct economic losses

    Science.gov (United States)

    Natho, Stephanie; Thieken, Annegret

    2017-04-01

    As one of the 195 member countries of the United Nations, Germany signed the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR). With this, though voluntary and non-binding, Germany agreed to report on achievements to reduce disaster impacts. Among other targets, the SFDRR aims at reducing direct economic losses in relation to the global gross domestic product by 2030 - but how to measure this without a standardized approach? The United Nations Office for Disaster Risk Reduction (UNISDR) has hence proposed a methodology to estimate direct economic losses per event and country on the basis of the number of damaged or destroyed items in different sectors. The method is based on experiences from developing countries. However, its applicability in industrial countries has not been investigated so far. Therefore, this study presents the first implementation of this approach in Germany to test its applicability for the costliest natural hazards and suggests adaptations. The approach proposed by UNISDR considers assets in the sectors agriculture, industry, commerce, housing, and infrastructure by considering roads, medical and educational facilities. The asset values are estimated on the basis of sector- and event-specific numbers of affected items, sector-specific mean sizes per item, their standardized construction costs per square meter, and a loss ratio of 25%. The methodology was tested for the three costliest natural hazard types in Germany, i.e. floods, storms and hail storms, considering 13 case studies on the federal or state scale between 1984 and 2016. No complete calculation of all the sectors necessary to describe the total direct economic loss was possible, due to incomplete documentation. Therefore, the method was tested sector-wise. Three new modules were developed to better adapt this methodology to German conditions, covering private transport (cars), forestry, and paved roads. Unpaved roads, in contrast, were integrated into the agricultural and…

  10. The ATLAS Fast TracKer Processing Units

    CERN Document Server

    Krizka, Karol; The ATLAS collaboration

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction by the time the High Level Trigger starts. The Fast Tracker can process incoming data from the whole inner detector at the full first-level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first set comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card; the second set is the Second Stage board. The associative memories perform the pattern matching, looking for correlations within the incoming data compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application-specific integrated circuits, called associative memory chips. The Auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a…

  11. The ATLAS Fast Tracker Processing Units - track finding and fitting

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00384270; The ATLAS collaboration; Alison, John; Ancu, Lucian Stefan; Andreani, Alessandro; Annovi, Alberto; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Bogdan, Mircea Arghir; Bryant, Patrick; Calabro, Domenico; Citraro, Saverio; Crescioli, Francesco; Dell'Orso, Mauro; Donati, Simone; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna; Greco, Virginia; Horyn, Lesya Anna; Iovene, Alessandro; Kalaitzidis, Panagiotis; Kim, Young-Kee; Kimura, Naoki; Kordas, Kostantinos; Kubota, Takashi; Lanza, Agostino; Liberali, Valentino; Luciano, Pierluigi; Magnin, Betty; Sakellariou, Andreas; Sampsonidis, Dimitrios; Saxon, James; Shojaii, Seyed Ruhollah; Sotiropoulou, Calliope Louisa; Stabile, Alberto; Swiatlowski, Maximilian; Volpi, Guido; Zou, Rui; Shochet, Mel

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction by the start of the High Level Trigger. The Fast Tracker can process incoming data from the whole inner detector at the full first-level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first set comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card; the second set is the Second Stage board. The associative memories perform pattern matching, looking for correlations within the incoming data that are compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application-specific integrated circuits, called associative memory chips. The Auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a...

  12. Beowulf Distributed Processing and the United States Geological Survey

    Science.gov (United States)

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing
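
    As a toy illustration of the distributed-processing idea described above (not USGS code), the following sketch splits a processor-intensive per-cell calculation across worker processes, much as a Beowulf cluster splits it across nodes:

    ```python
    # Toy illustration only: distribute an expensive per-cell calculation
    # over worker processes, analogous to spreading raster cells of a study
    # area over cluster nodes.
    from multiprocessing import Pool

    def process_cell(cell):
        # stand-in for an expensive per-cell simulation step
        x = float(cell)
        for _ in range(10_000):
            x = (x * x + 1.0) % 97.0
        return x

    if __name__ == "__main__":
        cells = range(1_000)          # stand-in for the cells of a study area
        with Pool() as pool:          # one worker per available core
            results = pool.map(process_cell, cells, chunksize=50)
        print(len(results), "cells processed")
    ```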

  13. Assessment of Process Capability: the case of Soft Drinks Processing Unit

    Science.gov (United States)

    Sri Yogi, Kottala

    2018-03-01

    Process capability studies have a significant impact in investigating process variation, which is important for achieving product quality characteristics. Capability indices measure the inherent variability of a process and thus help to improve process performance. The main objective of this paper is to assess whether the output of a soft drinks processing unit, one of the premier brands marketed in India, is produced within specification. A few selected critical parameters in soft drinks processing were considered for this study: gas volume concentration, brix concentration, and crock torque. Relevant statistical parameters were assessed from a process capability indices perspective: short-term capability and long-term capability. For the assessment we used real-time data from a soft drinks bottling company located in the state of Chhattisgarh, India. The analysis suggested reasons for variations in the process, which were validated using ANOVA; a Taguchi cost function was also fitted, and the predicted waste was assessed in monetary terms for use by the organization in improving process parameters. This research work has substantially benefited the organization in understanding the variation of the selected critical parameters with the aim of achieving zero rejection.
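
    A minimal sketch of the short-term capability indices referred to above (Cp and Cpk); the specification limits and sample data are hypothetical:

    ```python
    # Minimal sketch of the capability indices discussed above (Cp, Cpk).
    # Specification limits and sample data are hypothetical.
    import numpy as np

    def capability(samples, lsl, usl):
        mu, sigma = np.mean(samples), np.std(samples, ddof=1)
        cp  = (usl - lsl) / (6 * sigma)                # potential capability
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)    # accounts for centering
        return cp, cpk

    # e.g. brix concentration with spec limits 10.2-10.8 (hypothetical)
    rng = np.random.default_rng(0)
    brix = rng.normal(10.55, 0.08, size=125)
    cp, cpk = capability(brix, lsl=10.2, usl=10.8)
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
    ```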

  14. A novel low-power fluxgate sensor using a macroscale optimisation technique for space physics instrumentation

    Science.gov (United States)

    Dekoulis, G.; Honary, F.

    2007-05-01

    This paper describes the design of a novel low-power single-axis fluxgate sensor. Several soft magnetic alloy materials have been considered, and the choice was based on the balance between maximum permeability and minimum saturation flux density. The sensor has been modelled using the Finite Integration Theory (FIT) method. The sensor was subjected to a custom macroscale optimisation technique that reduced the power consumption by a factor of 16. The results of the sensor's optimisation will subsequently be used in the development of a cutting-edge ground-based magnetometer for the study of the complex solar wind-magnetosphere-ionosphere system.

  15. Monte Carlo MP2 on Many Graphical Processing Units.

    Science.gov (United States)

    Doran, Alexander E; Hirata, So

    2016-10-11

    In the Monte Carlo second-order many-body perturbation (MC-MP2) method, the long sum-of-product matrix expression of the MP2 energy, whose literal evaluation may be poorly scalable, is recast into a single high-dimensional integral of functions of electron pair coordinates, which is evaluated by the scalable method of Monte Carlo integration. The sampling efficiency is further accelerated by the redundant-walker algorithm, which allows a maximal reuse of electron pairs. Here, a multitude of graphical processing units (GPUs) offers a uniquely ideal platform to expose multilevel parallelism: fine-grain data-parallelism for the redundant-walker algorithm in which millions of threads compute and share orbital amplitudes on each GPU; coarse-grain instruction-parallelism for near-independent Monte Carlo integrations on many GPUs with few and infrequent interprocessor communications. While the efficiency boost by the redundant-walker algorithm on central processing units (CPUs) grows linearly with the number of electron pairs and tends to saturate when the latter exceeds the number of orbitals, on a GPU it grows quadratically before it increases linearly and then eventually saturates at a much larger number of pairs. This is because the orbital constructions are nearly perfectly parallelized on a GPU and thus completed in a near-constant time regardless of the number of pairs. In consequence, an MC-MP2/cc-pVDZ calculation of a benzene dimer is 2700 times faster on 256 GPUs (using 2048 electron pairs) than on two CPUs, each with 8 cores (which can use only up to 256 pairs effectively). We also numerically determine that the cost to achieve a given relative statistical uncertainty in an MC-MP2 energy increases as O(n^3) or better with system size n, which may be compared with the O(n^5) scaling of the conventional implementation of deterministic MP2. We thus establish the scalability of MC-MP2 with both system and computer sizes.
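
    The redundant-walker idea can be sketched on a toy integrand (this is not the MP2 integrand): a single batch of m walkers yields m(m-1)/2 electron-pair samples, so the per-walker work is reused across many pairs:

    ```python
    # Sketch of the redundant-walker idea on a toy pair function (not the
    # MP2 integrand): m walkers give m*(m-1)/2 pair samples, so per-walker
    # quantities (the "orbital amplitudes") are shared across many pairs.
    import numpy as np

    rng = np.random.default_rng(1)
    m, dim = 64, 3                        # walkers and coordinate dimension

    def pair_integrand(a, b):
        # toy symmetric pair function standing in for the MP2 pair integrand
        return np.exp(-np.sum((a - b) ** 2, axis=-1))

    walkers = rng.normal(size=(m, dim))   # one batch of walker positions
    i, j = np.triu_indices(m, k=1)        # all m*(m-1)/2 distinct pairs
    estimate = pair_integrand(walkers[i], walkers[j]).mean()
    print(f"pair-averaged estimate from one batch: {estimate:.4f}")
    ```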

  16. 32 CFR 516.12 - Service of civil process outside the United States.

    Science.gov (United States)

    2010-07-01

    32 National Defense 3 (2010-07-01 edition). Aid of Civil Authorities and Public Relations; Litigation; Service of Process. § 516.12 Service of civil process outside the United States. (a) Process of foreign courts. In foreign countries service of process...

  17. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Background: The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results: This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high-end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion: MUMmerGPU is a low-cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.

  18. The United States nuclear regulatory commission license renewal process

    International Nuclear Information System (INIS)

    Holian, B.E.

    2009-01-01

    The United States (U.S.) Nuclear Regulatory Commission (NRC) license renewal process establishes the technical and administrative requirements for the renewal of operating power plant licenses. Reactor operating licenses were originally issued for 40 years and are allowed to be renewed. The review process for license renewal applications (LRAs) provides continued assurance that the level of safety provided by an applicant's current licensing basis is maintained for the period of extended operation. The license renewal review focuses on passive, long-lived structures and components of the plant that are subject to the effects of aging. The applicant must demonstrate that programs are in place to manage those aging effects. The review also verifies that analyses based on the current operating term have been evaluated and shown to be valid for the period of extended operation. The NRC has renewed the licenses for 52 reactors at 30 plant sites. Each applicant requested, and was granted, an extension of 20 years. Applications to renew the licenses of 20 additional reactors at 13 plant sites are under review. As license renewal is voluntary, the decision to seek license renewal and the timing of the application is made by the licensee. However, the NRC expects that, over time, essentially all U.S. operating reactors will request license renewal. In 2009, the U.S. has 4 plants that enter their 41st year of operation. The U.S. Nuclear Industry has expressed interest in 'life beyond 60', that is, requesting approval of a second renewal period. U.S. regulations allow for subsequent license renewals. The NRC is working with the U.S. Department of Energy (DOE) on research related to light water reactor sustainability. (author)

  19. Towards a Unified Sentiment Lexicon Based on Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Liliana Ibeth Barbosa-Santillán

    2014-01-01

    This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). This approach aims at aligning, unifying, and expanding the set of sentiment lexicons available on the web in order to increase their robustness of coverage. One problem in automatically unifying the scores of different sentiment lexicons is that there are multiple lexical entries whose classification as positive, negative, or neutral {P, N, Z} depends on the unit of measurement used in the annotation methodology of the source lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and -1, where 1 indicates that the entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required to compute all the lexical entries in the unification task. The USL approach therefore computes a subset of lexical entries on each of the 1344 GPU cores, using parallel processing to unify 155,802 lexical entries. The results of the analysis conducted using the USL approach show that the USL has 95,430 lexical entries, of which 35,201 are considered positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for 95,430 lexical entries, a threefold reduction in the computing time of the UnifiedMetrics procedure.
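
    A sketch of the correlation step described above: Pearson's r between the scores two source lexicons assign to their shared entries, followed by a naive averaging unification; the lexicon contents are hypothetical:

    ```python
    # Sketch of the Pearson-correlation step described above; the two
    # source lexicons here are tiny hypothetical examples.
    import numpy as np

    lex_a = {"good": 0.8, "bad": -0.7, "okay": 0.1, "awful": -0.9}
    lex_b = {"good": 0.9, "bad": -0.6, "okay": 0.0, "awful": -1.0}

    shared = sorted(set(lex_a) & set(lex_b))
    a = np.array([lex_a[w] for w in shared])
    b = np.array([lex_b[w] for w in shared])

    r = np.corrcoef(a, b)[0, 1]      # 1 = perfectly correlated, -1 = inverse
    unified = {w: (lex_a[w] + lex_b[w]) / 2 for w in shared}  # naive unification
    print(f"Pearson r = {r:.3f}; unified scores: {unified}")
    ```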

  20. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed

    2012-08-20

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.
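
    As a generic illustration of steps (ii)-(iii), offloading a hot spot to the GPU while minimizing host-device transfers (the actual VASP port is CUDA code inside the Fortran code base, not this), assuming a CUDA-capable GPU and the CuPy package:

    ```python
    # Generic GPU-offload illustration, not the VASP port itself: move data
    # to the device once, keep intermediates on the device, and copy back
    # once at the end to minimize host<->device memory traffic.
    import numpy as np
    import cupy as cp

    h = np.random.rand(2048, 2048).astype(np.float64)

    d = cp.asarray(h)            # one host->device transfer
    for _ in range(10):          # keep intermediate results on the device
        d = cp.matmul(d, d)
        d /= cp.linalg.norm(d)   # normalize to avoid overflow
    out = cp.asnumpy(d)          # one device->host transfer at the end
    print(out.shape)
    ```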

  1. Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.

    Science.gov (United States)

    Dematté, Lorenzo

    2012-01-01

    Space is a very important aspect in the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space has become more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transportation phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation can be time consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. In order to deliver on the promise made by systems biology to understand a system as a whole, we need to scale up the size of the models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely used algorithm for stochastic simulation of chemical reactions with spatial resolution and single-molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel for each molecule of the system. The implementation offers good speed-ups and real-time, high-quality graphics output.

  2. Accelerating cardiac bidomain simulations using graphics processing units.

    Science.gov (United States)

    Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G

    2012-08-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.
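
    The core numerical task named above, solving a large sparse FEM-style linear system iteratively, can be sketched as follows (not the CARP code; a 1-D Laplacian-like system stands in for the bidomain matrices):

    ```python
    # Not the CARP code: a minimal example of the core numerical task named
    # above, solving a large sparse SPD linear system with conjugate gradients.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    n = 10_000
    # symmetric positive-definite tridiagonal system (1-D Laplacian-like),
    # standing in for an FEM stiffness matrix on an unstructured grid
    A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    x, info = cg(A, b)           # info == 0 means the solver converged
    print("converged:", info == 0, "residual:", np.linalg.norm(A @ x - b))
    ```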

  3. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed; Anciaux-Sedrakian, Ani; Rozanska, Xavier; Klahr, Diego; Guignon, Thomas; Fleurat-Lessard, Paul

    2012-01-01

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  4. Remote Maintenance Design Guide for Compact Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Draper, J.V.

    2000-07-13

    Oak Ridge National Laboratory (ORNL) Robotics and Process Systems Division (RPSD) personnel have extensive experience working with remotely operated and maintained systems. These systems require expert knowledge in teleoperation, human factors, telerobotics, and other robotic devices so that remote equipment may be manipulated, operated, serviced, surveyed, and moved about in a hazardous environment. The RPSD staff has a wealth of experience in this area, including knowledge of the broad topics of human factors, modular electronics, modular mechanical systems, hardware design, and specialized tooling. Examples of projects that illustrate and highlight RPSD's unique experience in remote systems design and application include the following: (1) design of remote shear and remote dissolver systems in support of U.S. Department of Energy (DOE) fuel recycling research and nuclear power missions; (2) building remotely operated mobile systems for metrology and characterizing hazardous facilities in support of remote operations within those facilities; (3) construction of modular robotic arms, including the Laboratory Telerobotic Manipulator, which was designed for the National Aeronautics and Space Administration (NASA), and the Advanced ServoManipulator, which was designed for the DOE; (4) design of remotely operated laboratories, including chemical analysis and biochemical processing laboratories; (5) construction of remote systems for environmental cleanup and characterization, including underwater, buried waste, underground storage tank (UST) and decontamination and dismantlement (D&D) applications. Remote maintenance has played a significant role in fuel reprocessing because of combined chemical and radiological contamination. Furthermore, remote maintenance is expected to play a strong role in future waste remediation. The compact processing units (CPUs) being designed for use in underground waste storage tank remediation are examples of improvements in systems

  5. Macroscale porous carbonized polydopamine-modified cotton textile for application as electrode in microbial fuel cells

    Science.gov (United States)

    Zeng, Lizhen; Zhao, Shaofei; He, Miao

    2018-02-01

    The anode material is a crucial factor that significantly affects the cost and performance of microbial fuel cells (MFCs). In this study, a novel macroscale porous, biocompatible, highly conductive and low-cost electrode, a carbonized polydopamine-modified cotton textile (NC@CCT), is fabricated from cheap waste cotton textile via simple in situ polymerization and carbonization and used as the anode of an MFC. Physical and chemical characterization shows that the macroscale porous and biocompatible NC@CCT electrode is coated with nitrogen-doped carbon nanoparticles and offers a large specific surface area (888.67 m² g⁻¹) for bacterial cell growth, which greatly increases the loading of bacterial cells and facilitates extracellular electron transfer (EET). As a result, the MFC equipped with the NC@CCT anode achieves a maximum power density of 931 ± 61 mW m⁻², 80.5% higher than that of a commercial carbon felt anode (516 ± 27 mW m⁻²). Moreover, making full use of cheap waste cotton textiles can greatly reduce both the cost of MFCs and the associated environmental pollution.

  6. Environmental drivers defining linkages among life-history traits: mechanistic insights from a semiterrestrial amphipod subjected to macroscale gradients.

    Science.gov (United States)

    Gómez, Julio; Barboza, Francisco R; Defeo, Omar

    2013-10-01

    Determining the existence of interconnected responses among life-history traits and identifying the underlying environmental drivers are recognized as key goals for understanding the basis of phenotypic variability. We studied potentially interconnected responses among senescence, fecundity, embryo size, weight of brooding females, size at maturity and sex ratio in a semiterrestrial amphipod affected by macroscale gradients in beach morphodynamics and salinity. To this end, multiple modelling processes based on generalized additive mixed models were used to deal with the spatio-temporal structure of the data obtained at 10 beaches during 22 months. Salinity was the only nexus among life-history traits, suggesting that this physiological stressor influences the energy balance of the organisms. Different salinity scenarios determined shifts in the weight of brooding females and size at maturity, with consequences for the number and size of embryos, which in turn affected sex determination and sex ratio at the population level. Our work highlights the importance of analysing field data to find the variables and potential mechanisms that define concerted responses among traits, thereby defining life-history strategies.

  7. Undergraduate Game Degree Programs in the United Kingdom and United States: A Comparison of the Curriculum Planning Process

    Science.gov (United States)

    McGill, Monica M.

    2010-01-01

    Digital games are marketed, mass-produced, and consumed by an increasing number of people and the game industry is only expected to grow. In response, post-secondary institutions in the United Kingdom (UK) and the United States (US) have started to create game degree programs. Though curriculum theorists provide insight into the process of…

  8. Flocking-based Document Clustering on the Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; Patton, Robert M [ORNL; ST Charles, Jesse Lee [ORNL

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity, O(n^2). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, is highly parallel, and such algorithms have found increased performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3000 documents. The results of these tests were very significant. Performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.
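
    A toy sketch of one flocking step (not the ORNL CUDA implementation): each document drifts toward nearby similar documents and away from dissimilar ones; the similarity matrix is random stand-in data:

    ```python
    # Toy sketch of flocking-based clustering (not the ORNL CUDA code):
    # documents attract neighbors they are similar to and repel the rest.
    # Note the O(n^2) pairwise work per step mentioned in the abstract.
    import numpy as np

    rng = np.random.default_rng(2)
    n_docs, dims = 200, 2
    pos = rng.uniform(0, 100, size=(n_docs, dims))   # positions on the canvas
    sim = rng.uniform(0, 1, size=(n_docs, n_docs))   # stand-in for cosine similarity
    sim = (sim + sim.T) / 2                          # make it symmetric

    def flock_step(pos, sim, radius=10.0, step=0.5):
        new_pos = pos.copy()
        for i in range(len(pos)):
            d = pos - pos[i]
            dist = np.linalg.norm(d, axis=1)
            near = (dist > 0) & (dist < radius)
            if near.any():
                # attract toward similar neighbors, repel from dissimilar ones
                w = np.where(sim[i, near] > 0.5, 1.0, -1.0)[:, None]
                new_pos[i] += step * (w * d[near] / dist[near][:, None]).mean(axis=0)
        return new_pos

    for _ in range(100):
        pos = flock_step(pos, sim)
    print("final positions:", pos.shape)
    ```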

  9. 32 CFR 516.10 - Service of civil process within the United States.

    Science.gov (United States)

    2010-07-01

    32 National Defense 3 (2010-07-01 edition). Aid of Civil Authorities and Public Relations; Litigation; Service of Process. § 516.10 Service of civil process within the United States. (a) Policy. DA officials will not prevent or evade the service of process in...

  10. 40 CFR 63.765 - Glycol dehydration unit process vent standards.

    Science.gov (United States)

    2010-07-01

    40 Protection of Environment 10 (2010-07-01 edition). ... Facilities. § 63.765 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart with an actual annual average natural gas flowrate equal to or...

  11. 40 CFR 63.1275 - Glycol dehydration unit process vent standards.

    Science.gov (United States)

    2010-07-01

    40 Protection of Environment 11 (2010-07-01 edition). ... Facilities. § 63.1275 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart with an actual annual average natural gas flowrate equal to or...

  12. Design of the Laboratory-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Meier, David E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Casella, Amanda J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Delegard, Calvin H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Edwards, Matthew K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Orton, Robert D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rapko, Brian M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Smart, John E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report describes a design for a laboratory-scale capability to produce plutonium oxide (PuO2) for use in identifying and validating nuclear forensics signatures associated with plutonium production, as well as for use as exercise and reference materials. This capability will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including PuO2 dissolution, purification of the Pu by ion exchange, precipitation, and re-conversion to PuO2 by calcination.

  13. Deriving social relations among organizational units from process models

    NARCIS (Netherlands)

    Song, M.S.; Choi, I.; Kim, K.M.; Aalst, van der W.M.P.

    2008-01-01

    For companies to sustain competitive advantages, it is required to redesign and improve business processes continuously by monitoring and analyzing process enactment results. Furthermore, organizational structures must be redesigned according to the changes in business processes. However, there are

  14. Reproducibility of Mammography Units, Film Processing and Quality Imaging

    International Nuclear Information System (INIS)

    Gaona, Enrique

    2003-01-01

    The purpose of this study was to carry out an exploratory survey of quality control problems in mammography units and film processors as a diagnosis of the current situation of mammography facilities. Measurements of reproducibility, optical density, optical difference and gamma index are included. Breast cancer is the most frequently diagnosed cancer and the second leading cause of cancer death among women in the Mexican Republic. Mammography is a radiographic examination specially designed for detecting breast pathology. We found that the reproducibility problems of the AEC (automatic exposure control) are smaller than those of the processor units, because almost all processors fall outside the acceptable variation limits and can affect mammography image quality and the dose to the breast. Only four mammography units met the minimum score established by the ACR and FDA for the phantom image.

  15. Co-occurrence of Photochemical and Microbiological Transformation Processes in Open-Water Unit Process Wetlands.

    Science.gov (United States)

    Prasse, Carsten; Wenk, Jannis; Jasper, Justin T; Ternes, Thomas A; Sedlak, David L

    2015-12-15

    The fate of anthropogenic trace organic contaminants in surface waters can be complex due to the occurrence of multiple parallel and consecutive transformation processes. In this study, the removal of five antiviral drugs (abacavir, acyclovir, emtricitabine, lamivudine and zidovudine) via both bio- and phototransformation processes was investigated in laboratory microcosm experiments simulating an open-water unit process wetland receiving municipal wastewater effluent. Phototransformation was the main removal mechanism for abacavir, zidovudine, and emtricitabine, with half-lives (t1/2,photo) in wetland water of 1.6, 7.6, and 25 h, respectively. In contrast, removal of acyclovir and lamivudine was mainly attributable to slower microbial processes (t1/2,bio = 74 and 120 h, respectively). Identification of transformation products revealed that bio- and phototransformation reactions took place at different moieties. For abacavir and zidovudine, rapid transformation was attributable to the high reactivity of the cyclopropylamine and azido moieties, respectively. Despite substantial differences in the kinetics of the different antiviral drugs, biotransformation reactions mainly involved oxidation of hydroxyl groups to the corresponding carboxylic acids. Phototransformation rates of parent antiviral drugs and their biotransformation products were similar, indicating that prior exposure to microorganisms (e.g., in a wastewater treatment plant or a vegetated wetland) would not affect the rate of transformation of the part of the molecule susceptible to phototransformation. However, phototransformation strongly affected the rates of biotransformation of the hydroxyl groups, which in some cases resulted in greater persistence of phototransformation products.

  16. Predator-prey interactions as macro-scale drivers of species diversity in mammals

    DEFF Research Database (Denmark)

    Sandom, Christopher James; Sandel, Brody Steven; Dalby, Lars

    Background/Question/Methods Understanding the importance of predator-prey interactions for species diversity is a central theme in ecology, with fundamental consequences for predicting the responses of ecosystems to land use and climate change. We assessed the relative support for different mechanistic drivers of mammal species richness at macro-scales for two trophic levels: predators and prey. To disentangle biotic (i.e. functional predator-prey interactions) from abiotic (i.e. environmental) and bottom-up from top-down determinants we considered three hypotheses: 1) environmental factors that determine ecosystem productivity drive prey and predator richness (the productivity hypothesis, abiotic, bottom-up), 2) consumer richness is driven by resource diversity (the resource diversity hypothesis, biotic, bottom-up) and 3) consumers drive richness of their prey (the top-down hypothesis, biotic, top-down)

  17. 32 CFR 516.9 - Service of criminal process within the United States.

    Science.gov (United States)

    2010-07-01

    32 National Defense 3 (2010-07-01 edition). Aid of Civil Authorities and Public Relations; Litigation; Service of Process. § 516.9 Service of criminal process within the United States. (a) Surrender of personnel. Guidance for surrender of military personnel...

  18. Effect of energetic dissipation processes on the friction unit tribological

    Directory of Open Access Journals (Sweden)

    Moving V. V.

    2007-01-01

    This article presents the influence of temperature on the rheological and friction coefficients of cast-iron friction-unit elements. It was found that the surface layer formed at friction temperature has good wear resistance, resulting from structural hardening of the surface layer and its capacity for stress relaxation.

  19. Processing United Nations Documents in the University of Michigan Library.

    Science.gov (United States)

    Stolper, Gertrude

    This guide provides detailed instructions for recording documents in the United Nations (UN) card catalog which provides access to the UN depository collection in the Harlan Hatcher Graduate Library at the University of Michigan. Procedures for handling documents when they are received include stamping, counting, and sorting into five categories:…

  20. Comparing SMAP to Macro-scale and Hyper-resolution Land Surface Models over Continental U. S.

    Science.gov (United States)

    Pan, Ming; Cai, Xitian; Chaney, Nathaniel; Wood, Eric

    2016-04-01

    SMAP sensors collect moisture information in the top soil at spatial resolutions of ~40 km (radiometer) and ~1 to 3 km (radar, before its failure in July 2015). Such information is extremely valuable for understanding various terrestrial hydrologic processes and their implications for human life. At the same time, soil moisture is the joint consequence of numerous physical processes (precipitation, temperature, radiation, topography, crop/vegetation dynamics, soil properties, etc.) that occur at a wide range of scales, from tens of kilometers down to tens of meters. Therefore, a full and thorough analysis/exploration of SMAP data products calls for investigations at multiple spatial scales - from regional, to catchment, and to field scales. Here we first compare the SMAP retrievals to Variable Infiltration Capacity (VIC) macro-scale land surface model simulations over the continental U.S. at 3 km resolution. The forcing inputs to the model are merged/downscaled from a suite of the best available data products, including the NLDAS-2 forcing, Stage IV and Stage II precipitation, GOES Surface and Insolation Products, and fine elevation data. The near-real-time VIC simulation is intended to provide a source of large-scale comparisons at the active sensor resolution. Beyond the VIC model scale, we perform comparisons at 30 m resolution against the recently developed HydroBlocks hyper-resolution land surface model over several densely gauged USDA experimental watersheds. Comparisons are also made against in situ point-scale observations from various SMAP Cal/Val and field campaign sites.

  1. On the hazard rate process for imperfectly monitored multi-unit systems

    International Nuclear Information System (INIS)

    Barros, A.; Berenguer, C.; Grall, A.

    2005-01-01

    The aim of this paper is to present a stochastic model to characterize the failure distribution of multi-unit systems when the current state of the units is imperfectly monitored. The definition of the hazard rate process that exists under perfect monitoring is extended to the realistic case where unit failure times are not always detected (non-detection events). The observed hazard rate process defined in this way gives a better representation of the system behavior than the classical failure rate calculated without any information on the units' state, and better than the hazard rate process based on perfect monitoring information. The quality of this representation is, however, conditioned by the monotonicity property of the process. This problem is discussed and illustrated on a practical example (two parallel units). The results obtained motivate the use of the observed hazard rate process to characterize the stochastic behavior of multi-unit systems and to optimize, for example, preventive maintenance policies
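
    A Monte Carlo sketch of an "observed" hazard rate under imperfect monitoring, for an assumed toy setup (two parallel exponential units, each failure detected at its occurrence only with probability p), not the paper's exact model:

    ```python
    # Toy Monte Carlo estimate of an observed hazard rate under imperfect
    # monitoring (assumed setup, not the paper's model): unit failures are
    # detected only with probability p, so the monitor may believe a failed
    # unit is still up.
    import numpy as np

    rng = np.random.default_rng(3)
    lam, p, n = 1.0, 0.7, 200_000
    t, dt = 1.0, 0.05

    tf = rng.exponential(1 / lam, size=(n, 2))    # unit failure times
    detected = rng.random((n, 2)) < p             # failure detected when it occurs
    sys_fail = tf.max(axis=1)                     # parallel system: last failure

    # histories in which nothing has been *observed* failed by time t
    no_obs = ~((tf <= t) & detected).any(axis=1)
    fails_next = no_obs & (sys_fail > t) & (sys_fail <= t + dt)

    h_obs = fails_next.sum() / no_obs.sum() / dt
    print(f"observed hazard at t={t}: {h_obs:.3f}")
    ```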

  2. On the hazard rate process for imperfectly monitored multi-unit systems

    Energy Technology Data Exchange (ETDEWEB)

    Barros, A. [Institut des Sciences et Technologies de l'Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)]. E-mail: anne.barros@utt.fr; Berenguer, C. [Institut des Sciences et Technologies de l'Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France); Grall, A. [Institut des Sciences et Technologies de l'Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)

    2005-12-01

    The aim of this paper is to present a stochastic model to characterize the failure distribution of multi-unit systems when the current state of the units is imperfectly monitored. The definition of the hazard rate process that exists under perfect monitoring is extended to the realistic case where unit failure times are not always detected (non-detection events). The observed hazard rate process defined in this way gives a better representation of the system behavior than the classical failure rate calculated without any information on the units' state, and better than the hazard rate process based on perfect monitoring information. The quality of this representation is, however, conditioned by the monotonicity property of the process. This problem is discussed and illustrated on a practical example (two parallel units). The results obtained motivate the use of the observed hazard rate process to characterize the stochastic behavior of multi-unit systems and to optimize, for example, preventive maintenance policies.

  3. Nondestructive chemical imaging of wood at the micro-scale: advanced technology to complement macro-scale evaluations

    Science.gov (United States)

    Barbara L. Illman; Julia Sedlmair; Miriam Unger; Carol Hirschmugl

    2013-01-01

    Chemical images help understanding of wood properties, durability, and cell wall deconstruction for conversion of lignocellulose to biofuels, nanocellulose and other value added chemicals in forest biorefineries. We describe here a new method for nondestructive chemical imaging of wood and wood-based materials at the micro-scale to complement macro-scale methods based...

  4. Modelling PM10 aerosol data from the Qalabotjha low-smoke fuels macro-scale experiment in South Africa

    CSIR Research Space (South Africa)

    Engelbrecht, JP

    2000-03-30

    Low-smoke fuels for combustion in cooking and heating appliances are being considered to mitigate human exposure to D-grade coal combustion emissions. In 1997, South Africa's Department of Minerals and Energy conducted a macro-scale experiment to test three brands of low...

  5. Influence of Bubble-Bubble interactions on the macroscale circulation patterns in a bubbling gas-solid fluidized bed

    NARCIS (Netherlands)

    Laverman, J.A.; van Sint Annaland, M.; Kuipers, J.A.M.

    2007-01-01

    The macro-scale circulation patterns in the emulsion phase of a gas-solid fluidized bed in the bubbling regime have been studied with a 3D Discrete Bubble Model. It has been shown that bubble-bubble interactions strongly influence the extent of the solids circulation and the bubble size

  6. Control system design specification of advanced spent fuel management process units

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, S. H.; Kim, S. H.; Yoon, J. S

    2003-06-01

    In this study, the design specifications of the instrumentation and control system for advanced spent fuel management process units are presented. The advanced spent fuel management process consists of several process units, such as the slitting device, dry pulverizing/mixing device, and metallizer. The control and operation characteristics of the advanced spent fuel management mock-up process devices and of the process devices developed in 2001 and 2002 are analysed. An integrated processing system for the unit process control signals is proposed, which improves operational efficiency, and a redundant PLC control system is constructed, which improves reliability. A control scheme is proposed for time-delayed systems that compensates for the control performance degradation caused by time delay (one classic compensation scheme is sketched below). The control system design specification is presented for the advanced spent fuel management process units; these design specifications can be used effectively for the detailed design of the advanced spent fuel management process.
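
    The report does not specify its delay-compensation scheme; as one classic possibility, a Smith predictor wrapped around a PI controller can be sketched as follows, with plant and tuning values invented for illustration:

    ```python
    # Hypothetical sketch of a Smith predictor with PI control for a
    # first-order plant with dead time; the report's actual scheme is not
    # specified, and all parameter values here are invented.
    import numpy as np

    Ts, tau, K, d = 0.1, 1.0, 1.0, 10       # sample time, plant, 1 s dead time
    a, b = np.exp(-Ts / tau), K * (1 - np.exp(-Ts / tau))
    kp, ki = 2.0, 1.0

    y = ym = ymd = integ = 0.0              # plant, model, delayed-model states
    ubuf = [0.0] * d                        # dead-time buffer for the plant
    mbuf = [0.0] * d                        # dead-time buffer for the model
    r = 1.0                                 # unit setpoint

    for k in range(300):
        y_pred = ym + (y - ymd)             # Smith predictor: delay-free estimate
        e = r - y_pred
        integ += ki * e * Ts
        u = kp * e + integ

        y = a * y + b * ubuf.pop(0); ubuf.append(u)      # real plant (with delay)
        ym = a * ym + b * u                              # model without delay
        ymd = a * ymd + b * mbuf.pop(0); mbuf.append(u)  # model with delay

    print(f"output after {300 * Ts:.0f} s: {y:.3f} (setpoint {r})")
    ```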

  7. A Ten-Step Process for Developing Teaching Units

    Science.gov (United States)

    Butler, Geoffrey; Heslup, Simon; Kurth, Lara

    2015-01-01

    Curriculum design and implementation can be a daunting process. Questions quickly arise, such as who is qualified to design the curriculum and how do these people begin the design process. According to Graves (2008), in many contexts the design of the curriculum and the implementation of the curricular product are considered to be two mutually…

  8. Modelling of meso- and macroscale river basins - a workshop held at Lauenburg [Modellierung in meso- bis makroskaligen Flusseinzugsgebieten - Tagungsband zum gleichnamigen Workshop]

    Energy Technology Data Exchange (ETDEWEB)

    Sutmoeller, J.; Raschke, E. (eds.) [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Atmosphaerenphysik

    2001-07-01

    During the past decade, measuring and modelling the global and regional processes that exchange energy and water in the Earth's climate system became a focus of hydrological and meteorological research. Besides climate research, many other applications will gain from this effort, such as weather forecasting, water management and agriculture. As large-scale weather and climate applications diversify into water-related issues such as water resources, reservoir management, and flood and drought forecasting, hydrologists and meteorologists are challenged to work interdisciplinarily. The workshop 'Modelling of meso- and macroscale river basins' brought together various current aspects of this issue, ranging from coupled atmosphere-hydrology models to integrated river basin management and land-use change. Recent results are introduced and summarised in this report. (orig.)

  9. Nuclear safety inspection in treatment process for SG heat exchange tubes deficiency of unit 1, TNPS

    International Nuclear Information System (INIS)

    Zhang Chunming; Song Chenxiu; Zhao Pengyu; Hou Wei

    2006-01-01

    This paper describes the treatment process for the SG heat-exchange tube deficiency of Unit 1, TNPS, the nuclear safety inspection carried out by the Northern Regional Office during the treatment of the deficiency, and the further inspection performed after the deficiency had been treated. (authors)

  10. Application of ion-exchange unit in uranium extraction process in China (to be continued)

    International Nuclear Information System (INIS)

    Gong Chuanwen

    2004-01-01

    The application conditions of five different ion-exchange units in a uranium milling plant and in the wastewater treatment plant of a uranium mine in China are introduced, including working parameters, existing problems and improvements. The advantages and disadvantages of these units are reviewed briefly. The procedural points to be followed in selecting an ion-exchange unit in engineering design are recommended. Primary views are presented on the application prospects of some ion-exchange units in the uranium extraction process in China

  11. Materials Process Design Branch. Work Unit Directive (WUD) 54

    National Research Council Canada - National Science Library

    LeClair, Steve

    2002-01-01

    The objectives of the Manufacturing Research WUD 54 are to 1) conduct in-house research to develop advanced materials process design/control technologies to enable more repeatable and affordable manufacturing capabilities and 2...

  12. Standardization of the licensing process in the United States

    International Nuclear Information System (INIS)

    Villa, R.

    1986-01-01

    The paper discusses a major problem with the design review process for light water reactors. Major confusion exists over the design-basis requirements for a future nuclear power plant in the US. It is not at all clear how the conclusions of a severe accident review are to be integrated into the design approval process. The separation between a design-basis review and a severe accident review makes absolutely no sense if the severe accident review is to have an influence on the design. If an acceptable design is defined during the deterministic review, it is destructive to allow new design-basis requirements to appear during the probabilistic review. Clearly, the review process has too many undefined steps. It is believed that once all of the requirements are defined for a future design, and once the licensing process is exactly defined, the industry can begin a productive and successful standardization program

  13. From micro-scale 3D simulations to macro-scale model of periodic porous media

    Science.gov (United States)

    Crevacore, Eleonora; Tosco, Tiziana; Marchisio, Daniele; Sethi, Rajandrea; Messina, Francesca

    2015-04-01

    In environmental engineering, the transport of colloidal suspensions in porous media is studied to understand the fate of potentially harmful nano-particles and to design new remediation technologies. In this perspective, averaging techniques applied to micro-scale numerical simulations are a powerful tool to extrapolate accurate macro-scale models. Choosing two simplified packing configurations of soil grains and starting from a single elementary cell (module), it is possible to take advantage of the periodicity of the structures to reduce the computational cost of full 3D simulations. Steady-state flow simulations for an incompressible fluid in the laminar regime are implemented. Transport simulations are based on the pore-scale advection-diffusion equation, which can be enriched by introducing the Stokes velocity (to account for gravity) and the interception mechanism. Simulations are carried out on a domain composed of several elementary modules, which serve as control volumes in a finite volume method for the macro-scale model. The periodicity of the medium implies the periodicity of the flow field, which is of great importance during the up-scaling procedure, allowing relevant simplifications. Micro-scale numerical data are treated in order to compute the mean concentration (volume and area averages) and fluxes on each module. The simulation results are used to compare the micro-scale averaged equation to the integral form of the macroscopic one, making a distinction between those terms that can be computed exactly and those for which a closure is needed. Of particular interest is the investigation of the origin of macro-scale terms such as dispersion and tortuosity, and their description in terms of known micro-scale quantities. Traditionally, many simplifications are introduced in the study of colloidal transport, such as ultra-simplified geometries that account for a single collector. Gradual removal of such hypotheses leads to a
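
    The volume-averaging step described above can be sketched by collapsing a micro-scale field onto module-sized control volumes; the concentration field here is synthetic and the module size hypothetical:

    ```python
    # Sketch of the volume-averaging step described above: collapse a
    # micro-scale concentration field onto module-sized control volumes.
    # The field is synthetic; the module size is a hypothetical choice.
    import numpy as np

    rng = np.random.default_rng(4)
    c = rng.random((64, 64, 64))             # micro-scale concentration field
    m = 16                                   # module (elementary cell) size

    # reshape so each (m x m x m) module becomes one block, then average it
    blocks = c.reshape(64 // m, m, 64 // m, m, 64 // m, m)
    c_macro = blocks.mean(axis=(1, 3, 5))    # one mean value per module

    print(c_macro.shape)                     # (4, 4, 4) macro-scale field
    ```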

  14. Evidence of a sensory processing unit in the mammalian macula

    Science.gov (United States)

    Chimento, T. C.; Ross, M. D.

    1996-01-01

    We cut serial sections through the medial part of the rat vestibular macula for transmission electron microscopic (TEM) examination, computer-assisted 3-D reconstruction, and compartmental modeling. The ultrastructural research showed that many primary vestibular neurons have an unmyelinated segment, often branched, that extends between the heminode (putative site of the spike initiation zone) and the expanded terminal(s) (calyx, calyces). These segments, termed the neuron branches, and the calyces frequently have spine-like processes of various dimensions with bouton endings that morphologically are afferent, efferent, or reciprocal to other macular neural elements. The major questions posed by this study were whether small details of morphology, such as the size and location of neuronal processes or synapses, could influence the output of a vestibular afferent, and whether a knowledge of morphological details could guide the selection of values for simulation parameters. The conclusions from our simulations are (1) values of 5.0 kΩ·cm² for membrane resistivity and 1.0 nS for synaptic conductance yield simulations that best match published physiological results; (2) process morphology has little effect on orthodromic spread of depolarization from the head (bouton) to the spike initiation zone (SIZ); (3) process morphology has no effect on antidromic spread of depolarization to the process head; (4) synapses do not sum linearly; (5) synapses are electrically close to the SIZ; and (6) all whole-cell simulations should be run with an active SIZ.
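
    A single-compartment passive-membrane sketch using the two values the simulations settled on (membrane resistivity 5.0 kΩ·cm², synaptic conductance 1.0 nS); the specific capacitance, compartment area and reversal potentials are assumed for illustration:

    ```python
    # Passive single-compartment sketch using the study's two fitted values
    # (Rm = 5 kOhm*cm^2, g_syn = 1 nS); capacitance, area and reversal
    # potentials are assumptions made for illustration only.
    Rm   = 5.0e3        # ohm*cm^2, membrane resistivity (from the study)
    Cm   = 1.0e-6       # F/cm^2, assumed specific capacitance
    area = 1.0e-5       # cm^2, assumed compartment area
    g_syn, E_syn = 1.0e-9, 0.0     # 1 nS synapse, 0 V reversal (assumed)
    E_L, V = -70e-3, -70e-3        # leak reversal and initial potential (V)

    R = Rm / area                  # total membrane resistance (ohm)
    C = Cm * area                  # total capacitance (F)

    dt = 1e-5
    for step in range(int(0.02 / dt)):      # 20 ms; synapse on for first 5 ms
        g = g_syn if step * dt < 5e-3 else 0.0
        dV = (-(V - E_L) / R - g * (V - E_syn)) / C
        V += dV * dt

    print(f"membrane potential after 20 ms: {V * 1e3:.1f} mV")
    ```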

  15. Opportunities in the United States' gas processing industry

    International Nuclear Information System (INIS)

    Meyer, H.S.; Leppin, D.

    1997-01-01

    To keep up with the increasing amount of natural gas that will be required by the market and with the decreasing quality of the gas at the well-head, the gas processing industry must look to new technologies to stay competitive. The Gas Research Institute (GRI) is managing a research, development, design and deployment program that is projected to save the industry US $230 million/year in operating and capital costs from gas processing related activities in NGL extraction and recovery, dehydration, acid gas removal/sulfur recovery, and nitrogen rejection. Three technologies are addressed here. Multivariable Control (MVC) technology for predictive process control and optimization is installed or in design at fourteen facilities treating a combined total of over 30×10⁹ normal cubic meters per year (BNm³/y) [1.1×10¹² standard cubic feet per year (Tcf/y)]. Simple paybacks are typically under 6 months. A new acid gas removal process based on n-formyl morpholine (NFM) is being field tested that offers 40-50% savings in operating costs and 15-30% savings in capital costs relative to a commercially available physical solvent. The GRI-MemCalc™ Computer Program for Membrane Separations and the GRI-Scavenger CalcBase™ Computer Program for Scavenging Technologies are screening tools that engineers can use to determine the best practice for treating their gas. (author) 19 refs.

  16. Option pricing with COS method on Graphics Processing Units

    NARCIS (Netherlands)

    B. Zhang (Bo); C.W. Oosterlee (Kees)

    2009-01-01

    In this paper, acceleration on the GPU for option pricing by the COS method is demonstrated. In particular, both European and Bermudan options will be discussed in detail. For Bermudan options, we consider both the Black-Scholes model and Lévy processes of infinite activity. Moreover,
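
    A minimal sketch of the COS method for a European call under Black-Scholes (following the Fang-Oosterlee cosine expansion; the parameter values are examples, not from the paper):

    ```python
    # Minimal COS-method sketch (Fang & Oosterlee cosine expansion) for a
    # European call under Black-Scholes; parameters are example values.
    import numpy as np

    def cos_call(S0, K, T, r, sigma, N=256, L=10.0):
        c1, c2 = (r - 0.5 * sigma**2) * T, sigma**2 * T     # cumulants of ln(S_T/S0)
        a, b = c1 - L * np.sqrt(c2), c1 + L * np.sqrt(c2)   # truncation interval
        k = np.arange(N)
        u = k * np.pi / (b - a)

        # cosine coefficients of the call payoff K*(e^y - 1)^+ on [0, b]
        chi = ((-1.0)**k * np.exp(b) - np.cos(u * a) + u * np.sin(u * a)) / (1 + u**2)
        psi = np.empty(N)
        psi[0], psi[1:] = b, np.sin(u[1:] * a) / u[1:]
        Vk = 2.0 / (b - a) * K * (chi - psi)

        # Black-Scholes characteristic function of y = ln(S_T/K)
        x = np.log(S0 / K)
        phi = np.exp(1j * u * (x + c1) - 0.5 * c2 * u**2)

        w = np.ones(N); w[0] = 0.5                          # first term halved
        return np.exp(-r * T) * np.sum(w * np.real(phi * np.exp(-1j * u * a)) * Vk)

    print(f"{cos_call(100, 100, 1.0, 0.05, 0.2):.4f}")      # ~10.4506 (Black-Scholes)
    ```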

  17. Study of automatic boat loading unit and horizontal sintering process of uranium dioxide pellet

    International Nuclear Information System (INIS)

    He Zhongjing; Chen Yu; Yao Dengfeng; Wang Youliang; Shu Binhua; Wu Genjiu

    2014-01-01

    Sintering is a key process in the manufacture of nuclear fuel UO2 pellets. In our plant, a continuous high-temperature sintering furnace is used for the sintering process. During the sintering of green pellets, the furnace, the boat and the stacking arrangement can influence the quality of the final product. In this work, on the basis of earlier process research, the automatic boat loading unit and the horizontal sintering process are studied. The results show that the physical and chemical properties of the products manufactured with the automatic boat loading unit and the horizontal sintering process fully meet the technical requirements, and that the system is reliable and suitable for continuous operation. (authors)

  18. Developing maintenance technologies for FBR's heat exchanger units by advanced laser processing

    International Nuclear Information System (INIS)

    Nishimura, Akihiko; Shimada, Yukihiro

    2011-01-01

    Laser processing technologies were developed for the maintenance of FBR heat exchanger units. Ultrashort-pulse laser processing was used to fabricate fiber Bragg grating sensors for seismic monitoring. Fiber laser welding with a newly developed robot system repairs cracks on the inner wall of heat exchanger tubes. The safe operation of the heat exchanger units will be improved by these advanced laser processing technologies, which are expected to be applied to the maintenance of next-generation FBRs. (author)

  19. A new MHD/kinetic model for exploring energetic particle production in macro-scale systems

    Science.gov (United States)

    Drake, J. F.; Swisdak, M.; Dahlin, J. T.

    2017-12-01

    A novel MHD/kinetic model is being developed to explore magnetic reconnection and particle energization in macro-scale systems such as the solar corona and the outer heliosphere. The model blends the MHD description with a macro-particle description. The rationale for this model is based on the recent discovery that energetic particle production during magnetic reconnection is controlled by Fermi reflection and Betatron acceleration and not parallel electric fields. Since the former mechanisms are not dependent on kinetic scales such as the Debye length and the electron and ion inertial scales, a model that sheds these scales is sufficient for describing particle acceleration in macro-systems. Our MHD/kinetic model includes macroparticles laid out on an MHD grid that are evolved with the MHD fields. Crucially, the feedback of the energetic component on the MHD fluid is included in the dynamics. Thus, energy of the total system, the MHD fluid plus the energetic component, is conserved. The system has no kinetic scales and therefore can be implemented to model energetic particle production in macro-systems with none of the constraints associated with a PIC model. Tests of the new model in simple geometries will be presented and potential applications will be discussed.

  20. Delineating the Macroscale Areal Organization of the Macaque Cortex In Vivo

    Directory of Open Access Journals (Sweden)

    Ting Xu

    2018-04-01

    Full Text Available Summary: Complementing long-standing traditions centered on histology, fMRI approaches are rapidly maturing in delineating brain areal organization at the macroscale. The non-human primate (NHP) provides the opportunity to overcome critical barriers in translational research. Here, we establish the data requirements for achieving reproducible and internally valid parcellations in individuals. We demonstrate that functional boundaries serve as a functional fingerprint of the individual animals and can be achieved under anesthesia or awake conditions (rest, naturalistic viewing), though differences between awake and anesthetized states precluded the detection of individual differences across states. Comparison of awake and anesthetized states suggested a more nuanced picture of changes in connectivity for higher-order association areas, as well as visual and motor cortex. These results establish feasibility and data requirements for the generation of reproducible individual-specific parcellations in NHPs, provide insights into the impact of scan state, and motivate efforts toward harmonizing protocols. Noninvasive fMRI in macaques is an essential tool in translational research. Xu et al. establish the individual functional parcellation of the macaque cortex and demonstrate that brain organization is unique, reproducible, and valid, serving as a fingerprint for an individual macaque. Keywords: macaque, parcellation, cortical areas, gradient, functional connectivity

  1. COSTS AND PROFITABILITY IN FOOD PROCESSING: PASTRY TYPE UNITS

    Directory of Open Access Journals (Sweden)

    DUMITRANA MIHAELA

    2013-08-01

    Full Text Available For each company, profitability, product quality and customer satisfaction are the most important targets. To attain these targets, managers need to know all about the costs that are used in decision making. What kind of costs? How are these costs calculated for a specific sector such as food processing? These are only a few questions with answers in our paper. We consider that a case study for this sector may be relevant for all people interested in increasing the profitability of this specific activity sector.

  2. Analysis of Unit Process Cost for an Engineering-Scale Pyroprocess Facility Using a Process Costing Method in Korea

    Directory of Open Access Journals (Sweden)

    Sungki Kim

    2015-08-01

    Full Text Available Pyroprocessing, which is a dry recycling method, converts spent nuclear fuel into U (Uranium)/TRU (TRansUranium) metal ingots in a high-temperature molten salt phase. This paper provides the unit process cost of a pyroprocess facility that can process up to 10 tons of pyroprocessing product per year by utilizing the process costing method. Toward this end, the pyroprocess was classified into four kinds of unit processes: pretreatment, electrochemical reduction, electrorefining and electrowinning. The unit process cost was calculated by classifying the cost consumed at each process into raw material and conversion costs. The unit process costs of the pretreatment, electrochemical reduction, electrorefining and electrowinning were calculated as 195 US$/kgU-TRU, 310 US$/kgU-TRU, 215 US$/kgU-TRU and 231 US$/kgU-TRU, respectively. Finally, the total pyroprocess cost was calculated as 951 US$/kgU-TRU. In addition, the cost driver for the raw material cost was identified as the cost of Li3PO4, needed for the LiCl-KCl purification process, and platinum as an anode electrode in the electrochemical reduction process.
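
    The total reported above is simply the sum of the four unit process costs; as a quick arithmetic check:

$$195+310+215+231=951\ \text{US\$/kgU-TRU}$$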

  3. Testing a model of componential processing of multi-symbol numbers-evidence from measurement units.

    Science.gov (United States)

    Huber, Stefan; Bahnmueller, Julia; Klein, Elise; Moeller, Korbinian

    2015-10-01

    Research on numerical cognition has addressed the processing of nonsymbolic quantities and symbolic digits extensively. However, magnitude processing of measurement units is still a neglected topic in numerical cognition research. Hence, we investigated the processing of measurement units to evaluate whether typical effects of multi-digit number processing such as the compatibility effect, the string length congruity effect, and the distance effect are also present for measurement units. In three experiments, participants had to single out the larger one of two physical quantities (e.g., lengths). In Experiment 1, the compatibility of number and measurement unit (compatible: 3 mm_6 cm with 3 mm) as well as string length congruity (congruent: 1 m_2 km with m 2 characters) were manipulated. We observed reliable compatibility effects with prolonged reaction times (RT) for incompatible trials. Moreover, a string length congruity effect was present in RT with longer RT for incongruent trials. Experiments 2 and 3 served as control experiments showing that compatibility effects persist when controlling for holistic distance and that a distance effect for measurement units exists. Our findings indicate that numbers and measurement units are processed in a componential manner and thus highlight that processing characteristics of multi-digit numbers generalize to measurement units. Thereby, our data lend further support to the recently proposed generalized model of componential multi-symbol number processing.

  4. New algorithms and pulse-processing units in radioisotope instruments

    International Nuclear Information System (INIS)

    Antonjak, V.; Gonsjorowski, L.; Jastschuk, E.; Kwasnewski, T.

    1981-01-01

    Three new algorithms and the corresponding electronic circuits are described, beginning with the automatic gain stabilisation circuit for scintillation counters. The signal obtained as the difference between two pulse trains from amplitude discriminators has been used for photomultiplier high-voltage control. Furthermore, a real-time digital filter for random pulse trains is presented, showing that the variance of pulse trains decreases after passing through the filter. The block diagram, principle of operation and basic features of the filter are given. Finally, a digital circuit for polynomial linearization of the scale function in radioisotope instruments is described. Again, the block diagram of pulse train processing, the mode of operation and the programming method are given. (author)

  5. The Curriculum Planning Process for Undergraduate Game Degree Programs in the United Kingdom and United States

    Science.gov (United States)

    McGill, Monica M.

    2012-01-01

    Digital games are marketed, mass-produced, and consumed by an increasing number of people and the game industry is only expected to grow. In response, postsecondary institutions in the UK and the U.S. have started to create game degree programs. Though curriculum theorists provide insight into the process of creating a new program, no formal…

  6. Low cost solar array project production process and equipment task. A Module Experimental Process System Development Unit (MEPSDU)

    Science.gov (United States)

    1981-01-01

    Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.

  7. ENTREPRENEURIAL OPPORTUNITIES IN FOOD PROCESSING UNITS (WITH SPECIAL REFERENCES TO BYADGI RED CHILLI COLD STORAGE UNITS IN THE KARNATAKA STATE)

    Directory of Open Access Journals (Sweden)

    P. ISHWARA

    2010-01-01

    Full Text Available After the green revolution, we are now ushering in the evergreen revolution in the country; food processing is an evergreen activity and the key to the agricultural sector. In this paper an attempt has been made to study the workings of food processing units, with special reference to red chilli cold storage units in the Byadgi district of Karnataka State. Byadgi has been famous for red chilli since the days of antiquity. The vast and extensive market yard in Byadgi taluk is famous as the second largest red chilli dealing market in the country. However, the most common and recurring problem faced by the farmer is the inability to store enough red chilli from one harvest to another. Red chilli that was locally abundant for only a short period of time had to be stored against times of scarcity. In recent years, due to oleoresin, demand for red chilli has grown from other countries like Sri Lanka, Bangladesh, America, Europe, Nepal, Indonesia, Mexico etc. The study reveals that all the cold storage units of the study area have been using the vapour compression refrigeration method. All entrepreneurs are satisfied with their turnover and profit and they are in a good economic position. Even though average turnover and profits have increased, a few units have shown a negligible decrease in turnover and profit, due to competition from the increasing number of cold storages and earlier established units. The cold storages of the study area have been storing red chilli, chilli seeds, chilli powder, tamarind, jeera, dania, turmeric, sunflower, ginger, channa, flower seeds etc. About 80 per cent of each cold storage is filled with red chilli, owing to the existence of the vast and extensive red chilli market yard in Byadgi. There is no business without problems. In the same way, the entrepreneurs chosen for the study are facing a few problems in their business like skilled labour, technical and management

  8. A FPGA-based signal processing unit for a GEM array detector

    International Nuclear Information System (INIS)

    Yen, W.W.; Chou, H.P.

    2013-06-01

    In the present study, a signal processing unit for a GEM one-dimensional array detector is presented to measure the trajectory of photoelectrons produced by cosmic X-rays. The present GEM array detector system has 16 signal channels. The front-end unit provides timing signals from trigger units and energy signals from charge-sensitive amplifiers. The prototype of the processing unit is implemented using commercial field programmable gate array circuit boards. The FPGA-based system is linked to a personal computer for testing and data analysis. Tests using simulated signals indicated that the FPGA-based signal processing unit has good linearity and is flexible for parameter adjustment under various experimental conditions (authors)

  9. Design, manufacturing and commissioning of mobile unit for EDF (Dow Chemical process)

    International Nuclear Information System (INIS)

    Cangini, D.; Cordier, J.P. (PEC Engineering, Osny, France)

    1985-01-01

    To process their spent ion exchange resins and liquid wastes, EDF ordered from PEC a mobile unit using the DOW CHEMICAL binder. This paper presents EDF's design requirements as well as the new French regulations for waste embedding. The mobile unit was started in January 1983 and commissioned successfully in January 1985 at EDF's Tricastin power plant.

  10. A low-cost system for graphical process monitoring with colour video symbol display units

    International Nuclear Information System (INIS)

    Grauer, H.; Jarsch, V.; Mueller, W.

    1977-01-01

    A system for computer-controlled graphic process supervision using colour symbol video displays is described. It has the following characteristics: a compact unit with no external memory for image storage; problem-oriented, simple descriptive tailoring to the process program; no restriction on the graphical representation of process variables; and computer and display independence, achieved by implementation of colours and parameterized code creation for the display. (WB) [de

  11. 32 CFR 516.11 - Service of criminal process outside the United States.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 3 2010-07-01 2010-07-01 true Service of criminal process outside the United... AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.11 Service of... status of forces agreements, govern the service of criminal process of foreign courts and the surrender...

  12. Silicon Carbide (SiC) Power Processing Unit (PPU) for Hall Effect Thrusters, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In this SBIR project, APEI, Inc. is proposing to develop a high efficiency, rad-hard 3.8 kW silicon carbide (SiC) Power Processing Unit (PPU) for Hall Effect...

  13. Process quality in the Trade Finance unit from the perspective of corporate banking employees

    OpenAIRE

    Mikkola, Henri

    2013-01-01

    This thesis examines the quality of the processes in the Trade Finance unit of Pohjola Bank, from the perspective of the corporate banking employees at Helsinki OP Bank. The Trade Finance unit provides methods of payment for foreign trade. Such services are intended for companies and the perspective investigated in this thesis is that of corporate banking employees. The purpose of this thesis is to define the quality of the processes and to develop solutions for difficulties discovered. The q...

  14. Stochastic Analysis of a Queue Length Model Using a Graphics Processing Unit

    Czech Academy of Sciences Publication Activity Database

    Přikryl, Jan; Kocijan, J.

    2012-01-01

    Roč. 5, č. 2 (2012), s. 55-62 ISSN 1802-971X R&D Projects: GA MŠk(CZ) MEB091015 Institutional support: RVO:67985556 Keywords : graphics processing unit * GPU * Monte Carlo simulation * computer simulation * modeling Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/2012/AS/prikryl-stochastic analysis of a queue length model using a graphics processing unit.pdf

  15. Water Use in the United States Energy System: A National Assessment and Unit Process Inventory of Water Consumption and Withdrawals.

    Science.gov (United States)

    Grubert, Emily; Sanders, Kelly T

    2018-06-05

    The United States (US) energy system is a large water user, but the nature of that use is poorly understood. To support resource comanagement and fill this noted gap in the literature, this work presents detailed estimates for US-based water consumption and withdrawals for the US energy system as of 2014, including both intensity values and the first known estimate of total water consumption and withdrawal by the US energy system. We address 126 unit processes, many of which are new additions to the literature, differentiated among 17 fuel cycles, five life cycle stages, three water source categories, and four levels of water quality. Overall coverage is about 99% of commercially traded US primary energy consumption with detailed energy flows by unit process. Energy-related water consumption, or water removed from its source and not directly returned, accounts for about 10% of both total and freshwater US water consumption. Major consumers include biofuels (via irrigation), oil (via deep well injection, usually of nonfreshwater), and hydropower (via evaporation and seepage). The US energy system also accounts for about 40% of both total and freshwater US water withdrawals, i.e., water removed from its source regardless of fate. About 70% of withdrawals are associated with the once-through cooling systems of approximately 300 steam cycle power plants that produce about 25% of US electricity.

  16. Alternative Procedure of Heat Integration Technique Selection between Two Unit Processes to Improve Energy Saving

    Science.gov (United States)

    Santi, S. S.; Renanto; Altway, A.

    2018-01-01

    The energy use system in a production process, in this case heat exchanger networks (HENs), is one element that plays a role in the smoothness and sustainability of the industry itself. Optimizing heat exchanger networks (HENs) built from process streams can have a major effect on the economic value of an industry as a whole, so solving design problems with heat integration becomes an important requirement. In a plant, heat integration can be carried out internally or in combination between process units. However, the steps for determining a suitable heat integration technique involve long calculations and require considerable time. In this paper, we propose an alternative procedure for determining the heat integration technique by investigating 6 hypothetical units using a Pinch Analysis approach with the objective functions of energy target and total annual cost target. The six hypothetical units consist of units A, B, C, D, E, and F, where each unit has a different location of its process streams relative to the pinch temperature. The result is a potential heat integration (ΔH') formula that can trim the conventional procedure from 7 steps to just 3. The preferred heat integration technique is then determined by calculating the potential heat integration (ΔH') between the hypothetical process units. The calculations were completed using MATLAB.

  17. Automated processing of whole blood units: operational value and in vitro quality of final blood components.

    Science.gov (United States)

    Jurado, Marisa; Algora, Manuel; Garcia-Sanchez, Félix; Vico, Santiago; Rodriguez, Eva; Perez, Sonia; Barbolla, Luz

    2012-01-01

    The Community Transfusion Centre in Madrid currently processes whole blood using a conventional procedure (Compomat, Fresenius) followed by automated processing of buffy coats with the OrbiSac system (CaridianBCT). The Atreus 3C system (CaridianBCT) automates the production of red blood cells, plasma and an interim platelet unit from a whole blood unit. Interim platelet units are pooled to produce a transfusable platelet unit. In this study, the Atreus 3C system was evaluated and compared to the routine method with regard to product quality and operational value. Over a 5-week period, 810 whole blood units were processed using the Atreus 3C system. The attributes of the automated process were compared to those of the routine method by assessing productivity, space, equipment and staffing requirements. The data obtained were evaluated in order to estimate the impact of implementing the Atreus 3C system in the routine setting of the blood centre. Yield and in vitro quality of the final blood components processed with the two systems were evaluated and compared. The Atreus 3C system enabled higher throughput while requiring less space and employee time by decreasing the amount of equipment and processing time per unit of whole blood processed. Whole blood units processed on the Atreus 3C system gave a higher platelet yield, a similar amount of red blood cells and a smaller volume of plasma. These results support the conclusion that the Atreus 3C system produces blood components meeting quality requirements while providing a high operational efficiency. Implementation of the Atreus 3C system could result in a large organisational improvement.

  18. Quality Improvement Process in a Large Intensive Care Unit: Structure and Outcomes.

    Science.gov (United States)

    Reddy, Anita J; Guzman, Jorge A

    2016-11-01

    Quality improvement in the health care setting is a complex process, and even more so in the critical care environment. The development of intensive care unit process measures and quality improvement strategies are associated with improved outcomes, but should be individualized to each medical center as structure and culture can differ from institution to institution. The purpose of this report is to describe the structure of quality improvement processes within a large medical intensive care unit while using examples of the study institution's successes and challenges in the areas of stat antibiotic administration, reduction in blood product waste, central line-associated bloodstream infections, and medication errors. © The Author(s) 2015.

  19. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    Full Text Available In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we have progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we have developed a custom-tailored benchmark suite. We have analyzed the obtained experimental results regarding: the comparison of execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the influence of the data type; and the influence of the binary operator.
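
    The record above describes optimizing CUDA parallel reduction but, as an abstract, includes no code. Below is a minimal sketch of the classic tree-based shared-memory reduction that such optimization work typically starts from; the kernel name and launch configuration are illustrative assumptions, not taken from the paper (blockDim.x is assumed to be a power of two):

```cuda
#include <cuda_runtime.h>

// Each block reduces a tile of the input into one partial sum.
// A second kernel launch (or a host-side loop) reduces the partial sums.
__global__ void reduceSum(const float *in, float *out, int n)
{
    extern __shared__ float sdata[];
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x * (blockDim.x * 2) + threadIdx.x;

    // Each thread loads and adds two elements, halving the number of blocks.
    float v = 0.0f;
    if (i < n)              v += in[i];
    if (i + blockDim.x < n) v += in[i + blockDim.x];
    sdata[tid] = v;
    __syncthreads();

    // Tree reduction in shared memory; the active stride halves each step.
    for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            sdata[tid] += sdata[tid + s];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = sdata[0];  // one partial sum per block
}

// Example launch (hypothetical sizes):
// reduceSum<<<blocks, 256, 256 * sizeof(float)>>>(d_in, d_partial, n);
```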

  20. Intra- versus inter-site macroscale variation in biogeochemical properties along a paddy soil chronosequence

    Directory of Open Access Journals (Sweden)

    C. Mueller-Niggemann

    2012-03-01

    Full Text Available In order to assess the intrinsic heterogeneity of paddy soils, a set of biogeochemical soil parameters was investigated in five field replicates of seven paddy fields (50, 100, 300, 500, 700, 1000, and 2000 yr of wetland rice cultivation), one flooded paddy nursery, one tidal wetland (TW), and one freshwater site (FW) from a coastal area at Hangzhou Bay, Zhejiang Province, China. All soils evolved from a marine tidal flat substrate due to land reclamation. The biogeochemical parameters based on their properties were differentiated into (i) a group behaving conservatively (TC, TOC, TN, TS, magnetic susceptibility, soil lightness and colour parameters, δ13C, δ15N, lipids and n-alkanes) and (ii) one encompassing more labile properties or fast cycling components (Nmic, Cmic, nitrate, ammonium, DON and DOC). The macroscale heterogeneity in paddy soils was assessed by evaluating intra- versus inter-site spatial variability of biogeochemical properties using statistical data analysis (descriptive, explorative and non-parametric). Results show that the intrinsic heterogeneity of paddy soil organic and minerogenic components per field is smaller than between study sites. The coefficient of variation (CV) values of conservative parameters varied in a low range (10% to 20%), decreasing from younger towards older paddy soils. This indicates a declining variability of soil biogeochemical properties in longer used cropping sites according to progress in soil evolution. A generally higher variation of CV values (>20–40%) observed for labile parameters implies a need for substantially higher sampling frequency when investigating these as compared to more conservative parameters. Since the representativeness of the sampling strategy could be sufficiently demonstrated, an investigation of long-term carbon accumulation/sequestration trends in topsoils of the 2000 yr paddy chronosequence under wetland rice cultivation

  1. Unit Process Wetlands for Removal of Trace Organic Contaminants and Pathogens from Municipal Wastewater Effluents

    Science.gov (United States)

    Jasper, Justin T.; Nguyen, Mi T.; Jones, Zackary L.; Ismail, Niveen S.; Sedlak, David L.; Sharp, Jonathan O.; Luthy, Richard G.; Horne, Alex J.; Nelson, Kara L.

    2013-01-01

    Abstract Treatment wetlands have become an attractive option for the removal of nutrients from municipal wastewater effluents due to their low energy requirements and operational costs, as well as the ancillary benefits they provide, including creating aesthetically appealing spaces and wildlife habitats. Treatment wetlands also hold promise as a means of removing other wastewater-derived contaminants, such as trace organic contaminants and pathogens. However, concerns about variations in treatment efficacy of these pollutants, coupled with an incomplete mechanistic understanding of their removal in wetlands, hinder the widespread adoption of constructed wetlands for these two classes of contaminants. A better understanding is needed so that wetlands as a unit process can be designed for their removal, with individual wetland cells optimized for the removal of specific contaminants, and connected in series or integrated with other engineered or natural treatment processes. In this article, removal mechanisms of trace organic contaminants and pathogens are reviewed, including sorption and sedimentation, biotransformation and predation, photolysis and photoinactivation, and remaining knowledge gaps are identified. In addition, suggestions are provided for how these treatment mechanisms can be enhanced in commonly employed unit process wetland cells or how they might be harnessed in novel unit process cells. It is hoped that application of the unit process concept to a wider range of contaminants will lead to more widespread application of wetland treatment trains as components of urban water infrastructure in the United States and around the globe. PMID:23983451

  2. Unit Process Wetlands for Removal of Trace Organic Contaminants and Pathogens from Municipal Wastewater Effluents.

    Science.gov (United States)

    Jasper, Justin T; Nguyen, Mi T; Jones, Zackary L; Ismail, Niveen S; Sedlak, David L; Sharp, Jonathan O; Luthy, Richard G; Horne, Alex J; Nelson, Kara L

    2013-08-01

    Treatment wetlands have become an attractive option for the removal of nutrients from municipal wastewater effluents due to their low energy requirements and operational costs, as well as the ancillary benefits they provide, including creating aesthetically appealing spaces and wildlife habitats. Treatment wetlands also hold promise as a means of removing other wastewater-derived contaminants, such as trace organic contaminants and pathogens. However, concerns about variations in treatment efficacy of these pollutants, coupled with an incomplete mechanistic understanding of their removal in wetlands, hinder the widespread adoption of constructed wetlands for these two classes of contaminants. A better understanding is needed so that wetlands as a unit process can be designed for their removal, with individual wetland cells optimized for the removal of specific contaminants, and connected in series or integrated with other engineered or natural treatment processes. In this article, removal mechanisms of trace organic contaminants and pathogens are reviewed, including sorption and sedimentation, biotransformation and predation, photolysis and photoinactivation, and remaining knowledge gaps are identified. In addition, suggestions are provided for how these treatment mechanisms can be enhanced in commonly employed unit process wetland cells or how they might be harnessed in novel unit process cells. It is hoped that application of the unit process concept to a wider range of contaminants will lead to more widespread application of wetland treatment trains as components of urban water infrastructure in the United States and around the globe.

  3. Process control and product evaluation in micro molding using a screwless/two-plunger injection unit

    DEFF Research Database (Denmark)

    Tosello, Guido; Hansen, Hans Nørgaard; Dormann, B.

    2010-01-01

    A newly developed μ-injection molding machine equipped with a screwless/two-plunger injection unit has been employed to mould miniaturized dog-bone shaped specimens in polyoxymethylene, and its process capability and robustness have been analyzed. The influence of process parameters on μ-injection molding was investigated using the Design of Experiments technique. Injection pressure and piston stroke speed as well as part weight and dimensions were considered as quality factors over a wide range of process parameters. Experimental results obtained under different processing conditions were

  4. Monitoring and assessment of soil erosion at micro-scale and macro-scale in forests affected by fire damage in northern Iran.

    Science.gov (United States)

    Akbarzadeh, Ali; Ghorbani-Dashtaki, Shoja; Naderi-Khorasgani, Mehdi; Kerry, Ruth; Taghizadeh-Mehrjardi, Ruhollah

    2016-12-01

    Understanding the occurrence of erosion processes at large scales is very difficult without studying them at small scales. In this study, soil erosion parameters were investigated at micro-scale and macro-scale in forests in northern Iran. Surface erosion and some vegetation attributes were measured at the watershed scale in 30 parcels of land which were separated into 15 fire-affected (burned) forests and 15 original (unburned) forests adjacent to the burned sites. The soil erodibility factor and splash erosion were also determined at the micro-plot scale within each burned and unburned site. Furthermore, soil sampling and infiltration studies were carried out at 80 other sites, as well as the 30 burned and unburned sites, (a total of 110 points) to create a map of the soil erodibility factor at the regional scale. Maps of topography, rainfall, and cover-management were also determined for the study area. The maps of erosion risk and erosion risk potential were finally prepared for the study area using the Revised Universal Soil Loss Equation (RUSLE) procedure. Results indicated that destruction of the protective cover of forested areas by fire had significant effects on splash erosion and the soil erodibility factor at the micro-plot scale and also on surface erosion, erosion risk, and erosion risk potential at the watershed scale. Moreover, the results showed that correlation coefficients between different variables at the micro-plot and watershed scales were positive and significant. Finally, assessment and monitoring of the erosion maps at the regional scale showed that the central and western parts of the study area were more susceptible to erosion compared with the western regions due to more intense crop-management, greater soil erodibility, and more rainfall. The relationships between erosion parameters and the most important vegetation attributes were also used to provide models with equations that were specific to the study region. The results of this

  5. Comparison of ultrafiltration and dissolved air flotation efficiencies in industrial units during the papermaking process

    OpenAIRE

    Monte Lara, Concepción; Ordóñez Sanz, Ruth; Hermosilla Redondo, Daphne; Sánchez González, Mónica; Blanco Suárez, Ángeles

    2011-01-01

    The efficiency of an ultrafiltration unit has been studied and compared with a dissolved air flotation system to obtain water of a quality suited for reuse in the process. The study was done at a paper mill producing lightweight coated paper and newsprint from 100% recovered paper. Efficiency was analysed by removal of turbidity, cationic demand, total and dissolved chemical oxygen demand, hardness, sulphates and microstickies. Moreover, the performance of the ultrafiltration unit an...

  6. Computerized nursing process in the Intensive Care Unit: ergonomics and usability

    OpenAIRE

    Almeida, Sônia Regina Wagner de; Sasso, Grace Teresinha Marcon Dal; Barra, Daniela Couto Carvalho

    2016-01-01

    Abstract OBJECTIVE Analyzing the ergonomics and usability criteria of the Computerized Nursing Process based on the International Classification for Nursing Practice in the Intensive Care Unit according to the International Organization for Standardization (ISO). METHOD A quantitative, quasi-experimental, before-and-after study with a sample of 16 participants performed in an Intensive Care Unit. Data collection was performed through the application of five simulated clinical cases and an evalua...

  7. Test results of the signal processing and amplifier unit for the emittance measurement system

    International Nuclear Information System (INIS)

    Stawiszynski, L.; Schneider, S.

    1984-01-01

    The signal processing and amplifier unit for the emittance measurement system is the unit with which the beam current on the harp-wires and the slit is measured and converted to a digital output. Temperature effects are very critical at low currents and the purpose of the test measurements described in this report was mainly to establish the accuracy and repeatability of the measurements under the influence of temperature variations

  8. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
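
    The first GPU-accelerated stage described above, the spatial filter, is a matrix-matrix multiplication. The kernel below is a minimal illustrative sketch of that step; the name, memory layout, and dimensions are assumptions for illustration, not the authors' implementation (a tuned library routine would typically be preferred in practice):

```cuda
#include <cuda_runtime.h>

// Spatial filter as a matrix-matrix product: C = A * B.
// A: (nFilters x nChannels) spatial filter weights, row-major.
// B: (nChannels x nSamples) windowed multichannel signal, row-major.
// C: (nFilters x nSamples) spatially filtered output.
// Naive one-thread-per-output-element version, kept simple for clarity.
__global__ void spatialFilter(const float *A, const float *B, float *C,
                              int nFilters, int nChannels, int nSamples)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;  // filter index
    int col = blockIdx.x * blockDim.x + threadIdx.x;  // sample index
    if (row < nFilters && col < nSamples) {
        float acc = 0.0f;
        for (int k = 0; k < nChannels; ++k)
            acc += A[row * nChannels + k] * B[k * nSamples + col];
        C[row * nSamples + col] = acc;
    }
}
```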

  9. NUMATH: a nuclear-material-holdup estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.

    1981-01-01

    A computer program, NUMATH (Nuclear Material Holdup Estimator), has been developed to permit inventory estimation in vessels involved in unit operations and chemical processes. This program has been implemented in an operating nuclear fuel processing plant. NUMATH's purpose is to provide steady-state composition estimates for material residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are determined for and cataloged in container-oriented files. The estimated compositions represent material collected in applicable vessels, including consideration for material previously acknowledged in these vessels. The program utilizes process measurements and simple material balance models to estimate material holdups and distribution within unit operations. During simulated run testing, NUMATH-estimated inventories typically produced material balances within 7% of the associated measured material balances for uranium and within 16% of the associated measured material balance for thorium during steady-state process operation

  10. Evaluation of Micro- and Macro-Scale Petrophysical Characteristics of Lower Cretaceous Sandstone with Flow Modeling in µ-CT Imaged Geometry

    Science.gov (United States)

    Katsman, R.; Haruzi, P.; Waldmann, N.; Halisch, M.

    2017-12-01

    In this study petrophysical characteristics of rock samples from 3 successive outcrop layers of the Hatira Formation Lower Cretaceous sandstone in northern Israel were evaluated at micro- and macro-scales. The study was carried out by two complementary methods: using conventional experimental measurements of porosity, pore size distribution and permeability; and using 3D µCT imaging and modeling of single-phase flow in the real micro-scale sample geometry. The workflow included µ-CT scanning, image processing, image segmentation, and image analyses of the pore network, followed by fluid flow simulations at a pore-scale. Upscaling the results of the micro-scale flow simulations yielded a macroscopic permeability tensor. Comparison of the upscaled and the experimentally measured rock properties demonstrated a reasonable agreement. In addition, geometrical (pore size distribution, surface area and tortuosity) and topological (Euler characteristic) characteristics of the grains and of the pore network were evaluated at a micro-scale. Statistical analyses of the samples for estimation of anisotropy and inhomogeneity of the porous media were conducted, and the results agree with the anisotropy and inhomogeneity of the upscaled permeability tensor. Isotropic pore orientation of the primary inter-granular porosity was identified in all three samples, whereas the characteristics of the secondary porosity were affected by precipitated cement and clay matrix within the primary pore network. Results of this study provide micro- and macro-scale characteristics of the Lower Cretaceous sandstone that is used in different places over the world as a reservoir for petroleum production.

  11. Solid Waste Processing. A State-of-the-Art Report on Unit Operations and Processes.

    Science.gov (United States)

    Engdahl, Richard B.

    The importance and intricacy of the solid wastes disposal problem and the need to deal with it effectively and economically led to the state-of-the-art survey covered by this report. The material presented here was compiled to be used by those in government and private industry who must make or implement decisions concerning the processing of…

  12. Psychiatry training in the United Kingdom--part 2: the training process.

    Science.gov (United States)

    Christodoulou, N; Kasiakogia, K

    2015-01-01

    In the second part of this diptych, we shall deal with psychiatric training in the United Kingdom in detail, and we will compare it, wherever this is meaningful, with the equivalent system in Greece. As explained in the first part of the paper, due to the recently increased emigration of Greek psychiatrists and psychiatric trainees, and the fact that the United Kingdom is a popular destination, it has become necessary to inform those aspiring to train in the United Kingdom of the system and the circumstances they should expect to encounter. This paper principally describes the structure of the United Kingdom's psychiatric training system, including the different stages trainees progress through and their respective requirements and processes. Specifically, specialty and subspecialty options are described and explained, special paths in training are analysed, and the notions of the "special interest day" and the optional "Out of programme experience" schemes are explained. Furthermore, detailed information is offered on the pivotal points of each of the stages of the training process, with special care to explain the important differences and similarities between the systems in Greece and the United Kingdom. Special attention is given to the Royal College of Psychiatrists' Membership Exams (MRCPsych) because they are the only exams towards completing specialisation in psychiatry in the United Kingdom. Also, the educational culture of progressing according to a set curriculum, of utilising diverse means of professional development, of empowering the trainees' autonomy by allowing initiative-based development and of applying peer supervision as a tool for professional development is stressed. We conclude that psychiatric training in the United Kingdom differs substantially from that of Greece in both structure and process. There are various differences, such as pure psychiatric training in the United Kingdom versus neurological and medical modules in Greece, in

  13. Steady electrodiffusion in hydrogel-colloid composites: macroscale properties from microscale electrokinetics

    Directory of Open Access Journals (Sweden)

    Reghan J. Hill

    2010-03-01

    Full Text Available A rigorous microscale electrokinetic model for hydrogel-colloid composites is adopted to compute macroscale profiles of electrolyte concentration, electrostatic potential, and hydrostatic pressure across membranes that separate electrolytes with different concentrations. The membranes are uncharged polymeric hydrogels in which charged spherical colloidal particles are immobilized and randomly dispersed with a low solid volume fraction. Bulk membrane characteristics and performance are calculated from a continuum microscale electrokinetic model (Hill 2006b, c). The computations undertaken in this paper quantify the streaming and membrane potentials. For the membrane potential, increasing the volume fraction of negatively charged inclusions decreases the differential electrostatic potential across the membrane under conditions where there is zero convective flow and zero electrical current. With low electrolyte concentration and highly charged nanoparticles, the membrane potential is very sensitive to the particle volume fraction. Accordingly, the membrane potential, and changes brought about by the inclusion size, charge and concentration, could be a useful experimental diagnostic to complement more recent applications of the microscale electrokinetic model for electrical microrheology and electroacoustics (Hill and Ostoja-Starzewski 2008, Wang and Hill 2008).

  14. Process Control System of a 500-MW Unit of the Reftinskaya State District Power Plant

    International Nuclear Information System (INIS)

    Grekhov, L. L.; Bilenko, V. A.; Derkach, N. N.; Galperina, A. I.; Strukov, A. P.

    2002-01-01

    The results of installing a process control system developed by the Interavtomatika Company (Moscow) for a 500-MW pulverized-coal power unit, using the Teleperm ME and OM650 equipment of the Siemens Company, are described. The system provides a fundamentally new level of automation and monitor-based process control, comparable with the operation of foreign counterparts, while completely preserving the domestic peripheral equipment. During the 4.5 years of operation of the process control system, the intricate algorithms for control and data processing have proved their operational integrity.

  15. Analysis of the overall energy intensity of alumina refinery process using unit process energy intensity and product ratio method

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Liru; Aye, Lu [International Technologies Center (IDTC), Department of Civil and Environmental Engineering,The University of Melbourne, Vic. 3010 (Australia); Lu, Zhongwu [Institute of Materials and Metallurgy, Northeastern University, Shenyang 110004 (China); Zhang, Peihong [Department of Municipal and Environmental Engineering, Shenyang Architecture University, Shenyang 110168 (China)

    2006-07-15

    Alumina refinery is an energy intensive industry. Traditional energy saving methods employed have been single-equipment-orientated. Based on the two concepts of 'energy carrier' and 'system', this paper presents a method that analyzes the effects of unit process energy intensity (e) and product ratio (p) on the overall energy intensity of alumina. The important conclusion drawn from this method is that it is necessary to decrease both the unit process energy intensity and the product ratios in order to decrease the overall energy intensity of alumina, which may be taken as a future policy for energy saving. As a case study, the overall energy intensity of the Chinese Zhengzhou alumina refinery plant with the Bayer-sinter combined method between 1995 and 2000 was analyzed. The result shows that the overall energy intensity of alumina in this plant decreased by 7.36 GJ/t-Al2O3 over this period; 49% of the total energy saving is due to direct energy saving, and 51% is due to indirect energy saving. The emphasis in this paper is on decreasing product ratios of high-energy consumption unit processes, such as evaporation, slurry sintering, aluminium trihydrate calcining and desilication. Energy savings can be made (1) by increasing the proportion of Bayer and indirect digestion, (2) by increasing the grade of ore by ore dressing or importing some rich gibbsite and (3) by promoting the advancement in technology. (author)
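
    The e-p analysis summarized above has a compact algebraic form. A minimal sketch, with notation assumed here rather than taken from the paper: let E be the overall energy intensity, e_i the energy intensity of unit process i, p_i its product ratio, and bars denote values held at reference levels. Then

$$E=\sum_i e_i\,p_i,\qquad \Delta E \approx \underbrace{\sum_i \bar{p}_i\,\Delta e_i}_{\text{direct saving}}+\underbrace{\sum_i \bar{e}_i\,\Delta p_i}_{\text{indirect saving}}$$

    On this reading, the reported 49%/51% split corresponds to the direct (unit process intensity) and indirect (product ratio) terms, respectively.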

  16. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Rath, N., E-mail: Nikolaus@rath.org; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q. [Department of Applied Physics and Applied Mathematics, Columbia University, 500 W 120th St, New York, New York 10027 (United States); Kato, S. [Department of Information Engineering, Nagoya University, Nagoya (Japan)

    2014-04-15

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  17. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    International Nuclear Information System (INIS)

    Rath, N.; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q.; Kato, S.

    2014-01-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules
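
    A quick arithmetic note for context (derived here, not a figure from the records themselves): at the maximum sampling rate quoted, the sample period is

$$T_s = \frac{1}{250\ \text{kHz}} = 4\ \mu\text{s},$$

    so the reported sub-8 μs latency corresponds to less than two sample periods between digitization and analog output.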

  18. Metric-Resolution 2D River Modeling at the Macroscale: Computational Methods and Applications in a Braided River

    Directory of Open Access Journals (Sweden)

    Jochen eSchubert

    2015-11-01

    Full Text Available Metric resolution digital terrain models (DTMs) of rivers now make it possible for multi-dimensional fluid mechanics models to be applied to characterize flow at fine scales that are relevant to studies of river morphology and ecological habitat, or microscales. These developments are important for managing rivers because of the potential to better understand system dynamics, anthropogenic impacts, and the consequences of proposed interventions. However, the data volumes and computational demands of microscale river modeling have largely constrained applications to small multiples of the channel width, or the mesoscale. This report presents computational methods to extend a microscale river model beyond the mesoscale to the macroscale, defined as large multiples of the channel width. A method of automated unstructured grid generation is presented that automatically clusters fine resolution cells in areas of curvature (e.g., channel banks) and places relatively coarse cells in areas lacking topographic variability. This overcomes the need to manually generate breaklines to constrain the grid, which is painstaking at the mesoscale and virtually impossible at the macroscale. The method is applied to a braided river with an extremely complex channel network configuration and shown to yield an efficient fine resolution model. The sensitivity of model output to grid design and resistance parameters is also examined as it relates to analysis of hydrology, hydraulic geometry and river habitats, and the findings reiterate the importance of model calibration and validation.

  19. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    Energy Technology Data Exchange (ETDEWEB)

    Wilke, Jeremiah J [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Kenny, Joseph P. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.

  20. Plasma simulation by macroscale, electromagnetic particle code and its application to current-drive by relativistic electron beam injection

    International Nuclear Information System (INIS)

    Tanaka, M.; Sato, T.

    1985-01-01

    A new implicit macroscale electromagnetic particle simulation code (MARC), which allows a large scale length and time step in multi-dimensions, is described. Finite-mass electrons and ions are used with a relativistic version of the equation of motion. The electromagnetic fields are solved by using a complete set of Maxwell equations. For time integration of the field equations, a decentered (backward) finite differencing scheme is employed with the predictor-corrector method for small noise and super-stability. It is shown both analytically and numerically that the present scheme efficiently suppresses high frequency electrostatic and electromagnetic waves in a plasma, and that it accurately reproduces low frequency waves such as ion acoustic waves, Alfven waves and fast magnetosonic waves. The present numerical scheme has currently been coded in three dimensions for application to a new tokamak current-drive method by means of relativistic electron beam injection. Some remarks on proper application of the macroscale code are presented in this paper.

  1. Sodium content of popular commercially processed and restaurant foods in the United States

    Science.gov (United States)

    Nutrient Data Laboratory (NDL) of the U.S. Department of Agriculture (USDA) in close collaboration with U.S. Center for Disease Control and Prevention is monitoring the sodium content of commercially processed and restaurant foods in the United States. The main purpose of this manuscript is to prov...

  2. 78 FR 18234 - Service of Process on Manufacturers; Manufacturers Importing Electronic Products Into the United...

    Science.gov (United States)

    2013-03-26

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 1005 [Docket No. FDA-2007-N-0091; (formerly 2007N-0104)] Service of Process on Manufacturers; Manufacturers Importing Electronic Products Into the United States; Agent Designation; Change of Address AGENCY: Food and Drug...

  3. ECO LOGIC INTERNATIONAL GAS-PHASE CHEMICAL REDUCTION PROCESS - THE THERMAL DESORPTION UNIT - APPLICATIONS ANALYSIS REPORT

    Science.gov (United States)

    ELI ECO Logic International, Inc.'s Thermal Desorption Unit (TDU) is specifically designed for use with Eco Logic's Gas Phase Chemical Reduction Process. The technology uses an externally heated bath of molten tin in a hydrogen atmosphere to desorb hazardous organic compounds fro...

  4. Process methods and levels of automation of wood pallet repair in the United States

    Science.gov (United States)

    Jonghun Park; Laszlo Horvath; Robert J. Bush

    2016-01-01

    This study documented the current status of wood pallet repair in the United States by identifying the types of processing and equipment usage in repair operations from an automation perspective. The wood pallet repair firms included in the study received an average of approximately 1.28 million cores (i.e., used pallets) for recovery in 2012. A majority of the cores...

  5. Silicon Carbide (SiC) Power Processing Unit (PPU) for Hall Effect Thrusters, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — In this SBIR project, APEI, Inc. is proposing to develop a high efficiency, rad-hard 3.8 kW silicon carbide (SiC) power supply for the Power Processing Unit (PPU) of...

  6. Miniaturized Power Processing Unit Study: A Cubesat Electric Propulsion Technology Enabler Project

    Science.gov (United States)

    Ghassemieh, Shakib M.

    2014-01-01

    This study evaluates High-Voltage Power Processing Unit (PPU) technology and the driving requirements necessary to enable Microfluidic Electric Propulsion technology research and development by NASA and university partners. This study provides an overview of state-of-the-art PPU technology with recommendations for technology demonstration projects and missions for NASA to pursue.

  7. Use of a tangential filtration unit for processing liquid waste from nuclear laundries

    International Nuclear Information System (INIS)

    Augustin, X.; Buzonniere, A. de; Barnier, H.

    1993-01-01

    Nuclear laundries produce large quantities of weakly contaminated effluents charged with insoluble and soluble products. In collaboration with the CEA, TECHNICATOME has developed an ultrafiltration process for liquid waste from nuclear laundries, associated with prior insolubilization of the radiochemical activity. This process, 'seeded ultrafiltration', is based on the use of decloggable mineral filter media and combines very high separation efficiency with long membrane life. The efficiency of the tangential filtration unit, which has been processing effluents from the Cadarache Nuclear Research Center (CEA, France) nuclear laundry since mid-1988, has been confirmed on several sites.

  8. Software Graphics Processing Unit (sGPU) for Deep Space Applications

    Science.gov (United States)

    McCabe, Mary; Salazar, George; Steele, Glen

    2015-01-01

    A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) comprised of commercial-equivalent radiation hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.

  9. Evolution of the Power Processing Units Architecture for Electric Propulsion at CRISA

    Science.gov (United States)

    Palencia, J.; de la Cruz, F.; Wallace, N.

    2008-09-01

    Since 2002, the team formed by EADS Astrium CRISA, Astrium GmbH Friedrichshafen, and QinetiQ has participated in several flight programs where Electric Propulsion based on Kaufman-type Ion Thrusters is the baseline concept. In 2002, CRISA won the contract for the development of the Ion Propulsion Control Unit (IPCU) for GOCE. This unit, together with the T5 thruster by QinetiQ, provides near-perfect atmospheric drag compensation, offering thrust levels in the range of 1 to 20 mN. By the end of 2003, CRISA started the adaptation of the IPCU concept to the QinetiQ T6 Ion Thruster for the Alphabus program. This paper shows how the Power Processing Unit design has evolved over time, including the current developments.

  10. Status Report from the United Kingdom [Processing of Low-Grade Uranium Ores

    Energy Technology Data Exchange (ETDEWEB)

    North, A A [Warren Spring Laboratory, Stevenage, Herts. (United Kingdom)

    1967-06-15

    The invitation to present this status report could have been taken literally as a request for information on experience gained in the actual processing of low-grade uranium ores in the United Kingdom, in which case there would have been very little to report; however, the invitation naturally was considered to be a request for a report on the experience gained by the United Kingdom in the processing of uranium ores. Low-grade uranium ores are not treated in the United Kingdom simply because the country does not possess any known significant deposits of uranium ore. It is of interest to record the fact that during the nineteenth century mesothermal vein deposits associated with Hercynian granite were worked at South Terras, Cornwall, and ore that contained approximately 100 tons of uranium oxide was exported to Germany. Now only some 20 tons of contained uranium oxide remain at South Terras; also in Cornwall there is a small number of other vein deposits that each hold about five tons of uranium. Small lodes of uranium ore have been located in the southern uplands of Scotland; in North Wales lower palaeozoic black shales have only as much as 50 to 80 parts per million of uranium oxide, and a slightly lower grade carbonaceous shale is found near the base of the millstone grit that occurs in the north of England. Thus the experience gained by the United Kingdom has been of the treatment of uranium ores that occur abroad.

  11. Modeling of yield and environmental impact categories in tea processing units based on artificial neural networks.

    Science.gov (United States)

    Khanali, Majid; Mobli, Hossein; Hosseinzadeh-Bandbafha, Homa

    2017-12-01

    In this study, an artificial neural network (ANN) model was developed for predicting the yield and life cycle environmental impacts based on the energy inputs required in processing of black tea, green tea, and oolong tea in Guilan province of Iran. A life cycle assessment (LCA) approach was used to investigate the environmental impact categories of processed tea based on the cradle-to-gate approach, i.e., from production of input materials using raw materials to the gate of the tea processing units, i.e., packaged tea. Thus, all the tea processing operations such as withering, rolling, fermentation, drying, and packaging were considered in the analysis. The initial data were obtained from tea processing units, while the required data about the background system was extracted from the EcoInvent 2.2 database. LCA results indicated that diesel fuel and corrugated paper box used in the drying and packaging operations, respectively, were the main hotspots. The black tea processing unit caused the highest pollution among the three processing units. Three feed-forward back-propagation ANN models, based on the Levenberg-Marquardt training algorithm with two hidden layers using sigmoid activation functions and a linear transfer function in the output layer, were applied for the three types of processed tea. The neural networks were developed based on energy equivalents of eight different input parameters (energy equivalents of fresh tea leaves, human labor, diesel fuel, electricity, adhesive, carton, corrugated paper box, and transportation) and 11 output parameters (yield, global warming, abiotic depletion, acidification, eutrophication, ozone layer depletion, human toxicity, freshwater aquatic ecotoxicity, marine aquatic ecotoxicity, terrestrial ecotoxicity, and photochemical oxidation). The results showed that the developed ANN models, with R² values in the range of 0.878 to 0.990, had excellent performance in predicting all the output variables based on the inputs. Energy consumption for ...
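
    As a structural illustration of the model described (8 energy-input features, two hidden layers with sigmoid activations, 11 outputs), here is a hedged scikit-learn sketch. The data are synthetic placeholders, and scikit-learn's MLPRegressor does not offer Levenberg-Marquardt training, so a different optimizer stands in; only the topology mirrors the paper.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.random((200, 8))                                    # stand-in energy equivalents
        Y = X @ rng.random((8, 11)) + 0.05 * rng.random((200, 11))  # stand-in yield + 10 impacts

        scaler = StandardScaler()
        Xs = scaler.fit_transform(X)
        model = MLPRegressor(hidden_layer_sizes=(12, 12), activation='logistic',
                             max_iter=5000, random_state=0)
        model.fit(Xs, Y)
        print("R^2 on training data:", round(model.score(Xs, Y), 3))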

  12. Morphology study of thoracic transverse processes and its significance in pedicle-rib unit screw fixation.

    Science.gov (United States)

    Cui, Xin-gang; Cai, Jin-fang; Sun, Jian-min; Jiang, Zhen-song

    2015-03-01

    The thoracic transverse process is an important anatomic structure of the spine. Several anatomic studies have investigated the adjacent structures of the thoracic transverse process, but the morphology of the thoracic transverse processes themselves has not been described. The purpose of this cadaveric study is to investigate the morphology of the thoracic transverse processes and to provide a morphological basis for the pedicle-rib unit (extrapedicular) screw fixation method. Forty-five adult dehydrated skeletons (T1-T10) were included in this study. The length, width, thickness, and tilt angle (upward and backward) of the thoracic transverse process were measured. The data were then analyzed statistically. On the basis of the morphometric study, 5 fresh cadavers were used to place screws from the transverse processes to the vertebral body in the thoracic spine, which were then observed by the naked eye and on computed tomography scans. The lengths of the thoracic transverse processes were between 16.63±1.59 and 18.10±1.95 mm; the longest was at T7, and the shortest was at T10. The widths of the thoracic transverse processes were between 11.68±0.80 and 12.87±1.48 mm; the widest was at T3, and the narrowest was at T7. The thicknesses of the thoracic transverse processes were between 7.86±1.24 and 10.78±1.35 mm; the thickest was at T1, and the thinnest was at T7. The upward tilt angles of the thoracic transverse processes ranged from 3.0±1.56 to 24.9±3.1 degrees; the maximal upward tilt angle was at T1, and the minimal upward tilt angle was at T7. The upward tilt angles of T1 and T2 were obviously different from those of the other thoracic transverse processes (P < 0.05). The backward tilt angles of the thoracic transverse processes gradually increased from 24.5±2.91 degrees at T1 to 64.5±5.12 degrees at T10. The backward tilt angles were significantly different from each other, except between T5 and T6. In the validation study, screws were all placed successfully from the transverse processes to the vertebrae of the thoracic spine. The length, width, and ...

  13. Calculation of the real states of Ignalina NPP Unit 1 and Unit 2 RBMK-1500 reactors in the verification process of QUABOX/CUBBOX code

    International Nuclear Information System (INIS)

    Bubelis, E.; Pabarcius, R.; Demcenko, M.

    2001-01-01

    Calculations of the main neutron-physical characteristics of the RBMK-1500 reactors of Ignalina NPP Unit 1 and Unit 2 were performed, taking real reactor core states as the basis for these calculations. Comparison of the calculation results obtained using the QUABOX/CUBBOX code with experimental data, and with the calculation results obtained using the STEPAN code, showed that all the main neutron-physical characteristics of the reactors of Unit 1 and Unit 2 of Ignalina NPP are within the safe deviation range of the analyzed parameters, and that the reactors of Ignalina NPP, during the process of reactor core composition change, are operated in a safe and stable manner. (author)

  14. High-Performance Pseudo-Random Number Generation on Graphics Processing Units

    OpenAIRE

    Nandapalan, Nimalan; Brent, Richard P.; Murray, Lawrence M.; Rendell, Alistair

    2011-01-01

    This work considers the deployment of pseudo-random number generators (PRNGs) on graphics processing units (GPUs), developing an approach based on the xorgens generator to rapidly produce pseudo-random numbers of high statistical quality. The chosen algorithm has configurable state size and period, making it ideal for tuning to the GPU architecture. We present a comparison of both speed and statistical quality with other common parallel, GPU-based PRNGs, demonstrating favourable performance o...
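
    For orientation, the flavour of generator involved is easy to state. The sketch below is Marsaglia's xorshift64, a simple relative of Brent's xorgens (not the xorgens algorithm itself); on a GPU each thread would carry its own independent state.

        def xorshift64(seed=88172645463325252):
            """Yield uniform floats in [0, 1) from a 64-bit xorshift state."""
            mask = 0xFFFFFFFFFFFFFFFF
            state = seed & mask
            while True:
                state ^= (state << 13) & mask
                state ^= state >> 7
                state ^= (state << 17) & mask
                yield state / 2**64

        gen = xorshift64()
        print([round(next(gen), 6) for _ in range(4)])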

  15. Electromagnetic compatibility of tools and automated process control systems of NPP units

    International Nuclear Information System (INIS)

    Alpeev, A.S.

    1994-01-01

    Problems of electromagnetic compatibility of automated process control subsystems in NPP units are discussed. It is emphasized that, at the stage of developing the request for proposal for each APC subsystem, special attention should be paid to the electromagnetic situation in the specific room and to the requirements on the quality of the functions performed by the system. In addition, requirements for electromagnetic compatibility tests at the work stations should be formulated, and mock-ups of the subsystems should be tested

  16. State-Level Comparison of Processes and Timelines for Distributed Photovoltaic Interconnection in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Ardani, K. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Davidson, C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Nobler, E. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-01-01

    This report presents results from an analysis of distributed photovoltaic (PV) interconnection and deployment processes in the United States. Using data from more than 30,000 residential (up to 10 kilowatts) and small commercial (10-50 kilowatts) PV systems, installed from 2012 to 2014, we assess the range in project completion timelines nationally (across 87 utilities in 16 states) and in five states with active solar markets (Arizona, California, New Jersey, New York, and Colorado).

  17. Thermo-mechanical efficiency of the bimetallic strip heat engine at the macro-scale and micro-scale

    International Nuclear Information System (INIS)

    Arnaud, A; Boughaleb, J; Monfray, S; Boeuf, F; Skotnicki, T; Cugat, O

    2015-01-01

    Bimetallic strip heat engines are energy harvesters that exploit the thermo-mechanical properties of bistable bimetallic membranes to convert heat into mechanical energy. They thus represent a solution for transforming low-grade heat into electrical energy if the bimetallic membrane is coupled with an electro-mechanical transducer. The simplicity of these devices allows us to consider their miniaturization using MEMS fabrication techniques. In order to design and optimize these devices at the macro-scale and micro-scale, this article explains the origin of the thermal snap-through by giving the expressions of the constitutive equations of composite beams. This allows us to evaluate the capability of bimetallic strips to convert heat into mechanical energy regardless of their size, and to give the theoretical thermo-mechanical efficiencies which can be obtained with these harvesters. (paper)

  18. Radiative heat transfer exceeding the blackbody limit between macroscale planar surfaces separated by a nanosize vacuum gap

    Science.gov (United States)

    Bernardi, Michael P.; Milovich, Daniel; Francoeur, Mathieu

    2016-09-01

    Using Rytov's fluctuational electrodynamics framework, Polder and Van Hove predicted that radiative heat transfer between planar surfaces separated by a vacuum gap smaller than the thermal wavelength exceeds the blackbody limit due to tunnelling of evanescent modes. This finding has led to the conceptualization of systems capitalizing on evanescent modes such as thermophotovoltaic converters and thermal rectifiers. Their development is, however, limited by the lack of devices enabling radiative transfer between macroscale planar surfaces separated by a nanosize vacuum gap. Here we measure radiative heat transfer for large temperature differences (~120 K) using a custom-fabricated device in which the gap separating two 5 × 5 mm² intrinsic silicon planar surfaces is modulated from 3,500 to 150 nm. A substantial enhancement over the blackbody limit by a factor of 8.4 is reported for a 150-nm-thick gap. Our device paves the way for the establishment of novel evanescent wave-based systems.

  19. FEATURES OF THE SOCIO-POLITICAL PROCESS IN THE UNITED STATES

    Directory of Open Access Journals (Sweden)

    Tatyana Evgenevna Beydina

    2017-06-01

    The subject of this article is the study of political and social developments in the USA at the present stage. There are four stages in the American tradition of studying political processes. The first stage is connected with the substantiation of the Executive, Legislative and Judicial branches of the political system (the works of F. Pollack and R. Sili). The second includes behavioral studies of politics; besides studying political processes, Charles Merriam studied their similarities and differences. The third stage is characterized by political system studies – the works of T. Parsons, D. Easton, R. Aron, G. Almond and K. Deutsch. The fourth stage is characterized by superpower and the problem of systems democratization (S. Huntington, Zb. Bzhezinsky). American social processes were characterized by R. Park, P. Sorokin and E. Giddens. The work concentrates on explaining the social and political processes of the US separately while reflecting the unity of American socio-political reality. Its academic novelty consists in substantiating the concept of the US socio-political process and characterizing its features. The US socio-political process operates through two channels, soft power and aggression; soft power appears in the dominance of the US economy. The main results of the research are the features of the socio-political process in the United States. Purpose: the main goal of the research is to systematize the definition of the socio-political process of the USA and to trace the line of its study within the American political tradition. Methodology: methods of system analysis, comparison, historical analysis and structural-functional analysis were used. Results: the research analysed the dynamics of social and political processes in the United States. Practical implications: it is expedient to apply the results in international relations theory and practice.

  20. Orthographic units in the absence of visual processing: Evidence from sublexical structure in braille.

    Science.gov (United States)

    Fischer-Baum, Simon; Englebretson, Robert

    2016-08-01

    Reading relies on the recognition of units larger than single letters and smaller than whole words. Previous research has linked sublexical structures in reading to properties of the visual system, specifically on the parallel processing of letters that the visual system enables. But whether the visual system is essential for this to happen, or whether the recognition of sublexical structures may emerge by other means, is an open question. To address this question, we investigate braille, a writing system that relies exclusively on the tactile rather than the visual modality. We provide experimental evidence demonstrating that adult readers of (English) braille are sensitive to sublexical units. Contrary to prior assumptions in the braille research literature, we find strong evidence that braille readers do indeed access sublexical structure, namely the processing of multi-cell contractions as single orthographic units and the recognition of morphemes within morphologically-complex words. Therefore, we conclude that the recognition of sublexical structure is not exclusively tied to the visual system. However, our findings also suggest that there are aspects of morphological processing on which braille and print readers differ, and that these differences may, crucially, be related to reading using the tactile rather than the visual sensory modality.

  1. The impact of a lean rounding process in a pediatric intensive care unit.

    Science.gov (United States)

    Vats, Atul; Goin, Kristin H; Villarreal, Monica C; Yilmaz, Tuba; Fortenberry, James D; Keskinocak, Pinar

    2012-02-01

    Poor workflow associated with physician rounding can produce inefficiencies that decrease time for essential activities, delay clinical decisions, and reduce staff and patient satisfaction. Workflow and provider resources were not optimized when a pediatric intensive care unit increased by 22,000 square feet (to 33,000) and by nine beds (to 30). Lean methods (focusing on essential processes) and scenario analysis were used to develop and implement a patient-centric standardized rounding process, which we hypothesized would improve rounding efficiency, decrease required physician resources, improve satisfaction, and enhance throughput. Human factors techniques and statistical tools were used to collect and analyze observational data for 11 rounding events before and 12 rounding events after process redesign. Actions included: 1) recording rounding events, times, and patient interactions and classifying them as essential, nonessential, or nonvalue added; 2) comparing rounding duration and time per patient to determine the impact on efficiency; 3) analyzing discharge orders for timeliness; 4) conducting staff surveys to assess improvements in communication and care coordination; and 5) analyzing customer satisfaction data to evaluate impact on patient experience. Thirty-bed pediatric intensive care unit in a children's hospital with academic affiliation. Eight attending pediatric intensivists and their physician rounding teams. Eight attending physician-led teams were observed for 11 rounding events before and 12 rounding events after implementation of a standardized lean rounding process focusing on essential processes. Total rounding time decreased significantly (157 ± 35 mins before vs. 121 ± 20 mins after), through a reduction in time spent on nonessential (53 ± 30 vs. 9 ± 6 mins) activities. The previous process required three attending physicians for an average of 157 mins (7.55 attending physician man-hours), while the new process required two ...

  2. Modelling of a Naphtha Recovery Unit (NRU) with Implications for Process Optimization

    Directory of Open Access Journals (Sweden)

    Jiawei Du

    2018-06-01

    The naphtha recovery unit (NRU) is an integral part of the processes used in the oil sands industry for bitumen extraction. The principal role of the NRU is to recover naphtha from the tailings for reuse in this process. This process is energy-intensive, and environmental guidelines for naphtha recovery must be met. Steady-state models for the NRU system are developed in this paper using two different approaches. The first approach is a statistical, data-based modelling approach where linear regression models have been developed using Minitab® from plant data collected during a performance test. The second approach involves the development of a first-principles model in Aspen Plus® based on the NRU process flow diagram. A novel refinement to this latter model, called "withdraw and remix", is proposed based on comparing actual plant data to model predictions around the two units used to separate water and naphtha. The models developed in this paper suggest some interesting ideas for the further optimization of the process, in that it may be possible to achieve the required naphtha recovery using less energy. More plant tests are required to validate these ideas.
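
    The first, data-based approach can be gestured at in a few lines: ordinary least squares regression of a recovery measure on plant variables. Everything below is a hypothetical stand-in (invented variable names and synthetic data), not the actual Minitab models from the performance test.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        steam_rate = rng.uniform(10, 20, 50)   # t/h, hypothetical stripping steam
        feed_temp = rng.uniform(60, 90, 50)    # deg C, hypothetical feed temperature
        recovery = 2.0 * steam_rate + 0.5 * feed_temp + rng.normal(0, 1.0, 50)

        X = np.column_stack([steam_rate, feed_temp])
        fit = LinearRegression().fit(X, recovery)
        print("coefficients:", fit.coef_.round(3), "intercept:", round(fit.intercept_, 3))
        print("R^2:", round(fit.score(X, recovery), 3))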

  3. PREMATH: a Precious-Material Holdup Estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.; Bruns, D.D.

    1982-01-01

    A computer program, PREMATH (Precious Material Holdup Estimator), has been developed to permit inventory estimation in vessels involved in unit operations and chemical processes. This program has been implemented in an operating nuclear fuel processing plant. PREMATH's purpose is to provide steady-state composition estimates for material residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are determined for, and cataloged in, container-oriented files. The estimated compositions represent material collected in the applicable vessels, including consideration of material previously acknowledged in these vessels. The program utilizes process measurements and simple material balance models to estimate material holdups and distribution within unit operations. During simulated run testing, PREMATH-estimated inventories typically produced material balances within 7% of the associated measured material balances for uranium, and within 16% for thorium (a less valuable material than uranium), during steady-state process operation.
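
    The underlying bookkeeping is a running material balance. As a minimal sketch (with invented numbers, and none of PREMATH's actual models), the holdup in a vessel is the initial inventory plus the cumulative difference between metered inflow and outflow:

        def estimate_holdup(inflows_kg, outflows_kg, initial_kg=0.0):
            """Running holdup estimate from per-interval flow measurements."""
            holdup = initial_kg
            history = []
            for f_in, f_out in zip(inflows_kg, outflows_kg):
                holdup += f_in - f_out
                history.append(round(holdup, 3))
            return history

        # material accumulates for two intervals, then the vessel reaches steady state
        print(estimate_holdup([5.0, 5.0, 5.0], [4.6, 4.8, 5.0]))   # -> [0.4, 0.6, 0.6]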

  4. Design of Biochemical Oxidation Process Engineering Unit for Treatment of Organic Radioactive Liquid Waste

    International Nuclear Information System (INIS)

    Zainus Salimin; Endang Nuraeni; Mirawaty; Tarigan, Cerdas

    2010-01-01

    Organic radioactive liquid waste from the nuclear industry consists of detergent waste from nuclear laundry, 30% TBP-kerosene solvent waste from the purification or recovery of uranium from process failures in nuclear fuel fabrication, and solvent waste containing D2EHPA, TOPO, and kerosene from the purification of phosphoric acid. The waste is hazardous and toxic, having low pH, high COD and BOD, and low radioactivity. Biochemical oxidation is an effective method for detoxification of the organic waste and decontamination of radionuclides by biosorption. The products of the process are sludge and a non-radioactive supernatant. The existing radioactive waste treatment facilities in Serpong cannot be used to treat these organic wastes. A biochemical oxidation process engineering unit for continuous treatment of organic radioactive liquid waste at a capacity of 1.6 L/h has been designed and constructed. The equipment of the process unit consists of a 100 L storage tank for nutrient solution, two 100 L storage tanks for liquid waste, a 120 L oxidation reactor, a 50 L settling tank, a 55 L storage tank for sludge, and a 50 L storage tank for supernatant. The solution in reactor R-01 is supplied with bacteria, nutrients and aeration using two different aerators until biochemical oxidation occurs. The sludge from reactor R-01 is recirculated to settling tank R-02; in its reverse operation the biological sludge settles and the supernatant overflows. (author)

  5. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort was explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.
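
    The entropy stage of Rice decompression is compact enough to sketch. Below is a hedged illustration of plain Rice decoding (unary quotient plus k-bit remainder); real mission formats such as the CCSDS standard add block headers and per-block option selection that are omitted here.

        def rice_decode(bits, k, n_samples):
            """Decode n_samples integers from an iterable of 0/1 bits (unary q, then k-bit r)."""
            it = iter(bits)
            out = []
            for _ in range(n_samples):
                q = 0
                while next(it) == 0:        # unary quotient: zeros terminated by a 1
                    q += 1
                r = 0
                for _ in range(k):          # k-bit binary remainder, MSB first
                    r = (r << 1) | next(it)
                out.append((q << k) | r)
            return out

        # value 11 with k=2: quotient 2 ("001"), remainder 3 ("11") -> bits 0,0,1,1,1
        print(rice_decode([0, 0, 1, 1, 1], k=2, n_samples=1))      # [11]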

  6. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    Science.gov (United States)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
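
    The baseline operation being accelerated, frequency-domain matched filtering (standard pulse compression), is shown below in NumPy. This mirrors the mathematics only; the paper's CUDA/cuFFT implementation details and the adaptive variants are not reproduced.

        import numpy as np

        fs, T, B = 1e6, 100e-6, 0.5e6                 # sample rate, pulse width, sweep bandwidth
        t = np.arange(0, T, 1 / fs)
        chirp = np.exp(1j * np.pi * (B / T) * t**2)   # linear-FM transmit pulse

        rx = np.zeros(1024, complex)
        rx[200:200 + len(t)] = chirp                  # noiseless echo at a delay of 200 samples

        n = len(rx) + len(chirp) - 1                  # pad to avoid circular wrap-around
        Y = np.fft.fft(rx, n) * np.conj(np.fft.fft(chirp, n))   # matched filter in frequency
        y = np.fft.ifft(Y)
        print("compressed peak at sample:", int(np.abs(y).argmax()))   # 200, the echo delay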

  7. Process Improvement to Enhance Quality in a Large Volume Labor and Birth Unit.

    Science.gov (United States)

    Bell, Ashley M; Bohannon, Jessica; Porthouse, Lisa; Thompson, Heather; Vago, Tony

    The goal of the perinatal team at Mercy Hospital St. Louis is to provide a quality patient experience during labor and birth. After the move to a new labor and birth unit in 2013, the team recognized many of the routines and practices needed to be modified based on different demands. The Lean process was used to plan and implement required changes. This technique was chosen because it is based on feedback from clinicians, teamwork, strategizing, and immediate evaluation and implementation of common sense solutions. Through rapid improvement events, presence of leaders in the work environment, and daily huddles, team member engagement and communication were enhanced. The process allowed for team members to offer ideas, test these ideas, and evaluate results, all within a rapid time frame. For 9 months, frontline clinicians met monthly for a weeklong rapid improvement event to create better experiences for childbearing women and those who provide their care, using Lean concepts. At the end of each week, an implementation plan and metrics were developed to help ensure sustainment. The issues that were the focus of these process improvements included on-time initiation of scheduled cases such as induction of labor and cesarean birth, timely and efficient assessment and triage disposition, postanesthesia care and immediate newborn care completed within approximately 2 hours, transfer from the labor unit to the mother baby unit, and emergency transfers to the main operating room and intensive care unit. On-time case initiation for labor induction and cesarean birth improved, length of stay in obstetric triage decreased, postanesthesia recovery care was reorganized to be completed within the expected 2-hour standard time frame, and emergency transfers to the main hospital operating room and intensive care units were standardized and enhanced for efficiency and safety. Participants were pleased with the process improvements and quality outcomes. Working together as a team ...

  8. Grey water treatment by a continuous process of an electrocoagulation unit and a submerged membrane bioreactor system

    KAUST Repository

    Bani-Melhem, Khalid; Smith, Edward

    2012-01-01

    This paper presents the performance of an integrated process consisting of an electrocoagulation (EC) unit and a submerged membrane bioreactor (SMBR) technology for grey water treatment. For comparison purposes, another SMBR process without ...

  9. A Patient Flow Analysis: Identification of Process Inefficiencies and Workflow Metrics at an Ambulatory Endoscopy Unit

    Directory of Open Access Journals (Sweden)

    Rowena Almeida

    2016-01-01

    Background. The increasing demand for endoscopic procedures coincides with the paradigm shift in health care delivery that emphasizes efficient use of existing resources. However, there is limited literature on the range of endoscopy unit efficiencies. Methods. A time and motion analysis of patient flow through the Hotel-Dieu Hospital (Kingston, Ontario) endoscopy unit was followed by qualitative interviews. Procedures were directly observed in three segments: individual endoscopy room use, preprocedure/recovery room, and overall endoscopy unit utilization. Results. Data were collected for 137 procedures in the endoscopy room, 139 procedures in the preprocedure room, and 143 procedures for overall room utilization. The mean duration spent in the endoscopy room was 31.47 min for an esophagogastroduodenoscopy, 52.93 min for a colonoscopy, 30.47 min for a flexible sigmoidoscopy, and 66.88 min for a double procedure. The procedure itself accounted for 8.11 min, 34.24 min, 9.02 min, and 39.13 min for the above procedures, respectively. The focused interviews identified the scheduling template as a major area of operational inefficiency. Conclusions. Despite reasonable procedure times for all except colonoscopies, the endoscopy room durations exceed the allocated times, reflecting the impact of non-procedure-related factors and the need for a revised scheduling template. Endoscopy units have unique operational characteristics and identification of process inefficiencies can lead to targeted quality improvement initiatives.

  10. A Patient Flow Analysis: Identification of Process Inefficiencies and Workflow Metrics at an Ambulatory Endoscopy Unit.

    Science.gov (United States)

    Almeida, Rowena; Paterson, William G; Craig, Nancy; Hookey, Lawrence

    2016-01-01

    Background. The increasing demand for endoscopic procedures coincides with the paradigm shift in health care delivery that emphasizes efficient use of existing resources. However, there is limited literature on the range of endoscopy unit efficiencies. Methods. A time and motion analysis of patient flow through the Hotel-Dieu Hospital (Kingston, Ontario) endoscopy unit was followed by qualitative interviews. Procedures were directly observed in three segments: individual endoscopy room use, preprocedure/recovery room, and overall endoscopy unit utilization. Results. Data were collected for 137 procedures in the endoscopy room, 139 procedures in the preprocedure room, and 143 procedures for overall room utilization. The mean duration spent in the endoscopy room was 31.47 min for an esophagogastroduodenoscopy, 52.93 min for a colonoscopy, 30.47 min for a flexible sigmoidoscopy, and 66.88 min for a double procedure. The procedure itself accounted for 8.11 min, 34.24 min, 9.02 min, and 39.13 min for the above procedures, respectively. The focused interviews identified the scheduling template as a major area of operational inefficiency. Conclusions. Despite reasonable procedure times for all except colonoscopies, the endoscopy room durations exceed the allocated times, reflecting the impact of non-procedure-related factors and the need for a revised scheduling template. Endoscopy units have unique operational characteristics and identification of process inefficiencies can lead to targeted quality improvement initiatives.

  11. Performance Recognition for Sulphur Flotation Process Based on Froth Texture Unit Distribution

    Directory of Open Access Journals (Sweden)

    Mingfang He

    2013-01-01

    As an important indicator of flotation performance, froth texture is believed to be related to operational conditions in the sulphur flotation process. A novel fault-detection method based on froth texture unit distribution (TUD) is proposed to recognize the fault condition of sulphur flotation in real time. The froth texture unit number is calculated based on the texture spectrum, and the probability density function (PDF) of the froth texture unit number is defined as the texture unit distribution, which can describe the actual textural features more accurately than the grey level dependence matrix approach. As the type of the froth TUD is unknown, a nonparametric kernel estimation method based on a fixed kernel basis is proposed, which overcomes the difficulty that TUDs obtained under various conditions cannot be compared when a traditional varying kernel basis is used. By transforming the nonparametric description into dynamic kernel weight vectors, a principal component analysis (PCA) model is established to reduce the dimensionality of the vectors. A threshold criterion determined by the T² and Q statistics of the PCA model is then proposed to realize the performance recognition. The industrial application results show that accurate performance recognition of froth flotation can be achieved by using the proposed method.
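
    The texture unit number itself comes from He and Wang's texture spectrum: each pixel's eight neighbours are compared with the centre and coded 0/1/2, giving a base-3 number in [0, 3^8), and the TUD is the normalised histogram of these numbers. A slow but direct sketch (illustrative, not the paper's implementation):

        import numpy as np

        def texture_unit_distribution(img):
            img = np.asarray(img, float)
            h, w = img.shape
            offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                    (1, 1), (1, 0), (1, -1), (0, -1)]     # 8 neighbours, fixed order
            counts = np.zeros(3**8)
            for i in range(1, h - 1):
                for j in range(1, w - 1):
                    n = 0
                    for k, (di, dj) in enumerate(offs):
                        v = img[i + di, j + dj]
                        e = 0 if v < img[i, j] else (1 if v == img[i, j] else 2)
                        n += e * 3**k
                    counts[n] += 1
            return counts / counts.sum()                  # empirical TUD

        img = np.random.default_rng(2).integers(0, 256, (32, 32))
        print("non-empty texture units:", int((texture_unit_distribution(img) > 0).sum()))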

  12. 40 CFR Appendix Xiii to Part 266 - Mercury Bearing Wastes That May Be Processed in Exempt Mercury Recovery Units

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 26 2010-07-01 2010-07-01 false Mercury Bearing Wastes That May Be Processed in Exempt Mercury Recovery Units XIII Appendix XIII to Part 266 Protection of Environment... XIII to Part 266—Mercury Bearing Wastes That May Be Processed in Exempt Mercury Recovery Units These...

  13. Energy audit and conservation opportunities for pyroprocessing unit of a typical dry process cement plant

    International Nuclear Information System (INIS)

    Kabir, G.; Abubakar, A.I.; El-Nafaty, U.A.

    2010-01-01

    The cement production process is highly energy- and cost-intensive. The cement plant requires 8784 h per year of total operating hours to produce 640,809 tonnes of clinker. To achieve an effective and efficient energy management scheme, a thermal energy audit analysis was performed on the pyroprocessing unit of the cement plant. Fuel combustion generates the bulk of the thermal energy for the process, amounting to 95.48% (4164.02 kJ/kg clinker) of the total thermal energy input. The thermal efficiency of the unit stands at 41%, below the 50-54% achieved in modern plants. The exhaust gas and kiln shell heat losses are significant, amounting to 27.9% and 11.97% of the total heat input, respectively. To enhance the energy performance of the unit, heat-loss conservation systems were considered: a waste heat recovery steam generator (WHRSG) and a secondary kiln shell were studied. Power and thermal energy savings of 42.88 MWh/year and 5.30 MW, respectively, can be achieved. The financial benefits of the conservation methods are substantial, and an environmental benefit of a 14.10% reduction in greenhouse gas (GHG) emissions could be achieved.

  14. Energy audit and conservation opportunities for pyroprocessing unit of a typical dry process cement plant

    Energy Technology Data Exchange (ETDEWEB)

    Kabir, G.; Abubakar, A.I.; El-Nafaty, U.A. [Chemical Engineering Programme, Abubakar Tafawa Balewa University, P. M. B. 0248, Bauchi (Nigeria)

    2010-03-15

    The cement production process is highly energy- and cost-intensive. The cement plant requires 8784 h per year of total operating hours to produce 640,809 tonnes of clinker. To achieve an effective and efficient energy management scheme, a thermal energy audit analysis was performed on the pyroprocessing unit of the cement plant. Fuel combustion generates the bulk of the thermal energy for the process, amounting to 95.48% (4164.02 kJ/kg clinker) of the total thermal energy input. The thermal efficiency of the unit stands at 41%, below the 50-54% achieved in modern plants. The exhaust gas and kiln shell heat losses are significant, amounting to 27.9% and 11.97% of the total heat input, respectively. To enhance the energy performance of the unit, heat-loss conservation systems were considered: a waste heat recovery steam generator (WHRSG) and a secondary kiln shell were studied. Power and thermal energy savings of 42.88 MWh/year and 5.30 MW, respectively, can be achieved. The financial benefits of the conservation methods are substantial, and an environmental benefit of a 14.10% reduction in greenhouse gas (GHG) emissions could be achieved. (author)
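
    The audit's headline figures are easy to cross-check from the abstract alone. The arithmetic below simply re-derives the implied total heat input and the absolute size of each loss term from the percentages quoted above (a consistency check, not additional data from the paper):

        fuel_heat = 4164.02                  # kJ/kg clinker, stated to be 95.48% of input
        total_in = fuel_heat / 0.9548        # implied total heat input per kg clinker

        exhaust = 0.279 * total_in           # exhaust gas losses (27.9%)
        shell = 0.1197 * total_in            # kiln shell losses (11.97%)
        useful = 0.41 * total_in             # useful heat at 41% thermal efficiency

        print(f"total heat input ~ {total_in:.0f} kJ/kg clinker")
        print(f"exhaust ~ {exhaust:.0f} kJ/kg, shell ~ {shell:.0f} kJ/kg, useful ~ {useful:.0f} kJ/kg")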

  15. Exploring the decision-making process in the delivery of physiotherapy in a stroke unit.

    Science.gov (United States)

    McGlinchey, Mark P; Davenport, Sally

    2015-01-01

    The aim of this study was to explore the decision-making process in the delivery of physiotherapy in a stroke unit. A focused ethnographic approach involving semi-structured interviews and observations of clinical practice was used. A purposive sample of seven neurophysiotherapists and four patients participated in semi-structured interviews. From this group, three neurophysiotherapists and four patients were involved in observation of practice. Data from interviews and observations were analysed to generate themes. Three themes were identified: planning the ideal physiotherapy delivery, the reality of physiotherapy delivery and involvement in the decision-making process. Physiotherapists used a variety of clinical reasoning strategies and considered many factors influencing their decision-making in the planning and delivery of physiotherapy post-stroke. These factors included the therapist's clinical experience, the patient's presentation and response to therapy, prioritisation, organisational constraints and compliance with organisational practice. All physiotherapists highlighted the importance of involving patients in planning and delivering their physiotherapy. However, varying levels of patient involvement were observed in this process. The study has generated insight into the reality of decision-making in the planning and delivery of physiotherapy post-stroke. Further research involving other stroke units is required to gain a greater understanding of this aspect of physiotherapy. Implications for Rehabilitation: Physiotherapists need to consider multiple patient, therapist and organisational factors when planning and delivering physiotherapy in a stroke unit. Physiotherapists should continually reflect upon how they provide physiotherapy, with respect to the duration, frequency and time of day sessions are delivered, in order to guide current and future physiotherapy delivery. As patients may demonstrate varying levels of participation in deciding and ...

  16. FINAL INTERIM REPORT VERIFICATION SURVEY ACTIVITIES IN FINAL STATUS SURVEY UNITS 7, 8, 9, 10, 11, 13 and 14 AT THE SEPARATIONS PROCESS RESEARCH UNIT, NISKAYUNA, NEW YORK

    International Nuclear Information System (INIS)

    Jadick, M.G.

    2010-01-01

    The Separations Process Research Unit (SPRU) facilities were constructed in the late 1940s to research the chemical separation of plutonium and uranium. SPRU operated between February 1950 and October 1953. The research activities ceased following the successful development of the reduction/oxidation and plutonium/uranium extraction processes that were subsequently used by the Hanford and the Savannah River sites.

  17. Usability of computerized nursing process from the ICNP® in intensive care units

    Directory of Open Access Journals (Sweden)

    Daniela Couto Carvalho Barra

    2015-04-01

    OBJECTIVE: To analyze the usability of the Computerized Nursing Process (CNP) based on the ICNP® 1.0 in Intensive Care Units, in accordance with the criteria established by the standards of the International Organization for Standardization and the Brazilian Association of Technical Standards for systems. METHOD: This is a before-and-after semi-experimental quantitative study, with a sample of 34 participants (nurses, professors and systems programmers), carried out in three Intensive Care Units. RESULTS: The evaluated criteria (use, content and interface) showed that the CNP meets usability criteria, as it integrates a logical data structure, clinical assessment, and nursing diagnoses and interventions. CONCLUSION: The CNP is a source of information and knowledge that provides nurses with new ways of learning in intensive care, as it offers complete, comprehensive, and detailed content, supported by current and relevant data and scientific research information for nursing practice.

  18. Process and unit for gasification of combustible material. Verfahren und Aggregat zur Vergasung brennbaren Gutes

    Energy Technology Data Exchange (ETDEWEB)

    Linneborn, J

    1987-05-21

    The invention refers to a process for the gasification of solid combustible material in a moving bed, and to a unit in which this process can be carried out. The material to be gasified means small-sized material such as ground fossil coal and all organic substances such as wood, straw, and the husks and shells of fruit, to which sewage sludge can be added. According to the invention, the new process can be carried out in a closed duct moved by vibration or shaking, in which the material, or the ash produced, moves from one end to the other under suitable vibration and comes into contact with round, largely friction-resistant heat sources. This achieves rapid gasification of the material (at about 1000 °C) by convection and radiation.

  19. Security central processing unit applications in the protection of nuclear facilities

    International Nuclear Information System (INIS)

    Goetzke, R.E.

    1987-01-01

    New or upgraded electronic security systems protecting nuclear facilities or complexes will be heavily computer-dependent. Proper planning for new systems and the employment of new state-of-the-art 32-bit processors for the processing of subsystem reports are key elements of effective security systems. The processing of subsystem reports represents only a small segment of system overhead. In selecting a security system to meet the current and future needs of nuclear security applications, the central processing unit (CPU) applied in the system architecture is the critical element in system performance. New 32-bit technology eliminates the need for program overlays while providing system programmers with well-documented program tools to develop effective systems to operate in all phases of nuclear security applications

  20. Future evolution of the Fast TracKer (FTK) processing unit

    CERN Document Server

    Gentsos, C; The ATLAS collaboration; Giannetti, P; Magalotti, D; Nikolaidis, S

    2014-01-01

    The Fast Tracker (FTK) processor [1] for the ATLAS experiment has a computing core made of 128 Processing Units that reconstruct tracks in the silicon detector in a ~100 μs deep pipeline. The track parameter resolution provided by FTK enables the HLT trigger to efficiently identify and reconstruct significant samples of fermionic Higgs decays. Data processing speed is achieved with custom VLSI pattern recognition, linearized track fitting executed inside modern FPGAs, pipelining, and parallel processing. One large FPGA executes full-resolution track fitting inside low-resolution candidate tracks found by a set of 16 custom ASIC devices, called Associative Memories (AM chips) [2]. The FTK dual structure, based on the cooperation of dedicated VLSI AM and programmable FPGAs, is maintained to achieve further technology performance, miniaturization and integration of the current state-of-the-art prototypes. This allows new applications to be fully exploited within and outside the High Energy Physics field. We plan t...

  1. Advanced spent fuel processing technologies for the United States GNEP programme

    International Nuclear Information System (INIS)

    Laidler, J.J.

    2007-01-01

    Spent fuel processing technologies for future advanced nuclear fuel cycles are being developed under the scope of the Global Nuclear Energy Partnership (GNEP). This effort seeks to make available for future deployment a fissile material recycling system that does not involve the separation of pure plutonium from spent fuel. In the nuclear system proposed by the United States under the GNEP initiative, light water reactor spent fuel is treated by means of a solvent extraction process that involves a group extraction of transuranic elements. The recovered transuranics are recycled as fuel material for advanced burner reactors, which can lead in the long term to fast reactors with conversion ratios greater than unity, helping to assure the sustainability of nuclear power systems. Both aqueous and pyrochemical methods are being considered for fast reactor spent fuel processing in the current US development programme. (author)

  2. Feasibility study and concepts for use of compact process units to treat Hanford tank wastes

    Energy Technology Data Exchange (ETDEWEB)

    Collins, E.D.; Bond, W.D.; Campbell, D.O.; Harrington, F.E.; Malkemus, D.W.; Peishel, F.L.; Yarbro, O.O.

    1994-06-01

    A team of experienced radiochemical design engineers and chemists was assembled at Oak Ridge National Laboratory (ORNL) at the request of the Underground Storage Tank Integrated Demonstration (USTID) Program to evaluate the feasibility and perform a conceptual study of options for the use of compact processing units (CPUs), located at the Hanford, Washington, waste tank sites, to accomplish extensive pretreatment of the tank wastes using the clean-option concept. The scope of the ORNL study included an evaluation of the constraints of the various chemical process operations that may be employed and the constraints of necessary supporting operations. The latter include equipment maintenance and replacement, process control methods, product and by-product storage, and waste disposal.

  3. Feasibility study and concepts for use of compact process units to treat Hanford tank wastes

    International Nuclear Information System (INIS)

    Collins, E.D.; Bond, W.D.; Campbell, D.O.; Harrington, F.E.; Malkemus, D.W.; Peishel, F.L.; Yarbro, O.O.

    1994-06-01

    A team of experienced radiochemical design engineers and chemists was assembled at Oak Ridge National Laboratory (ORNL) at the request of the Underground Storage Tank Integrated Demonstration (USTID) Program to evaluate the feasibility and perform a conceptual study of options for the use of compact processing units (CPUs), located at the Hanford, Washington, waste tank sites, to accomplish extensive pretreatment of the tank wastes using the clean-option concept. The scope of the ORNL study included an evaluation of the constraints of the various chemical process operations that may be employed and the constraints of necessary supporting operations. The latter include equipment maintenance and replacement, process control methods, product and by-product storage, and waste disposal

  4. [Variations in the diagnostic confirmation process between breast cancer mass screening units].

    Science.gov (United States)

    Natal, Carmen; Fernández-Somoano, Ana; Torá-Rocamora, Isabel; Tardón, Adonina; Castells, Xavier

    2016-01-01

    To analyse variations in the diagnostic confirmation process between screening units, variations in the outcome of each episode, and the relationship between the use of the different diagnostic confirmation tests and the lesion detection rate. Observational study of the variability in the standardised use of diagnostic tests and in lesion detection in 34 breast cancer mass screening units participating in early-detection programmes in three Spanish regions from 2002-2011. The diagnostic test variation ratio in percentiles 25-75 ranged from 1.68 (further appointments) to 3.39 (fine-needle aspiration). The variation ratios in detection rates of benign lesions, ductal carcinoma in situ and invasive cancer were 2.79, 1.99 and 1.36, respectively. A positive relationship between rates of testing and detection rates was found for fine-needle aspiration-benign lesions (R²: 0.53), fine-needle aspiration-invasive carcinoma (R²: 0.28), core biopsy-benign lesions (R²: 0.64), core biopsy-ductal carcinoma in situ (R²: 0.61) and core biopsy-invasive carcinoma (R²: 0.48). Variation in the use of invasive tests between the breast cancer screening units participating in early-detection programmes was found to be significantly higher than variation in lesion detection. Units which conducted more fine-needle aspiration tests had higher benign lesion detection rates, while units that conducted more core biopsies detected more benign lesions and cancer.

  5. Process engineering design of pathological waste incinerator with an integrated combustion gases treatment unit.

    Science.gov (United States)

    Shaaban, A F

    2007-06-25

    Management of medical wastes generated at different hospitals in Egypt is considered a highly serious problem. The sources and quantities of regulated medical wastes have been thoroughly surveyed and estimated (75 t/day from governmental hospitals in Cairo). From the collected data it was concluded that the most appropriate incinerator capacity is 150 kg/h. The objective of this work is to develop the process engineering design of an integrated unit which is technically and economically capable of incinerating medical wastes and treating the combustion gases. Such a unit consists of (i) an incineration unit (INC-1) having an operating temperature of 1100 degrees C at 300% excess air, (ii) a combustion-gases cooler (HE-1) generating 35 m³/h of hot water at 75 degrees C, (iii) a dust filter (DF-1) capable of reducing particulates to 10-20 mg/Nm³, (iv) gas scrubbers (GS-1,2) for removing acidic gases, (v) a multi-tube fixed-bed catalytic converter (CC-1) to maintain the level of dioxins and furans below 0.1 ng/Nm³, and (vi) an induced-draft suction fan system (SF-1) that can handle 6500 Nm³/h at 250 degrees C. The residence time of combustion gases in the ignition, mixing and combustion chambers was found to be 2 s, 0.25 s and 0.75 s, respectively. This ensures both thorough homogenization of the combustion gases and complete destruction of harmful constituents of the refuse. The adequate engineering design of the individual process equipment results in competitive fixed and operating investments. The incineration unit has proved its high operating efficiency through measurements of the levels of different pollutants vented to the open atmosphere, which were found to conform with the maximum allowable limits specified in law number 4/1994 issued by the Egyptian Environmental Affairs Agency (EEAA) and the European standards.

  6. HTS current lead units prepared by the TFA-MOD processed YBCO coated conductors

    International Nuclear Information System (INIS)

    Shiohara, K.; Sakai, S.; Ishii, Y.; Yamada, Y.; Tachikawa, K.; Koizumi, T.; Aoki, Y.; Hasegawa, T.; Tamura, H.; Mito, T.

    2010-01-01

    Two superconducting current lead units have been prepared using ten coated conductors of the Tri-Fluoro-Acetate Metal Organic Deposition (TFA-MOD) processed Y1Ba2Cu3O7-δ (YBCO) type, with a critical current (I_c) of about 170 A at 77 K in self-field. The coated conductors are 5 mm in width, 190 mm in length and about 120 μm in overall thickness. The 1.5 μm thick superconducting YBCO layer was synthesized through the TFA-MOD process on a Hastelloy C-276 substrate tape with two buffer oxide layers of Gd2Zr2O7 and CeO2. Five YBCO coated conductors are attached on a 1 mm thick Glass Fiber Reinforced Plastics (GFRP) board and soldered to Cu caps at both ends. We prepared two 500 A-class current lead units. A DC transport current of 800 A was stably applied at 77 K without any voltage generation in the coated conductors. The voltage between the two Cu caps increased linearly with the applied current, and was about 350 μV at 500 A in both current lead units. According to the estimated values of the heat leakage from 77 K to 4.2 K, the heat leakage for one current lead unit was 46.5 mW. We successfully reduced the heat leakage thanks to the improved transport current performance (I_c), a thinner Ag layer in the YBCO coated conductor, and the use of the GFRP board for reinforcement instead of the stainless steel board used in the previous study. A DC transport current of 1400 A was stably applied when the two current lead units were joined in parallel. The sum of the heat leakages from 77 K to 4.2 K for the combined current lead units was 93 mW. In comparison with conventional gas-cooled Cu current leads, the heat leakage of this current lead is about one order of magnitude smaller.

  7. Development of diagnostic process for abnormal conditions of Ulchin units 1 and 2

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hyun Soo; Kwak, Jeong Keun; Yun, Jung Hyun; Kim, Jong Hyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2012-10-15

    Diagnosis of abnormal conditions during operation is one of the most difficult tasks for nuclear power plant operators. Operators may have trouble handling abnormal conditions for various reasons, such as 1) the large number of alarms (around 2,000 alarms in each of Ulchin units 1 and 2) and the occurrence of multiple simultaneous alarms, 2) the occurrence of the same alarms in different abnormal conditions, and 3) the large number of Abnormal Operating Procedures (AOPs). For these reasons, the first diagnosis of abnormal conditions relies largely on the operator's experience and pattern recognition, and this difficulty is amplified for inexperienced operators. This paper suggests an approach to developing an optimal diagnostic process for the appropriate selection of AOPs using the Elimination by Aspects (EBA) method. The EBA method uses a heuristic followed by decision makers during a process of sequential choice, which constitutes a good balance between the cost of a decision and its quality. At each stage of the decision, the individual eliminates all options not having a given expected attribute, until only one option remains. This approach is applied to the steam generator level control system abnormal procedure for Ulchin units 1 and 2. The result indicates that the EBA method is applicable to the development of an optimal process for diagnosing abnormal conditions.
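
    A toy rendering of the elimination-by-aspects pass helps fix the idea: observed symptoms are checked in order, and every candidate AOP whose signature lacks the current symptom is eliminated until one remains. The alarm names and AOP signatures below are invented for illustration, not taken from the Ulchin procedures.

        def eba_diagnose(candidates, observed_aspects):
            """candidates: {aop_name: set of expected aspects}; returns surviving AOPs."""
            remaining = dict(candidates)
            for aspect in observed_aspects:
                filtered = {n: a for n, a in remaining.items() if aspect in a}
                if filtered:                  # never eliminate down to an empty set
                    remaining = filtered
                if len(remaining) == 1:
                    break
            return sorted(remaining)

        aops = {
            "SG level control": {"SG level low", "feedwater flow deviation"},
            "Condenser vacuum": {"vacuum low", "turbine trip"},
            "Feedwater pump":   {"feedwater flow deviation", "pump trip"},
        }
        print(eba_diagnose(aops, ["feedwater flow deviation", "SG level low"]))
        # -> ['SG level control']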

  8. Pre-design safety analyses of cesium ion-exchange compact processing unit

    International Nuclear Information System (INIS)

    Richmond, W.G.; Ballinger, M.Y.

    1993-11-01

    This report describes an innovative radioactive waste pretreatment concept. This cost-effective, highly flexible processing approach is based on the use of Compact Processing Units (CPUs) to treat highly radioactive tank wastes in proximity to the tanks themselves. The units will be designed to treat tank wastes at rates from 8 to 20 liters per minute and have the capacity to remove cesium, and ultimately other radionuclides, from 4,000 cubic meters of waste per year. This new concept is being integrated into Hanford's tank farm management plans by a team of PNL and Westinghouse Hanford Company scientists and engineers. The first CPU to be designed and deployed will be used to remove cesium from Hanford double-shell tank (DST) supernatant waste. Separating Cs from the waste would be a major step toward lowering the radioactivity in the bulk of the waste, allowing it to be disposed of as a low-level solid waste form (e.g., grout), while concentrating the more highly radioactive material for processing as high-level solid waste.

  9. Conversion of a deasphalting unit for use in the process of supercritical solvent recovery

    Directory of Open Access Journals (Sweden)

    Waintraub S.

    2000-01-01

    In order to reduce energy consumption and increase deasphalted oil yield, an old PETROBRAS deasphalting unit was converted for use in the process of supercritical solvent recovery. In-plant and pilot tests were performed to determine the ideal solvent-to-oil ratio. The optimum conditions for separation of the supercritical solvent from the solvent-plus-oil liquid mixture were determined by experimental tests in PVT cells. These tests also allowed measurement of the dew and bubble points, determination of the retrograde region, observation of supercritical fluid compressibility and, as a result, construction of a phase equilibrium diagram.

  10. Pseudo-random number generators for Monte Carlo simulations on ATI Graphics Processing Units

    Science.gov (United States)

    Demchik, Vadim

    2011-03-01

    Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPUs). The performance results of the implemented generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX and Mersenne Twister (MT19937)) on CPU and GPU are discussed. The speedup factors obtained reach hundreds of times in comparison with the CPU. The RANLUX generator is found to be the most appropriate for use on GPUs in Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is presented.
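
    For reference, the XOR128 generator named above is Marsaglia's xorshift scheme, popular on GPUs because it needs only a few registers and bitwise operations per thread. A minimal serial Python version (the GPU implementation would run one such state per thread) might look as follows.

        # Marsaglia's 32-bit xor128 generator; serial reference version.
        class Xor128:
            def __init__(self, seed=123456789):
                self.x, self.y, self.z, self.w = seed, 362436069, 521288629, 88675123

            def next_u32(self):
                # Left shift is masked to emulate 32-bit integer arithmetic.
                t = (self.x ^ (self.x << 11)) & 0xFFFFFFFF
                self.x, self.y, self.z = self.y, self.z, self.w
                self.w = ((self.w ^ (self.w >> 19)) ^ (t ^ (t >> 8))) & 0xFFFFFFFF
                return self.w

            def next_float(self):
                # Uniform deviate in [0, 1)
                return self.next_u32() / 4294967296.0

        rng = Xor128(seed=2011)
        print([round(rng.next_float(), 4) for _ in range(4)])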

  11. General purpose graphics-processing-unit implementation of cosmological domain wall network evolution.

    Science.gov (United States)

    Correia, J R C C C; Martins, C J A P

    2017-10-01

    Topological defects unavoidably form at symmetry breaking phase transitions in the early universe. To probe the parameter space of theoretical models and set tighter experimental constraints (exploiting the recent advances in astrophysical observations), one requires more and more demanding simulations, and therefore more hardware resources and computation time. Improving the speed and efficiency of existing codes is essential. Here we present a general purpose graphics-processing-unit implementation of the canonical Press-Ryden-Spergel algorithm for the evolution of cosmological domain wall networks. This is ported to the Open Computing Language standard, and as a consequence significant speedups are achieved both in two-dimensional (2D) and 3D simulations.
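
    As an aside, the Press-Ryden-Spergel algorithm evolves a scalar field under a damped wave equation whose damping is modified so that the comoving wall thickness stays fixed. A minimal 2D single-step sketch in Python/NumPy is given below, assuming a radiation-era expansion (a ∝ η, so the damping reduces to α/η with α = 3) and a quartic double-well potential; grid and step sizes are illustrative, not those of the paper.

        import numpy as np

        # One illustrative Press-Ryden-Spergel step on a periodic 2D grid.
        # phi: field, dphi: conformal-time derivative, eta: conformal time.
        def prs_step(phi, dphi, eta, dx=1.0, deta=0.25, alpha=3.0, V0=1.0):
            # Discrete Laplacian with periodic boundaries.
            lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                   np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / dx**2
            dV = 4.0 * V0 * phi * (phi**2 - 1.0)   # quartic double-well potential
            damping = alpha / eta                  # radiation era: d ln a/d ln eta = 1
            ddphi = lap - dV - damping * dphi
            dphi_new = dphi + deta * ddphi         # semi-implicit Euler update
            phi_new = phi + deta * dphi_new
            return phi_new, dphi_new

        rng = np.random.default_rng(0)
        phi = rng.uniform(-1.0, 1.0, (128, 128))   # random initial field
        dphi = np.zeros_like(phi)
        eta = 1.0
        for _ in range(200):
            phi, dphi = prs_step(phi, dphi, eta)
            eta += 0.25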

  12. Implementation of RLS-based Adaptive Filterson nVIDIA GeForce Graphics Processing Unit

    OpenAIRE

    Hirano, Akihiro; Nakayama, Kenji

    2011-01-01

    This paper presents an efficient implementation of RLS-based adaptive filters with a large number of taps on an nVIDIA GeForce graphics processing unit (GPU) using the CUDA software development environment. Modifying the order and combination of calculations reduces the number of accesses to slow off-chip memory. Assigning tasks to multiple threads also takes memory access order into account. For a 4096-tap case, the GPU program is almost three times faster than a CPU program.
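
    For orientation, the exponentially weighted RLS recursion that such implementations parallelize is compact: a gain vector is computed from the inverse correlation matrix estimate, the coefficients are corrected by the a priori error, and the matrix estimate is updated. A textbook serial sketch in Python/NumPy follows; the forgetting factor and initialization constant are illustrative.

        import numpy as np

        class RLSFilter:
            """Textbook exponentially weighted RLS (illustrative serial version)."""
            def __init__(self, taps, lam=0.999, delta=100.0):
                self.w = np.zeros(taps)          # filter coefficients
                self.P = np.eye(taps) * delta    # inverse correlation matrix estimate
                self.lam = lam                   # forgetting factor

            def update(self, u, d):
                # u: input tap vector, d: desired sample
                Pu = self.P @ u
                k = Pu / (self.lam + u @ Pu)     # gain vector
                e = d - self.w @ u               # a priori error
                self.w += k * e                  # coefficient correction
                self.P = (self.P - np.outer(k, Pu)) / self.lam
                return e

        # Identify a short FIR system from noisy observations.
        rng = np.random.default_rng(1)
        h = np.array([0.5, -0.3, 0.2, 0.1])
        f = RLSFilter(taps=4)
        x = rng.standard_normal(2000)
        for n in range(4, len(x)):
            u = x[n-4:n][::-1]
            f.update(u, h @ u + 0.01 * rng.standard_normal())
        print(np.round(f.w, 3))                  # close to h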

  13. General purpose graphics-processing-unit implementation of cosmological domain wall network evolution

    Science.gov (United States)

    Correia, J. R. C. C. C.; Martins, C. J. A. P.

    2017-10-01

    Topological defects unavoidably form at symmetry breaking phase transitions in the early universe. To probe the parameter space of theoretical models and set tighter experimental constraints (exploiting the recent advances in astrophysical observations), one requires more and more demanding simulations, and therefore more hardware resources and computation time. Improving the speed and efficiency of existing codes is essential. Here we present a general purpose graphics-processing-unit implementation of the canonical Press-Ryden-Spergel algorithm for the evolution of cosmological domain wall networks. This is ported to the Open Computing Language standard, and as a consequence significant speedups are achieved both in two-dimensional (2D) and 3D simulations.

  14. An Application of Graphics Processing Units to Geosimulation of Collective Crowd Behaviour

    Directory of Open Access Journals (Sweden)

    Cjoskāns Jānis

    2017-12-01

    The goal of the paper is to assess ways of improving the computational performance and efficiency of collective crowd behaviour simulation by using parallel computing methods implemented on a graphics processing unit (GPU). To perform an experimental evaluation of the benefits of parallel computing, a new GPU-based simulator prototype is proposed and its runtime performance is analysed. Based on practical examples of pedestrian dynamics geosimulation, the obtained performance measurements are compared to several other available multiagent simulation tools to determine the efficiency of the proposed simulator, as well as to provide generic guidelines for efficiency improvements in the parallel simulation of collective crowd behaviour.

  15. Solution of relativistic quantum optics problems using clusters of graphical processing units

    Energy Technology Data Exchange (ETDEWEB)

    Gordon, D.F., E-mail: daviel.gordon@nrl.navy.mil; Hafizi, B.; Helle, M.H.

    2014-06-15

    Numerical solution of relativistic quantum optics problems requires high performance computing due to the rapid oscillations in a relativistic wavefunction. Clusters of graphical processing units are used to accelerate the computation of a time dependent relativistic wavefunction in an arbitrary external potential. The stationary states in a Coulomb potential and uniform magnetic field are determined analytically and numerically, so that they can be used as initial conditions in fully time dependent calculations. Relativistic energy levels in extreme magnetic fields are recovered as a means of validation. The relativistic ionization rate is computed for an ion illuminated by a laser field near the usual barrier suppression threshold, and the ionizing wavefunction is displayed.

  16. Tailoring Macroscale Response of Mechanical and Heat Transfer Systems by Topology Optimization of Microstructural Details

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov

    2015-01-01

    -contrast material parameters is proposed to alleviate the high computational cost associated with solving the discrete systems arising during the topology optimization process. Problems within important engineering areas, heat transfer and linear elasticity, are considered for exemplifying the approach...

  17. Effect of unit size on thermal fatigue behavior of hot work steel repaired by a biomimetic laser remelting process

    Science.gov (United States)

    Cong, Dalong; Li, Zhongsheng; He, Qingbing; Chen, Dajun; Chen, Hanbin; Yang, Jiuzhou; Zhang, Peng; Zhou, Hong

    2018-01-01

    AISI H13 hot work steel with fatigue cracks was repaired by a biomimetic laser remelting (BLR) process in the form of lattice units with different sizes. Detailed microstructural studies and microhardness tests were carried out on the units. Studies revealed a mixed microstructure containing martensite, retained austenite and carbide particles with ultrafine grain size in units. BLR samples with defect-free units exhibited superior thermal fatigue resistance due to microstructure strengthening, and mechanisms of crack tip blunting and blocking. In addition, effects of unit size on thermal fatigue resistance of BLR samples were discussed.

  18. Theoretical and experimental study of a small unit for solar desalination using flashing process

    International Nuclear Information System (INIS)

    Nafey, A. Safwat; Mohamad, M.A.; El-Helaby, S.O.; Sharaf, M.A.

    2007-01-01

    A small unit for water desalination by solar energy and a flash evaporation process is investigated. The system is built at the Faculty of Petroleum and Mining Engineering at Suez, Egypt. The system consists of a solar water heater (flat plate solar collector) working as a brine heater and a vertical flash unit attached to a condenser/preheater unit. In this work, the system is investigated theoretically and experimentally under different real environmental conditions throughout the Julian days of one year (2005). A mathematical model is developed to calculate the productivity of the system under different operating conditions. The BIRD model for the calculation of solar insolation is used to predict solar insolation instantaneously. Also, the solar insolation is measured by a highly sensitive digital pyranometer. A comparison between the theoretical and experimental results is performed. The average accumulative productivity of the system in November, December and January ranged between 1.04 and 1.45 kg/day/m². The average summer productivity ranged between 5.44 and 7 kg/day/m² in July and August and 4.2 and 5 kg/day/m² in June

  19. Theoretical and experimental study of a small unit for solar desalination using flashing process

    Energy Technology Data Exchange (ETDEWEB)

    Nafey, A. Safwat; El-Helaby, S.O.; Sharaf, M.A. [Department of Engineering Science, Faculty of Petroleum and Mining Engineering, Suez Canal University, Suez 43522 (Egypt); Mohamad, M.A. [Solar Energy Department, National Research Center, Cairo (Egypt)

    2007-02-15

    A small unit for water desalination by solar energy and a flash evaporation process is investigated. The system is built at the Faculty of Petroleum and Mining Engineering at Suez, Egypt. The system consists of a solar water heater (flat plate solar collector) working as a brine heater and a vertical flash unit attached to a condenser/preheater unit. In this work, the system is investigated theoretically and experimentally under different real environmental conditions throughout the Julian days of one year (2005). A mathematical model is developed to calculate the productivity of the system under different operating conditions. The BIRD model for the calculation of solar insolation is used to predict solar insolation instantaneously. Also, the solar insolation is measured by a highly sensitive digital pyranometer. A comparison between the theoretical and experimental results is performed. The average accumulative productivity of the system in November, December and January ranged between 1.04 and 1.45 kg/day/m². The average summer productivity ranged between 5.44 and 7 kg/day/m² in July and August and 4.2 and 5 kg/day/m² in June. (author)

  20. Computerized nursing process in the Intensive Care Unit: ergonomics and usability

    Directory of Open Access Journals (Sweden)

    Sônia Regina Wagner de Almeida

    Abstract OBJECTIVE: To analyze the ergonomics and usability criteria of the Computerized Nursing Process based on the International Classification for Nursing Practice in the Intensive Care Unit, according to International Organization for Standardization (ISO) standards. METHOD: A quantitative, quasi-experimental, before-and-after study with a sample of 16 participants, performed in an Intensive Care Unit. Data collection was performed through the application of five simulated clinical cases and an evaluation instrument. Data analysis was performed by descriptive and inferential statistics. RESULTS: The organization, content and technical criteria were considered "excellent", and the interface criteria were considered "very good", obtaining means of 4.54, 4.60, 4.64 and 4.39, respectively. The analyzed standards obtained means above 4.0, being considered "very good" by the participants. CONCLUSION: The Computerized Nursing Process met ergonomic and usability standards according to the standards set by ISO. This technology supports nurses' clinical decision-making by providing complete and up-to-date content for Nursing practice in the Intensive Care Unit.

  1. Real time 3D structural and Doppler OCT imaging on graphics processing units

    Science.gov (United States)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2013-03-01

    In this report, the application of graphics processing unit (GPU) programming to real-time 3D Fourier domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of flows in capillary vessels, is presented. Generally, the processing time of FdOCT data on the computer's main processor (CPU) constitutes the main limitation for real-time imaging. Employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem. Taking advantage of them for massively parallel data processing allows for real-time imaging in FdOCT. The presented software for structural and Doppler OCT performs the whole processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. The 3D imaging, in the same mode, of volume data built of 220 × 100 A-scans is performed at a rate of about 8 frames per second. In this paper the software architecture, the organization of the threads and the optimizations applied are shown. For illustration, screenshots recorded during real-time imaging of a phantom (a homogeneous water solution of Intralipid in a glass capillary) and the human eye in vivo are presented.

  2. Ultra-processed food consumption in children from a Basic Health Unit.

    Science.gov (United States)

    Sparrenberger, Karen; Friedrich, Roberta Roggia; Schiffner, Mariana Dihl; Schuch, Ilaine; Wagner, Mário Bernardes

    2015-01-01

    To evaluate the contribution of ultra-processed food (UPF) to the dietary intake of children treated at a Basic Health Unit and the associated factors. Cross-sectional study carried out with a convenience sample of 204 children, aged 2-10 years old, in Southern Brazil. Children's food intake was assessed using a 24-h recall questionnaire. Food items were classified as minimally processed, processed for culinary use, and ultra-processed. A semi-structured questionnaire was applied to collect socio-demographic and anthropometric variables. Overweight in children was classified using a Z score >2 for children younger than 5 and Z score >+1 for those aged between 5 and 10 years, using the body mass index for age. Overweight frequency was 34% (95% CI: 28-41%). Mean energy consumption was 1672.3 kcal/day, with 47% (95% CI: 45-49%) coming from ultra-processed food. In the multiple linear regression model, ultra-processed food consumption was associated with maternal education (r=0.23; p=0.001) and child age (r=0.40; p<0.001).

  3. Commercial processing and disposal alternatives for very low levels of radioactive waste in the United States

    International Nuclear Information System (INIS)

    Benda, G.A.

    2005-01-01

    The United States has several options available for the commercial processing and disposal of very low levels of radioactive waste. These range from NRC-licensed low-level radioactive waste sites for Class A, B and C waste to conditional disposal or free release of material with very low concentrations of radioactivity. Throughout the development of disposal alternatives, the US has promoted a graded disposal approach based on the risk posed by the material hazards. The US still promotes this approach and is renewing the emphasis on risk-based disposal for very low levels of radioactive waste. One US state, Tennessee, has had a long and successful history of disposing of very low levels of radioactive material. This paper describes that approach and the continuing commercial options for safe, long-term processing and disposal. (author)

  4. Model of a programmable quantum processing unit based on a quantum transistor effect

    Science.gov (United States)

    Ablayev, Farid; Andrianov, Sergey; Fetisov, Danila; Moiseev, Sergey; Terentyev, Alexandr; Urmanchev, Andrey; Vasiliev, Alexander

    2018-02-01

    In this paper we propose a model of a programmable quantum processing device realizable with existing nano-photonic technologies. It can be viewed as a basis for new high performance hardware architectures. Protocols for the physical implementation of the device, based on controlled photon transfer and atomic transitions, are presented. These protocols are designed for executing basic single-qubit and multi-qubit gates forming a universal set. We analyze the possible operation of this quantum computer scheme. We then formalize the physical architecture by a mathematical model of a Quantum Processing Unit (QPU), which we use as a basis for the Quantum Programming Framework. This framework makes it possible to perform universal quantum computations in a multitasking environment.

  5. Processing techniques for data from the Kuosheng Unit 1 shakedown safety-relief-valve tests

    International Nuclear Information System (INIS)

    McCauley, E.W.; Rompel, S.L.; Weaver, H.J.; Altenbach, T.J.

    1982-08-01

    This report describes techniques developed at the Lawrence Livermore National Laboratory, Livermore, CA for processing original data from the Taiwan Power Company's Kuosheng MKIII Unit 1 Safety Relief Valve Shakedown Tests conducted in April/May 1981. The computer codes used, TPSORT, TPPLOT, and TPPSD, form a special evaluation system for treating the data from its original packed binary form to ordered, calibrated ASCII transducer files and then to production of time-history plots, numerical output files, and spectral analyses. The data processing techniques described provide a convenient means of independently examining and analyzing a unique database for steam condensation phenomena in the MKIII wetwell. The techniques developed for handling these data are applicable to the treatment of similar, but perhaps differently structured, experimental data sets

  6. A data processing unit (DPU) for a satellite-borne charge composition experiment

    International Nuclear Information System (INIS)

    Koga, R.; Blake, J.B.; Chenette, D.L.; Fennell, J.F.; Imamoto, S.S.; Katz, N.; King, C.G.

    1985-01-01

    A data processing unit (DPU) for use with a charge composition experiment to be flown aboard the VIKING auroral research satellite is described. The function of this experiment is to measure the mass, charge state, energy, and pitch-angle distribution of ions in the earth's high-altitude magnetosphere in the energy range from 50 keV/q to 300 keV/q. In order to be compatible with the spacecraft telemetry limitations, raw sensor data are processed in the DPU using on-board composition analysis and scalar compression. The design of this DPU is such that it can be readily adapted to a variety of space composition experiments. Special attention was given to the effect of the on-orbit radiation environment, since a microprocessor and a relatively large number of random access memories (RAMs) comprise a considerable portion of the DPU

  7. Discrete-Event Execution Alternatives on General Purpose Graphical Processing Units

    International Nuclear Information System (INIS)

    Perumalla, Kalyan S.

    2006-01-01

    Graphics cards, traditionally designed as accelerators for computer graphics, have evolved to support more general-purpose computation. General Purpose Graphical Processing Units (GPGPUs) are now being used as highly efficient, cost-effective platforms for executing certain simulation applications. While most of these applications belong to the category of time-stepped simulations, little is known about the applicability of GPGPUs to discrete event simulation (DES). Here, we identify some of the issues and challenges that the GPGPU stream-based interface raises for DES, and present some possible approaches to moving DES to GPGPUs. Initial performance results on simulation of a diffusion process show that DES-style execution on GPGPU runs faster than DES on CPU and also significantly faster than time-stepped simulations on either CPU or GPGPU.
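
    To make the contrast concrete, the diffusion benchmark mentioned above maps naturally onto a time-stepped GPGPU kernel: every cell is updated once per step by the same stencil, one thread per cell. A minimal NumPy sketch of such a step is shown below; the grid size and diffusion coefficient are illustrative assumptions, not details taken from the report.

        import numpy as np

        # Time-stepped 2D diffusion update of the kind the record benchmarks;
        # on a GPGPU each cell update maps naturally onto one stream thread.
        def diffuse_step(u, alpha=0.2):
            # Explicit FTCS stencil with periodic boundaries
            # (stable for alpha <= 0.25 in 2D).
            lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                   np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
            return u + alpha * lap

        u = np.zeros((256, 256))
        u[128, 128] = 1.0            # point source
        for _ in range(100):         # the time-stepped outer loop
            u = diffuse_step(u)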

  8. Analysis of possible designs of processing units with radial plasma flows

    Science.gov (United States)

    Kolesnik, V. V.; Zaitsev, S. V.; Vashilin, V. S.; Limarenko, M. V.; Prochorenkov, D. S.

    2018-03-01

    Analysis of plasma-ion methods for obtaining thin-film coatings shows that their development follows the path of increasing use of sputter deposition processes, which allow one to obtain multicomponent coatings with a varying percentage of particular components. One method that allows the formation of multicomponent coatings with virtually any composition of elementary components is coating deposition using quasi-magnetron sputtering systems [1]. This requires the creation of an axial magnetic field of a defined configuration with a flux density in the range of 0.01-0.1 T [2]. In order to compare and analyze various configurations of processing-unit magnetic systems, it is necessary to obtain the following dependencies: the dependence of the magnetic core section on the input power to the inductors, the distribution of magnetic induction within the equatorial plane in the corresponding sections, and the distribution of the magnetic induction value in the area of the cathode target location.

  9. Impact of memory bottleneck on the performance of graphics processing units

    Science.gov (United States)

    Son, Dong Oh; Choi, Hong Jun; Kim, Jong Myon; Kim, Cheol Hong

    2015-12-01

    Recent graphics processing units (GPUs) can process general-purpose applications as well as graphics applications with the help of various user-friendly application programming interfaces (APIs) supported by GPU vendors. Unfortunately, utilizing the hardware resources in the GPU efficiently is a challenging problem, since the GPU architecture is totally different from the traditional CPU architecture. To solve this problem, many studies have focused on techniques for improving system performance using GPUs. In this work, we analyze GPU performance while varying GPU parameters such as the number of cores and clock frequency. According to our simulations, GPU performance can be improved by 125.8% and 16.2% on average as the number of cores and the clock frequency increase, respectively. However, performance saturates when memory bottleneck problems occur due to large volumes of data requests to the memory. The performance of GPUs can be improved further as the memory bottleneck is reduced by changing GPU parameters dynamically.

  10. United States Department of Energy Integrated Manufacturing & Processing Predoctoral Fellowships. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Petrochenkov, M.

    2003-03-31

    The objective of the program was threefold: to create a pool of PhDs trained in the integrated approach to manufacturing and processing, to promote academic interest in the field, and to attract talented professionals to this challenging area of engineering. It was anticipated that the program would result in the creation of new manufacturing methods that would contribute to improved energy efficiency, to better utilization of scarce resources, and to less degradation of the environment. Emphasis in the competition was on integrated systems of manufacturing and the integration of product design with manufacturing processes. Research addressed such related areas as aspects of unit operations, tooling and equipment, intelligent sensors, and manufacturing systems as they related to product design.

  11. Silicon-Carbide Power MOSFET Performance in High Efficiency Boost Power Processing Unit for Extreme Environments

    Science.gov (United States)

    Ikpe, Stanley A.; Lauenstein, Jean-Marie; Carr, Gregory A.; Hunter, Don; Ludwig, Lawrence L.; Wood, William; Del Castillo, Linda Y.; Fitzpatrick, Fred; Chen, Yuan

    2016-01-01

    Silicon-Carbide device technology has generated much interest in recent years. With superior thermal performance, power ratings and potential switching frequencies over its Silicon counterpart, Silicon-Carbide offers a greater possibility for high-power switching applications in extreme environments. In particular, the maturing process technology of Silicon-Carbide Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs) has produced a plethora of commercially available, power-dense, low on-state-resistance devices capable of switching at high frequencies. A novel hard-switched power processing unit (PPU) is implemented utilizing Silicon-Carbide power devices. Accelerated life data are captured and assessed in conjunction with a damage accumulation model of gate oxide and drain-source junction lifetime to evaluate potential system performance in high-temperature environments.

  12. All-optical quantum computing with a hybrid solid-state processing unit

    International Nuclear Information System (INIS)

    Pei Pei; Zhang Fengyang; Li Chong; Song Heshan

    2011-01-01

    We develop an architecture of a hybrid quantum solid-state processing unit for universal quantum computing. The architecture allows distant and nonidentical solid-state qubits in distinct physical systems to interact and work collaboratively. All the quantum computing procedures are controlled by optical methods using classical fields and cavity QED. Our methods have the prominent advantage of insensitivity to dissipation processes, benefiting from the virtual excitation of subsystems. Moreover, quantum nondemolition measurements and state transfer for the solid-state qubits are proposed. The architecture opens promising perspectives for implementing scalable quantum computation, in the broader sense that different solid-state systems can merge and be integrated into one quantum processor.

  13. Lightweight concrete masonry units based on processed granulate of corn cob as aggregate

    Directory of Open Access Journals (Sweden)

    Faustino, J.

    2015-06-01

    Research was performed to assess the potential application of processed granulate of corn cob (PCC) as an alternative lightweight aggregate in the manufacturing process of lightweight concrete masonry units (CMU). CMU-PCC were prepared in a factory using a typical lightweight concrete mixture for non-structural purposes. Additionally, lightweight concrete masonry units based on a currently applied lightweight aggregate, expanded clay (CMU-EC), were also manufactured. The experimental work yielded a set of results suggesting that the proposed building product presents interesting material properties in the masonry wall context. This unit is therefore promising for both interior and exterior applications. This conclusion is even more relevant considering that corn cob is an agricultural waste product.

  14. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    Energy Technology Data Exchange (ETDEWEB)

    Arbanas, G.; Dunn, M.E.; Wiarda, D., E-mail: arbanasg@ornl.gov, E-mail: dunnme@ornl.gov, E-mail: wiardada@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2011-07-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The ²³⁵U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)
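
    The speedup pattern reported here is easy to reproduce in miniature: replacing an explicit triple-nested loop with a call that dispatches to an optimized BLAS routinely yields orders of magnitude. The sketch below, in Python/NumPy with deliberately scaled-down matrix sizes, is only an illustration of that substitution, not the SAMMY code itself.

        import numpy as np
        import time

        # Illustrative comparison: naive triple-nested loop vs a BLAS-backed
        # product; sizes are scaled far down from the 16,000 x 20,000 case.
        A = np.random.rand(150, 200)
        B = np.random.rand(200, 120)

        def naive_matmul(A, B):
            m, k = A.shape
            _, n = B.shape
            C = np.zeros((m, n))
            for i in range(m):              # the triple-nested loop the record
                for j in range(n):          # replaced with a library call
                    s = 0.0
                    for p in range(k):
                        s += A[i, p] * B[p, j]
                    C[i, j] = s
            return C

        t0 = time.perf_counter(); C1 = naive_matmul(A, B)
        t1 = time.perf_counter(); C2 = A @ B    # dispatches to the linked BLAS
        t2 = time.perf_counter()
        print(f"naive: {t1-t0:.2f}s, BLAS: {t2-t1:.5f}s, "
              f"match: {np.allclose(C1, C2)}")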

  15. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    International Nuclear Information System (INIS)

    Arbanas, G.; Dunn, M.E.; Wiarda, D.

    2011-01-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The ²³⁵U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)

  16. The AMchip04 and the Processing Unit Prototype for the FastTracker

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L; Volpi, G

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As experiment complexity, accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in ATLAS events up to Phase II instantaneous luminosities (5×10³⁴ cm⁻² s⁻¹) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low-consumption units for the far future, essential for increasing the efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generally referred to as the "combinatorial challenge", is solved by the Associative Memory (AM) technology, which exploits parallelism to the maximum extent.

  17. Simulation of operational processes in hospital emergency units as lean healthcare tool

    Directory of Open Access Journals (Sweden)

    Andreia Macedo Gomes

    2017-07-01

    Recently, the Lean philosophy has been gaining importance due to a competitive environment, which increases the need to reduce costs. Lean practices and tools have been applied to manufacturing, services, supply chains and startups, and the next frontier is healthcare. Most lean techniques can be easily adapted to health organizations. Therefore, this paper summarizes Lean practices and tools that are already being applied in health organizations. Among the numerous lean techniques and tools used, this research highlights Simulation. In order to understand the use of Simulation as a Lean Healthcare tool, this research analyzes, through the simulation technique, the operational dynamics of the service process of a fictitious hospital emergency unit. Initially a systematic review of the literature on the practices and tools of Lean Healthcare was carried out in order to identify the main techniques practiced. The review highlighted Simulation as the sixth most cited tool in the literature. Subsequently, a simulation of a service model of an emergency unit was performed using the Arena software. As a main result, the attendants in the model presented a degree of idleness and are thus able to handle greater demand. Finally, it was verified that the emergency room is the process with the longest service time and the greatest overload.
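
    The kind of model described can be sketched in a few lines of a discrete-event framework. The example below uses the SimPy library rather than Arena (an assumption made purely for illustration); the arrival and service rates and the number of attendants are hypothetical.

        import random
        import simpy

        # Minimal discrete-event sketch of an emergency-unit queue;
        # rates and staffing below are illustrative, not from the study.
        RNG = random.Random(42)

        def patient(env, name, attendants, service_mean):
            arrival = env.now
            with attendants.request() as req:
                yield req                      # wait for a free attendant
                wait = env.now - arrival
                yield env.timeout(RNG.expovariate(1.0 / service_mean))
                print(f"{name}: waited {wait:.1f} min")

        def arrivals(env, attendants, interarrival_mean=10.0, service_mean=15.0):
            i = 0
            while True:
                yield env.timeout(RNG.expovariate(1.0 / interarrival_mean))
                i += 1
                env.process(patient(env, f"patient-{i}", attendants, service_mean))

        env = simpy.Environment()
        attendants = simpy.Resource(env, capacity=2)   # two attendants on duty
        env.process(arrivals(env, attendants))
        env.run(until=480)                             # one 8-hour shift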

  18. Proposals for the Negotiation Process on the United Nations Global Compact for Migration

    Directory of Open Access Journals (Sweden)

    Victor Genina

    2017-09-01

    • builds a cooperation-oriented, peer-review mechanism to review migration policies. The paper has been conceived as an input for those who will take part in the negotiation of the global compact for migration, as well as those who will closely follow those negotiations. Thus, the paper assumes a level of knowledge of how international migration has been addressed within the United Nations during the last several years and of the complexities of these negotiation processes. The author took part in different UN negotiation processes on international migration from 2004 to 2013. The paper is primarily based on this experience.[4] [1] G.A. Res. 71/1, ¶ 21 (Sept. 19, 2016). [2] G.A. Res. 68/4 (Oct. 3, 2013). [3] A mixed flow, according to UNHCR (n.d.), is a migratory flow comprising both asylum seekers and migrants: "Migrants and refugees increasingly make use of the same routes and means of transport to get to an overseas destination." [4] During that period, the author was a staff member of the Mexican delegation to the United Nations, both in Geneva and New York.

  19. Research on the pyrolysis of hardwood in an entrained bed process development unit

    Energy Technology Data Exchange (ETDEWEB)

    Kovac, R.J.; Gorton, C.W.; Knight, J.A.; Newman, C.J.; O' Neil, D.J. (Georgia Inst. of Tech., Atlanta, GA (United States). Research Inst.)

    1991-08-01

    An atmospheric flash pyrolysis process, the Georgia Tech Entrained Flow Pyrolysis Process, for the production of liquid biofuels from oak hardwood is described. The development of the process began with bench-scale studies and a conceptual design in the 1978--1981 timeframe. Its development and successful demonstration through research on the pyrolysis of hardwood in an entrained bed process development unit (PDU), in the period of 1982--1989, is presented. Oil yields (dry basis) up to 60% were achieved in the 1.5 ton-per-day PDU, far exceeding the initial target/forecast of 40% oil yields. Experimental data, based on over forty runs under steady-state conditions, supported by material and energy balances of near-100% closures, have been used to establish a process model which indicates that oil yields well in excess of 60% (dry basis) can be achieved in a commercial reactor. Experimental results demonstrate a gross product thermal efficiency of 94% and a net product thermal efficiency of 72% or more; the highest values yet achieved with a large-scale biomass liquefaction process. A conceptual manufacturing process and an economic analysis for liquid biofuel production at 60% oil yield from a 200-TPD commercial plant is reported. The plant appears to be profitable at contemporary fuel costs of $21/barrel oil-equivalent. Total capital investment is estimated at under $2.5 million. A rate-of-return on investment of 39.4% and a pay-out period of 2.1 years has been estimated. The manufacturing cost of the combustible pyrolysis oil is $2.70 per gigajoule. 20 figs., 87 tabs.

  20. Closed-cycle process of coke-cooling water in delayed coking unit

    International Nuclear Information System (INIS)

    Zhou, P.; Bai, Z.S.; Yang, Q.; Ma, J.; Wang, H.L.

    2008-01-01

    Synthesized processes are commonly used to treat coke-cooling wastewater. These include cold coke-cut water, dilution of coke-cooling water, chemical deodorization of oily water, high-speed centrifugal separation, de-oiling and deodorization by coke adsorption, and open natural cooling. However, because of water and volatile evaporation losses, open treatments are not suitable for processing high-sulphur heavy oil. This paper proposes a closed-cycle process to solve the wastewater treatment problem. The process is based on the characteristics of coke-cooling water, such as rapid parametric variation, oil-water-coke emulsification and steam-water mixing. The paper discusses the material characteristics and the general idea of the study. The closed-cycle separation and utilization process for coke-cooling water is presented along with a process flow diagram. Several applications are presented, including a picture of hydrocyclones for pollutant separation and a picture of equipment for pollutant separation and component regeneration. The results show that good performance has been achieved since the coke-cooling water system was put into production in 2004. The recycling ratios for the components of the coke-cooling water were 100 per cent, and air quality in the operating area met the national operating-site environment and health standards. Calibration results for the demonstration unit are presented. Since the devices went into operation, production has been normal and stable. The operation is simple, flexible, adjustable and reliable, with significant economic efficiency and environmental benefits. 10 refs., 2 tabs., 3 figs

  1. The fundamental units, processes and patterns of evolution, and the Tree of Life conundrum

    Directory of Open Access Journals (Sweden)

    Wolf Yuri I

    2009-09-01

    Abstract Background: The elucidation of the dominant role of horizontal gene transfer (HGT) in the evolution of prokaryotes led to a severe crisis of the Tree of Life (TOL) concept and to intense debates on this subject. Concept: Prompted by the crisis of the TOL, we attempt to define the primary units and the fundamental patterns and processes of evolution. We posit that replication of the genetic material is the singular fundamental biological process and that replication with an error rate below a certain threshold both enables and necessitates evolution by drift and selection. Starting from this proposition, we outline a general concept of evolution that consists of three major precepts. 1. The primary agency of evolution consists of Fundamental Units of Evolution (FUEs), that is, units of genetic material that possess a substantial degree of evolutionary independence. The FUEs include both bona fide selfish elements such as viruses, viroids, transposons, and plasmids, which encode some of the information required for their own replication, and regular genes that possess quasi-independence owing to their distinct selective value, which provides for their transfer between ensembles of FUEs (genomes) and preferential replication along with the rest of the recipient genome. 2. The history of replication of a genetic element without recombination is isomorphously represented by a directed tree graph (an arborescence, in the language of graph theory). Recombination within a FUE is common between very closely related sequences, where homologous recombination is feasible, but becomes negligible over longer evolutionary distances. In contrast, shuffling of FUEs occurs at all evolutionary distances. Thus, a tree is a natural representation of the evolution of an individual FUE on the macro scale, but not of an ensemble of FUEs such as a genome. 3. The history of life is properly represented by the "forest" of evolutionary trees for individual FUEs (Forest of Life, or FOL).

  2. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    Science.gov (United States)

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging, where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
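
    At its core, the speckle variance computation is an interframe variance taken pixel-by-pixel over a small gate of registered B-scans. A minimal NumPy sketch, assuming the frames are already reconstructed and registered, follows; the gate of 4 frames mirrors the n = 4 display mentioned above.

        import numpy as np

        # Minimal speckle-variance computation over N repeated B-scans
        # (assumed registered); frames: shape (N, rows, cols) of intensities.
        def speckle_variance(frames):
            # Per-pixel variance across the gate of N frames; higher variance
            # flags moving scatterers (microvasculature) against static tissue.
            return frames.var(axis=0)

        # Example with a gate of 4 frames, as in the record's display (n = 4).
        frames = np.random.rand(4, 512, 512)
        sv_image = speckle_variance(frames)
        print(sv_image.shape)   # (512, 512)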

  3. Integration of Satellite, Global Reanalysis Data and Macroscale Hydrological Model for Drought Assessment in Sub-Tropical Region of India

    Science.gov (United States)

    Pandey, V.; Srivastava, P. K.

    2018-04-01

    Change in the soil moisture regime is highly relevant for agricultural drought, which can best be analyzed in terms of the Soil Moisture Deficit Index (SMDI). The macroscale hydrological model Variable Infiltration Capacity (VIC) was used to simulate hydro-climatological fluxes, including evapotranspiration, runoff, and soil moisture storage, to reconstruct the severity and duration of agricultural drought over a semi-arid region of India. The VIC simulations were performed at 0.25° spatial resolution using a set of meteorological forcing data, soil parameters, and Land Use Land Cover (LULC) and vegetation parameters. For calibration and validation, soil parameters were obtained from the National Bureau of Soil Survey and Land Use Planning (NBSSLUP) and ESA's Climate Change Initiative soil moisture (CCI-SM) data were used, respectively. The analysis demonstrates that most of the study region (> 80 %), especially the central-northern part, is affected by drought conditions. The years 2001, 2002, 2007, 2008 and 2009 were highly affected by agricultural drought. Owing to high average and maximum temperatures, we observed higher soil evaporation, which significantly reduces surface soil moisture; high topographic variation, coarse soil texture and moderate-to-high wind speeds further dry the upper soil layer, producing strongly negative SMDI values over the study area. These findings can also serve as an archetype in terms of daily time-step data, simulation period lengths, the various hydro-climatological outputs and the use of a suitable hydrological model.
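
    The record does not spell out its exact SMDI variant, but a widely used formulation (Narasimhan and Srinivasan, 2005) computes a weekly percent soil moisture deficit against long-term statistics and accumulates it recursively. The Python/NumPy sketch below shows that common form, offered as a hedged illustration only.

        import numpy as np

        # Weekly percent deficit/excess of soil moisture vs long-term
        # statistics (assumes sw_min < sw_median < sw_max elementwise).
        def soil_moisture_deficit(sw, sw_median, sw_min, sw_max):
            return np.where(sw <= sw_median,
                            (sw - sw_median) / (sw_median - sw_min) * 100.0,
                            (sw - sw_median) / (sw_max - sw_median) * 100.0)

        # Recursive accumulation into the index; roughly bounded in [-4, 4],
        # with persistently negative values indicating agricultural drought.
        def smdi_series(sd_weekly):
            smdi = np.zeros_like(sd_weekly)
            smdi[0] = sd_weekly[0] / 50.0
            for j in range(1, len(sd_weekly)):
                smdi[j] = 0.5 * smdi[j - 1] + sd_weekly[j] / 50.0
            return smdi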

  4. INTEGRATION OF SATELLITE, GLOBAL REANALYSIS DATA AND MACROSCALE HYDROLOGICAL MODEL FOR DROUGHT ASSESSMENT IN SUB-TROPICAL REGION OF INDIA

    Directory of Open Access Journals (Sweden)

    V. Pandey

    2018-04-01

    Change in the soil moisture regime is highly relevant for agricultural drought, which can best be analyzed in terms of the Soil Moisture Deficit Index (SMDI). The macroscale hydrological model Variable Infiltration Capacity (VIC) was used to simulate hydro-climatological fluxes, including evapotranspiration, runoff, and soil moisture storage, to reconstruct the severity and duration of agricultural drought over a semi-arid region of India. The VIC simulations were performed at 0.25° spatial resolution using a set of meteorological forcing data, soil parameters, and Land Use Land Cover (LULC) and vegetation parameters. For calibration and validation, soil parameters were obtained from the National Bureau of Soil Survey and Land Use Planning (NBSSLUP) and ESA's Climate Change Initiative soil moisture (CCI-SM) data were used, respectively. The analysis demonstrates that most of the study region (> 80 %), especially the central-northern part, is affected by drought conditions. The years 2001, 2002, 2007, 2008 and 2009 were highly affected by agricultural drought. Owing to high average and maximum temperatures, we observed higher soil evaporation, which significantly reduces surface soil moisture; high topographic variation, coarse soil texture and moderate-to-high wind speeds further dry the upper soil layer, producing strongly negative SMDI values over the study area. These findings can also serve as an archetype in terms of daily time-step data, simulation period lengths, the various hydro-climatological outputs and the use of a suitable hydrological model.

  5. Modeling PM10 gravimetric data from the Qalabotjha low-smoke fuels macro-scale experiment in South Africa

    International Nuclear Information System (INIS)

    Engelbrecht, J.P.; Swanepoel, L.; Zunckel, M.; Chow, J.C.

    1998-01-01

    D-grade domestic coal is widely used for household cooking and heating by the poorer urban communities in South Africa. The smoke from the combustion of this coal has had a severe impact on the health of communities living in the rural townships and cities. To alleviate this escalating problem, the Department of Minerals and Energy of South Africa evaluated low-smoke fuels as an alternative source of energy. The technical and social implications of such fuels were investigated in the course of the Qalabotjha Low-Smoke Fuels Macro-Scale Experiment. Three low-smoke fuels (Chartech, African Fine Carbon (AFC) and Flame Africa) were tested in Qalabotjha over a 10 to 20 day period. This paper presents results from a PM10 TEOM continuous monitor at the Clinic site in Qalabotjha over the monitoring period. Both the fuel type and the wind were found to affect airborne particulate concentrations. An exponential model incorporating both of these variables is proposed; it allows all measured particulate concentrations to be recalculated to zero-wind values. From analysis of variance (ANOVA) calculations on the zero-wind concentrations, it is concluded that the combustion of low-smoke fuels made a significant improvement to the air quality in Qalabotjha over the period in which they were used.

  6. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    Science.gov (United States)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks, each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  7. Developing a Comprehensive Model of Intensive Care Unit Processes: Concept of Operations.

    Science.gov (United States)

    Romig, Mark; Tropello, Steven P; Dwyer, Cindy; Wyskiel, Rhonda M; Ravitz, Alan; Benson, John; Gropper, Michael A; Pronovost, Peter J; Sapirstein, Adam

    2015-04-23

    This study aimed to use a systems engineering approach to improve performance and stakeholder engagement in the intensive care unit in order to reduce several different patient harms. We developed a conceptual framework, or concept of operations (ConOps), to analyze different types of harm that included 4 steps: risk assessment, appropriate therapies, monitoring and feedback, and patient and family communication. This framework used a transdisciplinary approach to inventory the tasks and work flows required to eliminate 7 common types of harm experienced by patients in the intensive care unit. The inventory gathered both implicit and explicit information about how the system works or should work and converted the information into a detailed specification that clinicians could understand and use. Using the ConOps document, we created highly detailed work flow models to reduce harm and offer an example of its application to deep venous thrombosis. In the deep venous thrombosis model, we identified tasks that were synergistic across different types of harm. We will use a system-of-systems approach to integrate the variety of subsystems and coordinate processes across multiple types of harm to reduce the duplication of tasks. Through this process, we expect to improve efficiency and demonstrate synergistic interactions that ultimately can be applied across the spectrum of potential patient harms and patient locations. Engineering health care to be highly reliable will first require an understanding of the processes and work flows that comprise patient care. The ConOps strategy provided a framework for building complex systems to reduce patient harm.

  8. The AMchip04 and the processing unit prototype for the FastTracker

    International Nuclear Information System (INIS)

    Andreani, A; Alberti, F; Stabile, A; Annovi, A; Beretta, M; Volpi, G; Bogdan, M; Shochet, M; Tang, J; Tompkins, L; Citterio, M; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment's complexity, the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive event selection. We present the first prototype of a new Processing Unit (PU), the core of the FastTracker processor (FTK). FTK is a real-time tracking device for the ATLAS experiment's trigger upgrade. The computing power of the PU is such that a few hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV/c in ATLAS events up to Phase II instantaneous luminosities (3×10³⁴ cm⁻² s⁻¹) with an event input rate of 100 kHz and a latency below a hundred microseconds. The PU provides massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generally referred to as the "combinatorial challenge", is solved by the Associative Memory (AM) technology exploiting parallelism to the maximum extent; it compares the event to all pre-calculated "expectations" or "patterns" (pattern matching) simultaneously, looking for candidate tracks called "roads". This approach reduces the typical exponential complexity of CPU-based algorithms to linear behavior. Pattern recognition is completed by the time data are loaded into the AM devices. We report on the design of the first Processing Unit prototypes. The design had to address the most challenging aspects of this technology: a huge number of detector clusters ("hits") must be distributed at high rate with very large fan-out to all patterns (10 million patterns will be located on 128 chips placed on a single board), and a huge number of roads must be collected and sent back to the FTK post-pattern-recognition functions. A network of high-speed serial links is used to solve the data distribution problem.
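
    A software analogue of the AM pattern matching is easy to state: each stored pattern lists one coarse hit ("superstrip") per detector layer, and a road fires when every layer of a pattern saw its superstrip in the event. The Python sketch below is purely illustrative (real AM chips compare all patterns in parallel and typically also allow majority matches with a missing layer); the pattern and hit values are made up.

        # Illustrative software analogue of associative-memory pattern matching.
        def find_roads(patterns, event_hits):
            # patterns: list of tuples, one superstrip id per detector layer
            # event_hits: list of sets, superstrips observed per layer
            roads = []
            for pid, pattern in enumerate(patterns):
                if all(ss in event_hits[layer]
                       for layer, ss in enumerate(pattern)):
                    roads.append(pid)        # every layer matched: road fires
            return roads

        patterns = [(3, 7, 1, 9), (3, 2, 1, 9)]
        event_hits = [{3, 5}, {7}, {1}, {9, 0}]
        print(find_roads(patterns, event_hits))   # -> [0]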

  9. 43 CFR 429.37 - Does interest accrue on monies owed to the United States during my appeal process?

    Science.gov (United States)

    2010-10-01

    ... United States during my appeal process? 429.37 Section 429.37 Public Lands: Interior Regulations Relating... States during my appeal process? Except for any period in the appeal process during which a stay is then... decision to OHA, or during judicial review of final agency action. ...

  10. A global fingerprint of macro-scale changes in urban structure from 1999 to 2009

    International Nuclear Information System (INIS)

    Frolking, Steve; Milliman, Tom; Seto, Karen C; Friedl, Mark A

    2013-01-01

    Urban population now exceeds rural population globally, and 60–80% of global energy consumption by households, businesses, transportation, and industry occurs in urban areas. There is growing evidence that built-up infrastructure contributes to carbon emissions inertia, and that investments in infrastructure today have delayed climate cost in the future. Although the United Nations statistics include data on urban population by country and select urban agglomerations, there are no empirical data on built-up infrastructure for a large sample of cities. Here we present the first study to examine changes in the structure of the world’s largest cities from 1999 to 2009. Combining data from two space-borne sensors—backscatter power (PR) from NASA’s SeaWinds microwave scatterometer, and nighttime lights (NL) from NOAA’s defense meteorological satellite program/operational linescan system (DMSP/OLS)—we report large increases in built-up infrastructure stock worldwide and show that cities are expanding both outward and upward. Our results reveal previously undocumented recent and rapid changes in urban areas worldwide that reflect pronounced shifts in the form and structure of cities. Increases in built-up infrastructure are highest in East Asian cities, with Chinese cities rapidly expanding their material infrastructure stock in both height and extent. In contrast, Indian cities are primarily building out and not increasing in verticality. This new dataset will help characterize the structure and form of cities, and ultimately improve our understanding of how cities affect regional-to-global energy use and greenhouse gas emissions. (letter)

  11. Startup of Pumping Units in Process Water Supplies with Cooling Towers at Thermal and Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Berlin, V. V., E-mail: vberlin@rinet.ru; Murav’ev, O. A., E-mail: muraviov1954@mail.ru; Golubev, A. V., E-mail: electronik@inbox.ru [National Research University “Moscow State University of Civil Engineering,” (Russian Federation)

    2017-03-15

    Aspects of the startup of pumping units in the cooling and process water supply systems for thermal and nuclear power plants with cooling towers, the startup stages, and the limits imposed on the extreme parameters during transients are discussed.

  12. 77 FR 13635 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2012-03-07

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2012 Allowable Charges for Agricultural Workers' Meals and Travel Subsistence Reimbursement, Including Lodging AGENCY: Employment and Training...

  13. 77 FR 12882 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2012-03-02

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2012 Allowable Charges for Agricultural Workers' Meals and Travel Subsistence Reimbursement, Including Lodging AGENCY: Employment and Training...

  14. 78 FR 15741 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2013-03-12

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2013 Allowable Charges for Agricultural Workers' Meals and Travel Subsistence Reimbursement, Including Lodging AGENCY: Employment and Training...

  15. 76 FR 11286 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2011-03-01

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2011 Adverse Effect Wage Rates, Allowable Charges for Agricultural Workers' Meals, and Maximum Travel Subsistence Reimbursement AGENCY...

  16. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Bach, Matthias

    2014-07-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large-scale experiments like the Large Hadron Collider (LHC) at CERN and, in the future, the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD at high energies. Studies from first principles are possible via a discretization onto a Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth-limited. Thus, despite the complexity of LQCD applications, it led to the development of several specialized compute platforms and influenced the development of others. In recent years, however, General-Purpose computation on Graphics Processing Units (GPGPU) has come up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL-based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D̸ (Dslash) kernel for a single GPU, achieving 120 GFLOPS. D̸, the most compute-intensive kernel in LQCD simulations, is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD
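
    Because the abstract stresses that LQCD is by definition bandwidth-limited, the quoted 120 GFLOPS can be sanity-checked with a roofline-style estimate. The sketch below assumes the commonly cited Wilson Dslash cost of about 1320 flop per lattice site, a naive per-site memory traffic count in double precision with no cache reuse, and a nominal ~264 GB/s memory bandwidth for the HD 7970; all three figures are assumptions, not values taken from the thesis.

```python
# Roofline-style estimate of bandwidth-bound Dslash performance
# (assumed figures; double precision, no cache reuse).
flops_per_site = 1320.0

# Per site: read 8 neighbour spinors + write 1 (24 reals each), and read
# 8 SU(3) gauge links (18 reals each), at 8 bytes per double.
bytes_per_site = (9 * 24 + 8 * 18) * 8          # = 2880 bytes

intensity = flops_per_site / bytes_per_site     # ~0.46 flop/byte
bandwidth = 264e9                               # HD 7970 nominal, bytes/s
print(f"bandwidth-bound limit: {intensity * bandwidth / 1e9:.0f} GFLOPS")
# -> ~121 GFLOPS, consistent with the quoted 120 GFLOPS
```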

  17. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    International Nuclear Information System (INIS)

    Bach, Matthias

    2014-01-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large-scale experiments like the Large Hadron Collider (LHC) at CERN and, in the future, the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD at high energies. Studies from first principles are possible via a discretization onto a Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth-limited. Thus, despite the complexity of LQCD applications, it led to the development of several specialized compute platforms and influenced the development of others. In recent years, however, General-Purpose computation on Graphics Processing Units (GPGPU) has come up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL-based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D̸ (Dslash) kernel for a single GPU, achieving 120 GFLOPS. D̸, the most compute-intensive kernel in LQCD simulations, is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  18. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    Directory of Open Access Journals (Sweden)

    A. Endalamaw

    2017-09-01

    Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than in the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub

  19. Research Regarding the Anticorrosive Protection of Atmospheric and Vacuum Distillation Units that Process Crude Oil

    Directory of Open Access Journals (Sweden)

    M. Morosanu

    2011-12-01

    Due to their high boiling temperatures, organic acids are present in the hotter areas of metal equipment in atmospheric and vacuum distillation units and drive increased corrosion in furnace tubes, transfer lines, metal equipment within the distillation columns, etc. To protect the metal equipment of atmospheric and vacuum distillation units against acid corrosion, the authors investigated a solution that combines corrosion inhibitors with the selection of materials for equipment construction. For this purpose, we tested the inhibitor PET 1441, which contains dialkyl phosphate, and an inhibitor based on phosphate ester. In this case, a phosphorous complex forms on the metal surface that withstands high temperature and high fluid velocity. To form the passive layer and achieve 90% protection, a shock dose is inserted initially, and a dose of 20 ppm is then used to ensure further protection. The anticorrosion protection, namely the inhibition efficiency, is checked by testing samples made from different steels.

  20. Methods of radioactive waste processing and disposal in the United Kingdom

    International Nuclear Information System (INIS)

    Tolstykh, V.D.

    1983-01-01

    The results of investigations into radioactive waste processing and disposal in the United Kingdom are discussed. Methods for the solidification of metal and graphite radioactive wastes and radioactive slime from the Magnox reactors are described. Specifications of different installations used for radioactive waste disposal are given. Climatic and geological conditions in the United Kingdom are such that any deep waste storage will lie below the groundwater level. Dissolution and transport by groundwater will therefore inevitably result in radionuclide mobility. In this connection, an extended program of investigations into the three main aspects of the disposal problem, namely radionuclide release in storages, groundwater transport, and radionuclide migration, is being carried out. The program is divided into two parts. The first part deals with the retrieval of hydrological and geochemical data on geological formations and the development of the specialized investigation methods necessary to identify sites for final waste disposal. The second part comprises theoretical and laboratory investigations into the processes of radionuclide transport in the storage-geological formation system. It is concluded that vitrification based on borosilicate glass is the most advanced method of radioactive waste solidification

  1. Unit operations used to treat process and/or waste streams at nuclear power plants

    International Nuclear Information System (INIS)

    Godbee, H.W.; Kibbey, A.H.

    1980-01-01

    Estimates are given of the annual amounts of each generic type of LLW [i.e., Government and commercial (fuel cycle and non-fuel cycle)] that is generated at LWR plants. Many different chemical engineering unit operations are used to treat process and/or waste streams at LWR plants, including adsorption, evaporation, calcination, centrifugation, compaction, crystallization, drying, filtration, incineration, reverse osmosis, and solidification of waste residues. The treatment of these various streams and of the secondary wet solid wastes thus generated is described. The various treatment options for concentrates or wet solid wastes, and for dry wastes, are discussed. Among the dry waste treatment methods are compaction, baling, and incineration, as well as chopping, cutting, and shredding. Organic materials [liquids (e.g., oils or solvents) and/or solids] could be incinerated in most cases. Filter sludges, spent resins, and concentrated liquids (e.g., evaporator concentrates) are usually solidified in cement, urea-formaldehyde, or unsaturated polyester resins prior to burial. Incinerator ashes can also be incorporated in these binding agents. Asphalt has not yet been used. This paper presents a brief survey of operational experience at LWRs with various unit operations, including a short discussion of problems and some observations on recent trends

  2. [Work process and workers' health in a food and nutrition unit: prescribed versus actual work].

    Science.gov (United States)

    Colares, Luciléia Granhen Tavares; Freitas, Carlos Machado de

    2007-12-01

    This study focuses on the relationship between the work process in a food and nutrition unit and workers' health, in the words of the participants themselves. Direct observation, a semi-structured interview, and focus groups were used to collect the data. The reference was the dialogue between human ergonomics and work psychodynamics. The results showed that work organization in the study unit represents a routine activity, the requirements of which in terms of the work situation are based on criteria set by the institution. Variability in the activities is influenced mainly by the available equipment, instruments, and materials, thereby generating improvisation in meal production that produces both a physical and psychological cost for workers. Dissatisfaction during the performance of tasks results mainly from the supervisory style and relationship to immediate superiors. Workers themselves proposed changes in the work organization, based on greater dialogue and trust between supervisors and the workforce. Finally, the study identifies the need for an intervention that encourages workers' participation as agents of change.

  3. Optimization of the coherence function estimation for multi-core central processing unit

    Science.gov (United States)

    Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.

    2017-02-01

    The paper considers the use of parallel processing on a multi-core central processing unit to optimize the evaluation of the coherence function arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for evaluating the function for signals represented by digital samples. The algorithm is analyzed with respect to its software implementation and computational bottlenecks. Optimization measures are described, including algorithmic, architectural, and compiler optimization, and their results are assessed for multi-core processors from different manufacturers. The speedup of parallel over sequential execution was studied, and results are presented for the Intel Core i7-4720HQ and AMD FX-9590 processors. The results show the comparatively high efficiency of the optimization measures taken. In particular, acceleration indicators and average CPU utilization were significantly improved, showing a high degree of parallelism in the constructed calculating functions. The developed software underwent state registration and will be used as part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
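
    As a point of reference, the magnitude-squared coherence that the paper's software evaluates can be estimated with SciPy's Welch-based routine; the test signals, sampling rate, and segment length below are invented for illustration and are unrelated to the authors' implementation.

```python
# Estimate magnitude-squared coherence between two noisy sensors that
# share a 500 Hz component (synthetic data; parameters are illustrative).
import numpy as np
from scipy.signal import coherence

fs = 10_000.0                                   # sampling frequency, Hz
t = np.arange(0, 1.0, 1.0 / fs)
shared = np.sin(2 * np.pi * 500 * t)            # common vibration component
rng = np.random.default_rng(0)
x = shared + 0.5 * rng.standard_normal(t.size)  # sensor 1
y = shared + 0.5 * rng.standard_normal(t.size)  # sensor 2

f, Cxy = coherence(x, y, fs=fs, nperseg=1024)   # Welch-averaged estimate
print(f"peak coherence {Cxy.max():.2f} at {f[Cxy.argmax()]:.0f} Hz")
```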

  4. Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.

    Science.gov (United States)

    Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray

    2017-07-11

    Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as a widely used approach for modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers on the ever-improving graphics processing units (GPUs) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved given that single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over the standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited to our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and an integrated grid-stencil setup specifically tailored to the banded matrices in PBE-specific linear systems.
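
    The Jacobi-preconditioned CG variant that performed best in this study is algorithmically compact. The dense NumPy sketch below is a generic textbook version for illustration, with plain arrays standing in for the cuSPARSE/cuBLAS/CUSP data structures used on the GPU; the tiny test system is invented.

```python
# Jacobi-preconditioned conjugate gradient for a symmetric positive
# definite system A x = b (textbook algorithm, dense NumPy stand-in).
import numpy as np

def pcg_jacobi(A, b, tol=1e-8, max_iter=1000):
    M_inv = 1.0 / np.diag(A)       # Jacobi preconditioner: inverse diagonal
    x = np.zeros_like(b)
    r = b - A @ x                  # initial residual
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # update the search direction
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(pcg_jacobi(A, b))            # -> [0.0909..., 0.6363...]
```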

  5. Roll and roll-to-roll process scaling through development of a compact flexo unit for printing of back electrodes

    DEFF Research Database (Denmark)

    Dam, Henrik Friis; Andersen, Thomas Rieks; Madsen, Morten Vesterager

    2015-01-01

    some of the most critical steps in the scaling process. We describe the development of such a machine that comprises web guiding, tension control and surface treatment in a compact desk size that is easily moved around, and also detail the development of a small cassette-based flexographic unit for back...... electrode printing that is parsimonious in terms of ink usage and more gentle than laboratory-scale flexo units, where the foil transport is either driven by the flexo unit or the flexo unit is driven by the foil transport. We demonstrate fully operational flexible polymer solar cell manufacture using...

  6. Measurement system of bubbly flow using ultrasonic velocity profile monitor and video data processing unit

    International Nuclear Information System (INIS)

    Aritomi, Masanori; Zhou, Shirong; Nakajima, Makoto; Takeda, Yasushi; Mori, Michitsugu; Yoshioka, Yuzuru.

    1996-01-01

    The authors have been developing a measurement system for bubbly flow in order to clarify its multi-dimensional flow characteristics and to offer a database for validating numerical codes for multi-dimensional two-phase flow. In this paper, a measurement system combining an ultrasonic velocity profile monitor with a video data processing unit is proposed, which can simultaneously measure velocity profiles in both gas and liquid phases, a void fraction profile for bubbly flow in a channel, and the average bubble diameter and void fraction. Furthermore, the proposed measurement system is applied to measuring the flow characteristics of a bubbly countercurrent flow in a vertical rectangular channel to verify its capability. (author)

  7. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    Science.gov (United States)

    Griffiths, M. K.; Fedun, V.; Erdélyi, R.

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently harnessed successfully the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1-3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.

  8. FAST CALCULATION OF THE LOMB-SCARGLE PERIODOGRAM USING GRAPHICS PROCESSING UNITS

    International Nuclear Information System (INIS)

    Townsend, R. H. D.

    2010-01-01

    I introduce a new code for fast calculation of the Lomb-Scargle periodogram that leverages the computing power of graphics processing units (GPUs). After establishing a background to the newly emergent field of GPU computing, I discuss the code design and narrate key parts of its source. Benchmarking calculations indicate no significant differences in accuracy compared to an equivalent CPU-based code. However, the differences in performance are pronounced; running on a low-end GPU, the code can match eight CPU cores, and on a high-end GPU it is faster by a factor approaching 30. Applications of the code include analysis of long photometric time series obtained by ongoing satellite missions and upcoming ground-based monitoring facilities, and Monte Carlo simulation of periodogram statistical properties.
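
    As a CPU-side reference for such GPU codes, the periodogram of an unevenly sampled series can be computed with SciPy; the synthetic time series and frequency grid below are purely illustrative.

```python
# Lomb-Scargle periodogram of unevenly sampled data (synthetic example).
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 400))        # irregular sample times
y = np.sin(2 * np.pi * 0.2 * t) + 0.3 * rng.standard_normal(t.size)
y -= y.mean()                                    # lombscargle expects zero mean

freqs = np.linspace(0.01, 2.0, 5000) * 2 * np.pi # angular frequency grid
power = lombscargle(t, y, freqs)
print(f"peak at {freqs[power.argmax()] / (2 * np.pi):.2f} cycles per unit time")
```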

  9. Monte Carlo method for neutron transport calculations in graphics processing units (GPUs)

    International Nuclear Information System (INIS)

    Pellegrino, Esteban

    2011-01-01

    Monte Carlo simulation is well suited for solving the Boltzmann neutron transport equation in inhomogeneous media with complicated geometries. However, routine applications require the computation time to be reduced to hours or even minutes on a desktop PC. Interest in adopting Graphics Processing Units (GPUs) for Monte Carlo acceleration is growing rapidly. This is due to the massive parallelism provided by the latest GPU technologies, which is the most promising solution to the challenge of performing full-size reactor core analysis on a routine basis. In this study, Monte Carlo codes for a fixed-source neutron transport problem were developed for GPU environments in order to evaluate issues associated with computational speedup using GPUs. Results obtained in this work suggest that a speedup of several orders of magnitude is possible using state-of-the-art GPU technologies. (author)
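
    The thread-per-history parallelism that makes Monte Carlo transport attractive on GPUs can be mimicked with vectorized NumPy. The sketch below is a deliberately minimal fixed-source example, transmission of a pencil beam through a purely absorbing one-dimensional slab with an invented cross section, not the reactor-scale problem the study targets.

```python
# Analog Monte Carlo transmission through a purely absorbing 1D slab,
# one random number per history (toy cross section and slab width).
import numpy as np

rng = np.random.default_rng(42)
sigma_t = 1.0          # total macroscopic cross section, 1/cm (assumed)
width = 5.0            # slab thickness, cm (assumed)
n = 1_000_000          # histories, vectorized like GPU threads

flights = -np.log(rng.random(n)) / sigma_t       # sampled free-flight paths
transmitted = np.count_nonzero(flights > width) / n

print(f"MC transmission:        {transmitted:.5f}")
print(f"analytic exp(-sigma*w): {np.exp(-sigma_t * width):.5f}")
```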

  10. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    Science.gov (United States)

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using the graphics-processing-unit parallel framework named the "compute unified device architecture" (CUDA). A series of simulation experiments is carried out to test the accuracy and acceleration of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced, by a factor of 38.9 with a GTX 580 graphics card, using the improved method.

  11. Reconstructing the population activity of olfactory output neurons that innervate identifiable processing units

    Directory of Open Access Journals (Sweden)

    Shigehiro Namiki

    2008-06-01

    We investigated the functional organization of the moth antennal lobe (AL), the primary olfactory network, using in vivo electrophysiological recordings and anatomical identification. The moth AL contains about 60 processing units called glomeruli that are identifiable from one animal to another. We were able to monitor the output information of the AL by recording the activity of a population of output neurons, each of which innervated a single glomerulus. Using compiled intracellular recordings and staining data from different animals, we mapped the odor-evoked dynamics on a digital atlas of the AL and geometrically reconstructed the population activity. We examined the quantitative relationship between the similarity of olfactory responses and the anatomical distance between glomeruli. Globally, the olfactory response profile was independent of the anatomical distance, although some local features were present.

  12. Real-time track-less Cherenkov ring fitting trigger system based on Graphics Processing Units

    Science.gov (United States)

    Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Gianoli, A.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.

    2017-12-01

    The parallel computing power of commercial Graphics Processing Units (GPUs) is exploited to perform real-time ring fitting at the lowest trigger level using information coming from the Ring Imaging Cherenkov (RICH) detector of the NA62 experiment at CERN. To this purpose, direct GPU communication with a custom FPGA-based board has been used to reduce the data transmission latency. The GPU-based trigger system is currently integrated in the experimental setup of the RICH detector of the NA62 experiment, in order to reconstruct ring-shaped hit patterns. The ring-fitting algorithm running on GPU is fed with raw RICH data only, with no information coming from other detectors, and is able to provide more complex trigger primitives with respect to the simple photodetector hit multiplicity, resulting in a higher selection efficiency. The performance of the system for multi-ring Cherenkov online reconstruction obtained during the NA62 physics run is presented.

  13. Classification of hyperspectral imagery using MapReduce on a NVIDIA graphics processing unit (Conference Presentation)

    Science.gov (United States)

    Ramirez, Andres; Rahnemoonfar, Maryam

    2017-04-01

    A hyperspectral image is a multidimensional, data-rich representation consisting of hundreds of spectral bands. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in long computation times. To overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that helps analyze a hyperspectral image through parallel hardware and a parallel programming model, which is simpler to handle than other low-level parallel programming models. Additionally, Hadoop was used as an open-source implementation of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop and GPU systems and tested them against the following test cases: a combined CPU and GPU test case, a CPU-only test case, and a test case where no dimensionality reduction was applied.

  14. Nanoscale multireference quantum chemistry: full configuration interaction on graphical processing units.

    Science.gov (United States)

    Fales, B Scott; Levine, Benjamin G

    2015-10-13

    Methods based on a full configuration interaction (FCI) expansion in an active space of orbitals are widely used for modeling chemical phenomena such as bond breaking, multiply excited states, and conical intersections in small-to-medium-sized molecules, but these phenomena occur in systems of all sizes. To scale such calculations up to the nanoscale, we have developed an implementation of FCI in which electron repulsion integral transformation and several of the more expensive steps in σ vector formation are performed on graphical processing unit (GPU) hardware. When applied to a 1.7 × 1.4 × 1.4 nm silicon nanoparticle (Si72H64) described with the polarized, all-electron 6-31G** basis set, our implementation can solve for the ground state of the 16-active-electron/16-active-orbital CASCI Hamiltonian (more than 100,000,000 configurations) in 39 min on a single NVidia K40 GPU.

  15. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL.

    Science.gov (United States)

    Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi

    2017-01-01

    A phylogenetic tree is a visual diagram of the relationships among a set of biological species, which scientists use to analyze many characteristics of the species. Distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa. These methods suffer from computational performance issues. Although several new methods built on high-performance hardware and frameworks have been proposed, the issue persists. In this work, a novel parallel UPGMA approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately a 3-fold to 7-fold speedup over the implementation of UPGMA on a modern CPU and a single GPU, respectively.
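
    In SciPy terms, UPGMA is hierarchical clustering with 'average' linkage on a condensed distance matrix, which gives a compact single-CPU reference for the multi-GPU implementation described above; the tiny distance matrix below is invented, whereas a real run would use pairwise genetic distances between taxa.

```python
# UPGMA as 'average'-linkage hierarchical clustering (single-CPU reference).
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

D = np.array([[ 0.0,  2.0,  6.0, 10.0],
              [ 2.0,  0.0,  6.0, 10.0],
              [ 6.0,  6.0,  0.0, 10.0],
              [10.0, 10.0, 10.0,  0.0]])

tree = linkage(squareform(D), method='average')   # method='average' == UPGMA
print(tree)   # rows: [cluster_i, cluster_j, merge distance, member count]
```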

  16. Area-delay trade-offs of texture decompressors for a graphics processing unit

    Science.gov (United States)

    Novoa Súñer, Emilio; Ituero, Pablo; López-Vallejo, Marisa

    2011-05-01

    Graphics Processing Units have become a booster for the microelectronics industry. However, due to intellectual property issues, there is a serious lack of information on the implementation details of the hardware architecture behind GPUs. For instance, the way texture is handled and decompressed in a GPU to reduce bandwidth usage has never been dealt with in depth from a hardware point of view. This work presents a comparative study of the hardware implementation of different texture decompression algorithms for both conventional (PCs and video game consoles) and mobile platforms. Circuit synthesis is performed targeting both a reconfigurable hardware platform and a 90 nm standard cell library. Area-delay trade-offs have been extensively analyzed, which allows us to compare the complexity of the decompressors and thus determine the suitability of the algorithms for systems with limited hardware resources.

  17. AN APPROACH TO EFFICIENT FEM SIMULATIONS ON GRAPHICS PROCESSING UNITS USING CUDA

    Directory of Open Access Journals (Sweden)

    Björn Nutti

    2014-04-01

    The paper presents a highly efficient way of simulating the dynamic behavior of deformable objects by means of the finite element method (FEM), with computations performed on Graphics Processing Units (GPUs). The presented implementation reduces bottlenecks related to memory accesses by grouping the necessary data per node pair, in contrast to the classical per-element arrangement. This strategy avoids memory access patterns that are not suitable for the GPU memory architecture. Furthermore, the presented implementation takes advantage of the underlying sparse-block-matrix structure, and it has been demonstrated how to avoid potential bottlenecks in the algorithm. To achieve plausible deformational behavior for large local rotations, the objects are modeled by means of a simplified co-rotational FEM formulation.

  18. Processes and patterns of interaction as units of selection: An introduction to ITSNTS thinking.

    Science.gov (United States)

    Doolittle, W Ford; Inkpen, S Andrew

    2018-04-17

    Many practicing biologists accept that nothing in their discipline makes sense except in the light of evolution, and that natural selection is evolution's principal sense-maker. But what natural selection actually is (a force or a statistical outcome, for example) and the levels of the biological hierarchy (genes, organisms, species, or even ecosystems) at which it operates directly are still actively disputed among philosophers and theoretical biologists. Most formulations of evolution by natural selection emphasize the differential reproduction of entities at one or the other of these levels. Some also recognize differential persistence, but in either case the focus is on lineages of material things: even species can be thought of as spatiotemporally restricted, if dispersed, physical beings. Few consider, as "units of selection" in their own right, the processes implemented by genes, cells, species, or communities. "It's the song not the singer" (ITSNTS) theory does that, also claiming that evolution by natural selection of processes is more easily understood and explained as differential persistence than as differential reproduction. ITSNTS was formulated as a response to the observation that the collective functions of microbial communities (the songs) are more stably conserved and ecologically relevant than the taxa that implement them (the singers). It aims to serve as a useful corrective to claims that "holobionts" (microbes and their animal or plant hosts) are aggregate "units of selection," claims that often conflate meanings of that latter term. But ITSNTS also seems broadly applicable, for example, to the evolution of global biogeochemical cycles and the definition of ecosystem function.

  19. Computing the Density Matrix in Electronic Structure Theory on Graphics Processing Units.

    Science.gov (United States)

    Cawkwell, M J; Sanville, E J; Mniszewski, S M; Niklasson, Anders M N

    2012-11-13

    The self-consistent solution of a Schrödinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. This step was historically tackled via diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix-matrix multiplications. We demonstrate that, owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performance in double and single precision arithmetic of a hybrid GPU/central processing unit (CPU) implementation and a full GPU implementation of the SP2 algorithm exceeds that of a CPU-only implementation of the SP2 algorithm and of traditional matrix diagonalization when the dimensions of the matrices exceed about 2000 × 2000. Padding schemes for arrays allocated in GPU memory that optimize the performance of the CUBLAS implementations of the level 3 BLAS DGEMM and SGEMM subroutines for generalized matrix-matrix multiplications are described in detail. The analysis of the relative performance of the hybrid CPU/GPU and full GPU implementations indicates that the transfer of arrays between the GPU and CPU constitutes only a small fraction of the total computation time. The errors measured in the self-consistent density matrices computed using the SP2 algorithm are generally smaller than those measured in matrices computed via diagonalization. Furthermore, the errors in the density matrices computed using the SP2 algorithm do not exhibit any dependence on system size, whereas the errors increase linearly with the number of orbitals when diagonalization is employed.
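
    The SP2 recursion itself is short: map the Hamiltonian spectrum into [0, 1], then repeatedly apply X -> X^2 or X -> 2X - X^2, choosing whichever step steers the trace toward the occupation number. The NumPy sketch below uses a toy 3 x 3 Hamiltonian and Gershgorin bounds; on a GPU, the X @ X products would become the DGEMM/SGEMM calls discussed in the abstract.

```python
# SP2 density-matrix purification (toy Hamiltonian, Gershgorin bounds).
import numpy as np

def sp2_density_matrix(H, n_occ, n_iter=60):
    # Spectral bounds from Gershgorin circles.
    r = np.sum(np.abs(H), axis=1) - np.abs(np.diag(H))
    e_min = np.min(np.diag(H) - r)
    e_max = np.max(np.diag(H) + r)

    # Linear map placing occupied states near 1, unoccupied near 0.
    X = (e_max * np.eye(len(H)) - H) / (e_max - e_min)
    for _ in range(n_iter):
        X2 = X @ X                    # the generalized matrix-matrix product
        X = X2 if X.trace() > n_occ else 2.0 * X - X2
    return X

H = np.array([[ 0.0, -1.0,  0.0],
              [-1.0,  0.0, -1.0],
              [ 0.0, -1.0,  0.0]])
P = sp2_density_matrix(H, n_occ=1)
print(P.trace(), np.linalg.norm(P @ P - P))   # -> ~1.0 and ~0 (idempotent)
```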

  20. Acceleration of the OpenFOAM-based MHD solver using graphics processing units

    International Nuclear Information System (INIS)

    He, Qingyun; Chen, Hongli; Feng, Jingchao

    2015-01-01

    Highlights: • A 3D PISO-MHD solver was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. • A consistent and conservative scheme is used in the code, validated by three basic benchmarks in rectangular and round ducts. • CPU and GPU parallel acceleration were compared against a single CPU core for MHD and non-MHD problems. • Different preconditioners for the MHD solver were compared; the results showed that the AMG method is better for these calculations. - Abstract: The pressure-implicit with splitting of operators (PISO) magnetohydrodynamics (MHD) solver for the coupled Navier–Stokes and Maxwell equations was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. The solver is developed on the open-source code OpenFOAM, based on a consistent and conservative scheme, and is suitable for simulating MHD flow under a strong magnetic field in fusion liquid-metal blankets with structured or unstructured meshes. We verified the validity of the implementation on several standard cases, including benchmark I (the Shercliff and Hunt cases), benchmark II (fully developed circular pipe MHD flow), and benchmark III (the KIT experimental case). Computational performance of the GPU implementation was examined by comparing its double-precision run times with those of essentially the same algorithms and meshes on CPU. The results showed that a GPU (GTX 770) can outperform a server-class 4-core, 8-thread CPU (Intel Core i7-4770k) by a factor of at least 2.

  1. Acceleration of the OpenFOAM-based MHD solver using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    He, Qingyun; Chen, Hongli, E-mail: hlchen1@ustc.edu.cn; Feng, Jingchao

    2015-12-15

    Highlights: • A 3D PISO-MHD solver was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. • A consistent and conservative scheme is used in the code, validated by three basic benchmarks in rectangular and round ducts. • CPU and GPU parallel acceleration were compared against a single CPU core for MHD and non-MHD problems. • Different preconditioners for the MHD solver were compared; the results showed that the AMG method is better for these calculations. - Abstract: The pressure-implicit with splitting of operators (PISO) magnetohydrodynamics (MHD) solver for the coupled Navier–Stokes and Maxwell equations was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. The solver is developed on the open-source code OpenFOAM, based on a consistent and conservative scheme, and is suitable for simulating MHD flow under a strong magnetic field in fusion liquid-metal blankets with structured or unstructured meshes. We verified the validity of the implementation on several standard cases, including benchmark I (the Shercliff and Hunt cases), benchmark II (fully developed circular pipe MHD flow), and benchmark III (the KIT experimental case). Computational performance of the GPU implementation was examined by comparing its double-precision run times with those of essentially the same algorithms and meshes on CPU. The results showed that a GPU (GTX 770) can outperform a server-class 4-core, 8-thread CPU (Intel Core i7-4770k) by a factor of at least 2.

  2. Dynamic Data-Driven Reduced-Order Models of Macroscale Quantities for the Prediction of Equilibrium System State for Multiphase Porous Medium Systems

    Science.gov (United States)

    Talbot, C.; McClure, J. E.; Armstrong, R. T.; Mostaghimi, P.; Hu, Y.; Miller, C. T.

    2017-12-01

    Microscale simulation of multiphase flow in realistic, highly-resolved porous medium systems of a sufficient size to support macroscale evaluation is computationally demanding. Such approaches can, however, reveal the dynamic, steady, and equilibrium states of a system. We evaluate methods to utilize dynamic data to reduce the cost associated with modeling a steady or equilibrium state. We construct data-driven models using extensions to dynamic mode decomposition (DMD) and its connections to Koopman Operator Theory. DMD and its variants comprise a class of equation-free methods for dimensionality reduction of time-dependent nonlinear dynamical systems. DMD furnishes an explicit reduced representation of system states in terms of spatiotemporally varying modes with time-dependent oscillation frequencies and amplitudes. We use DMD to predict the steady and equilibrium macroscale state of a realistic two-fluid porous medium system imaged using micro-computed tomography (µCT) and simulated using the lattice Boltzmann method (LBM). We apply Koopman DMD to direct numerical simulation data resulting from simulations of multiphase fluid flow through a 1440x1440x4320 section of a full 1600x1600x5280 realization of imaged sandstone. We determine a representative set of system observables via dimensionality reduction techniques including linear and kernel principal component analysis. We demonstrate how this subset of macroscale quantities furnishes a representation of the time-evolution of the system in terms of dynamic modes, and discuss the selection of a subset of DMD modes yielding the optimal reduced model, as well as the time-dependence of the error in the predicted equilibrium value of each macroscale quantity. Finally, we describe how the above procedure, modified to incorporate methods from compressed sensing and random projection techniques, may be used in an online fashion to facilitate adaptive time-stepping and parsimonious storage of system states over time.
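
    Exact DMD reduces to an SVD, a small eigenvalue problem, and a mode reconstruction. The sketch below applies it to an invented two-observable series with known decay and oscillation rates, standing in for the LBM-derived macroscale quantities; recovering the continuous-time eigenvalues checks the fit.

```python
# Exact dynamic mode decomposition on a toy pair of observables.
import numpy as np

def dmd(X, Y, rank):
    """X, Y: snapshot matrices with Y[:, k] one time step after X[:, k]."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh[:rank].conj().T
    A_tilde = U.conj().T @ Y @ V / s        # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V / s @ W                   # exact DMD modes
    return eigvals, modes

dt = 0.05
t = np.arange(200) * dt
data = np.vstack([np.exp(-0.3 * t) * np.cos(2.0 * t),    # decaying
                  np.exp(-0.3 * t) * np.sin(2.0 * t)])   # oscillation
eigvals, modes = dmd(data[:, :-1], data[:, 1:], rank=2)
print(np.log(eigvals) / dt)   # -> approximately -0.3 +/- 2j, as constructed
```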

  3. Macro-Scale Patterns in Upwelling/Downwelling Activity at North American West Coast.

    Directory of Open Access Journals (Sweden)

    Romeo Saldívar-Lucio

    The seasonal and interannual variability of vertical transport (upwelling/downwelling) has been relatively well studied, mainly for the California Current System, including low-frequency changes and latitudinal heterogeneity. The aim of this work was to identify potentially predictable patterns in upwelling/downwelling activity along the North American west coast and discuss their plausible mechanisms. To this purpose we applied the min/max Autocorrelation Factor technique and time series analysis. We found that the spatial co-variation of seawater vertical movements presents three dominant low-frequency signals, in the range of 33, 19 and 11 years, resembling the periodicities of atmospheric circulation, lunar nodal tides, and solar activity. Those periodicities might be related to the variability of vertical transport through their influence on dominant wind patterns, the position/intensity of pressure centers, and the strength of atmospheric circulation cells (wind stress). The low-frequency signals identified in upwelling/downwelling are coherent with temporal patterns previously reported in the study region: sea surface temperature along the Pacific coast of North America, catch fluctuations of the anchovy Engraulis mordax and the sardine Sardinops sagax, the Pacific Decadal Oscillation, changes in the abundance and distribution of salmon populations, and variations in the position and intensity of the Aleutian Low. Since vertical transport is an oceanographic process with strong biological relevance, the recognition of its spatio-temporal patterns might allow for some reasonable forecasting capacity, potentially useful for marine resources management in the region.

  4. Data-Science Analysis of the Macro-scale Features Governing the Corrosion to Crack Transition in AA7050-T7451

    Science.gov (United States)

    Co, Noelle Easter C.; Brown, Donald E.; Burns, James T.

    2018-05-01

    This study applies data science approaches (random forest and logistic regression) to determine the extent to which macro-scale corrosion damage features govern the crack formation behavior in AA7050-T7451. Each corrosion morphology has a set of corresponding predictor variables (pit depth, volume, area, diameter, pit density, total fissure length, surface roughness metrics, etc.) describing the shape of the corrosion damage. The values of the predictor variables are obtained from white light interferometry, x-ray tomography, and scanning electron microscope imaging of the corrosion damage. A permutation test is employed to assess the significance of the logistic and random forest model predictions. Results indicate minimal relationship between the macro-scale corrosion feature predictor variables and fatigue crack initiation. These findings suggest that the macro-scale corrosion features and their interactions do not solely govern the crack formation behavior. While these results do not imply that the macro-features have no impact, they do suggest that additional parameters must be considered to rigorously inform the crack formation location.
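
    A minimal version of this analysis pipeline, a random forest scored against label permutations to assess significance, can be expressed with scikit-learn's permutation_test_score. The feature table below is synthetic noise, so the test should (correctly) report a large p-value, mirroring the paper's null result; the real predictors came from interferometry, tomography, and SEM measurements.

```python
# Permutation test of a random-forest classifier on synthetic predictors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 6))        # stand-ins for pit depth, volume, ...
y = rng.integers(0, 2, size=120)     # crack formed at this site or not

score, perm_scores, p_value = permutation_test_score(
    RandomForestClassifier(n_estimators=200, random_state=0),
    X, y, scoring="accuracy", n_permutations=200, random_state=0)

print(f"accuracy {score:.2f}, p = {p_value:.2f}")   # large p on pure noise
```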

  5. The divining root: moisture-driven responses of roots at the micro- and macro-scale.

    Science.gov (United States)

    Robbins, Neil E; Dinneny, José R

    2015-04-01

    Water is fundamental to plant life, but the mechanisms by which plant roots sense and respond to variations in water availability in the soil are poorly understood. Many studies of responses to water deficit have focused on large-scale effects of this stress, but have overlooked responses at the sub-organ or cellular level that give rise to emergent whole-plant phenotypes. We have recently discovered hydropatterning, an adaptive environmental response in which roots position new lateral branches according to the spatial distribution of available water across the circumferential axis. This discovery illustrates that roots are capable of sensing and responding to water availability at spatial scales far lower than those normally studied for such processes. This review will explore how roots respond to water availability with an emphasis on what is currently known at different spatial scales. Beginning at the micro-scale, there is a discussion of water physiology at the cellular level and proposed sensory mechanisms cells use to detect osmotic status. The implications of these principles are then explored in the context of cell and organ growth under non-stress and water-deficit conditions. Following this, several adaptive responses employed by roots to tailor their functionality to the local moisture environment are discussed, including patterning of lateral root development and generation of hydraulic barriers to limit water loss. We speculate that these micro-scale responses are necessary for optimal functionality of the root system in a heterogeneous moisture environment, allowing for efficient water uptake with minimal water loss during periods of drought.

  6. Characterizing Micro- and Macro-Scale Seismicity from Bayou Corne, Louisiana

    Science.gov (United States)

    Baig, A. M.; Urbancic, T.; Karimi, S.

    2013-12-01

    parameters for the larger magnitude events. Our presentation is focused on investigating this deformation, characterizing the scaling behaviour and the other source processes by taking advantage of the wide-band afforded to us through the deployment.

  7. Process of motion by unit steps over a surface provided with elements regularly arranged

    International Nuclear Information System (INIS)

    Cooper, D.E.; Hendee, L.C. III; Hill, W.G. Jr.; Leshem, Adam; Marugg, M.L.

    1977-01-01

    This invention concerns a process for moving, by unit steps, an apparatus travelling over a surface provided with an array of orifices aligned and evenly spaced in several lines and several parallel rows, the lines and rows being parallel to the x and y axes of a Cartesian coordinate system, each orifice having a distinct address in that system. The surface-travelling apparatus has two connected arms arranged transversally to each other, forming an angle corresponding to the intersection of the x and y axes. In the inspection and/or repair of nuclear or similar steam generator tubes, it is desirable that such an apparatus be able to move in front of a surface comprising an array of orifices by the selective, alternate insertion and retraction of two sets of anchoring claws on the two respective arms into and out of the orifices of the array, it being possible to shift the arms in translation, transversally to each other, while one set of claws is withdrawn from the orifices. The invention concerns a process and apparatus, as indicated above, that minimize the path length of the apparatus between the orifice it currently faces and a given target orifice

  8. Prototype design of singles processing unit for the small animal PET

    Science.gov (United States)

    Deng, P.; Zhao, L.; Lu, J.; Li, B.; Dong, R.; Liu, S.; An, Q.

    2018-05-01

    Positron Emission Tomography (PET) is an advanced clinical diagnostic imaging technique in nuclear medicine. Small animal PET is increasingly used for studying animal models of disease, new drugs, and new therapies. A prototype Singles Processing Unit (SPU) for a small animal PET system was designed to obtain time, energy, and position information. The energy and position are calculated through high-precision charge measurement, which is based on amplification, shaping, A/D conversion, and area calculation in the digital signal processing domain. Analysis and simulations were also conducted to optimize the key parameters in the system design. Initial tests indicate that the charge and time precision are better than 3‰ FWHM and 350 ps FWHM respectively, while the position resolution is better than 3.5‰ FWHM. Combined tests of the SPU prototype with the PET detector indicate that the system time precision is better than 2.5 ns, while the flood map and energy spectra accorded well with expectations.
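
    The charge measurement chain described above (amplification, shaping, A/D conversion, area calculation) can be mocked up in a few lines: integrate a baseline-subtracted digitized pulse. The pulse shape, sampling rate, gain, and noise level below are invented and are not the SPU's actual parameters.

```python
# Toy digital charge measurement: integrate a shaped, digitized pulse.
import numpy as np

fs = 250e6                                   # sampling rate, Hz (assumed)
t = np.arange(0.0, 800e-9, 1.0 / fs)
t0, tau = 100e-9, 40e-9                      # pulse start and shaping time
arg = np.clip((t - t0) / tau, 0.0, None)
pulse = arg * np.exp(1.0 - arg)              # unit-amplitude shaped pulse
rng = np.random.default_rng(7)
adc = 120.0 * pulse + rng.normal(0.0, 0.5, t.size)   # digitized samples

baseline = adc[t < t0].mean()                    # pedestal from pre-pulse region
charge = np.trapz(adc - baseline, dx=1.0 / fs)   # area tracks deposited charge
print(f"integrated pulse area: {charge:.3e} count*s")
```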

  9. The Design Process of a Board Game for Exploring the Territories of the United States

    Directory of Open Access Journals (Sweden)

    Mehmet Kosa

    2017-06-01

    The paper reports the design experience of a board game with an educational aspect, played on the map of the states and territories of the United States. Based on a territorial-acquisition dynamic, the goal was to articulate the design process of a board game that provides information for individuals who are willing to learn the locations of the U.S. states by playing a game. The game was developed through an iterative design process based on focus group studies and brainstorming sessions. A mechanic-driven design approach was adopted instead of a theme- or setting-driven alternative, and a relatively abstract game was developed. The initial design idea was formed and refined according to player feedback. The paper details the play-testing sessions conducted and documents the design experience from a qualitative perspective. Our preliminary results suggest that the initial design is moderately balanced, and despite the lack of quantitative evidence, our subjective observations indicate that participants' knowledge of the locations of the states improved in an entertaining and interactive way.

  10. Fast ray-tracing of human eye optics on Graphics Processing Units.

    Science.gov (United States)

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

    We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth-of-field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique in patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images.

  11. Initial Assessment of Parallelization of Monte Carlo Calculation using Graphics Processing Units

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Joo, Han Gyu

    2009-01-01

    Monte Carlo (MC) simulation is an effective tool for calculating neutron transport in complex geometry. However, because Monte Carlo simulates each neutron behavior one by one, it takes a very long computing time if enough neutrons are used for high calculation precision. Accordingly, methods that reduce the computing time are required. In a Monte Carlo code, parallel calculation is well-suited since each neutron's behavior is simulated independently, and thus parallel computation is natural. The parallelization of Monte Carlo codes, however, was historically done using multiple CPUs. Driven by the global demand for high-quality 3D graphics, the Graphics Processing Unit (GPU) has developed into a highly parallel, multi-core processor. This parallel processing capability of GPUs can be made available to engineering computing once a suitable interface is provided. Recently, NVIDIA introduced CUDA™, a general-purpose parallel computing architecture. CUDA is a software environment that allows developers to manage GPUs using C/C++ or other languages. In this work, a GPU-based Monte Carlo code is developed and an initial assessment of its parallel performance is presented

  12. BarraCUDA - a fast short read sequence aligner using graphics processing units

    Directory of Open Access Journals (Sweden)

    Klus Petr

    2012-01-01

    Background: With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General-purpose computing on graphics processing units (GPGPU) extracts computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy-efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings: Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computation-intensive alignment component of BWA to the GPU to take advantage of its massive parallelism. As a result, BarraCUDA offers a magnitude of performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions: BarraCUDA is designed to take advantage of the parallelism of GPUs to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net

  13. 21st Century Parent-Child Sex Communication in the United States: A Process Review.

    Science.gov (United States)

    Flores, Dalmacio; Barroso, Julie

    Parent-child sex communication results in the transmission of family expectations, societal values, and role modeling of sexual health risk-reduction strategies. Parent-child sex communication's potential to curb negative sexual health outcomes has sustained a multidisciplinary effort to better understand the process and its impact on the development of healthy sexual attitudes and behaviors among adolescents. This review advances what is known about the process of sex communication in the United States by reviewing studies published from 2003 to 2015. We used the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PsycINFO, SocINDEX, and PubMed, and the key terms "parent child" AND "sex education" for the initial query; we included 116 original articles for analysis. Our review underscores long-established factors that prevent parents from effectively broaching and sustaining talks about sex with their children and has also identified emerging concerns unique to today's parenting landscape. Parental factors salient to sex communication are established long before individuals become parents and are acted upon by influences beyond the home. Child-focused communication factors likewise describe a maturing audience that is far from captive. The identification of both enduring and emerging factors that affect how sex communication occurs will inform subsequent work that will result in more positive sexual health outcomes for adolescents.

  14. Spatial resolution recovery utilizing multi-ray tracing and graphic processing unit in PET image reconstruction

    International Nuclear Information System (INIS)

    Liang, Yicheng; Peng, Hao

    2015-01-01

    Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small-animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model the system matrix for resolution recovery, which was then incorporated in PET image reconstruction on a graphics processing unit platform, due to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient and noise. The results indicate that the proposed method has the potential to be used as an alternative to physical DOI designs and achieve comparable imaging performance, while reducing detector/system design cost and complexity. (paper)

  15. BarraCUDA - a fast short read sequence aligner using graphics processing units

    LENUS (Irish Health Repository)

    Klus, Petr

    2012-01-13

    Abstract Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General-purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy-efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to the GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers an order-of-magnitude boost in alignment throughput compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of the GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net

  16. Development of a Monte Carlo software to photon transportation in voxel structures using graphic processing units

    International Nuclear Information System (INIS)

    Bellezzo, Murillo

    2014-01-01

    As the most accurate method for estimating absorbed dose in radiotherapy, the Monte Carlo method (MCM) has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency needs improvement for routine clinical applications. In this thesis, the CUBMC code is presented, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture (CUDA) platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross-section table used is the one generated by the MATERIAL routine, also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. Two distinct approaches are used for transport simulation. The first forces the photon to stop at every voxel boundary, while the second is the Woodcock method, in which the photon ignores the existence of borders and travels in a homogeneous fictitious medium. The CUBMC code aims to be an alternative Monte Carlo simulator that, by using the parallel processing capability of graphics processing units (GPUs), provides high-performance simulations on low-cost compact machines, and thus can be applied to clinical cases and incorporated in treatment planning systems for radiotherapy. (author)
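
    The Woodcock scheme mentioned above lends itself to a compact device routine. The sketch below is a minimal, hypothetical illustration of delta tracking through a voxel grid; the array layout, argument names, and acceptance rule are assumptions for illustration, not CUBMC's interface.

    ```cuda
    #include <curand_kernel.h>

    // Woodcock (delta) tracking: the photon samples free paths with one majorant
    // attenuation coefficient mu_max valid for the whole volume, so it can ignore
    // voxel boundaries. At each tentative collision site, a real interaction is
    // accepted with probability mu(voxel)/mu_max; otherwise the collision is
    // virtual and the flight continues unchanged.
    __device__ bool woodcock_step(float3 *p, float3 dir, const float *mu,
                                  int3 dims, float voxel_cm, float mu_max,
                                  curandState *rng)
    {
        for (;;) {
            float s = -logf(curand_uniform(rng)) / mu_max;    // majorant free path
            p->x += dir.x * s;  p->y += dir.y * s;  p->z += dir.z * s;

            int ix = (int)floorf(p->x / voxel_cm);
            int iy = (int)floorf(p->y / voxel_cm);
            int iz = (int)floorf(p->z / voxel_cm);
            if (ix < 0 || iy < 0 || iz < 0 ||
                ix >= dims.x || iy >= dims.y || iz >= dims.z)
                return false;                                 // photon left the volume

            float mu_here = mu[(iz * dims.y + iy) * dims.x + ix];
            if (curand_uniform(rng) * mu_max <= mu_here)
                return true;  // real collision at *p: caller samples the interaction
            // virtual collision: keep flying without changing direction
        }
    }
    ```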

  17. The ATLAS Fast Tracker Processing Units - input and output data preparation

    CERN Document Server

    Bolz, Arthur Eugen; The ATLAS collaboration

    2016-01-01

    The ATLAS Fast Tracker is a hardware processor built to reconstruct tracks at a rate of up to 100 kHz and provide them to the high-level trigger system. The Fast Tracker will allow the trigger to utilize tracking information from the entire detector at an earlier event-selection stage than ever before, allowing for more efficient event rejection. The connections of the system to the detector read-outs and to the high-level trigger computing farms are made through custom boards implementing the Advanced Telecommunications Computing Architecture standard. The input is processed by the Input Mezzanine and Data Formatter boards, designed to receive and sort the data coming from the Pixel and Semiconductor Tracker. The Fast Tracker to Level-2 Interface Card connects the system to the computing farm. The Input Mezzanines are 128 boards, performing clustering, placed on the 32 Data Formatter mother boards that sort the information into the 64 logical regions required by the downstream processing units. This necessitat...

  18. ACTION OF UNIFORM SEARCH ALGORITHM WHEN SELECTING LANGUAGE UNITS IN THE PROCESS OF SPEECH

    Directory of Open Access Journals (Sweden)

    Ирина Михайловна Некипелова

    2013-05-01

    Full Text Available The article investigates the action of a uniform search algorithm in a speaker's selection of language units during speech production. The process is connected with the phenomenon of speech optimization, which makes it possible to shorten the time spent deliberating over what one wants to say and to achieve maximum precision in expressing thoughts. The uniform search algorithm operates at both the conscious and subconscious levels and favours the formation of automatism in the production and perception of speech. The realization of a person's cognitive potential in the process of communication sets in motion a complex mechanism of self-organization and self-regulation of language. In turn, this results in optimization of the language system, serving not only the individual's self-actualization but also the realization of communication in society. The method of problem-oriented search is used to study the optimization mechanisms characteristic of speech production and the stabilization of language. DOI: http://dx.doi.org/10.12731/2218-7405-2013-4-50

  19. Factors associated with student learning processes in primary health care units: a questionnaire study.

    Science.gov (United States)

    Bos, Elisabeth; Alinaghizadeh, Hassan; Saarikoski, Mikko; Kaila, Päivi

    2015-01-01

    Clinical placement plays a key role in education intended to develop nursing and caregiving skills. Studies of nursing students' clinical learning experiences show that these dimensions affect learning processes: (i) supervisory relationship, (ii) pedagogical atmosphere, (iii) management leadership style, (iv) premises of nursing care on the ward, and (v) nursing teachers' roles. Few empirical studies address the probability of an association between these dimensions and factors such as student (a) motivation, (b) satisfaction with clinical placement, and (c) experiences with professional role models. The study aimed to investigate factors associated with the five dimensions in clinical learning environments within primary health care units. The Swedish version of Clinical Learning Environment, Supervision and Teacher, a validated evaluation scale, was administered to 356 graduating nursing students after four or five weeks of clinical placement in primary health care units. The response rate was 84%. Multivariate analysis of variance was used to determine whether the five dimensions are associated with factors (a), (b), and (c) above. The analysis revealed a statistically significant association between the five dimensions and two factors: students' motivation and experiences with professional role models. The satisfaction factor had a statistically significant association (with a high effect size) with all dimensions; this clearly indicates that students experienced satisfaction. These questionnaire results show that a good clinical learning experience constitutes a complex whole (totality) that involves several interacting factors. Supervisory relationship and pedagogical atmosphere particularly influenced students' satisfaction and motivation. These results provide valuable decision-support material for clinical education planning, implementation, and management. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Computerized nursing process in the Intensive Care Unit: ergonomics and usability.

    Science.gov (United States)

    Almeida, Sônia Regina Wagner de; Sasso, Grace Teresinha Marcon Dal; Barra, Daniela Couto Carvalho

    2016-01-01

    Analyzing the ergonomics and usability criteria of the Computerized Nursing Process based on the International Classification for Nursing Practice in the Intensive Care Unit, according to International Organization for Standardization (ISO) standards. A quantitative, quasi-experimental, before-and-after study with a sample of 16 participants, performed in an Intensive Care Unit. Data collection was performed through the application of five simulated clinical cases and an evaluation instrument. Data analysis was performed by descriptive and inferential statistics. The organization, content and technical criteria were considered "excellent", and the interface criteria were considered "very good", obtaining means of 4.54, 4.60, 4.64 and 4.39, respectively. The analyzed standards obtained means above 4.0, being considered "very good" by the participants. The Computerized Nursing Process met ergonomic and usability standards according to the standards set by ISO. This technology supports nurses' clinical decision-making by providing complete and up-to-date content for Nursing practice in the Intensive Care Unit.

  1. 78 FR 19019 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2013-03-28

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: Prevailing Wage Rates for Certain Occupations Processed Under H-2A Special Procedures; Correction and Rescission AGENCY: Employment and Training...

  2. 78 FR 1260 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2013-01-08

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: Prevailing Wage Rates for Certain Occupations Processed Under H-2A Special Procedures AGENCY: Employment and Training Administration, Labor...

  3. Biodiversity of indigenous staphylococci of naturally fermented dry sausages and manufacturing environments of small-scale processing units.

    Science.gov (United States)

    Leroy, Sabine; Giammarinaro, Philippe; Chacornac, Jean-Paul; Lebert, Isabelle; Talon, Régine

    2010-04-01

    The staphylococcal community of the environments of nine French small-scale processing units and their naturally fermented meat products was identified by analyzing 676 isolates. Fifteen species were accurately identified using validated molecular methods. The three prevalent species were Staphylococcus equorum (58.4%), Staphylococcus saprophyticus (15.7%) and Staphylococcus xylosus (9.3%). S. equorum was isolated in all the processing units, in similar proportions in meat and environmental samples. S. saprophyticus was also isolated in all the processing units, with a higher percentage in environmental samples. S. xylosus was present sporadically in the processing units and its prevalence was higher in meat samples. The genetic diversity of the strains within the three species isolated from one processing unit was studied by PFGE and revealed a high diversity for S. equorum and S. saprophyticus in both the environmental and meat isolates. The genetic diversity remained high through the manufacturing steps. A small percentage of the strains of the two species shared the two ecological niches. These results highlight that some strains, probably introduced by the meat, will persist in the manufacturing environment, while other strains are more adapted to the meat products.

  4. Using Systems Theory to Examine Patient and Nurse Structures, Processes, and Outcomes in Centralized and Decentralized Units.

    Science.gov (United States)

    Real, Kevin; Fay, Lindsey; Isaacs, Kathy; Carll-White, Allison; Schadler, Aric

    2018-01-01

    This study utilizes systems theory to understand how changes to physical design structures impact communication processes and patient and staff design-related outcomes. Many scholars and researchers have noted the importance of communication and teamwork for patient care quality. Few studies have examined changes to nursing station design within a systems theory framework. This study employed a multimethod, before-and-after, quasi-experimental research design. Nurses completed surveys in centralized units and later in decentralized units (N = 26 pre, N = 51 post). Patients completed surveys in centralized units (N = 62 pre) and later in decentralized units (N = 49 post). Surveys included quantitative measures and qualitative open-ended responses. Patients preferred the decentralized units because of larger single-occupancy rooms, greater privacy/confidentiality, and overall satisfaction with design. Nurses had a more complex response. Nurses approved of the patient rooms, unit environment, and noise levels in decentralized units. However, they reported reduced access to support spaces, lower levels of team/mentoring communication, and less satisfaction with design than in centralized units. Qualitative findings supported these results. Nurses were more positive about centralized units and patients were more positive toward decentralized units. The results of this study suggest a need to understand how system components operate in concert. A major contribution of this study is the inclusion of patient satisfaction with design, an important yet overlooked factor in patient satisfaction research. Healthcare design researchers and practitioners may consider how changing system interdependencies can lead to unexpected changes to communication processes and system outcomes in complex systems.

  5. The application of projected conjugate gradient solvers on graphical processing units

    International Nuclear Information System (INIS)

    Lin, Youzuo; Renaut, Rosemary

    2011-01-01

    Graphical processing units introduce the capability for large-scale computation at the desktop. The numerical results presented verify that the efficiency and accuracy of basic linear algebra subroutines of all levels are comparable when implemented in CUDA and Jacket, but experimental results demonstrate that the level-three basic linear algebra subroutines offer the greatest potential for improving the efficiency of basic numerical algorithms. We consider the solution of a set of linear equations with multiple right-hand sides using Krylov subspace-based solvers. For the multiple right-hand-side case, it is more efficient to make use of a block implementation of the conjugate gradient algorithm rather than to solve each system independently. Jacket is used for the implementation. Furthermore, including projection from one system to another improves efficiency. A relevant example, for which simulated results are provided, is the reconstruction of a three-dimensional medical image volume acquired from a positron emission tomography scanner, where the efficiency of the reconstruction is improved by using projection across nearby slices.
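
    A minimal host-side sketch of the projection idea, under the simplifying assumption that "projection" is realized as a warm start: the solution for one slice seeds the conjugate gradient iteration for the next, related right-hand side. Dense matvec, no preconditioning, and no Jacket/GPU specifics; the paper's block conjugate gradient is not reproduced here.

    ```cuda
    #include <vector>
    #include <cmath>

    // Conjugate gradient for symmetric positive-definite A (dense, row-major),
    // starting from whatever is already in x (the warm start).
    static void cg_warm(const std::vector<double> &A, int n,
                        const std::vector<double> &b, std::vector<double> &x,
                        int max_iters, double tol)
    {
        std::vector<double> r(n), p(n), Ap(n);
        for (int i = 0; i < n; ++i) {            // r = b - A x
            double s = 0.0;
            for (int j = 0; j < n; ++j) s += A[i * n + j] * x[j];
            r[i] = b[i] - s;
        }
        p = r;
        double rs = 0.0;
        for (int i = 0; i < n; ++i) rs += r[i] * r[i];
        for (int k = 0; k < max_iters && rs > tol * tol; ++k) {
            for (int i = 0; i < n; ++i) {        // Ap = A p
                double s = 0.0;
                for (int j = 0; j < n; ++j) s += A[i * n + j] * p[j];
                Ap[i] = s;
            }
            double pAp = 0.0;
            for (int i = 0; i < n; ++i) pAp += p[i] * Ap[i];
            double alpha = rs / pAp;
            for (int i = 0; i < n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
            double rs_new = 0.0;
            for (int i = 0; i < n; ++i) rs_new += r[i] * r[i];
            for (int i = 0; i < n; ++i) p[i] = r[i] + (rs_new / rs) * p[i];
            rs = rs_new;
        }
    }
    // Usage across slices: solve slice 0 from x = 0, then pass each solution in
    // as the starting x for the next slice's right-hand side.
    ```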

  6. Quantum processes: probability fluxes, transition probabilities in unit time and vacuum vibrations

    International Nuclear Information System (INIS)

    Oleinik, V.P.; Arepjev, Ju D.

    1989-01-01

    Transition probabilities in unit time and probability fluxes are compared in studying the elementary quantum processes: the decay of a bound state under the action of time-varying and constant electric fields. It is shown that the difference between these quantities may be considerable, and so the use of transition probabilities W instead of probability fluxes Π, in calculating the particle fluxes, may lead to serious errors. The quantity W represents the rate of change with time of the population of the energy levels relating partly to the real states and partly to the virtual ones, and it cannot be directly measured in experiment. The vacuum background is shown to be continuously distorted when a perturbation acts on a system. Because of this the viewpoint of an observer on the physical properties of real particles continuously varies with time. This fact is not taken into consideration in the conventional theory of quantum transitions based on using the notion of probability amplitude. As a result, the probability amplitudes lose their physical meaning. All the physical information on quantum dynamics of a system is contained in the mean values of physical quantities. The existence of considerable differences between the quantities W and Π permits one in principle to make a choice of the correct theory of quantum transitions on the basis of experimental data. (author)

  7. Effects of silvicultural activity on ecological processes in floodplain forests of the southern United States

    International Nuclear Information System (INIS)

    Lockaby, B.G.; Stanturf, J.A.

    1996-01-01

    Activities associated with timber harvesting have occurred within floodplain forests in the southern United States for nearly two hundred years. However, it is only in the last ten years that any information has become available about the effects of harvesting on the ecological functions of this valuable resource. Hydrology is the driving influence behind all ecological processes in floodplains and, in most cases, timber harvesting alone has little long-term effect on hydroperiod. However, there may be some instances where logging roads, built in association with harvest sites, can alter hydroperiod to the extent that vegetation productivity is altered positively or negatively. There is no documentation that harvesting followed by natural regeneration represents a threat to ground or surface water quality on floodplain sites, as long as Best Management Practices are followed. Harvesting may increase or have little effect on decomposition rates of surface organic matter in floodplains. The nature of the effect seems to be controlled by site wetness. Data from recently harvested sites (i.e. within the last ten years) suggest that vegetation productivity is maintained at levels similar to those observed prior to harvest. During the early stages of stand development, vegetation species composition is heavily influenced by harvest method. Similarly, amphibian populations (monitored as bioindicators of ecosystem recovery) seem to rebound rapidly following harvests, although species composition may differ. 40 refs, 3 figs

  8. High-Throughput Characterization of Porous Materials Using Graphics Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jihan; Martin, Richard L.; Rübel, Oliver; Haranczyk, Maciej; Smit, Berend

    2012-05-08

    We have developed a high-throughput graphics processing unit (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH4 and CO2) and the material's framework atoms. Using a parallel flood-fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than those considered in earlier studies. For structures selected by such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
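
    The energy-grid step maps naturally onto one GPU thread per grid point. The kernel below is a minimal sketch under assumed simplifications (orthorhombic box, plain cutoff, Lorentz-Berthelot mixing, no Ewald summation, no pocket blocking); all names and parameters are illustrative, not the published code.

    ```cuda
    #include <cuda_runtime.h>

    struct Atom { float x, y, z, q, eps, sig; };  // framework atom: position, charge, LJ params

    // One thread per grid point: accumulate Lennard-Jones + Coulomb interactions
    // between a probe site (e.g., united-atom CH4) and all framework atoms.
    __global__ void energy_grid(const Atom *atoms, int n_atoms,
                                float *grid, int3 g, float3 box,
                                float probe_q, float probe_eps, float probe_sig,
                                float rcut2)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx >= g.x * g.y * g.z) return;
        int ix = idx % g.x, iy = (idx / g.x) % g.y, iz = idx / (g.x * g.y);
        float3 p = make_float3((ix + 0.5f) * box.x / g.x,
                               (iy + 0.5f) * box.y / g.y,
                               (iz + 0.5f) * box.z / g.z);
        const float COULOMB = 138.935458f;  // kJ mol^-1 nm e^-2 (nm units assumed)
        float e = 0.0f;
        for (int a = 0; a < n_atoms; ++a) {
            float dx = p.x - atoms[a].x, dy = p.y - atoms[a].y, dz = p.z - atoms[a].z;
            dx -= box.x * rintf(dx / box.x);   // minimum image, orthorhombic box
            dy -= box.y * rintf(dy / box.y);
            dz -= box.z * rintf(dz / box.z);
            float r2 = dx * dx + dy * dy + dz * dz;
            if (r2 > rcut2) continue;
            float eps = sqrtf(probe_eps * atoms[a].eps);   // Lorentz-Berthelot mixing
            float sig = 0.5f * (probe_sig + atoms[a].sig);
            float s2 = sig * sig / r2, s6 = s2 * s2 * s2;
            e += 4.0f * eps * (s6 * s6 - s6)
               + COULOMB * probe_q * atoms[a].q * rsqrtf(r2);
        }
        grid[idx] = e;
    }
    ```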

  9. Transparent Runtime Migration of Loop-Based Traces of Processor Instructions to Reconfigurable Processing Units

    Directory of Open Access Journals (Sweden)

    João Bispo

    2013-01-01

    Full Text Available The ability to map instructions running in a microprocessor to a reconfigurable processing unit (RPU), acting as a coprocessor, enables the runtime acceleration of applications and ensures code and possibly performance portability. In this work, we focus on the mapping of loop-based instruction traces (called Megablocks) to RPUs. The proposed approach considers offline partitioning and mapping stages without ignoring their future runtime applicability. We present a toolchain that automatically extracts specific trace-based loops, called Megablocks, from MicroBlaze instruction traces and generates an RPU for executing those loops. Our hardware infrastructure is able to move loop execution from the microprocessor to the RPU transparently, at runtime, and without changing the executable binaries. The toolchain and the system are fully operational. Three FPGA implementations of the system, differing in the hardware interfaces used, were tested and evaluated with a set of 15 application kernels. Speedups ranging from 1.26 to 3.69 were achieved for the best alternative, using a MicroBlaze processor with local memory.

  10. Efficient molecular dynamics simulations with many-body potentials on graphics processing units

    Science.gov (United States)

    Fan, Zheyong; Chen, Wei; Vierimaa, Ville; Harju, Ari

    2017-09-01

    Graphics processing units have been extensively used to accelerate classical molecular dynamics simulations. However, there has been much less progress on the acceleration of force evaluations for many-body potentials compared to pairwise ones. In the conventional force evaluation algorithm for many-body potentials, the force, virial stress, and heat current for a given atom are accumulated within different loops, which could result in write conflicts between different threads in a CUDA kernel. In this work, we provide a new force evaluation algorithm, based on an explicit pairwise force expression for many-body potentials derived recently (Fan et al., 2015). In our algorithm, the force, virial stress, and heat current for a given atom are accumulated within a single thread, free of write conflicts. We discuss the formulations and algorithms and evaluate their performance. A new open-source code, GPUMD, is developed based on the proposed formulations. For the Tersoff many-body potential, the double-precision performance of GPUMD using a Tesla K40 card is equivalent to that of the LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) molecular dynamics code running with about 100 CPU cores (Intel Xeon CPU X5670 @ 2.93 GHz).
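
    The essence of the conflict-free scheme is that one thread owns one atom and accumulates everything for that atom locally before a single write. The sketch below illustrates the pattern with a plain Lennard-Jones pair force standing in for the explicit pairwise form of a many-body potential; the neighbor-list layout and reduced units are assumptions for illustration.

    ```cuda
    #include <cuda_runtime.h>

    // One thread owns atom i and accumulates its force and virial contributions
    // in registers, writing them out once at the end, so no other thread ever
    // writes to atom i's slots and no atomics are needed.
    __global__ void force_per_atom(int n, const float3 *pos,
                                   const int *nbr, const int *n_nbr, int max_nbr,
                                   float3 *force, float *virial)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float3 fi = make_float3(0.f, 0.f, 0.f);
        float vi = 0.f;
        for (int k = 0; k < n_nbr[i]; ++k) {
            int j = nbr[i * max_nbr + k];
            float dx = pos[i].x - pos[j].x;
            float dy = pos[i].y - pos[j].y;
            float dz = pos[i].z - pos[j].z;
            float r2 = dx * dx + dy * dy + dz * dz;
            float inv2 = 1.0f / r2, s6 = inv2 * inv2 * inv2;      // sigma = 1 units
            float f_over_r = 24.0f * (2.0f * s6 * s6 - s6) * inv2; // epsilon = 1
            fi.x += f_over_r * dx; fi.y += f_over_r * dy; fi.z += f_over_r * dz;
            vi += 0.5f * f_over_r * r2;                            // pair virial share
        }
        force[i] = fi;     // single uncontended write per atom
        virial[i] = vi;
    }
    ```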

  11. Accelerating electrostatic surface potential calculation with multi-scale approximation on graphics processing units.

    Science.gov (United States)

    Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V

    2010-06-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed-up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  12. Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing Units

    Science.gov (United States)

    Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.

    2010-01-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792

  13. Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?

    Science.gov (United States)

    Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend

    2011-10-11

    In the waste recycling Monte Carlo (WRMC) algorithm, (1) multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well suited for implementation in parallel on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.

  14. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Full Text Available Abstract Background Large-scale protein structure alignment, an indispensable tool in structural bioinformatics, poses a tremendous challenge to computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign can incorporate many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues of protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massively parallel computing power of GPUs.

  15. permGPU: Using graphics processing units in RNA microarray association studies

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2010-06-01

    Full Text Available Abstract Background Many analyses of microarray association studies involve permutation, bootstrap resampling and cross-validation, which are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. Results We have developed a CUDA-based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. Conclusions permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative and censored time-to-event traits.

  16. Monte Carlo methods for neutron transport on graphics processing units using Cuda - 015

    International Nuclear Information System (INIS)

    Nelson, A.G.; Ivanov, K.N.

    2010-01-01

    This work examined the feasibility of utilizing Graphics Processing Units (GPUs) to accelerate Monte Carlo neutron transport simulations. First, a clean-sheet MC code was written in C++ for an x86 CPU and later ported to run on GPUs using NVIDIA's CUDA programming language. After further optimization, the GPU ran 21 times faster than the CPU code when using single-precision floating point math. This can be further increased with no additional effort if accuracy is sacrificed for speed: using a compiler flag, the speedup was increased to 22x. Further, if double-precision floating point math is desired for neutron tracking through the geometry, a speedup of 11x was obtained. The GPUs have proven to be useful in this study, but the current generation does have limitations: the maximum memory currently available on a single GPU is only 4 GB; the GPU RAM does not provide error-checking and correction; and the optimization required for large speedups can lead to confusing code. (authors)

  17. GPUmotif: an ultra-fast and energy-efficient motif analysis program using graphics processing units.

    Science.gov (United States)

    Zandevakili, Pooya; Hu, Ming; Qin, Zhaohui

    2012-01-01

    Computational detection of TF binding patterns has become an indispensable tool in functional genomics research. With the rapid advance of new sequencing technologies, large amounts of protein-DNA interaction data have been produced. Analyzing this data can provide substantial insight into the mechanisms of transcriptional regulation. However, the massive amount of sequence data presents daunting challenges. In our previous work, we have developed a novel algorithm called Hybrid Motif Sampler (HMS) that enables more scalable and accurate motif analysis. Despite much improvement, HMS is still time-consuming due to the requirement to calculate matching probabilities position-by-position. Using the NVIDIA CUDA toolkit, we developed a graphics processing unit (GPU)-accelerated motif analysis program named GPUmotif. We proposed a "fragmentation" technique to hide data transfer time between memories. Performance comparison studies showed that commonly-used model-based motif scan and de novo motif finding procedures such as HMS can be dramatically accelerated when running GPUmotif on NVIDIA graphics cards. As a result, energy consumption can also be greatly reduced when running motif analysis using GPUmotif. The GPUmotif program is freely available at http://sourceforge.net/projects/gpumotif/
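
    The "fragmentation" idea, splitting the input so transfers for one chunk overlap computation on another, is a standard CUDA-streams pattern. The sketch below shows the generic idiom under assumed names; it is not GPUmotif's actual code, and real overlap additionally requires pinned host buffers (e.g., from cudaHostAlloc).

    ```cuda
    #include <cuda_runtime.h>
    #include <vector>
    #include <algorithm>

    __global__ void score_kernel(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i] * in[i];   // placeholder for per-position scoring
    }

    // Split the input into fragments and pipeline copy-in / compute / copy-out on
    // separate streams, so the transfers for one fragment overlap the kernel for
    // another.
    void process_fragments(const float *h_in, float *h_out, int n, int n_frag)
    {
        int frag = (n + n_frag - 1) / n_frag;
        float *d_in, *d_out;
        cudaMalloc(&d_in, n * sizeof(float));
        cudaMalloc(&d_out, n * sizeof(float));
        std::vector<cudaStream_t> s(n_frag);
        for (auto &st : s) cudaStreamCreate(&st);
        for (int f = 0; f < n_frag; ++f) {
            int off = f * frag, len = std::min(frag, n - off);
            if (len <= 0) break;
            cudaMemcpyAsync(d_in + off, h_in + off, len * sizeof(float),
                            cudaMemcpyHostToDevice, s[f]);
            score_kernel<<<(len + 255) / 256, 256, 0, s[f]>>>(d_in + off, d_out + off, len);
            cudaMemcpyAsync(h_out + off, d_out + off, len * sizeof(float),
                            cudaMemcpyDeviceToHost, s[f]);
        }
        cudaDeviceSynchronize();
        for (auto &st : s) cudaStreamDestroy(st);
        cudaFree(d_in); cudaFree(d_out);
    }
    ```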

  18. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL

    Directory of Open Access Journals (Sweden)

    Guan-Jie Hua

    2017-10-01

    Full Text Available A phylogenetic tree is a visual diagram of the relationships between a set of biological species, which scientists use to analyze many characteristics of the species. Distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa, but they suffer from computational performance issues on large inputs. Although several new methods built on high-performance hardware and frameworks have been proposed, the issue still exists. In this work, a novel parallel Unweighted Pair Group Method with Arithmetic Mean approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedups over implementations of the Unweighted Pair Group Method with Arithmetic Mean on a modern CPU and a single GPU, respectively.
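
    For reference, the serial UPGMA loop that MGUPGMA distributes across GPUs looks like the following host-side sketch: a minimal O(n^3) version in which tree-node bookkeeping is elided and the container layout is an assumption.

    ```cuda
    #include <vector>
    #include <limits>

    // UPGMA: repeatedly find the closest pair of clusters and merge them,
    // updating distances as a size-weighted arithmetic mean over leaf pairs.
    void upgma(std::vector<std::vector<double>> d)   // symmetric distance matrix
    {
        int n = (int)d.size();
        std::vector<int> size(n, 1), alive(n, 1);
        for (int merges = 0; merges < n - 1; ++merges) {
            int a = -1, b = -1;
            double best = std::numeric_limits<double>::max();
            for (int i = 0; i < n; ++i) if (alive[i])
                for (int j = i + 1; j < n; ++j) if (alive[j] && d[i][j] < best)
                    { best = d[i][j]; a = i; b = j; }
            // merge cluster b into cluster a
            for (int k = 0; k < n; ++k) if (alive[k] && k != a && k != b) {
                double m = (size[a] * d[a][k] + size[b] * d[b][k]) / (size[a] + size[b]);
                d[a][k] = d[k][a] = m;
            }
            size[a] += size[b];
            alive[b] = 0;
            // record (a, b, best / 2) as an internal node of the tree here
        }
    }
    ```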

  19. GPUmotif: an ultra-fast and energy-efficient motif analysis program using graphics processing units.

    Directory of Open Access Journals (Sweden)

    Pooya Zandevakili

    Full Text Available Computational detection of TF binding patterns has become an indispensable tool in functional genomics research. With the rapid advance of new sequencing technologies, large amounts of protein-DNA interaction data have been produced. Analyzing this data can provide substantial insight into the mechanisms of transcriptional regulation. However, the massive amount of sequence data presents daunting challenges. In our previous work, we have developed a novel algorithm called Hybrid Motif Sampler (HMS) that enables more scalable and accurate motif analysis. Despite much improvement, HMS is still time-consuming due to the requirement to calculate matching probabilities position-by-position. Using the NVIDIA CUDA toolkit, we developed a graphics processing unit (GPU)-accelerated motif analysis program named GPUmotif. We proposed a "fragmentation" technique to hide data transfer time between memories. Performance comparison studies showed that commonly-used model-based motif scan and de novo motif finding procedures such as HMS can be dramatically accelerated when running GPUmotif on NVIDIA graphics cards. As a result, energy consumption can also be greatly reduced when running motif analysis using GPUmotif. The GPUmotif program is freely available at http://sourceforge.net/projects/gpumotif/

  20. The application of projected conjugate gradient solvers on graphical processing units

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory]; Renaut, Rosemary [Arizona State University]

    2011-01-26

    Graphical processing units introduce the capability for large-scale computation at the desktop. The numerical results presented verify that the efficiency and accuracy of basic linear algebra subroutines of all levels are comparable when implemented in CUDA and Jacket, but experimental results demonstrate that the level-three basic linear algebra subroutines offer the greatest potential for improving the efficiency of basic numerical algorithms. We consider the solution of a set of linear equations with multiple right-hand sides using Krylov subspace-based solvers. For the multiple right-hand-side case, it is more efficient to make use of a block implementation of the conjugate gradient algorithm rather than to solve each system independently. Jacket is used for the implementation. Furthermore, including projection from one system to another improves efficiency. A relevant example, for which simulated results are provided, is the reconstruction of a three-dimensional medical image volume acquired from a positron emission tomography scanner, where the efficiency of the reconstruction is improved by using projection across nearby slices.

  1. Quantitative Estimation of Risks for Production Unit Based on OSHMS and Process Resilience

    Science.gov (United States)

    Nyambayar, D.; Koshijima, I.; Eguchi, H.

    2017-06-01

    Three principal elements in the production field of the chemical/petrochemical industry are (i) Production Units, (ii) Production Plant Personnel and (iii) the Production Support System (a computer system introduced to improve productivity). Each principal element has production process resilience, i.e., a capability to restrain disruptive signals occurring in and around the production field, and risk assessment of each principal element is indispensable for the production field. In a production facility, an occupational safety and health management system (hereafter referred to as OSHMS) is introduced to reduce the risk of accidents and troubles that may occur during production. In OSHMS, a risk assessment is specified to reduce potential risks in a production facility such as a factory, and PDCA activities are required for the continual improvement of safe production environments. However, there is no clear statement of how the OSHMS standard is to be adopted in the production field. This study introduces a metric that estimates the resilience of the production field from the resilience generated by the production plant personnel and the results of the risk assessment in the production field. A method for evaluating how systematically OSHMS functions are installed in the production field is also discussed, based on the resilience of the three principal elements.

  2. Accelerating adaptive inverse distance weighting interpolation algorithm on a graphics processing unit.

    Science.gov (United States)

    Mei, Gang; Xu, Liangliang; Xu, Nengxiong

    2017-09-01

    This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms using the graphics processing unit (GPU). AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the spatial distribution pattern of the data points and achieve more accurate predictions than IDW. In this paper, we first present two versions of the GPU-accelerated AIDW: a naive version that does not profit from shared memory and a tiled version that takes advantage of it. We also implement both versions using two data layouts, structure of arrays and array of aligned structures, in both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in computational efficiency between the data layouts; (ii) the tiled version is always slightly faster than the naive version; and (iii) in single precision the achieved speed-up can be up to 763 (on the GPU M5000), while in double precision the highest speed-up obtained is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available.
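
    A minimal sketch of the naive (global-memory) version, with one thread per prediction point. The rule used here for adapting the power parameter, easing it between two bounds from the mean distance to the data points, is a simplification assumed for illustration; the published AIDW derives it more carefully from the data points' spatial distribution pattern.

    ```cuda
    #include <cuda_runtime.h>

    // One thread per prediction point: simplified adaptive IDW in 2D.
    __global__ void aidw(const float2 *pts, const float *val, int n_pts,
                         const float2 *query, float *out, int n_q,
                         float expected_spacing)
    {
        int q = blockIdx.x * blockDim.x + threadIdx.x;
        if (q >= n_q) return;
        // pass 1: mean distance to the data points -> adaptive power
        float mean_d = 0.0f;
        for (int i = 0; i < n_pts; ++i) {
            float dx = query[q].x - pts[i].x, dy = query[q].y - pts[i].y;
            mean_d += sqrtf(dx * dx + dy * dy);
        }
        mean_d /= n_pts;
        float t = fminf(fmaxf(mean_d / (2.0f * expected_spacing), 0.0f), 1.0f);
        float alpha = 1.0f + 2.0f * t;        // power eased between 1 and 3
        // pass 2: inverse-distance-weighted mean with that power
        float wsum = 0.0f, vsum = 0.0f;
        for (int i = 0; i < n_pts; ++i) {
            float dx = query[q].x - pts[i].x, dy = query[q].y - pts[i].y;
            float w = 1.0f / powf(dx * dx + dy * dy + 1e-12f, 0.5f * alpha);
            wsum += w; vsum += w * val[i];
        }
        out[q] = vsum / wsum;
    }
    ```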

  3. Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units.

    Science.gov (United States)

    Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A; Anastasio, Mark A

    2013-02-01

    Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction.

  4. An Optimized Multicolor Point-Implicit Solver for Unstructured Grid Applications on Graphics Processing Units

    Science.gov (United States)

    Zubair, Mohammad; Nielsen, Eric; Luitjens, Justin; Hammond, Dana

    2016-01-01

    In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructured-grid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large tightly-coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically requires a significant fraction of the overall application run time. In this work, an efficient implementation of the solver for graphics processing units is proposed. Several factors present unique challenges to achieving an efficient implementation in this environment. These include the variable amount of parallelism available in different kernel calls, indirect memory access patterns, low arithmetic intensity, and the requirement to support variable block sizes. In this work, the solver is reformulated to use standard sparse and dense Basic Linear Algebra Subprograms (BLAS) functions. However, numerical experiments show that the performance of the BLAS functions available in existing CUDA libraries is suboptimal for matrices representative of those encountered in actual simulations. Instead, optimized versions of these functions are developed. Depending on block size, the new implementations show performance gains of up to 7x over the existing CUDA library functions.

  5. Multidimensional upwind hydrodynamics on unstructured meshes using graphics processing units - I. Two-dimensional uniform meshes

    Science.gov (United States)

    Paardekooper, S.-J.

    2017-08-01

    We present a new method for numerical hydrodynamics which uses a multidimensional generalization of the Roe solver and operates on an unstructured triangular mesh. The main advantage over traditional methods based on Riemann solvers, which commonly use one-dimensional flux estimates as building blocks for a multidimensional integration, is its inherently multidimensional nature, and as a consequence its ability to recognize multidimensional stationary states that are not hydrostatic. A second novelty is the focus on graphics processing units (GPUs). By tailoring the algorithms specifically to GPUs, we are able to get speedups of 100-250 compared to a desktop machine. We compare the multidimensional upwind scheme to a traditional, dimensionally split implementation of the Roe solver on several test problems, and we find that the new method significantly outperforms the Roe solver in almost all cases. This comes with increased computational costs per time-step, which makes the new method approximately a factor of 2 slower than a dimensionally split scheme acting on a structured grid.

  6. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    International Nuclear Information System (INIS)

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-01-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  7. Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit

    Science.gov (United States)

    Vittaldev, Vivek; Russell, Ryan P.

    2017-09-01

    Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but most computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used, and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge-Kutta step ensures that no close approaches are missed. Two orders of magnitude speedup over a serial CPU implementation is shown, and speedups improve moderately with higher-fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
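
    The core counting step reduces to sampling relative states and testing the miss distance against the combined hard-body radius. The host-side sketch below makes strong simplifying assumptions (a single conjunction epoch, a diagonal combined covariance, no propagation over a time window) and shows only the counting idea, not the paper's GPU pipeline.

    ```cuda
    #include <random>

    // Monte Carlo estimate of collision probability at a conjunction: draw the
    // relative position of the two objects from the combined position uncertainty
    // at closest approach and count samples inside the combined hard-body radius.
    double collision_probability(const double mean[3], const double sigma[3],
                                 double combined_radius, int n_samples)
    {
        std::mt19937_64 rng(42);
        std::normal_distribution<double> gauss(0.0, 1.0);
        long hits = 0;
        for (int s = 0; s < n_samples; ++s) {
            double r2 = 0.0;
            for (int k = 0; k < 3; ++k) {
                double x = mean[k] + sigma[k] * gauss(rng);  // sampled relative coordinate
                r2 += x * x;
            }
            if (r2 < combined_radius * combined_radius) ++hits;
        }
        return (double)hits / n_samples;
    }
    ```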

  8. Unit operation in food manufacturing and processing. Shokuhin seizo/kako ni okeru tan'i sosa

    Energy Technology Data Exchange (ETDEWEB)

    Matsuno, R. (Kyoto Univ., Kyoto (Japan). Faculty of Agriculture)

    1993-09-05

    Processed foods must be produced in large quantities, cheaply and safely, and should suit the delicate human sense of taste. Food tastes are affected by human attitudes and the surrounding environment. These factors are reflected in the unit operations of food manufacturing and processing, and they present many technical difficulties. The characteristic of unit operations in food manufacturing and processing is that food materials form a multicomponent system in which trace amounts of aroma components, taste components, vitamins, physiologically active substances and so on are more important than the main components, so models centered on the most abundant component are inapplicable. The purpose of unit operations in food manufacturing and processing is to produce material properties that match the human senses, and many problems therefore remain unsolved. The development of analytical technology also influences manufacturing and processing technology. Consequently, food manufacturing and processing technology must be based on general science. It is necessary to develop unit operations with an understanding of the mutual effects between food and the human body.

  9. [Impact of quality improvement process upon the state of nutritional support in a critical care unit].

    Science.gov (United States)

    Martinuzzi, A; Ferraresi, E; Orsati, M; Palaoro, A; Chaparro, J; Alcántara, S; Amin, C; Feller, C; Di Leo, M E; Guillot, A; García, V

    2012-01-01

    In a preceding article, the state of nutritional support (NS) in an Intensive Care Unit (ICU) was documented [Martinuzzi A et al. Estado del soporte nutricional en una unidad de Cuidados críticos. RNC 2011; 20: 5-17]. In this follow-up work we set out to assess the impact of several organizational, recording and educational interventions upon the current state of NS processes. Interventions comprised presentation of the results of the audit conducted at the ICU before the institution's medical as well as paramedical personnel; their publication in a periodical, peer-reviewed journal; drafting and implementation of a protocol regulating NS schemes to be carried out at the ICU; and conduction of continuous education activities on nutrition (such as "experts' talks", interactive courses, and training in the implementation of the NS protocol). The state of NS processes documented after the interventions was compared with the results annotated in the preceding article. The study observation window ran from March 1st, 2011 to May 31st, 2011, both included. The study series differed only regarding overall mortality: Phase 1: 40.0% vs. Phase 2: 20.5%; difference: 19.5%; Z = 1.927; two-tailed p = 0.054. Interventions resulted in a higher fulfillment rate of the prescribed NS indication; an increase in the number of patients receiving ≥ 80% of the prescribed energy; and a reduction in the number of NS-lost days. Mortality was (numerically) lower in patients in whom the prescribed NS scheme was fulfilled, NS was started early, and who received ≥ 80% of the prescribed energy. Adopted interventions had no effect upon average energy intakes: Phase 1: 574.7 ± 395.3 kcal/24 h vs. Phase 2: 591.1 ± 315.3 kcal/24 h; two-tailed p > 0.05. Educational, recording and organizational interventions might result in a better conduction of NS processes, and thus in lower mortality. Hemodynamic instability is still the most formidable obstacle to initiating and completing NS.

  10. Surface interactions between nanoscale iron and organic material: Potential uses in water treatment process units

    Science.gov (United States)

    Storms, Max

    Membrane systems are among the primary emergent technologies in water treatment process units due to their ease of use, small physical footprint, and high physical rejection. Membrane fouling, the phenomenon by which membranes become clogged or generally soiled, is an inhibitor to optimal efficiency in membrane systems. Novel, composite, and modified surface materials must be investigated to determine their efficacy in improving fouling behavior. Ceramic membranes derived from iron oxide nanoparticles called ferroxanes were coated with a superhydrophilic, zwitterionic polymer called poly(sulfobetaine methacrylate) (polySBMA) to form a composite ceramic-polymeric membrane. Membrane samples with and without polySBMA coating were subjected to fouling with a bovine serum albumin solution, and fouling was observed by measuring permeate flux at 10 mL intervals. Loss of polySBMA was measured using total organic carbon analysis, and membrane samples were characterized using X-ray diffraction, scanning electron microscopy, and optical profilometry. The coated membrane samples decreased the initial fouling rate by 27% and the secondary fouling rate by 24%. Similarly, they displayed a 30% decrease in irreversible fouling during the initial fouling stage, and a 27% decrease in irreversible fouling in the secondary fouling stage; however, retention of polySBMA sufficient for improved performance was not conclusive. The addition of chemical disinfectants into drinking water treatment processes results in the formation of compounds called disinfection by-products (DBPs). The formation of DBPs occurs when common chemical disinfectants (i.e., chlorine) react with organic material. The harmful effects of DBP exposure require that they be monitored and controlled for public safety. This work investigated the ability of nanostructured hematite derived from ferroxane nanoparticles to remove organic precursors to DBPs in the form of humic acid via adsorption processes. The results show that p

  11. A Behavioral Analysis of the Laboratory Learning Process: Redesigning a Teaching Unit on Recrystallization.

    Science.gov (United States)

    Mulder, T.; Verdonk, A. H.

    1984-01-01

    Reports on a project in which observations of student and teaching assistant behavior were used to redesign a teaching unit on recrystallization. Comments on the instruction manual, starting points for teaching the unit, and list of objectives with related tasks are included. (JN)

  12. Risk management and the vulnerability assessment process of the United States Department of Energy

    International Nuclear Information System (INIS)

    Rivers, J.D.; Johnson, O.B.; Callahan, S.N.

    2001-01-01

    Full text: Risk management is an essential element in influencing how the United States Department of Energy's safeguards and security mission is executed. Risk management exists as a function of a target's attractiveness, along with the potential consequences associated with the unauthorized use of that target. The goal of risk management encompasses the fielding and operating of appropriate, cost-effective protection systems generating sufficient deterrence to protect sensitive programs and facilities. Risk mitigation and risk prevention are accomplished through the vulnerability assessment process. The implementation and continued validation of measures to prevent or mitigate risk to acceptable levels constitute the fundamental approach of the Department's risk management program. Due to the incomplete knowledge inherent in any threat definition, it is impossible to precisely tailor a protective system to defend against all threats. The challenge presented to safeguards and security program managers lies in developing systems sufficiently effective to defend against an array of threats slightly greater than can be hypothetically postulated (the design basis threat amended for local conditions). These systems are then balanced against technological, resource, and fiscal constraints. A key element in the risk assessment process is analyzing the security systems against the Design Basis Threat (DBT). The DBT is used to define the level and capability of the threat against the DOE facilities and their assets. In particular it defines motivation, numbers of adversaries, capabilities, and their objectives. Site Safeguards and Security Plans (SSSPs) provide the basis and justification for safeguards and security program development, budget, and staffing requirements. The SSSP process examines, describes, and documents safeguards and security programs, site-wide and by facility; establishes safeguards and security program improvement priorities; describes site and

  13. Failure mode and effect analysis: improving intensive care unit risk management processes.

    Science.gov (United States)

    Askari, Roohollah; Shafii, Milad; Rafiei, Sima; Abolhassani, Mohammad Sadegh; Salarikhah, Elaheh

    2017-04-18

    Purpose Failure modes and effects analysis (FMEA) is a practical tool to evaluate risks, discover failures in a proactive manner and propose corrective actions to reduce or eliminate potential risks. The purpose of this paper is to apply the FMEA technique to examine the hazards associated with the process of service delivery in the intensive care unit (ICU) of a tertiary hospital in Yazd, Iran. Design/methodology/approach This was a before-after study conducted between March 2013 and December 2014. By forming an FMEA team, all potential hazards associated with ICU services - their frequency and severity - were identified. Then a risk priority number was calculated for each activity as an indicator representing high-priority areas that need special attention and resource allocation. Findings Eight failure modes with the highest priority scores, including endotracheal tube defect, wrong placement of endotracheal tube, EVD interface, aspiration failure during suctioning, chest tube failure, tissue injury and deep vein thrombosis, were selected for improvement. Findings affirmed that the improvement strategies were generally satisfactory and significantly decreased total failures. Practical implications Application of FMEA in ICUs proved to be effective in proactively decreasing the risk of failures and corrected the control measures up to acceptable levels in all eight areas of function. Originality/value Using a prospective risk assessment approach, such as FMEA, could be beneficial in dealing with potential failures through proposing preventive actions in a proactive manner. The method could be used as a tool for healthcare continuous quality improvement, as it identifies both systemic and human errors and offers practical advice to deal effectively with them.
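    The abstract above calculates a risk priority number (RPN) for each failure mode. As a hedged sketch, the classic FMEA formulation multiplies three ratings, severity, occurrence, and detectability, each typically scored 1-10; the detectability factor and all numbers below are illustrative assumptions, not data from the paper.

        # Minimal RPN sketch (assumed classic severity x occurrence x detection form).
        def rpn(severity: int, occurrence: int, detection: int) -> int:
            """Risk priority number = severity * occurrence * detection."""
            return severity * occurrence * detection

        # Illustrative placeholder ratings, not the study's actual scores.
        failure_modes = {
            "endotracheal tube defect": (9, 4, 5),
            "wrong placement of endotracheal tube": (10, 3, 4),
            "aspiration failure during suctioning": (7, 5, 3),
        }

        # Rank failure modes so the highest-priority hazards are addressed first.
        for mode, scores in sorted(failure_modes.items(),
                                   key=lambda kv: rpn(*kv[1]), reverse=True):
            print(f"{mode}: RPN = {rpn(*scores)}")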

  14. Embedded-Based Graphics Processing Unit Cluster Platform for Multiple Sequence Alignments

    Directory of Open Access Journals (Sweden)

    Jyh-Da Wei

    2017-08-01

    Full Text Available High-end graphics processing units (GPUs), such as NVIDIA Tesla/Fermi/Kepler series cards with thousands of cores per chip, have been widely applied to high-performance computing over the past decade. These desktop GPU cards must be installed in personal computers or servers with desktop CPUs, so the cost and power consumption of constructing a GPU cluster platform are very high. In recent years, NVIDIA released an embedded board, called Jetson Tegra K1 (TK1), which contains 4 ARM Cortex-A15 CPUs and 192 Compute Unified Device Architecture cores (belonging to the Kepler GPU family). Jetson Tegra K1 has several advantages, such as low cost, low power consumption, and high applicability, and it has been applied in several specific applications. In our previous work, a bioinformatics platform with a single TK1 (STK platform) was constructed, and that work also demonstrated that Web and mobile services can be implemented on the STK platform with a good cost-performance ratio by comparing the STK platform with desktop CPUs and GPUs. In this work, an embedded-based GPU cluster platform is constructed with multiple TK1s (MTK platform). Complex system installation and setup are necessary procedures at first. Then, 2 job assignment modes are designed for the MTK platform to provide services for users. Finally, ClustalW v2.0.11 and ClustalWtk are ported to the MTK platform. The experimental results showed that the speedup ratios achieved 5.5 and 4.8 times for ClustalW v2.0.11 and ClustalWtk, respectively, when comparing 6 TK1s with a single TK1. The MTK platform is proven to be useful for multiple sequence alignments.

  15. Embedded-Based Graphics Processing Unit Cluster Platform for Multiple Sequence Alignments.

    Science.gov (United States)

    Wei, Jyh-Da; Cheng, Hui-Jun; Lin, Chun-Yuan; Ye, Jin; Yeh, Kuan-Yu

    2017-01-01

    High-end graphics processing units (GPUs), such as NVIDIA Tesla/Fermi/Kepler series cards with thousands of cores per chip, have been widely applied to high-performance computing over the past decade. These desktop GPU cards must be installed in personal computers or servers with desktop CPUs, so the cost and power consumption of constructing a GPU cluster platform are very high. In recent years, NVIDIA released an embedded board, called Jetson Tegra K1 (TK1), which contains 4 ARM Cortex-A15 CPUs and 192 Compute Unified Device Architecture cores (belonging to the Kepler GPU family). Jetson Tegra K1 has several advantages, such as low cost, low power consumption, and high applicability, and it has been applied in several specific applications. In our previous work, a bioinformatics platform with a single TK1 (STK platform) was constructed, and that work also demonstrated that Web and mobile services can be implemented on the STK platform with a good cost-performance ratio by comparing the STK platform with desktop CPUs and GPUs. In this work, an embedded-based GPU cluster platform is constructed with multiple TK1s (MTK platform). Complex system installation and setup are necessary procedures at first. Then, 2 job assignment modes are designed for the MTK platform to provide services for users. Finally, ClustalW v2.0.11 and ClustalWtk are ported to the MTK platform. The experimental results showed that the speedup ratios achieved 5.5 and 4.8 times for ClustalW v2.0.11 and ClustalWtk, respectively, when comparing 6 TK1s with a single TK1. The MTK platform is proven to be useful for multiple sequence alignments.

  16. FLOCKING-BASED DOCUMENT CLUSTERING ON THE GRAPHICS PROCESSING UNIT [Book Chapter

    Energy Technology Data Exchange (ETDEWEB)

    Charles, J S; Patton, R M; Potok, T E; Cui, X

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its O(n²) complexity. As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, like most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, is highly parallel and has seen improved performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA®, we developed a document flocking implementation to be run on the NVIDIA® GEFORCE 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3,000 documents. The results of these tests were very significant. Performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.
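    To make the O(n²) step concrete, the following is a minimal sketch, in NumPy rather than CUDA, of a similarity-driven flocking update in the spirit of the algorithm described above. The cosine-similarity steering rule, the constants, and all names are illustrative assumptions; the paper's GPU kernel is not reproduced here.

        import numpy as np

        def flock_step(pos, vel, docs, radius=1.0, sim_threshold=0.5, dt=0.1):
            """One O(n^2) update: each document-boid steers toward nearby, similar documents."""
            n = len(pos)
            # Cosine similarity between all document pairs (the n^2 bottleneck).
            unit = docs / np.linalg.norm(docs, axis=1, keepdims=True)
            sim = unit @ unit.T
            for i in range(n):
                dist = np.linalg.norm(pos - pos[i], axis=1)
                near = (dist < radius) & (sim[i] > sim_threshold)
                near[i] = False
                if near.any():
                    # Steer toward the centroid of similar neighbours.
                    vel[i] += 0.05 * (pos[near].mean(axis=0) - pos[i])
            return pos + vel * dt, vel

        rng = np.random.default_rng(0)
        pos, vel = rng.random((50, 2)), np.zeros((50, 2))
        docs = rng.random((50, 100))          # stand-in bag-of-words vectors
        pos, vel = flock_step(pos, vel, docs)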

  17. Fast analysis of molecular dynamics trajectories with graphics processing units-Radial distribution function histogramming

    International Nuclear Information System (INIS)

    Levine, Benjamin G.; Stone, John E.; Kohlmeyer, Axel

    2011-01-01

    The calculation of radial distribution functions (RDFs) from molecular dynamics trajectory data is a common and computationally expensive analysis task. The rate limiting step in the calculation of the RDF is building a histogram of the distance between atom pairs in each trajectory frame. Here we present an implementation of this histogramming scheme for multiple graphics processing units (GPUs). The algorithm features a tiling scheme to maximize the reuse of data at the fastest levels of the GPU's memory hierarchy and dynamic load balancing to allow high performance on heterogeneous configurations of GPUs. Several versions of the RDF algorithm are presented, utilizing the specific hardware features found on different generations of GPUs. We take advantage of larger shared memory and atomic memory operations available on state-of-the-art GPUs to accelerate the code significantly. The use of atomic memory operations allows the fast, limited-capacity on-chip memory to be used much more efficiently, resulting in a fivefold increase in performance compared to the version of the algorithm without atomic operations. The ultimate version of the algorithm running in parallel on four NVIDIA GeForce GTX 480 (Fermi) GPUs was found to be 92 times faster than a multithreaded implementation running on an Intel Xeon 5550 CPU. On this multi-GPU hardware, the RDF between two selections of 1,000,000 atoms each can be calculated in 26.9 s per frame. The multi-GPU RDF algorithms described here are implemented in VMD, a widely used and freely available software package for molecular dynamics visualization and analysis.
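    As a hedged sketch of the rate-limiting step described above, the loop below histograms pair distances in plain NumPy; the paper's contribution is a tiled, atomic-operation GPU version of exactly this computation. The normalization of the histogram to g(r) (by shell volume and density) is omitted, and all names are illustrative.

        import numpy as np

        def rdf_histogram(coords_a, coords_b, r_max, n_bins):
            """Histogram of pair distances between two atom selections."""
            hist = np.zeros(n_bins, dtype=np.int64)
            bin_width = r_max / n_bins
            for a in coords_a:                        # O(N*M) pair loop
                d = np.linalg.norm(coords_b - a, axis=1)
                idx = (d[d < r_max] / bin_width).astype(int)
                np.add.at(hist, idx, 1)               # serial analogue of CUDA atomicAdd
            return hist

        rng = np.random.default_rng(1)
        hist = rdf_histogram(rng.random((500, 3)), rng.random((500, 3)),
                             r_max=1.0, n_bins=50)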

  18. Utilizing General Purpose Graphics Processing Units to Improve Performance of Computer Modelling and Visualization

    Science.gov (United States)

    Monk, J.; Zhu, Y.; Koons, P. O.; Segee, B. E.

    2009-12-01

    With the introduction of the G8X series of cards by nVidia, an architecture called CUDA was released, and virtually all subsequent video cards have had CUDA support. With this new architecture nVidia provided extensions for C/C++ that create an Application Programming Interface (API) allowing code to be executed on the GPU. Since then the concept of GPGPU (general-purpose graphics processing unit) computing has been growing: the GPU is very good at algebra and at running things in parallel, so that power should be put to use for other applications. This is highly appealing in the area of geodynamic modeling, as multiple parallel solutions of the same differential equations at different points in space lead to a large speedup in simulation. Another benefit of CUDA is a programmatic method of transferring large amounts of data between the computer's main memory and the dedicated GPU memory located on the video card. In addition to being able to compute and render on the video card, the CUDA framework allows for a large speedup in situations, such as with a tiled display wall, where the rendered pixels are to be displayed in a different location than where they are rendered. A CUDA extension for VirtualGL was developed allowing for faster readback at high resolutions. This paper examines several aspects of rendering OpenGL graphics on large displays using VirtualGL and VNC. It demonstrates how performance can be significantly improved in rendering on a tiled monitor wall. We present a CUDA-enhanced version of VirtualGL as well as the advantages of having multiple VNC servers. We discuss restrictions caused by readback and blitting rates and how they are affected by different sizes of virtual displays being rendered.

  19. An apparatus and process for forming P-N junction semiconductor units

    International Nuclear Information System (INIS)

    1975-01-01

    It is stated that although many methods of ion implantation have been developed, it seems that the method of 'hot implantation' is still in its infancy. In this method the target is preheated in an ion implantor during implantation of ions, leading to radiation-enhanced diffusion. The apparatus described comprises the following: (i) a bell jar evacuated to 10⁻³ Torr containing four electrodes arranged in two pairs, one electrode of the first pair being in the form of a mesh; (ii) a source of high pulsating direct voltage connected to the first pair of electrodes, with the mesh electrode negatively poled, to ionise the rarefied air in the bell jar and accelerate the resulting positive N and O ions; (iii) an RF voltage source connected to the other pair of electrodes to facilitate the ionisation; (iv) a dopant semiconductor body, heated by a wire-wound heater, placed underneath the mesh electrode so that the accelerated ions bombard the dopant layer through the mesh electrode and implant dopant atoms in the semiconductor body. The distance between the mesh electrode and the surface of the dopant-coated semiconductive body should be about 5 mm. The mesh electrode consists of a sputtering-resistant refractory metal, and includes a cooling system. The dopant-coated semiconductive body is placed on a ceramic plate in the bell jar, and the power supply line of the heater is insulated from the voltage applied to the negative electrode, which is earthed, by using an insulated heater transformer combined with an autotransformer. The ceramic plate is attached to a plate on which the heater is wound, and the temperature of the heating should be variable between 400 °C and 500 °C. A process for forming P-N junction semiconductor units using this apparatus is described. (U.K.)

  20. Real-time computation of parameter fitting and image reconstruction using graphical processing units

    Science.gov (United States)

    Locans, Uldis; Adelmann, Andreas; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Günther; Wang, Qiulin

    2017-06-01

    In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high-performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at the Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of μSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify parts of the algorithms in need of optimization. Efficient GPU kernels were created in order to allow the applications to use a GPU and to speed up the previously identified parts. Benchmarking tests were performed in order to measure the achieved speedup. During this work, we focused on single-GPU systems to show that real-time data analysis of these problems can be achieved without the need for large computing clusters. The results show that the currently used application for parameter fitting, which uses OpenMP to parallelize calculations over multiple CPU cores, can be accelerated around 40 times through the use of a GPU. The speedup may vary depending on the size and complexity of the problem. For PET image analysis, the obtained speedups of the GPU version were more than 40 times compared to a single-core CPU implementation. The achieved results show that it is possible to improve the execution time by orders of magnitude.

  1. Modification and Validation of an Automotive Data Processing Unit, Compessed Video System, and Communications Equipment

    Energy Technology Data Exchange (ETDEWEB)

    Carter, R.J.

    1997-04-01

    The primary purpose of the "modification and validation of an automotive data processing unit (DPU), compressed video system, and communications equipment" cooperative research and development agreement (CRADA) was to modify and validate both hardware and software, developed by Scientific Atlanta, Incorporated (S-A) for defense applications (e.g., rotary-wing aircraft), for the commercial-sector surface transportation domain (i.e., automobiles and trucks). S-A also furnished a state-of-the-art compressed video digital storage and retrieval system (CVDSRS), and off-the-shelf data storage and transmission equipment to support the data acquisition system for crash avoidance research (DASCAR) project conducted by Oak Ridge National Laboratory (ORNL). In turn, S-A received access to hardware and technology related to DASCAR. DASCAR was subsequently removed completely and installation was repeated a number of times to gain an accurate idea of complete installation, operation, and removal of DASCAR. Upon satisfactory completion of the DASCAR construction and preliminary shakedown, ORNL provided NHTSA with an operational demonstration of DASCAR at their East Liberty, OH test facility. The demonstration included an on-the-road demonstration of the entire data acquisition system using NHTSA's test track. In addition, the demonstration also consisted of a briefing containing the following: ORNL generated a plan for validating the prototype data acquisition system with regard to: removal of DASCAR from an existing vehicle, and installation and calibration in other vehicles; reliability of the sensors and systems; data collection and transmission process (data integrity); impact on the drivability of the vehicle and obtrusiveness of the system to the driver; data analysis procedures; conspicuousness of the vehicle to other drivers; and DASCAR installation and removal training and documentation. In order to identify any operational problems not captured by the systems

  2. Investigation of scale effects and directionality dependence on friction and adhesion of human hair using AFM and macroscale friction test apparatus

    International Nuclear Information System (INIS)

    LaTorre, Carmen; Bhushan, Bharat

    2006-01-01

    Macroscale testing of human hair tribological properties has been widely used to aid in the development of better shampoos and conditioners. Recently, the literature has focused on using the atomic force microscope (AFM) to study surface roughness, coefficient of friction, adhesive force, and wear (tribological properties) on the nanoscale in order to increase understanding of how shampoos and conditioners interact with the hair cuticle. Since there are both similarities and differences when comparing the tribological trends at both scales, it is recognized that scale effects are an important aspect of studying the tribology of hair. However, no microscale tribological data for hair exist in the literature. This is unfortunate because many hair-skin, hair-comb, and hair-hair contact interactions take place at microasperities ranging from a few μm to hundreds of μm. Thus, to bridge the gap between the macro- and nanoscale data, as well as to gain a full understanding of the mechanisms behind the trends, it is now worthwhile to look at hair tribology on the microscale. Presented in this paper are coefficient of friction and adhesive force data on various scales for virgin and chemically damaged hair, both with and without conditioner treatment. The macroscale coefficient of friction was determined using a traditional friction test apparatus. Microscale and nanoscale tribological characterization was performed with AFM tips of various radii. The nano-, micro-, and macroscale trends are compared and the mechanisms behind the scale effects are discussed. Since the coefficient of friction changes drastically (on any scale) depending on whether the direction of motion is along or against the cuticle scales, the directionality dependence and the responsible mechanisms are also discussed.

  3. Unit Testing Using Design by Contract and Equivalence Partitions, Extreme Programming and Agile Processes in Software Engineering

    DEFF Research Database (Denmark)

    Madsen, Per

    2003-01-01

    Extreme Programming [1] and in particular the idea of Unit Testing can improve the quality of the testing process. But still programmers need to do a lot of tiresome manual work writing test cases. If the programmers could get some automatic tool support enforcing the quality of test cases, then the overall quality of the software would improve significantly.

  4. 78 FR 1259 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2013-01-08

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2013 Adverse Effect Wage Rates AGENCY: Employment and Training Administration, Department of Labor. ACTION: Notice. SUMMARY: The Employment and...

  5. 76 FR 79711 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2011-12-22

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2012 Adverse Effect Wage Rates AGENCY: Employment and Training Administration, Department of Labor. ACTION: Notice. SUMMARY: The Employment and...

  6. 40 CFR 63.2252 - What are the requirements for process units that have no control or work practice requirements?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 12 2010-07-01 2010-07-01 true What are the requirements for process units that have no control or work practice requirements? 63.2252 Section 63.2252 Protection of... Pollutants: Plywood and Composite Wood Products General Compliance Requirements § 63.2252 What are the...

  7. Engineering Encounters: The Cat in the Hat Builds Satellites. A Unit Promoting Scientific Literacy and the Engineering Design Process

    Science.gov (United States)

    Rehmat, Abeera P.; Owens, Marissa C.

    2016-01-01

    This column presents ideas and techniques to enhance your science teaching. This month's issue shares information about a unit promoting scientific literacy and the engineering design process. The integration of engineering with scientific practices in K-12 education can promote creativity, hands-on learning, and an improvement in students'…

  8. Genome-Wide Mapping of Transcriptional Regulation and Metabolism Describes Information-Processing Units in Escherichia coli

    Directory of Open Access Journals (Sweden)

    Daniela Ledezma-Tejeida

    2017-08-01

    Full Text Available In the face of changes in their environment, bacteria adjust gene expression levels and produce appropriate responses. The individual layers of this process have been widely studied: the transcriptional regulatory network describes the regulatory interactions that produce changes in the metabolic network, both of which are coordinated by the signaling network, but the interplay between them has never been described in a systematic fashion. Here, we formalize the process of detection and processing of environmental information mediated by individual transcription factors (TFs, utilizing a concept termed genetic sensory response units (GENSOR units, which are composed of four components: (1 a signal, (2 signal transduction, (3 genetic switch, and (4 a response. We used experimentally validated data sets from two databases to assemble a GENSOR unit for each of the 189 local TFs of Escherichia coli K-12 contained in the RegulonDB database. Further analysis suggested that feedback is a common occurrence in signal processing, and there is a gradient of functional complexity in the response mediated by each TF, as opposed to a one regulator/one pathway rule. Finally, we provide examples of other GENSOR unit applications, such as hypothesis generation, detailed description of cellular decision making, and elucidation of indirect regulatory mechanisms.
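    For readers who think in code, the four-component GENSOR unit lends itself to a simple record type. This is a hedged sketch only; the field names follow the abstract, and the lac-operon example instance is textbook biology rather than an entry quoted from RegulonDB.

        from dataclasses import dataclass

        @dataclass
        class GensorUnit:
            signal: str           # (1) the environmental signal detected
            transduction: str     # (2) how the signal reaches the transcription factor
            genetic_switch: str   # (3) the TF and the genes it switches
            response: str         # (4) the resulting metabolic/physiological response

        example = GensorUnit(
            signal="allolactose",
            transduction="allolactose binds LacI and releases operator repression",
            genetic_switch="LacI acting on the lacZYA operon",
            response="lactose uptake and catabolism",
        )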

  9. High performance direct gravitational N-body simulations on graphics processing units II: An implementation in CUDA

    NARCIS (Netherlands)

    Belleman, R.G.; Bédorf, J.; Portegies Zwart, S.F.

    2008-01-01

    We present the results of gravitational direct N-body simulations using the graphics processing unit (GPU) on a commercial NVIDIA GeForce 8800GTX designed for gaming computers. The force evaluation of the N-body problem is implemented in "Compute Unified Device Architecture" (CUDA) using the GPU to
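    A minimal sketch of the underlying computation may be useful: the direct N-body method evaluates all pairwise gravitational interactions, which is what the CUDA implementation parallelizes. The NumPy version below, with an assumed Plummer softening parameter eps (a standard device in direct-summation codes, not a detail taken from this record), is illustrative only.

        import numpy as np

        def accelerations(pos, mass, G=1.0, eps=1e-3):
            """Pairwise gravitational accelerations with Plummer softening."""
            dx = pos[None, :, :] - pos[:, None, :]    # (n, n, 3) separations
            r2 = (dx ** 2).sum(axis=-1) + eps ** 2
            inv_r3 = r2 ** -1.5
            np.fill_diagonal(inv_r3, 0.0)             # exclude self-interaction
            return G * (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

        rng = np.random.default_rng(2)
        acc = accelerations(rng.standard_normal((128, 3)), np.ones(128))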

  10. Medical review practices for driver licensing volume 3: guidelines and processes in the United States.

    Science.gov (United States)

    2017-04-01

    This is the third of three reports examining driver medical review practices in the United States and how they fulfill the basic functions of identifying, assessing, and rendering licensing decisions on medically or functionally at-risk drivers. ...

  11. STRATEGIC BUSINESS UNIT – THE CENTRAL ELEMENT OF THE BUSINESS PORTFOLIO STRATEGIC PLANNING PROCESS

    OpenAIRE

    FLORIN TUDOR IONESCU

    2011-01-01

    Over time, due to changes in the marketing environment generated by tightening competition and by technological, social and political pressures, companies have adopted a new approach by which potential businesses began to be treated as strategic business units. A strategic business unit can be considered a part of a company, a product line within a division, and sometimes a single product or brand. From a strategic perspective, the diversified companies represent a collection of busine...

  12. The Politics of Process Implementation: Explaining Variations Across Units in a High-Tech Firm

    DEFF Research Database (Denmark)

    Müller, Sune Dueholm

    political strategies for adopting consensus or conflict based leadership styles and management practices to process implementation depending on the actors' response patterns. The developed concepts and propositions contribute to the streams of literature on process innovation and Software Process...

  13. Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction

    Directory of Open Access Journals (Sweden)

    J. Adam Wilson

    2009-07-01

    Full Text Available The clock speeds of modern computer processors have nearly plateaued in the past five years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card (GPU) was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a CPU-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
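    As a hedged sketch of the first GPU-accelerated stage described above, the snippet below applies a spatial filter to multichannel data as a single matrix-matrix multiplication. The common-average-reference filter, the channel count, and the random stand-in data are assumptions for illustration; the paper's CUDA kernels and autoregressive spectral step are not reproduced.

        import numpy as np

        n_channels, n_samples = 1000, 400
        data = np.random.randn(n_channels, n_samples)   # stand-in for recorded signals

        # Common average reference: subtract the mean of all channels from each channel.
        car = np.eye(n_channels) - np.ones((n_channels, n_channels)) / n_channels
        filtered = car @ data                           # the matrix-matrix multiply stage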

  14. Graphics Processing Unit-Enhanced Genetic Algorithms for Solving the Temporal Dynamics of Gene Regulatory Networks.

    Science.gov (United States)

    García-Calvo, Raúl; Guisado, J L; Diaz-Del-Rio, Fernando; Córdoba, Antonio; Jiménez-Morales, Francisco

    2018-01-01

    Understanding the regulation of gene expression is one of the key problems in current biology. A promising method for that purpose is the determination of the temporal dynamics between known initial and ending network states, by using simple acting rules. The huge number of rule combinations and the inherently nonlinear nature of the problem make genetic algorithms an excellent candidate for finding optimal solutions. As this is a computationally intensive problem that needs long runtimes in conventional architectures for realistic network sizes, it is fundamental to accelerate this task. In this article, we study how to develop efficient parallel implementations of this method for the fine-grained parallel architecture of graphics processing units (GPUs) using the compute unified device architecture (CUDA) platform. An exhaustive and methodical study of various parallel genetic algorithm schemes (master-slave, island, cellular, and hybrid models) and various individual selection methods (roulette, elitist) is carried out for this problem. Several procedures that optimize the use of the GPU's resources are presented. We conclude that the implementation that produces better results (both from the performance and the genetic algorithm fitness perspectives) is simulating a few thousand individuals grouped in a few islands using elitist selection. This model comprises two powerful factors for discovering the best solutions: finding good individuals in a small number of generations, and introducing genetic diversity via relatively frequent and numerous migration. As a result, we even found the optimal solution for the analyzed gene regulatory network (GRN). In addition, a comparative study of the performance obtained by the different parallel implementations on GPU versus a sequential application on CPU is carried out. In our tests, a multifold speedup was obtained for our optimized parallel implementation of the method on medium class GPU over an equivalent
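    The winning configuration reported above, a few islands evolved with elitist selection and periodic migration, can be sketched compactly on a CPU. Everything below (population sizes, mutation rate, ring migration, the one-max fitness used in the usage line) is an illustrative assumption; the GPU mapping is not shown.

        import random

        def mutate(ind, rate=0.02):
            return [g ^ 1 if random.random() < rate else g for g in ind]

        def evolve(fitness, n_islands=4, pop_size=64, n_genes=32,
                   generations=100, migrate_every=10, elite=4):
            islands = [[[random.randint(0, 1) for _ in range(n_genes)]
                        for _ in range(pop_size)] for _ in range(n_islands)]
            for gen in range(generations):
                for isl in islands:
                    isl.sort(key=fitness, reverse=True)
                    parents = isl[:elite]                     # elitist selection
                    isl[elite:] = [mutate(random.choice(parents))
                                   for _ in range(pop_size - elite)]
                if gen % migrate_every == 0:                  # ring migration of elites
                    for i, isl in enumerate(islands):
                        islands[(i + 1) % n_islands][-elite:] = [list(p) for p in isl[:elite]]
            return max((ind for isl in islands for ind in isl), key=fitness)

        best = evolve(fitness=sum)            # one-max toy fitness
        print(sum(best))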

  15. Investigation of Plant Cell Wall Properties: A Study of Contributions from the Nanoscale to the Macroscale Impacting Cell Wall Recalcitrance

    Science.gov (United States)

    Crowe, Jacob Dillon

    , alkaline hydrogen peroxide and liquid hot water pretreatments were shown to alter structural properties impacting nanoscale porosity in corn stover. Delignification by alkaline hydrogen peroxide pretreatment decreased cell wall rigidity, with subsequent cell wall swelling resulting in increased nanoscale porosity and improved enzymatic hydrolysis compared to limited swelling and increased accessible surface areas observed in liquid hot water pretreated biomass. The volume accessible to a 90 Å dextran probe within the cell wall was found to be positively correlated to both enzyme binding and glucose hydrolysis yields, indicating cell wall porosity is a key contributor to effective hydrolysis yields. In the third study, the effect of altered xylan content and structure was investigated in irregular xylem (irx) Arabidopsis thaliana mutants to understand the role xylan plays in secondary cell wall development and organization. Higher xylan extractability and lower cellulose crystallinity observed in irx9 and irx15 irx15-L mutants compared to wild type indicated altered xylan integration into the secondary cell wall. Nanoscale cell wall organization observed using multiple microscopy techniques was impacted to some extent in all irx mutants, with disorganized cellulose microfibril layers in sclerenchyma secondary cell walls likely resulting from irregular xylan structure and content. Irregular secondary cell wall microfibril layers showed heterogeneous nanomechanical properties compared to wild type, which translated to mechanical deficiencies observed in stem tensile tests. These results suggest nanoscale defects in cell wall strength can correspond to macroscale phenotypes.

  16. Fitness for service after a LOCA: A process applied to Pickering NGS Unit 2

    International Nuclear Information System (INIS)

    McLean, J.A.; Beaton, D.L.

    1996-01-01

    The fitness for service process provides a unique, proven methodology for assessing and correcting post-LOCA damage, essential to plant restart. The process uses the as-built plant configuration for modelling input and features self-correcting feedback from inspection to validate assessment models. This paper focuses on the process steps and the infrastructure necessary to execute the process.

  17. [Analysis of the safety culture in a Cardiology Unit managed by processes].

    Science.gov (United States)

    Raso-Raso, Rafael; Uris-Selles, Joaquín; Nolasco-Bonmatí, Andreu; Grau-Jornet, Guillermo; Revert-Gandia, Rosa; Jiménez-Carreño, Rebeca; Sánchez-Soriano, Ruth M; Chamorro-Fernández, Carlos I; Marco-Francés, Elvira; Albero-Martínez, José V

    2017-04-04

    Safety culture is one of the requirements for preventing the occurrence of adverse events. However, this has not been studied in the field of cardiology. The aim of this study is to evaluate the safety culture in a cardiology unit that has implemented and certified an integrated quality and risk management system for patient safety. A cross-sectional observational study was conducted in 2 consecutive years, with all staff completing the Spanish version of the questionnaire "Hospital Survey on Patient Safety Culture" of the Agency for Healthcare Research and Quality, with 42 items grouped into 12 dimensions. The percentage of positive responses in each dimension in 2014 and 2015 was compared, as well as with national data and United States data, following the established rules. The overall assessment, out of a possible 5, was 4.5 in 2014 and 4.7 in 2015. Seven dimensions were identified as strengths. The worst rated were: staffing, management support, and teamwork between units. The comparison showed superiority in all dimensions compared to national data, and in 8 of them compared to American data. The safety culture in a cardiology unit with an integrated quality and risk management patient safety system is high, and higher than the national level in all its dimensions and, in most of them, than in the United States.

  18. A systematic review evaluating the role of nurses and processes for delivering early mobility interventions in the intensive care unit.

    Science.gov (United States)

    Krupp, Anna; Steege, Linsey; King, Barbara

    2018-04-19

    To investigate processes for delivering early mobility interventions in adult intensive care unit patients used in research and quality improvement studies, and the role of nurses in early mobility interventions. A systematic review was conducted. The electronic databases PubMed, CINAHL, PEDro, and Cochrane were searched for studies published from 2000 to June 2017 that implemented an early mobility intervention in adult intensive care units. Included studies involved progression to ambulation as a component of the intervention, included the role of the nurse in preparing for or delivering the intervention, and reported at least one patient or organisational outcome measure. The System Engineering Initiative for Patient Safety (SEIPS) model, a framework for understanding structure, processes, and healthcare outcomes, was used to evaluate studies. Twenty-five studies were included in the final review. Studies consisted of randomised control trials and prospective, retrospective, or mixed designs. A range of processes to support the delivery of early mobility were found. These processes include forming interdisciplinary teams, increasing mobility staff, mobility protocols, interdisciplinary education, champions, communication, and feedback. Variation exists in the process of delivering early mobility in the intensive care unit. In particular, further rigorous studies are needed to better understand the role of nurses in implementing early mobility to maintain a patient's functional status.

  19. Methodologies to maximize olefins in process unit of COMPERJ; Metodologias para maximizacao de olefinas nas unidades de processamento do COMPERJ

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Maria Clara de C. dos; Seidl, Peter R.; Guimaraes, Maria Jose O.C. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Escola de Quimica

    2008-07-01

    With the growth of the national and worldwide economy, there has been a considerable increase in demand for polyolefins, thus requiring an increase in the production of basic petrochemicals (primarily ethylene and propylene). Because the national oil is heavy and poor in light derivatives, investments are needed in processes for the conversion of heavy fractions, with the intent to maximize the production of these olefins and to provide alternative raw materials for obtaining these petrochemicals. The possible alternatives studied were the expansion of the petrochemical core, changes in the refinery processing units, and the construction of COMPERJ, the latter being an example of an alternative that can change the current scenario. The work aims at the simulation of the process units of COMPERJ with the intention of evaluating which solutions like COMPERJ can best meet the growing market for polyolefins. (author)

  20. Public debates - key issue in the environmental licensing process for the completion of the Cernavoda NPP Unit 2

    International Nuclear Information System (INIS)

    Rotaru, Ioan; Jelev, Adrian

    2003-01-01

    SN 'NUCLEARELECTRICA' S.A., the owner of Cernavoda NPP, organized, in 2001, several public consultations related to the environmental impact of the completion of the Cernavoda NPP Unit 2, as required by the Romanian environmental law as part of project approval. Public consultations on the environmental assessment for the completion of the Cernavoda NPP Unit 2 took place in 2001 between August 15 and September 21 in accordance with the provisions of Law No. 137/95 and Order No. 125/96. Romanian environmental legislation, harmonization of national environmental legislation with that of the European Union, Romanian legislative requirements, information distributed to the public, and issues raised and followed up are all topics highlighted by this paper, which addresses the environmental licensing process of the Cernavoda 2 NPP. The public consultation process described fulfils all the Romanian requirements for carrying out meaningful consultation with the relevant stakeholders. The process also satisfies EDC (Export Development Corporation - Canada) requirements for public consultation and disclosure with relevant stakeholders in the host country. SNN is fully committed to consulting as necessary with relevant stakeholders throughout the construction and operation of the Project. Concerns of the public have been taken into account with the operation of Unit 1 and will continue to be addressed during the Unit 2 Project.

  1. Defense Waste Processing Facility (DWPF), Modular CSSX Unit (CSSX), and Waste Transfer Line System of Salt Processing Program (U)

    International Nuclear Information System (INIS)

    CHANG, ROBERT

    2006-01-01

    All of the waste streams from the ARP, MCU, and SWPF processes will be sent to DWPF for vitrification. The impact these new waste streams will have on DWPF's ability to meet its canister production goal and its ability to support the Salt Processing Program (ARP, MCU, and SWPF) throughput needed to be evaluated. DWPF Engineering and Operations requested OBU Systems Engineering to evaluate DWPF operations and determine how the process could be optimized. The ultimate goal will be to evaluate all of the Liquid Radioactive Waste (LRW) System by developing process modules to cover all facilities/projects which are relevant to the LRW Program and to link the modules together to: (1) study the interface issues, (2) identify bottlenecks, and (3) determine the most cost-effective way to eliminate them. The results from the evaluation can be used to assist DWPF in identifying improvement opportunities, to assist CBU in LRW strategic planning/tank space management, and to determine the project completion date for the Salt Processing Program.

  2. Redesign of the Advanced Education processes in the United States Coast Guard

    OpenAIRE

    Johnson, Lamar V.; Sanders, Marc F.

    1999-01-01

    The processes used in the operation of the Coast Guard Advanced Education Program have evolved as most business processes that were developed prior to the introduction of information technology. These processes include the selection, management, assignment and tracking of advanced education students. These processes are still fully dependent on physical files and the mail system. The Coast Guard has an information technology infrastructure that supports better processes, however it is not bei...

  3. Anaerobic bio-digestion of concentrate obtained in the process of ultra filtration of effluents from tilapia processing unit

    Directory of Open Access Journals (Sweden)

    Milena Alves de Souza

    2012-02-01

    Full Text Available The objective of the present study was to evaluate the efficiency of the biodigestion process for the protein concentrate resulting from the ultrafiltration of the effluent from a Nile tilapia slaughterhouse freezer. Bench digesters were used with excrements and water (control) in comparison with a mixture of cattle manure and effluent from the filleting and bleeding stages of tilapia processing. The effluent obtained in the continuous process (bleeding + filleting) was the one with the highest accumulated production from the 37th day, as well as the greatest daily production. Gas composition did not differ between the protein concentrates, but the gas obtained with the use of the effluent from the filleting stage presented the highest average methane content (78.05%) in comparison with those obtained in the bleeding stage (69.95%), in the continuous process (70.02%), or by the control method (68.59%).

  4. Portable and fixed monitoring units for tank calibrations and monitoring of process liquids

    International Nuclear Information System (INIS)

    Landat, D.A.; Hunt, B.A.

    1999-01-01

    The development work stems from safeguards support activities carried out at the JRC Ispra, Italy, for the inspectorate agencies. A range of measurement equipment covering the needs of the inspector has been designed, developed and tested both in the laboratory and in nuclear facilities. The instruments comprise four units: (1) a portable pressure measurement device, (2) a volume long-term monitoring device, (3) an unattended volume measurement system and (4) a level measurement unit. Utilization of the equipment has proven to give independent measurement checks and confirmation of the operator's instrumentation and declarations, ensuring continuity of knowledge. (J.P.N.)

  5. A structural and thermal packaging approach for power processing units for 30-cm ion thrusters

    Science.gov (United States)

    Maloy, J. E.; Sharp, G. R.

    1975-01-01

    Solar Electric Propulsion (SEP) is currently being studied for possible use in a number of near earth and planetary missions. The thruster subsystem for these missions would consist of 30 centimeter ion thrusters with Power Processor Units (PPU) clustered in assemblies of from two to ten units. A preliminary design study of the electronic packaging of the PPU has been completed at Lewis Research Center of NASA. This study evaluates designs meeting the competing requirements of low system weight and overall mission flexibility. These requirements are evaluated regarding structural and thermal design, electrical efficiency, and integration of the electrical circuits into a functional PPU layout.

  6. Professional Learning Community Process in the United States: Conceptualization of the Process and District Support for Schools

    Science.gov (United States)

    Olivier, Dianne F.; Huffman, Jane B.

    2016-01-01

    As the Professional Learning Community (PLC) process becomes embedded within schools, the level of district support has a direct impact on whether schools have the ability to re-culture and sustain highly effective collaborative practices. The purpose of this article is to share a professional learning community conceptual framework from the US,…

  7. Process and device for the protection of steam-raising units, particularly of nuclear reactors

    International Nuclear Information System (INIS)

    Beyer, W.; Wieling, N.; Stellwag, B.

    1986-01-01

    To protect the housing against corrosion by chemical conditioning of the feedwater, the redox potential of the feedwater and the corrosion potential of at least one pipe of the pipe bundle are continuously determined during operation of the steam-raising unit. With potentials indicating a danger of corrosion, the quality of the secondary water can be improved by suitable measures. (orig./HP) [de

  8. Future forest aboveground carbon dynamics in the central United States: the importance of forest demographic processes

    Science.gov (United States)

    Wenchi Jin; Hong S. He; Frank R. Thompson; Wen J. Wang; Jacob S. Fraser; Stephen R. Shifley; Brice B. Hanberry; William D. Dijak

    2017-01-01

    The Central Hardwood Forest (CHF) in the United States is currently a major carbon sink; there are uncertainties about how long the current carbon sink will persist and whether the CHF will eventually become a carbon source. We used a multi-model ensemble to investigate aboveground carbon density of the CHF from 2010 to 2300 under the current climate. Simulations were done using...

  9. Single-unit studies of visual motion processing in cat extrastriate areas

    NARCIS (Netherlands)

    Vajda, Ildiko

    2003-01-01

    Motion vision has high survival value and is a fundamental property of all visual systems. The ancient Greeks already studied motion vision, but its physiological basis first came under scrutiny in the late nineteenth century. Later, with the introduction of single-cell (single-unit)

  10. 76 FR 13973 - United States Warehouse Act; Processed Agricultural Products Licensing Agreement

    Science.gov (United States)

    2011-03-15

    ... security of goods in the care and custody of the licensee. The personnel conducting the examinations will..., Warehouse Operations Program Manager, FSA, United States Department of Agriculture, Mail Stop 0553, 1400... continuing compliance with the standards of approval and operation. FSA will conduct examinations of licensed...

  11. Trends in lumber processing in the Western United States. Part II: Overrun and lumber recovery factors.

    Science.gov (United States)

    Charles E. Keegan; Todd A. Morgan; Keith A. Blatner; Jean M. Daniels

    2010-01-01

    This article describes trends in three measures of lumber recovery for sawmills in the western United States: lumber overrun (LO), lumber recovery factor (LRF), and cubic lumber recovery (CLR). All states and regions showed increased LO during the last three decades. Oregon and Montana had the highest LO at 107 and 100 percent, respectively. Alaska had the lowest LO at...
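    The three measures can be made concrete with their conventional sawmill-study definitions, stated here as an assumption rather than quoted from the article: overrun compares lumber tally to log scale (both in board feet), while LRF is board feet of lumber recovered per cubic foot of log input.

        def lumber_overrun(lumber_tally_bf: float, log_scale_bf: float) -> float:
            """LO (%): lumber tally relative to log scale, both in board feet."""
            return 100.0 * (lumber_tally_bf - log_scale_bf) / log_scale_bf

        def lumber_recovery_factor(lumber_tally_bf: float, log_cuft: float) -> float:
            """LRF: board feet of lumber per cubic foot of log input."""
            return lumber_tally_bf / log_cuft

        # Illustrative volumes only: a 107% overrun matches the Oregon figure above.
        print(lumber_overrun(2070.0, 1000.0))         # 107.0
        print(lumber_recovery_factor(2070.0, 260.0))  # ~8 board feet per cubic foot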

  12. Thermodynamic investigation of waste heat driven desalination unit based on humidification dehumidification (HDH) processes

    International Nuclear Information System (INIS)

    He, W.F.; Xu, L.N.; Han, D.; Gao, L.; Yue, C.; Pu, W.H.

    2016-01-01

    Highlights: • HDH desalination system powered by waste heat is proposed. • Performance of the desalination unit and the relevant heat recovery effect is calculated. • Sensitivity analysis of the performance of the HDH desalination system is investigated. • Mathematical model based on the first and second laws of thermodynamics is established. - Abstract: Humidification dehumidification (HDH) technology is an effective way to separate freshwater from seawater or brackish water. In this paper, a closed-air open-water (CAOW) desalination unit coupled with plate heat exchangers (PHEs) is applied to recover waste heat from the gas exhaust. Sensitivity analyses for the HDH desalination unit as well as the PHEs with respect to the key parameters, including the top and initial temperatures of the seawater, the operating pressure, and the terminal temperature difference (TTD) of the PHEs, are accomplished, and the corresponding performance of the whole HDH desalination system is calculated and presented. The simulation results show that the balance condition of the dehumidifier is allowed by the basic thermodynamic laws and is accompanied by a peak value of the gained-output-ratio (GOR) and a minimum value of the total specific entropy generation. It is concluded that excellent results, including system performance, heat recovery effect and PHE investment, can be obtained simultaneously with a low top temperature, while the desalination performance and the heat recovery effect obtained from other measures always conflict. Unlike the other parameters of the desalination unit, the terminal temperature difference of the PHEs has little influence on the final value of the GOR.
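    The gained-output-ratio mentioned above is commonly defined as the latent heat carried by the distillate divided by the heat input to the cycle; that definition and the numbers below are illustrative assumptions, not values from the paper.

        H_FG = 2.33e6   # latent heat of vaporization, J/kg (rough value near HDH top temperatures)

        def gor(m_distillate: float, q_in: float) -> float:
            """GOR = m_d * h_fg / Q_in, with m_d in kg/s and Q_in in W."""
            return m_distillate * H_FG / q_in

        print(gor(m_distillate=0.005, q_in=5.0e3))   # ~2.3 for this made-up operating point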

  13. United States paper, paperboard, and market pulp capacity trends by process and location, 1970-2000

    Science.gov (United States)

    Peter J. Ince; Xiaolei Li; Mo Zhou; Joseph Buongiorno; Mary Reuter

    This report presents a relational database with estimates of annual production capacity for all mill locations in the United States where paper, paperboard, or market pulp were produced from 1970 to 2000. Data for more than 500 separate mill locations are included in the database, with annual capacity data for each year from 1970 to 2000 (more than 17,000 individual...

  14. Catalytic Reforming of Higher Hydrocarbon Fuels to Hydrogen: Process Investigations with Regard to Auxiliary Power Units

    OpenAIRE

    Kaltschmitt, Torsten

    2012-01-01

    This thesis discusses the investigation of catalytic partial oxidation on rhodium-coated honeycomb catalysts with respect to the conversion of a model surrogate fuel and commercial diesel fuel into hydrogen for use in auxiliary power units. Furthermore, the influence of simulated tail-gas recycling was investigated.

  15. Assessment of Orthographical Processing in Spanish Children with Dyslexia: The Role of Lexical and Sublexical Units

    Science.gov (United States)

    Rodrigo, Mercedes; Jimenez, Juan E.; Garcia, Eduardo; Diaz, Alicia; Ortiz, M. Rosario; Guzman, Remedios; Hernandez-Valle, Isabel; Estevez, Adelina; Hernandez, Sergio

    2004-01-01

    Introduction: The aim of this study was to examine the role of multiletter units, such as the morpheme and whole word, in accessing the lexicon in Spanish children with dyslexia. Method: A sample of 60 participants was selected and organised in three different groups: (1) an experimental group of 18 reading-disabled children, (2) a control group…

  16. Social processes underlying acculturation: a study of drinking behavior among immigrant Latinos in the Northeast United States

    Science.gov (United States)

    LEE, CHRISTINA S.; LÓPEZ, STEVEN REGESER; COBLY, SUZANNE M.; TEJADA, MONICA; GARCÍA-COLL, CYNTHIA; SMITH, MARCIA

    2010-01-01

    Study Goals To identify social processes that underlie the relationship of acculturation and heavy drinking behavior among Latinos who have immigrated to the Northeast United States of America (USA). Method Community-based recruitment strategies were used to identify 36 Latinos who reported heavy drinking. Participants were 48% female, 23 to 56 years of age, and were from South or Central America (39%) and the Caribbean (24%). Six focus groups were audiotaped and transcribed. Results Content analyses indicated that the social context of drinking is different in the participants’ countries of origin and in the United States. In Latin America, alcohol consumption was part of everyday living (being with friends and family). Nostalgia and isolation reflected some of the reasons for drinking in the USA. Results suggest that drinking in the Northeastern United States (US) is related to Latinos’ adaptation to a new sociocultural environment. Knowledge of the shifting social contexts of drinking can inform health interventions. PMID:20376331

  17. Analysis of social relations among organizational units derived from process models and redesign of organization structure

    NARCIS (Netherlands)

    Choi, I.; Song, M.S.; Kim, K.M.; Lee, Y-H.

    2007-01-01

    Despite surging interest in analyzing business processes, there are few scientific approaches to the analysis and redesign of organizational structures, which can greatly affect the performance of business processes. This paper presents a method for deriving and analyzing organizational relations from...

  18. United States Climate Reference Network (USCRN) Processed Data from the Version 2 USCRN Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — USCRN Processed data are interpreted values and derived geophysical parameters processed from raw data by the USCRN Team. Data were interpreted and ingested into a...

  19. Grey water treatment by a continuous process of an electrocoagulation unit and a submerged membrane bioreactor system

    KAUST Repository

    Bani-Melhem, Khalid

    2012-08-01

    This paper presents the performance of an integrated process consisting of an electrocoagulation (EC) unit and a submerged membrane bioreactor (SMBR) technology for grey water treatment. For comparison purposes, another SMBR process without electrocoagulation was operated in parallel, with both processes operated under constant transmembrane pressure for 24 days in continuous operation mode. It was found that integrating the EC process with the SMBR (EC-SMBR) was not only an effective method for grey water treatment but also improved the overall performance of the membrane filtration process. The EC-SMBR process achieved up to a 13% reduction in membrane fouling compared with the SMBR without electrocoagulation. High average percent removals were attained by both processes for most wastewater parameters studied. The results demonstrated that EC-SMBR performance slightly exceeded that of the SMBR for COD, turbidity, and colour. Both processes produced effluent free of suspended solids, and faecal coliforms were almost completely (100%) removed in both processes. A substantial improvement was achieved in the removal of phosphate in the EC-SMBR process. However, ammonia nitrogen was removed more effectively by the SMBR alone. Accordingly, the electrolysis conditions in the EC-SMBR process should be optimized so as not to impede biological treatment. © 2012 Elsevier B.V.

  20. Investigation of the Dynamic Melting Process in a Thermal Energy Storage Unit Using a Helical Coil Heat Exchanger

    Directory of Open Access Journals (Sweden)

    Xun Yang

    2017-08-01

    Full Text Available In this study, the dynamic melting process of the phase change material (PCM) in a vertical cylindrical tube-in-tank thermal energy storage (TES) unit was investigated through numerical simulations and experimental measurements. To ensure good heat exchange performance, a concentric helical coil was inserted into the TES unit to pipe the heat transfer fluid (HTF). A numerical model using the computational fluid dynamics (CFD) approach was developed based on the enthalpy-porosity method to simulate the unsteady melting process, including temperature and liquid fraction variations. Temperature measurements using evenly spaced thermocouples were conducted, and the temperature variation at three locations inside the TES unit was recorded. The effects of the HTF inlet parameters were investigated through parametric studies with different temperatures and flow rate values. Reasonably good agreement was achieved between the numerical prediction and the temperature measurement, which confirmed the accuracy of the numerical simulation. The numerical results showed the significance of the buoyancy effect for the dynamic melting process. The TES performance of the system was very sensitive to the HTF inlet temperature. By contrast, no apparent influence was found when changing the HTF flow rate. This study provides a comprehensive solution for investigating the heat exchange process of a TES system using PCM.
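
    The enthalpy-porosity method named above avoids tracking the melt front explicitly: every cell carries a liquid fraction, and a porosity-style sink suppresses velocity in partially molten cells. A minimal statement of the usual formulation, with textbook symbols assumed rather than taken from the paper:

```latex
\gamma =
\begin{cases}
0, & T < T_s \\[4pt]
\dfrac{T - T_s}{T_l - T_s}, & T_s \le T \le T_l \\[4pt]
1, & T > T_l
\end{cases}
\qquad
\mathbf{S} = C\,\frac{(1 - \gamma)^2}{\gamma^{3} + \epsilon}\,\mathbf{u}
```

    Here γ is the local liquid fraction, T_s and T_l the solidus and liquidus temperatures, u the velocity, C a large mushy-zone constant, and ε a small number preventing division by zero; the sink S enters the momentum equations so that fully solid cells are immobilized, which is how buoyancy-driven convection emerges only in the melt.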

  1. Methodology for systematic analysis and improvement of manufacturing unit process life-cycle inventory (UPLCI)—CO2PE! initiative (cooperative effort on process emissions in manufacturing). Part 1: Methodology description

    DEFF Research Database (Denmark)

    Kellens, Karel; Dewulf, Wim; Overcash, Michael

    2012-01-01

    This report proposes a life-cycle analysis (LCA)-oriented methodology for systematic inventory analysis of the use phase of manufacturing unit processes, providing unit process datasets to be used in life-cycle inventory (LCI) databases and libraries. The methodology has been developed... the provision of high-quality data for LCA studies of products using these unit process datasets for the manufacturing processes, as well as the in-depth analysis of individual manufacturing unit processes. In addition, the accruing availability of data for a range of similar machines (same process, different...)... and resource efficiency improvements of the manufacturing unit process. To ensure optimal reproducibility and applicability, documentation guidelines for data and metadata are included in both approaches. Guidance on the definition of the functional unit and reference flow, as well as on the determination of system...

  2. FamSeq: a variant calling program for family-based sequencing data using graphics processing units.

    Directory of Open Access Journals (Sweden)

    Gang Peng

    2014-10-01

    Full Text Available Various algorithms have been developed for variant calling using next-generation sequencing data, and various methods have been applied to reduce the associated false positive and false negative rates. Few variant calling programs, however, utilize the pedigree information when family-based sequencing data are available. Here, we present a program, FamSeq, which reduces both false positive and false negative rates by incorporating the pedigree information from the Mendelian genetic model into variant calling. To accommodate variations in data complexity, FamSeq consists of four distinct implementations of the Mendelian genetic model: the Bayesian network algorithm, a graphics processing unit version of the Bayesian network algorithm, the Elston-Stewart algorithm and the Markov chain Monte Carlo algorithm. To make the software efficient and applicable to large families, we parallelized the Bayesian network algorithm, which copes with pedigrees with inbreeding loops without losing calculation precision, on an NVIDIA graphics processing unit. To compare the four methods, we applied FamSeq to pedigree sequencing data with family sizes that varied from 7 to 12. When there is no inbreeding loop in the pedigree, the Elston-Stewart algorithm gives analytical results in a short time. If there are inbreeding loops in the pedigree, we recommend the Bayesian network method, which provides exact answers. To improve the computing speed of the Bayesian network method, we parallelized the computation on a graphics processing unit. This allowed the Bayesian network method to process the whole-genome sequencing data of a family of 12 individuals within two days, a 10-fold time reduction compared to the time required for this computation on a central processing unit.
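
    As a toy illustration of the pedigree-aware idea (a sketch only, not FamSeq's implementation), the code below scores a child's genotype in a father-mother-child trio by combining per-sample genotype likelihoods with Mendelian transmission probabilities; the Hardy-Weinberg prior and all numbers are invented for the example:

```python
import numpy as np

GENOTYPES = [0, 1, 2]  # genotypes encoded as the number of alternate alleles

def transmit(parent):
    """Probability that a parent with `parent` alt alleles transmits an alt allele."""
    return parent / 2.0

def child_given_parents(father, mother):
    """Mendelian P(child genotype | parent genotypes) at a biallelic site."""
    pf, pm = transmit(father), transmit(mother)
    return np.array([(1 - pf) * (1 - pm),            # child has 0 alt alleles
                     pf * (1 - pm) + (1 - pf) * pm,  # child has 1
                     pf * pm])                       # child has 2

def trio_posterior(lik_child, lik_father, lik_mother, allele_freq=0.01):
    """Posterior over the child's genotype, summing over parental genotypes."""
    p = allele_freq  # illustrative Hardy-Weinberg population prior
    hw = np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])
    post = np.zeros(3)
    for f in GENOTYPES:
        for m in GENOTYPES:
            joint = hw[f] * lik_father[f] * hw[m] * lik_mother[m]
            post += joint * child_given_parents(f, m) * lik_child
    return post / post.sum()

# Sequencing likelihoods that mildly favour a heterozygous child:
print(trio_posterior(np.array([0.10, 0.80, 0.10]),
                     np.array([0.90, 0.10, 0.00]),
                     np.array([0.20, 0.70, 0.10])))
```

    The same sum-product structure, evaluated over a whole pedigree, is what a Bayesian network implementation computes, and its per-site independence is part of what makes GPU parallelization attractive.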

  3. Production of advanced biofuels: co-processing of upgraded pyrolysis oil in standard refinery units

    NARCIS (Netherlands)

    De Miguel Mercader, F.; de Miguel Mercader, F.; Groeneveld, M.J.; Hogendoorn, Kees; Kersten, Sascha R.A.; Way, N.W.J.; Schaverien, C.J.

    2010-01-01

    One of the possible process options for the production of advanced biofuels is the co-processing of upgraded pyrolysis oil in standard refineries. The applicability of hydrodeoxygenation (HDO) was studied as a pyrolysis oil upgrading step to allow FCC co-processing. Different HDO reaction end

  4. RF processing of an S-band high gradient accelerator unit

    International Nuclear Information System (INIS)

    Morita, S.

    1994-01-01

    A 3 m-long S-band accelerating structure is used in the 1.54 GeV linac of the Accelerator Test Facility. The accelerating structure should be processed up to 200 MW, which produces a 52 MV/m accelerating gradient. The RF processing procedure is described. (author)
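
    Since the accelerating gradient of a given structure scales with the square root of the input RF power, the two quoted figures imply a single scaling constant. A back-of-the-envelope check (the square-root scaling is the standard relation; the constant is inferred here, not stated in the abstract):

```latex
E \propto \sqrt{P}
\quad\Rightarrow\quad
k = \frac{52\ \mathrm{MV/m}}{\sqrt{200\ \mathrm{MW}}} \approx 3.7\ \mathrm{MV\,m^{-1}\,MW^{-1/2}},
\qquad
E(P) \approx 3.7\,\sqrt{P\,[\mathrm{MW}]}\ \mathrm{MV/m}
```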

  5. Low cost solar array project production process and equipment task: A Module Experimental Process System Development Unit (MEPSDU)

    Science.gov (United States)

    1981-01-01

    Several major modifications were made to the design presented at the PDR. The frame was deleted in favor of a "frameless" design which will provide a substantially improved cell packing factor. Potential shaded-cell damage resulting from operation into a short circuit can be eliminated by a change in the cell series/parallel electrical interconnect configuration. The baseline process sequence defined for the MEPSDU was refined, and equipment design and specification work was completed. SAMICS cost analysis work accelerated; Format A's were prepared and computer simulations completed. Design work on the automated cell interconnect station was focused on bond technique selection experiments.

  6. The United States Government Interagency Process and the Failure of Institution Building in Iraq

    Science.gov (United States)

    2008-06-12

    warehouses they were amazed to find pallets and pallets full of medical equipment from floor to ceiling that had been in the warehouses for some... glass broken; and books and documents strewn about?"... Durch, William, ed. "Twenty-First-Century Peace Operations." United States Institute of... health) to actors that potentially threaten domestic or international order (e.g., Egypt, Palestinian Authority, Lebanon)... Exploitation of legal...

  7. Clinical review: Moral assumptions and the process of organ donation in the intensive care unit

    OpenAIRE

    Streat, Stephen

    2004-01-01

    The objective of the present article is to review moral assumptions underlying organ donation in the intensive care unit. Data sources used include personal experience, and a Medline search and a non-Medline search of relevant English-language literature. The study selection included articles concerning organ donation. All data were extracted and analysed by the author. In terms of data synthesis, a rational, utilitarian moral perspective dominates, and has captured and circumscribed, the lan...

  8. Trends in lumber processing in the western United States. Part I: board foot Scribner volume per cubic foot of timber

    Science.gov (United States)

    Charles E. Keegan; Todd A. Morgan; Keith A. Blatner; Jean M. Daniels

    2010-01-01

    This article describes trends in board foot Scribner volume per cubic foot of timber for logs processed by sawmills in the western United States. Board foot to cubic foot (BF/CF) ratios for the period from 2000 through 2006 ranged from 3.70 in Montana to 5.71 in the Four Corners Region (Arizona, Colorado, New Mexico, and Utah). Sawmills in the Four Corners Region,...

  9. Understanding the Development of Minimum Unit Pricing of Alcohol in Scotland: A Qualitative Study of the Policy Process

    Science.gov (United States)

    Katikireddi, Srinivasa Vittal; Hilton, Shona; Bonell, Chris; Bond, Lyndal

    2014-01-01

    Background Minimum unit pricing of alcohol is a novel public health policy with the potential to improve population health and reduce health inequalities. Theories of the policy process may help to understand the development of policy innovation and in turn identify lessons for future public health research and practice. This study aims to explain minimum unit pricing’s development by taking a ‘multiple-lenses’ approach to understanding the policy process. In particular, we apply three perspectives of the policy process (Kingdon’s multiple streams, Punctuated-Equilibrium Theory, Multi-Level Governance) to understand how and why minimum unit pricing has developed in Scotland and describe implications for efforts to develop evidence-informed policymaking. Methods Semi-structured interviews were conducted with policy actors (politicians, civil servants, academics, advocates, industry representatives) involved in the development of MUP (n = 36). Interviewees were asked about the policy process and the role of evidence in policy development. Data from two other sources (a review of policy documents and an analysis of evidence submission documents to the Scottish Parliament) were used for triangulation. Findings The three perspectives provide complementary understandings of the policy process. Evidence has played an important role in presenting the policy issue of alcohol as a problem requiring action. Scotland-specific data and a change in the policy ‘image’ to a population-based problem contributed to making alcohol-related harms a priority for action. The limited powers of Scottish Government help explain the type of price intervention pursued while distinct aspects of the Scottish political climate favoured the pursuit of price-based interventions. Conclusions Evidence has played a crucial but complex role in the development of an innovative policy. Utilising different political science theories helps explain different aspects of the policy process

  10. Understanding the development of minimum unit pricing of alcohol in Scotland: a qualitative study of the policy process.

    Science.gov (United States)

    Katikireddi, Srinivasa Vittal; Hilton, Shona; Bonell, Chris; Bond, Lyndal

    2014-01-01

    Minimum unit pricing of alcohol is a novel public health policy with the potential to improve population health and reduce health inequalities. Theories of the policy process may help to understand the development of policy innovation and in turn identify lessons for future public health research and practice. This study aims to explain minimum unit pricing's development by taking a 'multiple-lenses' approach to understanding the policy process. In particular, we apply three perspectives of the policy process (Kingdon's multiple streams, Punctuated-Equilibrium Theory, Multi-Level Governance) to understand how and why minimum unit pricing has developed in Scotland and describe implications for efforts to develop evidence-informed policymaking. Semi-structured interviews were conducted with policy actors (politicians, civil servants, academics, advocates, industry representatives) involved in the development of MUP (n = 36). Interviewees were asked about the policy process and the role of evidence in policy development. Data from two other sources (a review of policy documents and an analysis of evidence submission documents to the Scottish Parliament) were used for triangulation. The three perspectives provide complementary understandings of the policy process. Evidence has played an important role in presenting the policy issue of alcohol as a problem requiring action. Scotland-specific data and a change in the policy 'image' to a population-based problem contributed to making alcohol-related harms a priority for action. The limited powers of Scottish Government help explain the type of price intervention pursued while distinct aspects of the Scottish political climate favoured the pursuit of price-based interventions. Evidence has played a crucial but complex role in the development of an innovative policy. Utilising different political science theories helps explain different aspects of the policy process, with Multi-Level Governance particularly useful for

  11. Influence of unit operations on the levels of polyacetylenes in minimally processed carrots and parsnips: An industrial trial.

    Science.gov (United States)

    Koidis, Anastasios; Rawson, Ashish; Tuohy, Maria; Brunton, Nigel

    2012-06-01

    Carrots and parsnips are often consumed as minimally processed ready-to-eat convenient foods and contain in minor quantities, bioactive aliphatic C17-polyacetylenes (falcarinol, falcarindiol, falcarindiol-3-acetate). Their retention during minimal processing in an industrial trial was evaluated. Carrot and parsnips were prepared in four different forms (disc cutting, baton cutting, cubing and shredding) and samples were taken in every point of their processing line. The unit operations were: peeling, cutting and washing with chlorinated water and also retention during 7days storage was evaluated. The results showed that the initial unit operations (mainly peeling) influence the polyacetylene retention. This was attributed to the high polyacetylene content of their peels. In most cases, when washing was performed after cutting, less retention was observed possibly due to leakage during tissue damage occurred in the cutting step. The relatively high retention during storage indicates high plant matrix stability. Comparing the behaviour of polyacetylenes in the two vegetables during storage, the results showed that they were slightly more retained in parsnips than in carrots. Unit operations and especially abrasive peeling might need further optimisation to make them gentler and minimise bioactive losses. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    Science.gov (United States)

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
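
    The bulk-tissue-motion correction described above amounts to a small search over axial and lateral pixel shifts, keeping the candidate whose squared-difference image has the smallest total intensity. A minimal NumPy sketch of that selection (the 5×5 shift window matches the 25 candidates mentioned; function and variable names are assumptions, not the authors' code):

```python
import numpy as np

def btm_corrected_angiogram(frame_a, frame_b, max_shift=2):
    """Squared-difference OCT angiogram of two sequential frames, choosing the
    axial/lateral shift of frame_b that minimizes the total residual intensity."""
    best_score, best_image = np.inf, None
    for dz in range(-max_shift, max_shift + 1):      # axial pixel shifts
        for dx in range(-max_shift, max_shift + 1):  # lateral pixel shifts
            shifted = np.roll(frame_b, (dz, dx), axis=(0, 1))
            diff = (frame_a - shifted) ** 2          # intensity-based angiogram
            score = diff.sum()  # bulk motion inflates the static-tissue residual
            if score < best_score:
                best_score, best_image = score, diff
    return best_image  # one image out of (2 * max_shift + 1)**2 = 25 candidates

# Synthetic example: frame_b is frame_a shifted by bulk motion plus flow noise.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (1, -1), axis=(0, 1)) + 0.01 * rng.random((64, 64))
angiogram = btm_corrected_angiogram(frame_a, frame_b)
```

    On a GPU each shift candidate (and each pixel within it) can be evaluated in parallel, which is what pushes the processing rate past the camera line rate quoted above.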

  13. Simulation based assembly and alignment process ability analysis for line replaceable units of the high power solid state laser facility

    International Nuclear Information System (INIS)

    Wang, Junfeng; Lu, Cong; Li, Shiqi

    2016-01-01

    Highlights: • Discrete event simulation is applied to analyze the assembly and alignment process ability of LRUs in the SG-III facility. • The overall assembly and alignment process of LRUs, with its specific characteristics, is described. • An extended directed graph is proposed to express the assembly and alignment process of LRUs. • Different scenarios have been simulated to evaluate the assembly process ability of LRUs, and decision making is supported to ensure the construction milestone. - Abstract: Line replaceable units (LRUs) are important components of very large high power solid state laser facilities. The assembly and alignment process ability of LRUs will impact the construction milestone of such facilities. This paper describes the use of the discrete event simulation method for assembly and alignment process analysis of LRUs in such facilities. The overall assembly and alignment process for LRUs is presented based on the layout of the optics assembly laboratory, and the process characteristics are analyzed. An extended directed graph is proposed to express the assembly and alignment process of LRUs. Taking the LRUs of the disk amplifier system in the Shen Guang-III (SG-III) facility as the example, process simulation models are built on the Quest simulation platform. Constraints such as duration, equipment, technicians, and part supply are considered in the simulation models. Different simulation scenarios have been carried out to evaluate the assembly process ability of LRUs. The simulation method can provide a valuable decision-making and process optimization tool for the layout of the optics assembly laboratory and the working out of the assembly process in such facilities.
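
    To make the extended directed graph idea concrete, the sketch below models an assembly-and-alignment sequence as a small task graph with durations and computes each task's earliest finish time, a critical-path style calculation. The task names and durations are invented, and this is a deliberate simplification of a full discrete event model such as the Quest one used in the paper:

```python
from graphlib import TopologicalSorter

# Hypothetical LRU assembly steps: task -> (duration in hours, prerequisites).
tasks = {
    "clean_optics":   (2.0, []),
    "mount_slab":     (3.0, ["clean_optics"]),
    "install_frame":  (1.5, []),
    "align_beamline": (4.0, ["mount_slab", "install_frame"]),
    "final_check":    (1.0, ["align_beamline"]),
}

# Earliest finish time per task: process tasks in dependency order.
deps = {name: set(spec[1]) for name, spec in tasks.items()}
finish = {}
for name in TopologicalSorter(deps).static_order():
    duration, prereqs = tasks[name]
    start = max((finish[p] for p in prereqs), default=0.0)
    finish[name] = start + duration

print(finish)                             # earliest finish time of each step
print("makespan:", max(finish.values()))  # 2 + 3 + 4 + 1 = 10.0 hours here
```

    A real discrete event model adds the resource constraints (equipment, technicians, part supply) named in the abstract, which is where simulation outperforms this static calculation.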

  14. Simulation based assembly and alignment process ability analysis for line replaceable units of the high power solid state laser facility

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Junfeng; Lu, Cong; Li, Shiqi, E-mail: sqli@hust.edu.cn

    2016-11-15

    Highlights: • Discrete event simulation is applied to analyze the assembly and alignment process ability of LRUs in the SG-III facility. • The overall assembly and alignment process of LRUs, with its specific characteristics, is described. • An extended directed graph is proposed to express the assembly and alignment process of LRUs. • Different scenarios have been simulated to evaluate the assembly process ability of LRUs, and decision making is supported to ensure the construction milestone. - Abstract: Line replaceable units (LRUs) are important components of very large high power solid state laser facilities. The assembly and alignment process ability of LRUs will impact the construction milestone of such facilities. This paper describes the use of the discrete event simulation method for assembly and alignment process analysis of LRUs in such facilities. The overall assembly and alignment process for LRUs is presented based on the layout of the optics assembly laboratory, and the process characteristics are analyzed. An extended directed graph is proposed to express the assembly and alignment process of LRUs. Taking the LRUs of the disk amplifier system in the Shen Guang-III (SG-III) facility as the example, process simulation models are built on the Quest simulation platform. Constraints such as duration, equipment, technicians, and part supply are considered in the simulation models. Different simulation scenarios have been carried out to evaluate the assembly process ability of LRUs. The simulation method can provide a valuable decision-making and process optimization tool for the layout of the optics assembly laboratory and the working out of the assembly process in such facilities.

  15. Demonstrating the unit hydrograph and flow routing processes involving active student participation - a university lecture experiment

    Science.gov (United States)

    Schulz, Karsten; Burgholzer, Reinhard; Klotz, Daniel; Wesemann, Johannes; Herrnegger, Mathew

    2018-05-01

    The unit hydrograph (UH) has been one of the most widely employed hydrological modelling techniques to predict the rainfall-runoff behaviour of hydrological catchments, and is still used to this day. Its concept is based on the idea that a unit of effective precipitation per time unit (e.g., mm h⁻¹) will always lead to a specific catchment response in runoff. Given its relevance, the UH is an important topic that is addressed in most (engineering) hydrology courses at all academic levels. While the principles of the UH seem to be simple and easy to understand, past teaching experience suggests strong difficulties in students' perception of the UH theory and its application. To facilitate a deeper understanding of the theory and application of the UH for students, we developed a simple and cheap lecture theatre experiment which involved active student participation. The seating of the students in the lecture theatre represented the hydrological catchment in its size and form. A set of plastic balls, each prepared with a piece of magnetic strip so it could be tacked to any white/black board, represented unit amounts of effective precipitation. The balls were evenly distributed over the lecture theatre and routed by given rules down the catchment to the catchment outlet, where the resulting hydrograph was monitored and illustrated on the black/white board. The experiment allowed an illustration of the underlying principles of the UH, including stationarity, linearity, and superposition of the generated runoff and its subsequent routing. In addition, some variations of the experimental setup extended the UH concept to demonstrate the impact of elevation, different runoff regimes, and non-uniform precipitation events on the resulting hydrograph. In summary, our own experience in the classroom, a first set of student exams, as well as student feedback and formal evaluation suggest that the integration of such an experiment deepened the learning experience by active…
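
    The linearity and superposition principles named above reduce runoff prediction to a discrete convolution of the effective-precipitation series with the UH ordinates. A minimal sketch of that calculation (the ordinates and rainfall values are invented for illustration):

```python
import numpy as np

# Unit hydrograph ordinates: runoff response per unit (1 mm) of effective
# precipitation, one value per time step.
uh = np.array([0.1, 0.4, 0.3, 0.15, 0.05])

# Effective precipitation per time step (mm): two full units, then half a unit.
rain = np.array([1.0, 1.0, 0.5])

# Superposition: each rain pulse launches a scaled, time-shifted copy of the UH,
# and the copies simply add.
hydrograph = np.convolve(rain, uh)

print(hydrograph)  # direct-runoff ordinates, length = len(rain) + len(uh) - 1
```

    Doubling any rainfall pulse doubles its contribution (linearity), and overlapping responses add (superposition), which is exactly what the ball-routing experiment demonstrates physically.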

  16. A Module Experimental Process System Development Unit (MEPSDU). [flat plate solar arrays

    Science.gov (United States)

    1981-01-01

    The development of a cost-effective process sequence that has the potential for the production of flat plate photovoltaic modules which meet the 1986 price goal of 70 cents or less per watt peak is described. The major accomplishments include: (1) an improved AR coating technique; (2) the use of sand-blast back clean-up to reduce clean-up costs and to allow much of the Al paste to serve as a back conductor; and (3) the development of wave soldering for use with solar cells. Cells were processed to evaluate different process steps, a cell and minimodule test plan was prepared, and data were collected for a preliminary SAMICS cost analysis.

  17. Ward nurses' experiences of the discharge process between intensive care unit and general ward.

    Science.gov (United States)

    Kauppi, Wivica; Proos, Matilda; Olausson, Sepideh

    2018-05-01

    Intensive care unit (ICU) discharges are challenging practices that carry risks for patients. Despite the existing body of knowledge, there are still difficulties in clinical practice concerning unplanned ICU discharges, specifically where there is no step-down unit. The aim of this study was to explore general ward nurses' experiences of caring for patients being discharged from an ICU. Data were collected from focus groups and in-depth interviews with a total of 16 nurses from three different hospitals in Sweden. An inductive qualitative design was chosen. The analysis revealed three themes that reflect the challenges in nursing former ICU patients: a vulnerable patient, nurses' powerlessness and organizational structure. The nurses described the challenge of nursing a fragile patient based on several aspects. They expressed feeling unrealistic demands when caring for a fragile former ICU patient. The demands were related to their own profession and knowledge regarding how to care for this group of patients. The organizational structure had an impact on how the nurses' caring practice could be realized. This evoked ethical concerns that the nurses had to cope with as the organization's care guidelines did not always favour the patients. The structure of the organization and its leadership appear to have a significant impact on the nurses' ability to offer patients the care they need. This study sheds light on the need for extended outreach services and intermediate care in order to meet the needs of patients after the intensive care period. © 2018 British Association of Critical Care Nurses.

  18. Safety conditions and native microbial flora of three processing units in Alentejo, Portugal

    OpenAIRE

    Laranjo, Marta; Potes, M.E; Elias, M.

    2014-01-01

    Portugal, like other Mediterranean countries, has a great diversity of dry fermented sausages. This traditional sausage production is highly diverse, and the products possess very particular organoleptic characteristics which please consumers. These sensory characteristics are related not only to the manufacturing process but also to the house microbial flora. On the other hand, the safety of fermented products is always difficult to achieve due to their processing technology and final characteristi...

  19. UNITED STATES DEPARTMENT OF ENERGY OFFICE OF ENVIRONMENTAL MANAGEMENT WASTE PROCESSING ANNUAL TECHNOLOGY DEVELOPMENT REPORT 2008

    Energy Technology Data Exchange (ETDEWEB)

    Bush, S.

    2009-11-05

    The Office of Waste Processing identifies and reduces engineering and technical risks and uncertainties of the waste processing programs and projects of the Department of Energy's Environmental Management (EM) mission through the timely development of solutions to technical issues. The risks, and actions taken to mitigate those risks, are determined through technology readiness assessments, program reviews, technology information exchanges, external technical reviews, technical assistance, and targeted technology development and deployment. The Office of Waste Processing works with other DOE Headquarters offices and project and field organizations to proactively evaluate technical needs, identify multi-site solutions, and improve the technology and engineering associated with project and contract management. Participants in this program are empowered with the authority, resources, and training to implement their defined priorities, roles, and responsibilities. The Office of Waste Processing Multi-Year Program Plan (MYPP) supports the goals and objectives of the U.S. Department of Energy (DOE) - Office of Environmental Management Engineering and Technology Roadmap by providing direction for technology enhancement, development, and demonstration that will lead to a reduction of technical risks and uncertainties in EM waste processing activities. The MYPP summarizes the program areas and the scope of activities within each program area proposed for the next five years to improve safety and reduce costs and environmental impacts associated with waste processing; authorized budget levels will impact how much of the scope of activities can be executed, on a year-to-year basis. Waste Processing Program activities within the Roadmap and the MYPP are described in these seven program areas: (1) Improved Waste Storage Technology; (2) Reliable and Efficient Waste Retrieval Technologies; (3) Enhanced Tank Closure Processes; (4) Next-Generation Pretreatment Solutions; (5

  20. United States Department Of Energy Office Of Environmental Management Waste Processing Annual Technology Development Report 2008

    International Nuclear Information System (INIS)

    Bush, S.

    2009-01-01

    The Office of Waste Processing identifies and reduces engineering and technical risks and uncertainties of the waste processing programs and projects of the Department of Energy's Environmental Management (EM) mission through the timely development of solutions to technical issues. The risks, and actions taken to mitigate those risks, are determined through technology readiness assessments, program reviews, technology information exchanges, external technical reviews, technical assistance, and targeted technology development and deployment. The Office of Waste Processing works with other DOE Headquarters offices and project and field organizations to proactively evaluate technical needs, identify multi-site solutions, and improve the technology and engineering associated with project and contract management. Participants in this program are empowered with the authority, resources, and training to implement their defined priorities, roles, and responsibilities. The Office of Waste Processing Multi-Year Program Plan (MYPP) supports the goals and objectives of the U.S. Department of Energy (DOE) - Office of Environmental Management Engineering and Technology Roadmap by providing direction for technology enhancement, development, and demonstration that will lead to a reduction of technical risks and uncertainties in EM waste processing activities. The MYPP summarizes the program areas and the scope of activities within each program area proposed for the next five years to improve safety and reduce costs and environmental impacts associated with waste processing; authorized budget levels will impact how much of the scope of activities can be executed, on a year-to-year basis. Waste Processing Program activities within the Roadmap and the MYPP are described in these seven program areas: (1) Improved Waste Storage Technology; (2) Reliable and Efficient Waste Retrieval Technologies; (3) Enhanced Tank Closure Processes; (4) Next-Generation Pretreatment Solutions; (5

  1. Efficient particle-in-cell simulation of auroral plasma phenomena using a CUDA enabled graphics processing unit

    Science.gov (United States)

    Sewell, Stephen

    This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphic Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general purpose graphic processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphic processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
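
    The particle-sorting optimization mentioned above targets the grid-interpolation (deposition) phase: ordering particles by cell index makes neighbouring threads touch neighbouring grid memory. A CPU-side NumPy sketch of the idea (the actual CUDA kernels are omitted, and the array names are illustrative):

```python
import numpy as np

def deposit_charge(x, q, n_cells, length=1.0):
    """Nearest-grid-point charge deposition with particles pre-sorted by cell."""
    dx = length / n_cells
    cell = np.minimum((x / dx).astype(int), n_cells - 1)

    # Sort particles by cell index: on a GPU this groups each warp's memory
    # accesses into the same neighbourhood of the charge-density array.
    order = np.argsort(cell)
    cell, q = cell[order], q[order]

    rho = np.zeros(n_cells)
    np.add.at(rho, cell, q)  # scatter-add: the step that benefits from sorting
    return rho / dx

rng = np.random.default_rng(1)
x = rng.random(10_000)       # particle positions in [0, 1)
q = np.full(10_000, 1e-3)    # equal macro-particle charges
rho = deposit_charge(x, q, 64)
```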

  2. Macro-scale assessment of demographic and environmental variation within genetically derived evolutionary lineages of eastern hemlock (Tsuga canadensis), an imperiled conifer of the eastern United States

    Science.gov (United States)

    Anantha M. Prasad; Kevin M. Potter

    2017-01-01

    Eastern hemlock (Tsuga canadensis) occupies a large swath of eastern North America and has historically undergone range expansion and contraction resulting in several genetically separate lineages. This conifer is currently experiencing mortality across most of its range following infestation of a non-native insect. With the goal of better...

  3. Decreasing laboratory turnaround time and patient wait time by implementing process improvement methodologies in an outpatient oncology infusion unit.

    Science.gov (United States)

    Gjolaj, Lauren N; Gari, Gloria A; Olier-Pino, Angela I; Garcia, Juan D; Fernandez, Gustavo L

    2014-11-01

    Prolonged patient wait times in the outpatient oncology infusion unit indicated a need to streamline phlebotomy processes by using existing resources to decrease laboratory turnaround time and improve patient wait time. Using the DMAIC (define, measure, analyze, improve, control) method, a project to streamline phlebotomy processes within the outpatient oncology infusion unit in an academic Comprehensive Cancer Center, known as the Comprehensive Treatment Unit (CTU), was completed. Laboratory turnaround time for patients who needed same-day lab and CTU services and wait time for all CTU patients were tracked for 9 weeks. During the pilot, the wait time from arrival at the CTU to sitting in the treatment area decreased by 17% for all patients treated in the CTU. A total of 528 patients were seen at the CTU phlebotomy location, representing 16% of the total patients who received treatment in the CTU, with a mean turnaround time of 24 minutes compared with a baseline turnaround time of 51 minutes. Streamlining workflows and placing a phlebotomy station inside the CTU decreased laboratory turnaround times by 53% for patients requiring same-day lab and CTU services. The success of the pilot project prompted the team to make the station a permanent fixture. Copyright © 2014 by American Society of Clinical Oncology.
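
    As a quick check, the quoted 53% improvement follows directly from the reported turnaround times:

```latex
\frac{51\ \text{min} - 24\ \text{min}}{51\ \text{min}} \approx 0.53
\;\Rightarrow\; 53\%\ \text{reduction in laboratory turnaround time.}
```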

  4. Licence renewal in the United States - enhancing the process through lessons learned

    International Nuclear Information System (INIS)

    Walters, D.J.

    2000-01-01

    The Nuclear Energy Institute (NEI) is the Washington-based policy organisation representing the broad and varied interests of the diverse nuclear energy industry. It comprises nearly 300 corporate members in 15 countries, with a budget last year of about USD 26.5 million. It has been working for 10 years with the Nuclear Regulatory Commission (NRC), colleagues in the industry, and others to demonstrate that license renewal is a safe and workable process. The first renewed license was issued on 24 March to BGE for the Calvert Cliffs plant. One month later the NRC issued the renewed license for the Oconee plant. By 'enhancing the process through lessons learned', we mean reducing the uncertainty in the license renewal process. This is achieved through lessons learned from the next wave of applicants and the reviews of the Calvert Cliffs and Oconee applications. Three areas will be covered: - Incentive for minimising uncertainty, as industry interest in license renewal is growing dramatically. - Rigorous reviews by the Nuclear Regulatory Commission assure continued safety: the process put in place by the NRC to assure safety throughout the license renewal term, specifically areas where the lessons learned suggest improvements can be made. - Lessons learned have identified enhancements to the process: numerous benefits associated with renewal of nuclear power plant licenses for consumers of electricity, the environment, the nuclear operating companies, and the nation. (author)

  5. The United States in the Copenhagen process: the temptation of leadership

    International Nuclear Information System (INIS)

    2009-06-01

    Written before the Copenhagen Conference, this analysis first gives an overview of the United States energy system (greenhouse gas emissions, energy sources, and more particularly fossil sources) and of the evolutions since the Kyoto Conference in comparison with other countries (notably the European Union). It also evokes the commitment of some states and companies within the USA. The authors then comment on the revitalization of environmental policy in the National Recovery Act and the prospects for greenhouse gas emission reductions according to a Congress proposal (Waxman and Markey). They analyse the US posture with respect to international negotiations, aspects which are probably not negotiable, and opportunities to involve the USA in an international agreement.

  6. The regulatory process, nuclear safety research and the fuel cycle in the United Kingdom

    International Nuclear Information System (INIS)

    Watson, P.

    1996-01-01

    The main legislation governing the safety of nuclear installations in the United Kingdom is the Health and Safety at Work Act 1974 (HSWA) and the associated relevant statutory provisions of the Nuclear Installations Act 1965 (as amended). The HSWA sought to simplify and unify all industrial safety legislation and set in place the Health and Safety Commission (HSC) and its executive arm, the Health and Safety Executive (HSE). The Health and Safety Executive's Nuclear Safety Division (NSD) regulates the nuclear activities on such sites through HM Nuclear Installations Inspectorate (NII). Under the Nuclear Installations Act (NIA) no corporate body may use any site for the purpose of installing or operating any reactor, other than such a reactor comprised in a means of transport, or other prescribed installation unless the operator has been granted a nuclear site licence by the Health and Safety Executive. Nuclear fuel cycle facilities are examples of such prescribed installations. (J.P.N.)

  7. A comparative study of the nanoscale and macroscale tribological attributes of alumina and stainless steel surfaces immersed in aqueous suspensions of positively or negatively charged nanodiamonds

    Science.gov (United States)

    Curtis, Colin K; Marek, Antonin; Smirnov, Alex I

    2017-01-01

    This article reports a comparative study of the nanoscale and macroscale tribological attributes of alumina and stainless steel surfaces immersed in aqueous suspensions of positively (hydroxylated) or negatively (carboxylated) charged nanodiamonds (ND). Immersion in −ND suspensions resulted in a decrease in the macroscopic friction coefficients to values in the range 0.05–0.1 for both stainless steel and alumina, while +ND suspensions yielded an increase in friction for stainless steel contacts but little to no increase for alumina contacts. Quartz crystal microbalance (QCM), atomic force microscopy (AFM) and scanning electron microscopy (SEM) measurements were employed to assess nanoparticle uptake, surface polishing, and resistance to solid–liquid interfacial shear motion. The QCM studies revealed abrupt changes to the surfaces of both alumina and stainless steel upon injection of –ND into the surrounding water environment that are consistent with strong attachment of NDs and/or chemical changes to the surfaces. AFM images of the surfaces indicated slight increases in the surface roughness upon an exposure to both +ND and −ND suspensions. A suggested mechanism for these observations is that carboxylated −NDs from aqueous suspensions are forming robust lubricious deposits on stainless and alumina surfaces that enable gliding of the surfaces through the −ND suspensions with relatively low resistance to shear. In contrast, +ND suspensions are failing to improve tribological performance for either of the surfaces and may have abraded existing protective boundary layers in the case of stainless steel contacts. This study therefore reveals atomic scale details associated with systems that exhibit starkly different macroscale tribological properties, enabling future efforts to predict and design complex lubricant interfaces. PMID:29046852
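
    For orientation, QCM uptake measurements of the kind mentioned here conventionally convert a resonance-frequency shift into attached mass via the Sauerbrey relation (quoted as background; the study's own analysis may go beyond this rigid-film limit):

```latex
\Delta f = -\frac{2 f_0^{2}}{A\sqrt{\rho_q \mu_q}}\,\Delta m
```

    where f₀ is the fundamental resonance frequency, A the electrode area, ρ_q and μ_q the density and shear modulus of quartz, and Δm the deposited mass.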

  8. A comparative study of the nanoscale and macroscale tribological attributes of alumina and stainless steel surfaces immersed in aqueous suspensions of positively or negatively charged nanodiamonds

    Directory of Open Access Journals (Sweden)

    Colin K. Curtis

    2017-09-01

    Full Text Available This article reports a comparative study of the nanoscale and macroscale tribological attributes of alumina and stainless steel surfaces immersed in aqueous suspensions of positively (hydroxylated) or negatively (carboxylated) charged nanodiamonds (ND). Immersion in −ND suspensions resulted in a decrease in the macroscopic friction coefficients to values in the range 0.05–0.1 for both stainless steel and alumina, while +ND suspensions yielded an increase in friction for stainless steel contacts but little to no increase for alumina contacts. Quartz crystal microbalance (QCM), atomic force microscopy (AFM) and scanning electron microscopy (SEM) measurements were employed to assess nanoparticle uptake, surface polishing, and resistance to solid–liquid interfacial shear motion. The QCM studies revealed abrupt changes to the surfaces of both alumina and stainless steel upon injection of −ND into the surrounding water environment that are consistent with strong attachment of NDs and/or chemical changes to the surfaces. AFM images of the surfaces indicated slight increases in the surface roughness upon an exposure to both +ND and −ND suspensions. A suggested mechanism for these observations is that carboxylated −NDs from aqueous suspensions are forming robust lubricious deposits on stainless and alumina surfaces that enable gliding of the surfaces through the −ND suspensions with relatively low resistance to shear. In contrast, +ND suspensions are failing to improve tribological performance for either of the surfaces and may have abraded existing protective boundary layers in the case of stainless steel contacts. This study therefore reveals atomic scale details associated with systems that exhibit starkly different macroscale tribological properties, enabling future efforts to predict and design complex lubricant interfaces.

  9. A Survey Study of Institutional Review Board Thought Processes in the United States and South Korea

    Directory of Open Access Journals (Sweden)

    Si-Kyung Jung

    2012-09-01

    Full Text Available Introduction: In the last several decades, South Korea has rapidly adopted Western customs and practices. Yet, cultural differences between South Korea and the United States exist. The purpose of this study was to identify and characterize potential cultural differences in the Korean and US institutional review board (IRB) approach to certain topics. Methods: A qualitative analysis of a 9-item survey, describing 4 research study case scenarios, sent to IRB members from the United States and South Korea. The case scenarios involved the following issues: (1) the need for consent for retrospective chart review when research subjects receive their care after the study is conceived; (2) child assent; (3) individual versus population benefit; and (4) exception from informed consent in emergency resuscitation research. The free-text responses were analyzed and abstracted for recurrent themes. Results: Twenty-three of the 45 survey recipients completed the survey, for an overall response rate of 51%. The themes that emerged were as follows: (1) the importance of parental authority among Korean participants versus the importance of child autonomy and child assent among US participants; (2) the recognition, by all participants, of the rights of a proxy or surrogate who can represent an individual's values; and (3) the importance of the community, expressed by the Korean respondents, versus individualism, expressed by US respondents. Conclusion: Whereas US participants appear to emphasize the importance of the individual and the autonomy of a child, the Korean respondents stressed the importance of parental authority and benefiting the community, above and beyond that of the individual person. However, there was substantial overlap in the themes expressed by respondents from both countries.

  10. Predicting Summer Dryness Under a Warmer Climate: Modeling Land Surface Processes in the Midwestern United States

    Science.gov (United States)

    Winter, J. M.; Eltahir, E. A.

    2009-12-01

    One of the most significant impacts of climate change is the potential alteration of local hydrologic cycles over agriculturally productive areas. As the world’s food supply continues to be taxed by its burgeoning population, a greater percentage of arable land will need to be utilized and land currently producing food must become more efficient. This study seeks to quantify the effects of climate change on soil moisture in the American Midwest. A series of 24-year numerical experiments were conducted to assess the ability of Regional Climate Model Version 3 coupled to Integrated Biosphere Simulator (RegCM3-IBIS) and Biosphere-Atmosphere Transfer Scheme 1e (RegCM3-BATS1e) to simulate the observed hydroclimatology of the midwestern United States. Model results were evaluated using NASA Surface Radiation Budget, NASA Earth Radiation Budget Experiment, Illinois State Water Survey, Climate Research Unit Time Series 2.1, Global Soil Moisture Data Bank, and regional-scale estimations of evapotranspiration. The response of RegCM3-IBIS and RegCM3-BATS1e to a surrogate climate change scenario, a warming of 3 °C at the boundaries and a doubling of CO2, was explored. Precipitation increased significantly during the spring and summer in both RegCM3-IBIS and RegCM3-BATS1e, leading to additional runoff. In contrast, enhancement of evapotranspiration and shortwave radiation were modest. Soil moisture remained relatively unchanged in RegCM3-IBIS, while RegCM3-BATS1e exhibited some fall and winter wetting.

  11. Overview of PAT process analysers applicable in monitoring of film coating unit operations for manufacturing of solid oral dosage forms.

    Science.gov (United States)

    Korasa, Klemen; Vrečer, Franc

    2018-01-01

    Over the last two decades, regulatory agencies have demanded better understanding of pharmaceutical products and processes by implementing new technological approaches, such as process analytical technology (PAT). Process analysers present a key PAT tool which enables effective process monitoring, and thus improved process control, of medicinal product manufacturing. Process analysers applicable in pharmaceutical coating unit operations are comprehensively described in the present article. The review is focused on monitoring of solid oral dosage forms during film coating in the two most commonly used coating systems, i.e. pan and fluid bed coaters. A brief theoretical background and a critical overview of process analysers used for real-time or near real-time (in-, on-, at-line) monitoring of critical quality attributes of film coated dosage forms are presented. Besides well recognized spectroscopic methods (NIR and Raman spectroscopy), other techniques which have made a significant breakthrough in recent years are discussed (terahertz pulsed imaging (TPI), chord length distribution (CLD) analysis, and image analysis). The last part of the review is dedicated to novel techniques with high potential to become valuable PAT tools in the future (optical coherence tomography (OCT), acoustic emission (AE), microwave resonance (MR), and laser induced breakdown spectroscopy (LIBS)). Copyright © 2017 Elsevier B.V. All rights reserved.

  12. 76 FR 34031 - United States Standards for Grades of Processed Raisins

    Science.gov (United States)

    2011-06-10

    ...: Myron Betts, Inspection and Standardization Section, Processed Products Branch (PPB), Fruit and... third sub-type, "Vine-dried (without the application of drying chemicals or materials)" and change the..., treated with drying chemicals or materials". On February 28, 2006, AMS published an advance notice of...

  13. Discrimination against international medical graduates in the United States residency program selection process.

    Science.gov (United States)

    Desbiens, Norman A; Vidaillet, Humberto J

    2010-01-25

    Available evidence suggests that international medical graduates have improved the availability of U.S. health care while maintaining academic standards. We wondered whether studies had been conducted to address how international graduates were treated in the post-graduate selection process compared to U.S. graduates. We conducted a Medline search for research on the selection process. Two studies provide strong evidence that psychiatry and family practice programs respond to identical requests for applications at least 80% more often for U.S. medical graduates than for international graduates. In a third study, a survey of surgical program directors, over 70% perceived that there was discrimination against international graduates in the selection process. There is sufficient evidence to support action against discrimination in the selection process. Medical organizations should publish explicit proscriptions of discrimination against international medical graduates (as the American Psychiatric Association has done) and promote them in diversity statements. They should develop uniform and transparent policies for program directors to use to select applicants that minimize the possibility of non-academic discrimination, and the accreditation organization should monitor whether it is occurring. Whether there should be protectionism for U.S. graduates or whether post-graduate medical education should be an unfettered meritocracy needs to be openly discussed by medicine and society.

  14. An isotope-enrichment unit and a process for isotope separation

    International Nuclear Information System (INIS)

    1981-01-01

    A process and equipment for isotope enrichment using gas-centrifuge cascades are described. The method is described as applied to the separation of uranium isotopes, using natural-abundance uranium hexafluoride as the gaseous-mixture feedstock. (U.K.)
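
    For context, any such enrichment unit or cascade obeys the standard mass and isotope balances relating feed, product, and tails (generic relations, not details of the patent):

```latex
F = P + W, \qquad F x_F = P x_P + W x_W
\quad\Rightarrow\quad
\frac{P}{F} = \frac{x_F - x_W}{x_P - x_W}
```

    where F, P, and W are the feed, product, and tails flows and x_F, x_P, x_W the corresponding ²³⁵U fractions.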

  15. 40 CFR 63.1104 - Process vents from continuous unit operations: applicability assessment procedures and methods.

    Science.gov (United States)

    2010-07-01

    ...) Necessitating that the owner or operator make product in excess of demand. (e) TOC or Organic HAP concentration... .306×10⁻² (table footnote a: "Use according to procedures outlined in this section."). MJ/scm = megajoules per standard cubic meter; scm/min = standard cubic meters per minute. (2) Nonhalogenated process vents. The owner or...

  16. Hydrologic processes of forested headwater watersheds across a physiographic gradient in the southeastern United States

    Science.gov (United States)

    Ge Sun; Johnny Boggs; Steven G. McNulty; Devendra M. Amatya; Carl C. Trettin; Zhaohua Dai; James M. Vose; Ileana B. La Torre Torres; Timothy Callahan

    2008-01-01

    Understanding the hydrologic processes is the first step in making sound watershed management decisions including designing Best Management Practices for nonpoint source pollution control. Over the past fifty years, various forest experimental watersheds have been instrumented across the Carolinas through collaborative studies among federal, state, and private...

  17. Ewan and Cooper processes unite in 'paradigm shifting' patent

    Energy Technology Data Exchange (ETDEWEB)

    Tang, R.; Sanyal, P. [International Environmental and Energy Consultants (United States)

    2010-11-15

    A freshly patented technology aimed at removing over 99% of the flue emissions and over 90% of CO{sub 2}, the CEFCO process, is being presented as a comprehensive maximum achievable control technology (MACT) solution to enable coal and oil fired plants to meet tightening US emission standards. 3 figs.

  18. An Empirical Analysis of United States Consumers' Concerns about Eight Food Production and Processing Technologies

    OpenAIRE

    Hwang, Yun Jae; Roe, Brian E.; Teisl, Mario F.

    2005-01-01

    For a representative sample of U.S. consumers, we rank, correlate and explain ratings of concern toward eight food production and processing technologies (antibiotics, pesticides, artificial growth hormones, genetic modification, irradiation, artificial colors/flavors, pasteurization, and preservatives). Concern is highest for pesticides and hormones, followed by concern toward antibiotics, genetic modification and irradiation. We document standard relationships between many demographic, econ...

  19. Performance Analysis of the United States Marine Corps War Reserve Materiel Program Process Flow

    Science.gov (United States)

    2016-12-01

    (Snippet contains table-of-contents fragments: "Cost/Benefit Analysis of Maintaining Inventory"; "Transportation"; "DOD Logistics Overview".) Acquiring and supplying materiel to deployed forces is a complicated process. While we conducted our analysis on... Additionally, should certain item types prove to be more prone to delays or incur proportionally...

  20. Applying unit process life cycle inventory (UPLCI) methodology in product/packaging combinations

    NARCIS (Netherlands)

    Oude Luttikhuis, Ellen; Toxopeus, Marten E.; Overcash, M.; Nee, Andrew Y.C.; Song, Bin; Ong, Soh-Khim

    2013-01-01

    This paper discusses how the UPLCI approach can be used for determining the inventory of the manufacturing phases of product/packaging combinations. The UPLCI approach can make the inventory of the manufacturing process of the investigated product more accurate. The life cycle of...

  1. How semantics can improve engineering processes: A case of units of measure and quantities

    NARCIS (Netherlands)

    Rijgersberg, H.; Wigham, M.; Top, J.L.

    2011-01-01

    Science and engineering heavily depend on the ability to share data and models. The World Wide Web provides an even greater opportunity to reuse such information from disparate sources. Moreover, if the information is digitized, it can to a large extent be processed automatically. However, information...
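
    As a toy illustration of why explicit unit semantics matter (the names and conversion table are invented here, not taken from the paper), the sketch below attaches a unit to every quantity and refuses to add incompatible ones, precisely the class of error that implicit-unit engineering data invites:

```python
from dataclasses import dataclass

# Conversion factors to SI base units, keyed by unit symbol (illustrative).
TO_SI = {"m": ("length", 1.0), "ft": ("length", 0.3048),
         "s": ("time", 1.0), "min": ("time", 60.0)}

@dataclass(frozen=True)
class Quantity:
    value: float
    unit: str

    def to_si(self):
        dimension, factor = TO_SI[self.unit]
        return dimension, self.value * factor

    def __add__(self, other):
        dim_a, v_a = self.to_si()
        dim_b, v_b = other.to_si()
        if dim_a != dim_b:
            raise TypeError(f"cannot add {dim_a} to {dim_b}")
        base_unit = {"length": "m", "time": "s"}[dim_a]
        return Quantity(v_a + v_b, base_unit)

print(Quantity(2.0, "m") + Quantity(3.0, "ft"))  # Quantity(value=2.9144, unit='m')
# Quantity(2.0, "m") + Quantity(3.0, "s")        # raises: cannot add length to time
```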

  2. Pyrolysis oil upgrading for Co-processing in standard refinery units

    NARCIS (Netherlands)

    De Miguel Mercader, F.

    2010-01-01

    This thesis considers the route that comprises the upgrading of pyrolysis oil (produced from ligno-cellulosic biomass) and its further co-processing in standard refineries to produce transportation fuels. In the present concept, pyrolysis oil is produced where biomass is available and then...

  3. Status Report from the United States of America [Processing of Low-Grade Uranium Ores

    Energy Technology Data Exchange (ETDEWEB)

    Kennedy, R H [United States Atomic Energy Commission, Washington, D.C. (United States)

    1967-06-15

    The US uranium production rate has been dropping gradually from a high of 17 760 tons in fiscal year 1961 to a level of about 10 400 tons in fiscal year 1966. As of 1 January 1966, there were 17 uranium mills in operation in the USA compared with a maximum of 26 during 1961, the peak production year. Uranium procurement contracts between the USAEC and companies operating 11 mills have been extended through calendar year 1970. The USAEC contracts for the other six mills are scheduled to expire 31 December 1966. Some of these mills, however, have substantial private orders for production of uranium for nuclear power plants and will continue to operate after completion of deliveries under USAEC contracts. No new uranium mills have been brought into production since 1962. Under these circumstances the emphasis in process development activities in recent years has tended toward improvements that could be incorporated within the general framework of the existing plants. Some major flowsheet changes have been made, however. For example, two of the ore-processing plants have shifted from acid leaching to sodium carbonate leach in order to provide the flexibility to process an increasing proportion of ores of high limestone content in the tributary areas. Several mills employing ion exchange as the primary step for recovery of uranium from solution have added an 'Eluex' solvent extraction step on the ion exchange eluate. This process not only results in a high-grade final product, but also eliminates several metallurgical problems formerly caused by the chloride and nitrate eluants. Such changes together with numerous minor improvements have gradually reduced production cost and increased recoveries. The domestic uranium milling companies have generally had reserves of normal-grade ores well in excess of the amounts required to fulfil the requirements for their contracts with the USAEC. Therefore, there has been little incentive to undertake the processing of lower grade

  4. UNITED STATES DEPARTMENT OF ENERGY WASTE PROCESSING ANNUAL TECHNOLOGY DEVELOPMENT REPORT 2007

    Energy Technology Data Exchange (ETDEWEB)

    Bush, S

    2008-08-12

    The Office of Environmental Management's (EM) Roadmap, U.S. Department of Energy--Office of Environmental Management Engineering & Technology Roadmap (Roadmap), defines the Department's intent to reduce the technical risk and uncertainty in its cleanup programs. The unique nature of many of the remaining facilities will require a strong and responsive engineering and technology program to improve worker and public safety, and reduce costs and environmental impacts while completing the cleanup program. The technical risks and uncertainties associated with the cleanup program were identified through: (1) project risk assessments, (2) programmatic external technical reviews and technology readiness assessments, and (3) direct site input. In order to address these needs, the technical risks and uncertainties were compiled and divided into the program areas of: Waste Processing, Groundwater and Soil Remediation, and Deactivation and Decommissioning (D&D). Strategic initiatives were then developed within each program area to address the technical risks and uncertainties in that program area. These strategic initiatives were subsequently incorporated into the Roadmap, where they form the strategic framework of the EM Engineering & Technology Program. The EM-21 Multi-Year Program Plan (MYPP) supports the goals and objectives of the Roadmap by providing direction for technology enhancement, development, and demonstrations that will lead to a reduction of technical uncertainties in EM waste processing activities. The current MYPP summarizes the strategic initiatives and the scope of the activities within each initiative that are proposed for the next five years (FY2008-2012) to improve safety and reduce costs and environmental impacts associated with waste processing; authorized budget levels will impact how much of the scope of activities can be executed, on a year-to-year basis. As a result of the importance of reducing technical risk and uncertainty in the EM Waste

  5. Iron turbidity removal from the active process water system of the Kaiga Generating Station Unit 1 using an electrochemical filter

    International Nuclear Information System (INIS)

    Venkateswaran, G.; Gokhale, B.K.

    2007-01-01

    Iron turbidity is observed in the intermediate cooling circuit of the active process water system (APWS) of Kaiga Generating Station (KGS). Deposition of hydrous/hydrated oxides of iron on the plate-type heat exchanger, which is employed to transfer heat from the APWS to the active process cooling water system (APCWS), can in turn result in higher moderator D₂O temperatures due to reduced heat transfer. Characterization of the turbidity showed that the major component is γ-FeOOH. An in-house designed and fabricated electrochemical filter (ECF) containing an alternating array of 33 pairs of cathode and anode graphite felts was successfully tested for the removal of iron turbidity from the APWS of Kaiga Generating Station Unit No. 1 (KGS No. 1). A total volume of 52.5 m³ of water was processed using the filter. At an average inlet turbidity of 5.6 nephelometric turbidity units (NTU), the outlet turbidity observed from the ECF was 1.6 NTU. A maximum flow rate of 10 L·min⁻¹ and an applied potential of 18.0-20.0 V were found to yield an average turbidity-removal efficiency of ≈75%. When the experiment was terminated, a throughput of >2.08 × 10⁵ NTU·liters had been realized without any reduction in the removal efficiency. Removal of the internals of the filter showed that only the bottom 11 pairs of felts had brownish deposits, while the remaining felts looked clean and unused. (orig.)
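
    As a quick cross-check, the removal figures quoted above follow from simple arithmetic on the reported inlet/outlet turbidities and the processed volume. A minimal Python sketch, using only values taken from this record:

        # Cross-check of the ECF figures quoted above; all inputs are from the record.
        inlet_ntu = 5.6        # average inlet turbidity (NTU)
        outlet_ntu = 1.6       # average outlet turbidity at the ECF exit (NTU)
        volume_l = 52.5e3      # total processed volume: 52.5 m^3, in liters

        efficiency = (inlet_ntu - outlet_ntu) / inlet_ntu   # single-pass removal
        throughput = (inlet_ntu - outlet_ntu) * volume_l    # turbidity removed, NTU*L
        print(f"efficiency ~ {efficiency:.0%}")             # ~71%, near the ~75% run average
        print(f"throughput ~ {throughput:.2e} NTU*L")       # ~2.10e+05, matching >2.08e5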

  6. The role of personnel marketing in the process of building corporate social responsibility strategy of a scientific unit

    Directory of Open Access Journals (Sweden)

    Sylwia Jarosławska-Sobór

    2015-09-01

    The goal of this article is to discuss the significance of human capital in the process of building the strategy of social responsibility and the role of personnel marketing in the process. The dynamically changing social environment has forced a new way of looking at non-material resources. Organizations have understood that it is human capital and social competences that have a significant impact on the creation of an organization's value, generating profits, as well as gaining competitive advantage in the 21st century. Personnel marketing is now a key element in the process of implementation of the CSR concept and building the value of contemporary organizations, especially such unique organizations as scientific units. In this article you will find a discussion concerning the basic values regarded as crucial by the Central Mining Institute in the context of their significance for the paradigm of social responsibility. Such an analysis was carried out on the basis of the experiences of the Central Mining Institute (GIG) in the development of strategic CSR, which takes into consideration the specific character of the Institute as a scientific unit.

  7. Evaluation of the Three Mile Island Unit 2 reactor building decontamination process

    International Nuclear Information System (INIS)

    Dougherty, D.; Adams, J.W.

    1983-08-01

    Decontamination activities from the cleanup of the Three Mile Island Unit 2 Reactor Building are generating a variety of waste streams. Solid wastes being disposed of in commercial shallow land burial include trash and rubbish, ion-exchange resins (Epicor-II) and strippable coatings. The radwaste streams arising from cleanup activities currently under way are characterized and classified under the waste classification scheme of 10 CFR Part 61. It appears that much of the Epicor-II ion-exchange resin being disposed of in commercial land burial will be Class B and require stabilization if current radionuclide loading practices continue to be followed. Some of the trash and rubbish from the cleanup of the reactor building so far would be Class B. Strippable coatings being used at TMI-2 were tested for leachability of radionuclides and chelating agents, thermal stability, radiation stability, stability under immersion and biodegradability. Actual coating samples from reactor building decontamination testing were evaluated for radionuclide leaching and biodegradation

  8. Evaluation of the Three Mile Island Unit 2 reactor building decontamination process

    Energy Technology Data Exchange (ETDEWEB)

    Dougherty, D.; Adams, J. W.

    1983-08-01

    Decontamination activities from the cleanup of the Three Mile Island Unit 2 Reactor Building are generating a variety of waste streams. Solid wastes being disposed of in commercial shallow land burial include trash and rubbish, ion-exchange resins (Epicor-II) and strippable coatings. The radwaste streams arising from cleanup activities currently under way are characterized and classified under the waste classification scheme of 10 CFR Part 61. It appears that much of the Epicor-II ion-exchange resin being disposed of in commercial land burial will be Class B and require stabilization if current radionuclide loading practices continue to be followed. Some of the trash and rubbish from the cleanup of the reactor building so far would be Class B. Strippable coatings being used at TMI-2 were tested for leachability of radionuclides and chelating agents, thermal stability, radiation stability, stability under immersion and biodegradability. Actual coating samples from reactor building decontamination testing were evaluated for radionuclide leaching and biodegradation.

  9. Initial Investigation into the Potential of CSP Industrial Process Heat for the Southwest United States

    Energy Technology Data Exchange (ETDEWEB)

    Kurup, Parthiv [National Renewable Energy Lab. (NREL), Golden, CO (United States); Turchi, Craig [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-11-01

    After significant interest in the 1970s, but relatively few deployments, the use of solar technologies for thermal applications, including enhanced oil recovery (EOR), desalination, and industrial process heat (IPH), is again receiving global interest. In particular, the European Union (EU) has been a leader in the use, development, deployment, and tracking of Solar Industrial Process Heat (SIPH) plants. The objective of this study is to ascertain U.S. market potential of IPH for concentrating collector technologies that have been developed and promoted through the U.S. Department of Energy's Concentrating Solar Power (CSP) Program. For this study, the solar-thermal collector technologies of interest are parabolic trough collectors (PTCs) and linear Fresnel (LF) systems.

  10. An alternate way for image documentation in gamma camera processing units

    International Nuclear Information System (INIS)

    Schneider, P.

    1980-01-01

    For the documentation of images and curves generated by a gamma camera processing system, a film exposure unit from a CT system was linked to the video monitor by means of a resistance bridge. The unit has a magazine capacity of 100 sheet films. The advantages are that no interface is needed, that the complete information shown on the monitor is transferred to the sheet film, and that, compared with software-controlled data output on a printer or plotter, the device saves a great deal of time. (orig.) [de]

  11. NUMATH: a nuclear material holdup estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.

    1982-01-01

    NUMATH provides inventory estimation by utilizing previous inventory measurements, operating data, and, where available, on-line process measurements. At present, NUMATH's purpose is to provide a reasonable, near-real-time estimate of material inventory until an accurate inventory determination can be obtained from chemical analysis. Ultimately, it is intended that NUMATH will further utilize on-line analyzers and more advanced calculational techniques to provide more accurate inventory determinations and estimates

  12. The Development of a General Purpose ARM-based Processing Unit for the ATLAS TileCal sROD

    OpenAIRE

    Cox, Mitchell Arij; Reed, Robert; Mellado Garcia, Bruce Rafael

    2014-01-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data which present a serious computing challenge. After Phase-II upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to 41 Tb/s! ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a clus...

  13. Porting of the transfer-matrix method for multilayer thin-film computations on graphics processing units

    Science.gov (United States)

    Limmer, Steffen; Fey, Dietmar

    2013-07-01

    Thin-film computations are often a time-consuming task during optical design. An efficient way to accelerate these computations with the help of graphics processing units (GPUs) is described. It turned out that significant speed-ups can be achieved. We investigate the circumstances under which the best speed-up values can be expected. Therefore we compare different GPUs among themselves and with a modern CPU. Furthermore, the effect of thickness modulation on the speed-up and the runtime behavior depending on the input data is examined.
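
    The record names the transfer-matrix method without detail. The following is a minimal CPU-side NumPy sketch of the underlying algorithm (the characteristic-matrix formalism at normal incidence), not the authors' GPU implementation; the function names and the example stack are illustrative assumptions:

        import numpy as np

        def layer_matrix(n, d, lam):
            # Characteristic matrix of one film layer at normal incidence:
            # n = refractive index, d = thickness, lam = vacuum wavelength (same units as d).
            delta = 2 * np.pi * n * d / lam              # phase thickness
            return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                             [1j * n * np.sin(delta), np.cos(delta)]])

        def reflectance(ns, ds, n_ambient, n_substrate, lam):
            # Multiply the layer matrices, then form the stack vector [B, C].
            M = np.eye(2, dtype=complex)
            for n, d in zip(ns, ds):
                M = M @ layer_matrix(n, d, lam)
            B, C = M @ np.array([1.0, n_substrate])
            r = (n_ambient * B - C) / (n_ambient * B + C)   # amplitude reflectance
            return abs(r) ** 2

        # Illustrative check: a quarter-wave MgF2 layer on glass at 550 nm (~1.3% reflectance)
        print(reflectance([1.38], [550.0 / (4 * 1.38)], 1.0, 1.52, 550.0))

    Each wavelength (and each design variant) is an independent matrix product, which is the natural axis of parallelism for the GPU port the abstract describes.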

  14. Risk Quantitative Determination of Fire and Explosion in a Process Unit By Dow’s Fire and Explosion Index

    Directory of Open Access Journals (Sweden)

    S. Varmazyar

    2008-04-01

    Background and aims: Fire and explosion hazards are the first and second most significant major hazards in process industries, respectively. This study was carried out to determine fire and explosion risk severity, the radius of exposure, and the most probable loss. Methods: In this quantitative study, a process unit was selected together with the parameters affecting its fire and explosion risk, and analyzed with Dow's fire and explosion index (F&EI). Technical data were obtained from process documents and reports and the fire and explosion index guideline. After the index was calculated, the radius of exposure was determined and, finally, the most probable loss was estimated. Results: The results showed an F&EI value of 226 for this process unit, which is extremely high and unacceptable; the risk severity falls in the severe class. The radius of exposure and the damage factor were calculated as 57 meters and 83%, respectively, and the most probable loss was estimated at about 6.7 million dollars. Conclusion: The F&EI is a suitable technique for risk assessment and loss estimation of fire and explosion in process industries, and an important index for distinguishing high-risk from low-risk areas in a plant. In this technique, all factors affecting fire and explosion risk are expressed as an index that serves as the basis for judging the risk class; the estimated losses could also be used as a basis for fire and explosion insurance.
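
    For orientation, Dow's method maps the index to a radius of exposure linearly. A hedged back-of-the-envelope check of the figures quoted above, assuming the commonly cited Dow factor of 0.84 ft per index point:

        FEI = 226                        # Fire and Explosion Index reported in the study
        radius_ft = 0.84 * FEI           # radius of exposure in feet (assumed Dow factor)
        radius_m = radius_ft * 0.3048    # feet to meters
        print(round(radius_m))           # ~58 m, consistent with the reported 57 meters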

  15. Processes for CO2 capture. Context of thermal waste treatment units. State of the art. Extended abstract

    International Nuclear Information System (INIS)

    Lopez, A.; Roizard, D.; Favre, E.; Dufour, A.

    2013-01-01

    In most industrial sectors, greenhouse gases (GHG) such as carbon dioxide (CO2) are considered serious pollutants that have to be controlled and treated. Thermal waste treatment units are among the industrial CO2 emitters, even if they represent a small share of emissions (2.5% of GHG emissions in France) compared to power plants (13% of GHG emissions in France, one third of worldwide GHG emissions) or processing industries (20% of GHG emissions in France). Carbon Capture and Storage (CCS) can be a solution to reduce CO2 emissions from industry (power plants, steel and cement industries...). The issues of CCS applied to thermal waste treatment units are quite similar to those related to power plants (CO2 flow, flue gas temperature and pressure conditions). The question is whether the CO2 produced by waste treatment plants can be captured with the processes already available on the market or expected to be available by 2020. It seems technically possible to adapt post-combustion CCS methods to the waste treatment sector, but on the whole CCS is complex and costly for a waste treatment unit, which offers small economies of scale. Moreover, regulations concerning impurities for CO2 transport and storage are not clearly defined at the moment. Consequently, specific studies must be carried out to check the technical feasibility of CCS in the waste treatment context and to define its cost clearly. (authors)

  16. Instream sand and gravel mining: Environmental issues and regulatory process in the United States

    Science.gov (United States)

    Meador, M.R.; Layher, A.O.

    1998-01-01

    Sand and gravel are widely used throughout the U.S. construction industry, but their extraction can significantly affect the physical, chemical, and biological characteristics of mined streams. Fisheries biologists often find themselves involved in the complex environmental and regulatory issues related to instream sand and gravel mining. This paper provides an overview of information presented in a symposium held at the 1997 midyear meeting of the Southern Division of the American Fisheries Society in San Antonio, Texas, to discuss environmental issues and regulatory procedures related to instream mining. Conclusions from the symposium suggest that complex physicochemical and biotic responses to disturbance such as channel incision and alteration of riparian vegetation ultimately determine the effects of instream mining. An understanding of geomorphic processes can provide insight into the effects of mining operations on stream function, and multidisciplinary empirical studies are needed to determine the relative effects of mining versus other natural and human-induced stream alterations. Mining regulations often result in a confusing regulatory process complicated, for example, by the role of the U.S. Army Corps of Engineers, which has undergone numerous changes and remains unclear. Dialogue among scientists, miners, and regulators can provide an important first step toward developing a plan that integrates biology and politics to protect aquatic resources.

  17. Pretreatment Process for Performance Improvement of SIES at Kori Unit 2 in Korea

    International Nuclear Information System (INIS)

    Lee, Sang Jin; Yang, Ho Yeon; Shin, Sang Woon; Song, Myung Jae

    1994-01-01

    A pretreatment process consisting of a submerged hollow-fiber microfiltration (HMF) membrane and a spiral-wound nanofiltration (SNF) membrane has been developed by NETEC, KHNP for the purpose of reducing the impurities in liquid radioactive waste before it enters the Selective Ion Exchange System (SIES). The lab-scale combined system was installed at Kori nuclear power plant unit 2, and demonstration tests using actual liquid radioactive waste were carried out to verify its performance. The submerged HMF membrane was adopted to remove suspended solids from the liquid radioactive waste, and the SNF membrane was used to remove particulate radioisotopes, such as Ag-110m, and oily waste, because ion exchange resin cannot remove particulate radioisotopes. The liquid waste in the Waste Holdup Tank (WHT) was processed with the HMF and SNF membranes and the SIES. The initial SS concentration and total activity of the actual waste were 38,000 ppb and 1.534 × 10⁻³ μCi/cc, respectively. The SS concentration and total activity of the permeate were 30 ppb and lower than the LLD (Lower Limit of Detection), respectively

  18. PGAS in-memory data processing for the Processing Unit of the Upgraded Electronics of the Tile Calorimeter of the ATLAS Detector

    International Nuclear Information System (INIS)

    Ohene-Kwofie, Daniel; Otoo, Ekow

    2015-01-01

    The ATLAS detector, operated at the Large Hadron Collider (LHC), records proton-proton collisions at CERN every 50 ns, resulting in a sustained data flow of up to PB/s. The upgraded Tile Calorimeter of the ATLAS experiment will sustain about 5 PB/s of digital throughput. These massive data rates require extremely fast data capture and processing. Although there has been a steady increase in the processing speed of CPUs/GPGPUs assembled for high performance computing, the rate of data input and output, even under parallel I/O, has not kept up with the general increase in computing speeds. The problem then is whether one can implement an I/O subsystem infrastructure capable of meeting the computational speeds of the advanced computing systems at the petascale and exascale level. We propose a system architecture that leverages the Partitioned Global Address Space (PGAS) model of computing to maintain an in-memory data-store for the Processing Unit (PU) of the upgraded electronics of the Tile Calorimeter, which is proposed to be used as a high throughput general purpose co-processor to the sROD of the upgraded Tile Calorimeter. The physical memory of the PUs is aggregated into a large global logical address space using RDMA-capable interconnects such as PCI-Express to enhance data processing throughput. (paper)

  19. Real-time processing for full-range Fourier-domain optical-coherence tomography with zero-filling interpolation using multiple graphic processing units.

    Science.gov (United States)

    Watanabe, Yuuki; Maeno, Seiya; Aoshima, Kenji; Hasegawa, Haruyuki; Koseki, Hitoshi

    2010-09-01

    The real-time display of full-range, 2048 axial pixel × 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated. The required speed was achieved by using dual graphic processing units (GPUs) with many stream processors to realize highly parallel processing. We used a zero-filling technique, including a forward Fourier transform, a zero padding to increase the axial data-array size to 8192, an inverse Fourier transform back to the spectral domain, a linear interpolation from wavelength to wavenumber, a lateral Hilbert transform to obtain the complex spectrum, a Fourier transform to obtain the axial profiles, and a log scaling. The data-transfer time of the frame grabber was 15.73 ms, and the processing time, which includes the data transfer between the GPU memory and the host computer, was 14.75 ms, for a total time shorter than the 36.70 ms frame-interval time using a line-scan CCD camera operated at 27.9 kHz. That is, our OCT system achieved a processed-image display rate of 27.23 frames/s.
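
    The processing chain listed above maps naturally onto array code. Below is a minimal single-threaded NumPy rendition of the zero-filling steps (forward FFT, zero padding to 8192, inverse FFT, wavelength-to-wavenumber interpolation, axial FFT, log scaling); the full-range lateral Hilbert step is omitted, and the shapes and the k_idx mapping are assumptions, not the authors' GPU code:

        import numpy as np

        def zero_fill_fdoct(spectra, k_idx, pad=8192):
            # spectra: (n_lines, 2048) interferograms sampled evenly in wavelength.
            # k_idx: fractional sample positions of a uniform wavenumber grid.
            n_lines, n_pix = spectra.shape
            half = n_pix // 2
            # Forward FFT, zero-pad to `pad` samples, inverse FFT: this
            # sinc-interpolates the spectrum by a factor of pad / n_pix.
            f = np.fft.fft(spectra, axis=1)
            fpad = np.zeros((n_lines, pad), dtype=complex)
            fpad[:, :half], fpad[:, -half:] = f[:, :half], f[:, -half:]
            dense = np.fft.ifft(fpad, axis=1) * (pad / n_pix)
            # Linear interpolation from wavelength to wavenumber on the dense grid.
            pos = k_idx * (pad / n_pix)
            base = pos.astype(int)
            frac = pos - base
            resampled = (1 - frac) * dense[:, base] + frac * dense[:, base + 1]
            # Axial FFT and log-scaled magnitude for display.
            return 20 * np.log10(np.abs(np.fft.fft(resampled, axis=1)) + 1e-12)

        rng = np.random.default_rng(0)
        img = zero_fill_fdoct(rng.random((1024, 2048)), np.linspace(0.0, 2046.0, 2048))
        print(img.shape)   # (1024, 2048): one log-magnitude A-scan per lateral position

    Every line is processed independently, which is what lets the work be split across two GPUs while staying ahead of the camera's 36.70 ms frame interval.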

  20. The Process and Impact of Stakeholder Engagement in Developing a Pediatric Intensive Care Unit Communication and Decision-Making Intervention.

    Science.gov (United States)

    Michelson, Kelly N; Frader, Joel; Sorce, Lauren; Clayman, Marla L; Persell, Stephen D; Fragen, Patricia; Ciolino, Jody D; Campbell, Laura C; Arenson, Melanie; Aniciete, Danica Y; Brown, Melanie L; Ali, Farah N; White, Douglas

    2016-12-01

    Stakeholder-developed interventions are needed to support pediatric intensive care unit (PICU) communication and decision-making. Few publications delineate methods and outcomes of stakeholder engagement in research. We describe the process and impact of stakeholder engagement on developing a PICU communication and decision-making support intervention. We also describe the resultant intervention. Stakeholders included parents of PICU patients, healthcare team members (HTMs), and research experts. Through a year-long iterative process, we involved 96 stakeholders in 25 meetings and 26 focus groups or interviews. Stakeholders adapted an adult navigator model by identifying core intervention elements and then determining how to operationalize those core elements in pediatrics. The stakeholder input led to PICU-specific refinements, such as supporting transitions after PICU discharge and including ancillary tools. The resultant intervention includes navigator involvement with parents and HTMs and navigator-guided use of ancillary tools. Subsequent research will test the feasibility and efficacy of our intervention.

  1. The Process and Impact of Stakeholder Engagement in Developing a Pediatric Intensive Care Unit Communication and Decision-Making Intervention

    Science.gov (United States)

    Frader, Joel; Sorce, Lauren; Clayman, Marla L; Persell, Stephen D; Fragen, Patricia; Ciolino, Jody D; Campbell, Laura C; Arenson, Melanie; Aniciete, Danica Y; Brown, Melanie L; Ali, Farah N; White, Douglas

    2016-01-01

    Stakeholder-developed interventions are needed to support pediatric intensive care unit (PICU) communication and decision-making. Few publications delineate methods and outcomes of stakeholder engagement in research. We describe the process and impact of stakeholder engagement on developing a PICU communication and decision-making support intervention. We also describe the resultant intervention. Stakeholders included parents of PICU patients, healthcare team members (HTMs), and research experts. Through a year-long iterative process, we involved 96 stakeholders in 25 meetings and 26 focus groups or interviews. Stakeholders adapted an adult navigator model by identifying core intervention elements and then determining how to operationalize those core elements in pediatrics. The stakeholder input led to PICU-specific refinements, such as supporting transitions after PICU discharge and including ancillary tools. The resultant intervention includes navigator involvement with parents and HTMs and navigator-guided use of ancillary tools. Subsequent research will test the feasibility and efficacy of our intervention. PMID:28725847

  2. Vortex particle method in parallel computations on graphical processing units used in study of the evolution of vortex structures

    International Nuclear Information System (INIS)

    Kudela, Henryk; Kosior, Andrzej

    2014-01-01

    Understanding the dynamics and the mutual interaction among various types of vortical motions is a key ingredient in clarifying and controlling fluid motion. In the paper several different cases related to vortex tube interactions are presented. Due to problems with very long computation times on the single processor, the vortex-in-cell (VIC) method is implemented on the multicore architecture of a graphics processing unit (GPU). Numerical results of leapfrogging of two vortex rings for inviscid and viscous fluid are presented as test cases for the new multi-GPU implementation of the VIC method. Influence of the Reynolds number on the reconnection process is shown for two examples: antiparallel vortex tubes and orthogonally offset vortex tubes. Our aim is to show the great potential of the VIC method for solutions of three-dimensional flow problems and that the VIC method is very well suited for parallel computation. (paper)

  3. Design process and instrumentation of a low NOx wire-mesh duct burner for micro-cogeneration unit

    Energy Technology Data Exchange (ETDEWEB)

    Ramadan, O.B.; Gauthier, J.E.D. [Carleton Univ., Ottawa, ON (Canada). Dept. of Mechanical and Aerospace Engineering; Hughes, P.M.; Brandon, R. [Natural Resources Canada, Ottawa, ON (Canada). CANMET Energy Technology Centre

    2007-07-01

    Air pollution and global climate change have become serious environmental problems leading to increasingly stringent government regulations worldwide. New designs and methods for improving combustion systems to minimize the production of toxic emissions, like nitrogen oxides (NOx), are therefore needed. In order to control smog, acid rain, ozone depletion, and greenhouse-effect warming, a reduction of nitrogen oxide is necessary. One alternative for combined electrical power and heat generation (CHP) is the micro-cogeneration unit, which uses a micro-turbine as a prime mover. However, to increase the efficiencies of these units, micro-cogeneration technology still needs to be developed further. This paper described the design process, building, and testing of a new low NOx wire-mesh duct burner (WMDB) for the development of a more efficient micro-cogeneration unit. The primary goal of the study was to develop a practical and simple WMDB which produces low emissions by using the lean-premixed surface combustion concept, and its objectives were separated into four phases which were described in this paper. Phase I involved the design and construction of the burner. Phase II involved a qualitative flow visualization study of the duct burner premixer to assist the new design of the burner by introducing an efficient premixer that could be used in this new application. Phase III of this research program involved non-reacting flow modeling of the burner premixer flow field using a commercial computational fluid dynamics model. In phase IV, the reacting flow experimental investigation was performed. It was concluded that the burner successfully increased the quantity and the quality of the heat released from the micro-CHP unit and carbon monoxide emissions of less than 9 ppm were reached. 3 refs., 3 figs.

  4. The Future Evolution of the Fast TracKer Processing Unit

    CERN Document Server

    Gentsos, Christos; The ATLAS collaboration; Magalotti, Daniel; Bertolucci, Federico; Citraro, Saverio; Kordas, Kostantinos; Nikolaidis, Spyridon

    2016-01-01

    Real time tracking is a key ingredient for online event selection at hadron colliders. The Silicon Vertex Tracker at the CDF experiment and the Fast Tracker (FTK) at ATLAS are two successful examples of the importance of dedicated hardware to reconstruct full events at hadron machines. We present the future evolution of this technology, for applications in the High Luminosity runs at the Large Hadron Collider (HL-LHC). Data processing speed is achieved with custom VLSI pattern recognition and linearized track fitting executed inside modern FPGAs, exploiting deep pipelining, extensive parallelism, and efficient use of available resources. In the current system, one large FPGA executed track fitting in full resolution inside low resolution candidate tracks found by a set of custom ASIC devices, called Associative Memories (AM chips). The FTK dual structure, based on the cooperation of VLSI AM and programmable FPGAs, is maintained, but we plan to increase the FPGA parallelism by associating one FPGA to each AM c...

  5. The Future Evolution of the Fast TracKer Processing Unit

    CERN Document Server

    Gentsos, Christos; The ATLAS collaboration; Magalotti, Daniel; Bertolucci, Federico; Citraro, Saverio; Kordas, Kostantinos; Nikolaidis, Spyridon

    2015-01-01

    Real time tracking is a key ingredient for online event selection at hadron colliders. The Silicon Vertex Tracker at the CDF experiment and the Fast Tracker (FTK) at ATLAS are two successful examples of the importance of dedicated hardware to reconstruct full events at hadron machines. We present the future evolution of this technology, for applications in the High Luminosity runs at the Large Hadron Collider (HL-LHC). Data processing speed is achieved with custom VLSI pattern recognition and linearized track fitting executed inside modern FPGAs, exploiting deep pipelining, extensive parallelism, and efficient use of available resources. In the current system, one large FPGA executed track fitting in full resolution inside low resolution candidate tracks found by a set of custom ASIC devices, called Associative Memories (AM chips). The FTK dual structure, based on the cooperation of VLSI AM and programmable FPGAs, is maintained, but we plan to increase the FPGA parallelism by associating one FPGA to each AM c...

  6. Indirect Transportation Cost in the border crossing process: The United States–Mexico trade

    Directory of Open Access Journals (Sweden)

    Carlos Obed Figueroa Ortiz

    2015-12-01

    Using a Social Accounting Matrix as the database, a Computable General Equilibrium model is implemented in order to estimate the Indirect Transportation Costs (ITC) present in the border crossing for U.S.–Mexico bilateral trade. Here, an “iceberg-type” transportation function is assumed to determine the amount of loss that must be faced as a result of the border-crossing process through the ports of entry existing between the two countries. The study period covers annual data from 1995 to 2009, allowing analysis of the trend of these costs under the trade liberalisation experienced over that time. Results show that the ITC decreased by 12% during the period.
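
    An “iceberg-type” cost function is the standard trade-modeling device: delivering one unit requires shipping (1 + tau) units, with the fraction tau lost to the border crossing. A tiny hedged illustration; the 12% decline is the abstract's figure, while the 10% baseline tau and the 100-unit flow are invented:

        def units_to_ship(delivered, tau):
            # Iceberg cost: (1 + tau) units must cross the border per unit delivered.
            return delivered * (1.0 + tau)

        tau0 = 0.10                                      # hypothetical baseline loss
        print(units_to_ship(100.0, tau0))                # 110.0 units shipped
        print(units_to_ship(100.0, tau0 * (1 - 0.12)))   # 108.8 after a 12% ITC decline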

  7. Data processing system for small and medium sized clinical chemistry and nuclear medical units

    Energy Technology Data Exchange (ETDEWEB)

    Mariss, P; Haubold, E; Porth, A J

    1987-06-01

    The hardware and software configuration of a computer system that has been in clinical use for over 5 years in a group practice specializing in laboratory and nuclear medicine is described. In addition to the conventional tasks of such a system (patient data acquisition, issuance of worklists, result input, plausibility control, quality assurance, findings documentation), an integrated word processing system was developed for nuclear medicine in-vitro and in-vivo diagnosis. In addition, the computer system handles major administrative tasks, such as private and panel accounts, account reminders, payments to suppliers, etc. The hardware and database are designed in a manner that permits direct data access over a period of 18 to 21 months.

  8. Warning systems in a computerized nursing process for Intensive Care Units

    Directory of Open Access Journals (Sweden)

    Daniela Couto Carvalho Barra

    2014-02-01

    A hybrid study combining technological production and methodological research, aiming to establish associations between the data and information that are part of a Computerized Nursing Process according to ICNP® Version 1.0 and indicators of patient safety and quality of care. Based on the guidelines of the Agency for Healthcare Research and Quality and the American Association of Critical Care Nurses for the expansion of warning systems, five warning systems were developed: potential for iatrogenic pneumothorax, potential for care-related infections, potential for suture dehiscence in patients after abdominal or pelvic surgery, potential for loss of vascular access, and potential for endotracheal extubation. The warning systems are a continuous computerized resource of essential situations that promote patient safety, stimulate clinical reasoning, and support the clinical decision making of nurses in intensive care.

  9. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    International Nuclear Information System (INIS)

    Badal, Andreu; Badano, Aldo

    2009-01-01

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  10. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Badal, Andreu; Badano, Aldo [Division of Imaging and Applied Mathematics, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland 20993-0002 (United States)

    2009-11-15

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  11. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    Science.gov (United States)

    Badal, Andreu; Badano, Aldo

    2009-11-01

    It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA™ programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
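
    The three records above describe a full PENELOPE-physics GPU code; the toy sketch below only illustrates why photon-transport Monte Carlo suits GPUs so well: every history is independent, so one photon (or batch) can be assigned per thread. A deliberately minimal, absorption-only illustration with invented parameters, not the authors' code:

        import numpy as np

        def transmitted_fraction(mu, thickness, n_photons=1_000_000, seed=0):
            # Toy Monte Carlo: fraction of photons crossing a homogeneous slab
            # without interacting; each sampled free path is an independent history.
            rng = np.random.default_rng(seed)
            free_paths = rng.exponential(scale=1.0 / mu, size=n_photons)
            return float(np.mean(free_paths > thickness))

        mu = 0.2   # hypothetical linear attenuation coefficient (1/cm)
        print(transmitted_fraction(mu, 5.0))   # ~exp(-0.2 * 5) = 0.368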

  12. Accelerating Matrix-Vector Multiplication on Hierarchical Matrices Using Graphical Processing Units

    KAUST Repository

    Boukaram, W.

    2015-03-25

    Large dense matrices arise from the discretization of many physical phenomena in computational sciences. In statistics very large dense covariance matrices are used for describing random fields and processes. One can, for instance, describe distribution of dust particles in the atmosphere, concentration of mineral resources in the earth's crust or uncertain permeability coefficient in reservoir modeling. When the problem size grows, storing and computing with the full dense matrix becomes prohibitively expensive both in terms of computational complexity and physical memory requirements. Fortunately, these matrices can often be approximated by a class of data sparse matrices called hierarchical matrices (H-matrices) where various sub-blocks of the matrix are approximated by low rank matrices. These matrices can be stored in memory that grows linearly with the problem size. In addition, arithmetic operations on these H-matrices, such as matrix-vector multiplication, can be completed in almost linear time. Originally the H-matrix technique was developed for the approximation of stiffness matrices coming from partial differential and integral equations. Parallelizing these arithmetic operations on the GPU has been the focus of this work and we will present work done on the matrix vector operation on the GPU using the KSPARSE library.
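
    The "almost linear time" matrix-vector product rests on applying each admissible sub-block in its factored low-rank form. A minimal sketch of that single idea (not the KSPARSE library; sizes and rank are arbitrary):

        import numpy as np

        def lowrank_matvec(U, V, x):
            # y = (U @ V.T) @ x evaluated as U @ (V.T @ x):
            # O(k*(m + n)) work instead of O(m*n) for a rank-k block.
            return U @ (V.T @ x)

        rng = np.random.default_rng(1)
        m = n = 4096
        k = 16                      # numerical rank of the admissible block
        U = rng.standard_normal((m, k))
        V = rng.standard_normal((n, k))
        x = rng.standard_normal(n)
        print(np.allclose(lowrank_matvec(U, V, x), (U @ V.T) @ x))   # True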

  13. Massive Parallelism of Monte-Carlo Simulation on Low-End Hardware using Graphic Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Mburu, Joe Mwangi; Hah, Chang Joo Hah [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2014-05-15

    Within the past decade, research has been done on utilizing GPU massive parallelization in core simulation with impressive results; unfortunately, there has not been much commercial application in the nuclear field, especially in reactor core simulation. The purpose of this paper is to give an introductory concept on the topic and illustrate the potential of exploiting the massively parallel nature of GPU computing on a simple Monte Carlo simulation with very minimal hardware specifications. For a comparative analysis, a simple two-dimensional Monte Carlo simulation is implemented for both the CPU and the GPU in order to evaluate the performance gain of each computing device. The heterogeneous platform utilized in this analysis is a slow notebook with only a 1 GHz processor. The end results are quite surprising, with speedups of almost a factor of 10. In this work, we have utilized heterogeneous computing in a GPU-based approach to a potentially arithmetic-intensive calculation. By running a complex Monte Carlo simulation on the GPU platform, we have sped up the computation by almost a factor of 10 for one million neutrons. This shows how easy, cheap, and efficient it is to use GPUs to accelerate scientific computing, and the results should encourage further exploration of this avenue, especially in nuclear reactor physics simulation, where both deterministic and stochastic calculations are quite favourable for parallelization.

  14. Massive Parallelism of Monte-Carlo Simulation on Low-End Hardware using Graphic Processing Units

    International Nuclear Information System (INIS)

    Mburu, Joe Mwangi; Hah, Chang Joo Hah

    2014-01-01

    Within the past decade, research has been done on utilizing GPU massive parallelization in core simulation with impressive results; unfortunately, there has not been much commercial application in the nuclear field, especially in reactor core simulation. The purpose of this paper is to give an introductory concept on the topic and illustrate the potential of exploiting the massively parallel nature of GPU computing on a simple Monte Carlo simulation with very minimal hardware specifications. For a comparative analysis, a simple two-dimensional Monte Carlo simulation is implemented for both the CPU and the GPU in order to evaluate the performance gain of each computing device. The heterogeneous platform utilized in this analysis is a slow notebook with only a 1 GHz processor. The end results are quite surprising, with speedups of almost a factor of 10. In this work, we have utilized heterogeneous computing in a GPU-based approach to a potentially arithmetic-intensive calculation. By running a complex Monte Carlo simulation on the GPU platform, we have sped up the computation by almost a factor of 10 for one million neutrons. This shows how easy, cheap, and efficient it is to use GPUs to accelerate scientific computing, and the results should encourage further exploration of this avenue, especially in nuclear reactor physics simulation, where both deterministic and stochastic calculations are quite favourable for parallelization

  15. Challenges to the Aarhus Convention: Public Participation in the Energy Planning Process in the United Kingdom

    Directory of Open Access Journals (Sweden)

    Raphael Heffron

    2014-05-01

    This article examines the tension between the democratic right of public participation on specific environmental issues, guaranteed by European Law, and the degree to which it is being challenged in the UK as a consequence of recent approaches to energy infrastructure planning. Recent trends in UK government policy frameworks seem both to threaten effective public participation and challenge EU planning strategy, in particular those outlined in the Aarhus convention. The research outlined in this study involves an assessment of the changing context of planning and energy policy, in addition to recent changes in legislation formulation in the UK. The research findings, derived from an extensive interview process of elite stakeholders engaged in policy and legislation formulation in the UK and the EU, provide a new categorisation system of stakeholders in energy policy that can be utilised in future research. The article concludes with a second order analysis of the interviewee data and provides solutions to increase public participation in the planning of energy infrastructure that emerge from the different perspectives.

  16. Evaluation of virus reduction efficiency in wastewater treatment unit processes as a credit value in the multiple-barrier system for wastewater reclamation and reuse

    OpenAIRE

    Ito, Toshihiro; Kato, Tsuyoshi; Hasegawa, Makoto; Katayama, Hiroyuki; Ishii, Satoshi; Okabe, Satoshi; Sano, Daisuke

    2016-01-01

    The virus reduction efficiency of each unit process is commonly determined based on the ratio of virus concentration in influent to that in effluent of a unit, but the virus concentration in wastewater has often fallen below the analytical quantification limit, which does not allow us to calculate the concentration ratio at each sampling event. In this study, left-censored datasets of norovirus (genogroup I and II), and adenovirus were used to calculate the virus reduction efficiency in unit ...
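
    The reduction efficiency referred to above is usually expressed as a log10 reduction value (LRV) of influent over effluent concentration. When effluent samples fall below the quantification limit, naive substitution at the limit yields only a conservative lower bound, which is the gap the left-censored treatment addresses. A hedged sketch with invented concentrations:

        import numpy as np

        def log10_reduction(influent, effluent):
            # Per-sample log reduction value of a unit process (e.g., copies/L).
            return np.log10(influent / effluent)

        loq = 1.0e2                                   # hypothetical quantification limit
        influent = np.array([1.0e5, 5.0e4, 2.0e5])
        effluent = np.array([3.0e2, 5.0e1, 8.0e1])    # last two are left-censored
        censored = np.maximum(effluent, loq)          # naive substitution at the limit
        print(log10_reduction(influent, censored))    # lower-bound LRVs where censored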

  17. "Vulnerability, Resiliency, and Adaptation: The Health of Latin Americans during the Migration Process to the United States"

    Science.gov (United States)

    Riosmena, Fernando; Jochem, Warren C

    2012-01-01

    In this paper, we offer a general outlook on the health of Latin Americans (with a special emphasis on Mexicans) during the different stages of the migration process to the U.S., given the usefulness of the social vulnerability concept and given that said vulnerability varies conspicuously across the different stages of the migration process. Severe migrant vulnerability during the transit and crossing has serious negative health consequences. Yet, upon arrival in the U.S., migrant health is favorable in outcomes such as mortality from many causes of death and in several chronic conditions and risk factors, though these apparent advantages seem to disappear during the process of adaptation to the host society. We discuss potential explanations for the initial health advantage and the sources of vulnerability that explain its erosion, with special emphasis on systematic, timely access to health care. Given that migration can affect social vulnerability processes in sending areas, we discuss the potential health consequences for these places and conclude by considering the immigration and health policy implications of these issues for the United States and sending countries, with emphasis on Mexico.

  18. Real-time acquisition and display of flow contrast using speckle variance optical coherence tomography in a graphics processing unit.

    Science.gov (United States)

    Xu, Jing; Wong, Kevin; Jian, Yifan; Sarunic, Marinko V

    2014-02-01

    In this report, we describe a graphics processing unit (GPU)-accelerated processing platform for real-time acquisition and display of flow contrast images with Fourier domain optical coherence tomography (FDOCT) in mouse and human eyes in vivo. Motion contrast from blood flow is processed using the speckle variance OCT (svOCT) technique, which relies on the acquisition of multiple B-scan frames at the same location and tracking the change of the speckle pattern. Real-time mouse and human retinal imaging using two different custom-built OCT systems with processing and display performed on GPU are presented with an in-depth analysis of performance metrics. The display output included structural OCT data, en face projections of the intensity data, and the svOCT en face projections of retinal microvasculature; these results compare projections with and without speckle variance in the different retinal layers to reveal significant contrast improvements. As a demonstration, videos of real-time svOCT for in vivo human and mouse retinal imaging are included in our results. The capability of performing real-time svOCT imaging of the retinal vasculature may be a useful tool in a clinical environment for monitoring disease-related pathological changes in the microcirculation such as diabetic retinopathy.
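
    Once the repeated B-scans are in GPU memory, the speckle-variance contrast itself is a one-line reduction; the heavy lifting is doing it, plus the OCT reconstruction, at acquisition rate. A minimal sketch in which the frame count and image shape are assumptions:

        import numpy as np

        def speckle_variance(bscans):
            # Motion contrast: per-pixel intensity variance across N B-scans taken
            # at the same location; flow decorrelates the speckle, so vessels
            # show up as high-variance pixels.
            return np.var(bscans, axis=0)

        frames = np.random.default_rng(2).random((4, 512, 1024))  # 4 repeated B-scans
        print(speckle_variance(frames).shape)                     # (512, 1024)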

  19. Syngas to Synfuels Process Development Unit Final Scientific/Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Robert C. [Iowa State Univ., Ames, IA (United States)

    2012-03-30

    The process described is for the gasification of 20 kg/h of biomass (switchgrass) to produce a syngas suitable for upgrading to Fischer-Tropsch (FT) liquid fuels (gas, diesel, waxes, etc.). The gas stream generated from gasification is primarily composed of carbon monoxide (CO), hydrogen (H2), carbon dioxide (CO2), steam (H2O), and methane (CH4), but also includes tars, particulate matter, ammonia (NH3), hydrogen cyanide (HCN), hydrogen chloride (HCl), hydrogen sulfide (H2S), carbonyl sulfide (COS), etc. as contaminants. The gas stream passes through an array of cleaning devices to remove the contaminants to levels suitable for FT synthesis of fuels/chemicals. These devices consist primarily of an oil scrubber (to remove tars and remaining particulates), a sulfur scrubber (to remove sulfur compounds), and a wet scrubber (to remove NH3, HCl and remaining water-soluble contaminants). The ammonia and oil scrubbers are absorption columns with a combination of random and structured packing materials, using water and oil as the absorption liquids, respectively. The ammonia scrubber performed very well, while operating the oil scrubber proved to be more difficult due to the nature of tar compounds. The sulfur scrubber is a packed bed adsorption device with solid extrudates of adsorbent material, primarily composed of ZnO and CuO. It performed well, but over a limited amount of time due to fouling created by excess tar/particulate matter and oil aerosols. Overall, gas contaminants were reduced to below 1 ppm NH3 and less than 1 ppm collective sulfur compounds.

  20. An assessment of dioxin levels in processed ball clay from the United States

    Energy Technology Data Exchange (ETDEWEB)

    Ferrario, J.; Byrne, C. [USEPA, Stennis Space Ctr. Mississippi (United States); Schaum, J. [USEPA, Washington, DC (United States)

    2004-09-15

    Introduction: The presence of dioxin-like compounds in ball clay was discovered in 1996 as a result of an investigation to determine the sources of elevated levels of dioxin found in two chicken fat samples from a national survey of poultry. The investigation indicated that soybean meal added to chicken feed was the source of dioxin contamination. Further investigation showed that the dioxin contamination came from the mixing of a natural clay known as 'ball clay' with the soybean meal as an anti-caking agent. The FDA subsequently discontinued the use of contaminated ball clay as an anti-caking agent in animal feeds. The source of the dioxins found in ball clay has yet to be established. A comparison of the characteristic dioxin profile found in ball clay to those of known anthropogenic sources from the U.S. EPA Source Inventory has been undertaken, and none of those examined match the features found in the clays. These characteristic features together with the fact that the geologic formations in which the clays are found are ancient suggest a natural origin for the dioxins. The plasticity of ball clays makes them an important commercial resource for a variety of commercial uses. The percentage of commercial uses of ball clay in 2000 included: 29% for floor and wall tile, 24% for sanitary ware, 10% pottery, and 37% for other industrial and commercial uses. The total mining of ball clay in the U.S. for 2003 was 1.12 million metric tons. EPA is examining the potential for the environmental release of dioxins from the processing/use of ball clays and evaluating potential exposure pathways. Part of this overall effort and the subject of this study includes the analysis of dioxin levels found in commercially available ball clays commonly used in ceramic art studios.

  1. Waste-to-energy in the United States: Socioeconomic factors and the decision-making process

    Energy Technology Data Exchange (ETDEWEB)

    Curlee, T.R.; Schexnayder, S.M.; Vogt, D.P.; Wolfe, A.K.; Kelsay, M.P.; Feldman, D.L. [Oak Ridge National Lab., TN (United States)

    1993-10-01

    Municipal solid waste (MSW) combustion with energy recovery, commonly called waste-to-energy (WTE), was adopted by many US communities during the 1980s to manage their growing quantities of MSW. Although less than one percent of all US MSW was burned to retrieve its heat energy in 1970, WTE grew to account for 16 percent of MSW in 1990, and many experts forecasted that WTE would be used to manage as much as half of all garbage by the turn of the century. However, the growth of WTE has been reduced in recent years by project cancellations. This study takes an in-depth look at the socioeconomic factors that have played a role in the decisions of communities that have considered WTE as a component of their solid waste management strategies. More specifically, a three-pronged approach is adopted to investigate (1) the relationships between a municipality's decision to consider and accept/reject WTE and key socioeconomic parameters, (2) the potential impacts of recent changes in financial markets on the viability of WTE, and (3) the WTE decision-making process and the socioeconomic parameters that are most important in the municipality's decision. The first two objectives are met by the collection and analysis of aggregate data on all US WTE initiatives during the 1982 to 1990 time frame. The latter objective is met by way of four in-depth case studies -- two directed at communities that have accepted WTE and two that have cancelled WTE projects.

  2. Listeria prevalence and Listeria monocytogenes serovar diversity at cull cow and bull processing plants in the United States.

    Science.gov (United States)

    Guerini, Michael N; Brichta-Harhay, Dayna M; Shackelford, T Steven D; Arthur, Terrance M; Bosilevac, Joseph M; Kalchayanand, Norasak; Wheeler, Tommy L; Koohmaraie, Mohammad

    2007-11-01

    Listeria monocytogenes, the causative agent of epidemic and sporadic listeriosis, is routinely isolated from many sources, including cattle, yet information on the prevalence of Listeria in beef processing plants in the United States is minimal. From July 2005 through April 2006, four commercial cow and bull processing plants were sampled in the United States to determine the prevalence of Listeria and the serovar diversity of L. monocytogenes. Samples were collected during the summer, fall, winter, and spring. Listeria prevalence on hides was consistently higher during cooler weather (28 to 92% of samples) than during warmer weather (6 and 77% of samples). The Listeria prevalence data collected from preevisceration carcass ranged from undetectable in some warm season samples to as high as 71% during cooler weather. Listeria on postintervention carcasses in the chill cooler was normally undetectable, with the exception of summer and spring samples from one plant where > 19% of the carcasses were positive for Listeria. On hides, L. monocytogenes serovar 1/2a was the predominant serovar observed, with serovars 1/2b and 4b present 2.5 times less often and serovar 1/2c not detected on any hides sampled. L. monocytogenes serovars 1/2a, 1/2c, and 4b were found on postintervention carcasses. This prevalence study demonstrates that Listeria species are more prevalent on hides during the winter and spring and that interventions being used in cow and bull processing plants appear to be effective in reducing or eliminating Listeria contamination on carcasses.

  3. Exploring the impact of permitting and local regulatory processes on residential solar prices in the United States

    International Nuclear Information System (INIS)

    Burkhardt, Jesse; Wiser, Ryan; Darghouth, Naïm; Dong, C.G.; Huneycutt, Joshua

    2015-01-01

    This article statistically isolates the impacts of city-level permitting and other local regulatory processes on residential PV prices in the United States. We combine data from two “scoring” mechanisms that independently capture local regulatory process efficiency with the largest dataset of installed PV prices in the United States. We find that variations in local permitting procedures can lead to differences in average residential PV prices of approximately $0.18/W between the jurisdictions with the least-favorable and most-favorable permitting procedures. Between jurisdictions with scores across the middle 90% of the range (i.e., 5th percentile to 95th percentile), the difference is $0.14/W, equivalent to a $700 (2.2%) difference in system costs for a typical 5-kW residential PV installation. When considering variations not only in permitting practices, but also in other local regulatory procedures, price differences grow to $0.64–$0.93/W between the least-favorable and most-favorable jurisdictions. Between jurisdictions with scores across the middle 90% of the range, the difference is equivalent to a price impact of at least $2500 (8%) for a typical 5-kW residential PV installation. These results highlight the magnitude of cost reduction that might be expected from streamlining local regulatory regimes.

    Highlights:
    • We show local regulatory processes meaningfully affect U.S. residential PV prices.
    • We use regression analysis and two mechanisms for “scoring” regulatory efficiency.
    • Local permitting procedure variations can produce PV price differences of $0.18/W.
    • Broader regulatory variations can produce PV price differences of $0.64–$0.93/W.
    • The results suggest the cost-reduction potential of streamlining local regulations.
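
    As background only, the kind of hedonic price regression described here could be sketched as follows. The dataset, column names (price_per_w, permit_score, system_kw) and controls are hypothetical stand-ins, not the paper's actual specification.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical inputs: one row per installed system, with price in $/W and
      # a local permitting-efficiency score (higher = more favorable procedures).
      df = pd.read_csv("pv_systems.csv")

      # Regress price on the score plus illustrative controls; categorical fixed
      # effects absorb year and county differences, HC1 gives robust std. errors.
      model = smf.ols("price_per_w ~ permit_score + system_kw + C(year) + C(county)",
                      data=df).fit(cov_type="HC1")
      print(model.params["permit_score"])   # $/W change per one-point score increase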

  4. Implementing evidence in an onco-haematology nursing unit: a process of change using participatory action research.

    Science.gov (United States)

    Abad-Corpa, Eva; Delgado-Hito, Pilar; Cabrero-García, Julio; Meseguer-Liza, Cristobal; Zárate-Riscal, Carmen Lourdes; Carrillo-Alcaraz, Andrés; Martínez-Corbalán, José Tomás; Caravaca-Hernández, Amor

    2013-03-01

    To implement evidence in a nursing unit and to gain a better understanding of the experience of change within a participatory action research framework. A participatory action research design was used, from the constructivist paradigm. The analytical-methodological decisions were inspired by Checkland's Flexible Systems approach for evidence implementation in the nursing unit. The study was carried out between March and November 2007 in the isolation unit section for onco-haematological patients in a tertiary-level general university hospital in Spain. Accidental sampling was carried out with the participation of six nurses. Data were collected in five group meetings and through individual reflections in participants' diaries. Participant observation was also carried out by the researchers. Data were analysed by content analysis. Rigour was addressed through the criteria of credibility, confirmability, dependence, transferability and reflexivity. A lack of use of evidence in clinical practice was the main problem identified. The factors involved were identified (training, values, beliefs, resources and professional autonomy). Daily practice (complexity in taking decisions, variability, lack of professional autonomy and safety) was compared with an ideal situation (using evidence, it would be possible to normalise practice and to work more effectively in teams, increasing safety and professional recognition). It was decided to create five working areas around several clinical topics (mucositis, pain, anxiety, satisfaction, nutritional assessment, nausea and vomiting, pressure ulcers and catheter-related problems), and seven changes in clinical practice were agreed upon together with 11 implementation strategies. Some reflections were made about the features of the study: the changes produced; the strategies used and how to improve them; the nursing 'subculture'; attitudes towards innovation; and the participants' commitment both to the study and as healthcare professionals.

  5. Evaluation of Selected Resource Allocation and Scheduling Methods in Heterogeneous Many-Core Processors and Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ciznicki Milosz

    2014-12-01

    Heterogeneous many-core computing resources are increasingly popular among users due to their improved performance over homogeneous systems. Many developers have realized that heterogeneous systems, e.g. a combination of a shared-memory multi-core CPU machine with massively parallel Graphics Processing Units (GPUs), can provide significant performance opportunities to a wide range of applications. However, the best overall performance can only be achieved if application tasks are efficiently assigned to the different types of processor units over time, taking into account their specific resource requirements. Additionally, one should note that available heterogeneous resources have been designed as general-purpose units, but with many built-in features that accelerate specific application operations. In other words, the same algorithm or application functionality can be implemented as a different task for a CPU or a GPU. Nevertheless, from the perspective of various evaluation criteria, e.g. total execution time or energy consumption, we may observe completely different results. Therefore, as tasks can be scheduled and managed in many alternative ways on both many-core CPUs and GPUs, with a huge impact on the overall performance of the computing resources, there is a need for new and improved resource management techniques. In this paper we discuss results achieved during experimental performance studies of selected task scheduling methods in heterogeneous computing systems. Additionally, we present a new architecture for a resource allocation and task scheduling library which provides a generic application programming interface at the operating-system level for improving scheduling policies, taking into account the diversity of tasks and the characteristics of heterogeneous computing resources.
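
    To make the scheduling problem concrete, here is a minimal greedy earliest-completion-time scheduler for a mixed CPU/GPU pool. This is an illustrative sketch only, not the paper's library or API; the task list and per-device runtime estimates are invented.

      import heapq

      # Each task carries separate runtime estimates (seconds) for its CPU and
      # GPU implementations, reflecting that the same functionality maps to
      # different tasks on different processor types.
      tasks = [("decode", 4.0, 0.6), ("parse", 1.0, 2.5), ("matmul", 9.0, 0.8)]

      pools = {"cpu": [0.0] * 4, "gpu": [0.0]}      # per-unit ready times (s)
      for heap in pools.values():
          heapq.heapify(heap)

      for name, t_cpu, t_gpu in tasks:
          # Completion time if placed on the soonest-free unit of each kind.
          done = {"cpu": pools["cpu"][0] + t_cpu, "gpu": pools["gpu"][0] + t_gpu}
          kind = min(done, key=done.get)            # earliest completion wins
          heapq.heapreplace(pools[kind], done[kind])  # occupy that unit
          print(f"{name} -> {kind}, finishes at t={done[kind]:.1f}s")

    A real scheduler would also weigh energy consumption and data-transfer costs, which is exactly the multi-criteria trade-off the paper evaluates.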

  6. Structural Foaming at the Nano-, Micro-, and Macro-Scales of Continuous Carbon Fiber Reinforced Polymer Matrix Composites

    Science.gov (United States)

    2012-10-29

    structural porosity at nano-, micro-, and macro- (MNM) scales could be introduced into the matrix, the carbon fiber reinforcement, and during prepreg lamination processing, without ... areas, including fibers. Furthermore, investigate prepreg thickness and resin content effects on the thermomechanical performance of laminated ... Develop constitutive models for nano-foamed and micro-foamed PMC systems from single-ply prepreg to multilayer laminates.

  7. A real-time GNSS-R system based on software-defined radio and graphics processing units

    Science.gov (United States)

    Hobiger, Thomas; Amagai, Jun; Aida, Masanori; Narita, Hideki

    2012-04-01

    Reflected Global Navigation Satellite System (GNSS) signals from the sea or land surface can be utilized to deduce and monitor physical and geophysical parameters of the reflecting area. Unlike most other remote sensing techniques, GNSS-Reflectometry (GNSS-R) operates as a passive radar that takes advantage of the increasing number of navigation satellites broadcasting their L-band signals. To date, most GNSS-R receiver architectures have been based on dedicated hardware solutions. Software-defined radio (SDR) technology has advanced in recent years and enabled signal processing in real time, which makes it an ideal candidate for the realization of a flexible GNSS-R system. Additionally, modern commodity graphics cards, which offer massive parallel computing performance, allow the whole signal processing chain to be handled without burdening the PC's CPU. This paper therefore describes a GNSS-R system developed on the principles of software-defined radio supported by General Purpose Graphics Processing Units (GPGPUs), and presents results from initial field tests which confirm the anticipated capability of the system.
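
    The computational core such a system offloads to the GPU is, in essence, a bank of correlations between received samples and local code replicas. Below is a minimal NumPy sketch of a single delay search; the code, delay, and noise model are invented for illustration, and a real receiver would repeat this per satellite and per Doppler bin (e.g. as batched FFTs on the GPU).

      import numpy as np

      n = 4000                                   # samples in one code period
      code = np.sign(np.random.randn(n))         # stand-in for a GNSS PRN replica
      true_delay = 137
      rx = np.roll(code, true_delay) + 0.5 * np.random.randn(n)  # noisy echo

      # Circular cross-correlation via FFT; the peak location gives the delay
      # of the reflected signal relative to the local replica.
      corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code)))
      print("estimated delay (samples):", int(np.argmax(np.abs(corr))))  # ~137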

  8. Assessment of changes in plasma hemoglobin and potassium levels in red cell units during processing and storage.

    Science.gov (United States)

    Saini, Nishant; Basu, Sabita; Kaur, Ravneet; Kaur, Jasbinder

    2015-06-01

    Red cell units undergo changes during storage and processing. The study was planned to assess plasma potassium, plasma hemoglobin, and percentage hemolysis during storage, and to determine the effects of outdoor blood collection and processing on those parameters. Blood collection in three types of blood storage bags was done: single CPDA bags (40 outdoor and 40 in-house collections), triple CPD + SAGM bags (40 in-house collections) and quadruple CPD + SAGM bags with integral leukoreduction filters (40 in-house collections). All bags were sampled on day 0 (day of collection), day 1 (after processing), day 7, day 14 and day 28 for measurement of percentage hemolysis and potassium levels in the plasma of bag contents. There was a statistically significant increase in percentage hemolysis, plasma hemoglobin and plasma potassium levels in all the groups during storage. Blood collection can be safely undertaken in outdoor blood donation camps even in hot summer months in monitored blood transport boxes. SAGM additive solution decreases red cell hemolysis and allows extended storage of red cells. Prestorage leukoreduction decreases red cell hemolysis and improves the quality of blood. Copyright © 2015 Elsevier Ltd. All rights reserved.
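
    For reference, percentage hemolysis in stored red cell units is conventionally computed from plasma (supernatant) hemoglobin, total hemoglobin, and hematocrit. The formula below is the standard transfusion-medicine definition rather than one quoted from this paper, and the example values are invented.

      def percent_hemolysis(plasma_hb, total_hb, hct):
          """Percent of total hemoglobin that is free in the plasma.

          plasma_hb and total_hb in the same units (e.g. g/dL); hct in %.
          The (100 - hct) factor corrects for the packed-cell volume.
          """
          return (100.0 - hct) * plasma_hb / total_hb

      # A unit with Hct 60%, total Hb 20 g/dL, supernatant Hb 0.1 g/dL:
      print(percent_hemolysis(0.1, 20.0, 60.0))  # -> 0.2 (%), below typical limits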

  9. How many schools adopt interviews during the student admission process across the health professions in the United States of America?

    Science.gov (United States)

    Glazer, Greer; Startsman, Laura F; Bankston, Karen; Michaels, Julia; Danek, Jennifer C; Fair, Malika

    2016-01-01

    Health profession schools use interviews during the admissions process to identify certain non-cognitive skills that are needed for success in diverse, inter-professional settings. This study aimed to assess the use of interviews during the student admissions process across health disciplines at schools in the United States of America in 2014. The type and frequency of non-cognitive skills assessed were also evaluated. Descriptive methods were used to analyze a sample of interview rubrics collected as part of a national survey on admissions in the health professions, which surveyed 228 schools of medicine, dentistry, pharmacy, nursing, and public health. Of the 228 schools, 130 used interviews. The most desirable non-cognitive skills from 34 schools were identified as follows: communication skills (30), motivation (22), readiness for the profession (17), service (12), and problem-solving (12). Ten schools reported using the multiple mini-interview format, which may indicate potential for expanding this practice. Disparities in the use of interviewing across health professions should be verified to help schools adopt interviews during student admissions processes.

  10. Optogenetic stimulation of lateral amygdala input to posterior piriform cortex modulates single-unit and ensemble odor processing

    Directory of Open Access Journals (Sweden)

    Benjamin eSadrian

    2015-12-01

    Olfactory information is synthesized within the olfactory cortex to provide not only an odor percept, but also a contextual significance that supports an appropriate behavioral response to specific odor cues. The piriform cortex serves as a communication hub within this circuit by sharing reciprocal connectivity with higher processing regions, such as the lateral entorhinal cortex and amygdala. The functional significance of these descending inputs on piriform cortical processing of odorants is currently not well understood. We have employed optogenetic methods to selectively stimulate lateral and basolateral amygdala (BLA) afferent fibers innervating the posterior piriform cortex (pPCX) in order to quantify BLA modulation of pPCX odor-evoked activity. Single-unit odor-evoked activity in anaesthetized BLA-infected animals was significantly modulated compared with recordings from control animals, with individual cells displaying either enhancement or suppression of odor-driven spiking. In addition, BLA activation induced a decorrelation of odor-evoked pPCX ensemble activity relative to odor alone. Together these results indicate a modulatory role for the BLA complex in pPCX odor processing, which could contribute to learned changes in PCX activity following associative conditioning.

  11. Health professionals in the process of vaccination against hepatitis B in two basic units of Belo Horizonte: a qualitative evaluation.

    Science.gov (United States)

    Lages, Annelisa Santos; França, Elisabeth Barboza; Freitas, Maria Imaculada de Fátima

    2013-06-01

    According to the Vaccine Coverage Survey performed in 2007, immunization coverage against hepatitis B in Belo Horizonte for infants under one year old was below the level proposed by the Brazilian National Program of Immunization. This vaccine was used as the basis for evaluating the involvement of health professionals in the vaccination process at two Basic Health Units (UBS, acronym in Portuguese) in the city. This qualitative study uses the notions of Social Representations Theory and the method of Structural Analysis of Narrative to carry out the interviews and data analysis. The results show flaws related to the control and use of the mirror card and to parent orientation, as well as to the monitoring of vaccination coverage (VC) and the use of VC data as input for planning health actions. It was observed that the working process in the UBS is focused on routine tasks, with little creativity on the part of the professionals, and includes representations that maintain a strong tendency to value activities focused on the health of individuals to the detriment of public health actions. In conclusion, faults in the vaccination process can be overcome with greater appreciation of everyday actions, much better use of local information about vaccination, and some necessary adjustments within the UBS to improve public health actions.

  12. Healthy Change Processes-A Diary Study of Five Organizational Units. Establishing a Healthy Change Feedback Loop.

    Science.gov (United States)

    Lien, Mathilde; Saksvik, Per Øystein

    2016-10-01

    This paper explores a change process in the Central Norway Regional Health Authority that was brought about by the implementation of a new economics and logistics system. The purpose of this paper is to contribute to understanding of how employees' attitudes towards change develop over time and how attitudes differ between the five health trusts under this authority. In this paper, we argue that a process-oriented focus through a longitudinal diary method, in addition to action research and feedback loops, will provide greater understanding of the evaluation of organizational change and interventions. This is explored through the assumption that different units will have different perspectives and attitudes towards the same intervention over time because of different contextual and time-related factors. The diary method aims to capture the context, events, reflections and interactions when they occur and allows for a nuanced frame of reference for the different phases of the implementation process and how these phases are perceived by employees. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units

    Science.gov (United States)

    Song, Chenchen; Martínez, Todd J.

    2017-10-01

    Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. The resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.
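
    For orientation, the opposite-spin MP2 energy that SOS-MP2 scales by an empirical factor is (textbook background, not quoted from this abstract):

        E_{\text{SOS-MP2}} = -\, c_{\text{OS}} \sum_{ijab} \frac{(ia|jb)^{2}}{\varepsilon_{a} + \varepsilon_{b} - \varepsilon_{i} - \varepsilon_{j}},

    where i, j run over occupied and a, b over virtual orbitals of opposite spin. THC replaces the two-electron integrals with a low-rank grid factorization,

        (ia|jb) \approx \sum_{PQ} X^{P}_{i} X^{P}_{a}\, Z_{PQ}\, X^{Q}_{j} X^{Q}_{b},

    which is what allows the energy and gradient contractions to be reorganized into the quartic- and cubic-scaling forms reported here.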

  14. A parallel approximate string matching under Levenshtein distance on graphics processing units using warp-shuffle operations.

    Directory of Open Access Journals (Sweden)

    ThienLuan Ho

    Approximate string matching with k-differences has a number of practical applications, ranging from pattern recognition to computational biology. This paper proposes an efficient memory-access algorithm for parallel approximate string matching with k-differences on Graphics Processing Units (GPUs). In the proposed algorithm, all threads in the same GPU warp share data using warp-shuffle operations instead of accessing shared memory. Moreover, we implement the proposed algorithm by exploiting the memory structure of GPUs to optimize its performance. Experimental results on real DNA packages revealed that the proposed algorithm and its implementation achieved speedups of up to 122.64 and 1.53 times over the sequential algorithm on a CPU and a previous parallel approximate string matching algorithm on GPUs, respectively.
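
    For context, the underlying k-differences computation is the classic semi-global edit-distance dynamic program (Sellers' algorithm). A plain CPU reference is sketched below, under the assumption that the paper's GPU kernels parallelize essentially this recurrence; the function name and example are illustrative, not the authors' code.

      def k_difference_matches(pattern, text, k):
          """Return end positions in `text` where `pattern` matches with
          edit distance <= k (insertions, deletions, substitutions)."""
          m = len(pattern)
          col = list(range(m + 1))      # DP column vs. the empty text prefix
          hits = []
          for j, c in enumerate(text, 1):
              prev_diag, col[0] = col[0], 0     # a match may start anywhere
              for i in range(1, m + 1):
                  cur = min(col[i] + 1,                         # deletion
                            col[i - 1] + 1,                     # insertion
                            prev_diag + (pattern[i - 1] != c))  # (mis)match
                  prev_diag, col[i] = col[i], cur
              if col[m] <= k:
                  hits.append(j)
          return hits

      print(k_difference_matches("abc", "xxabzcx", 1))   # -> [4, 5, 6]

    The column-by-column data flow is what makes the recurrence amenable to warp-level parallelism: each thread can own a band of rows and pass its boundary cells to neighbors, which is the role warp-shuffle operations play in the paper's GPU version.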

  15. Mexico in the United States: Analysis of the Processes that Shape the «Illegalized» Mexican Identity

    Directory of Open Access Journals (Sweden)

    María Pilar Tudela-Vázquez

    2017-02-01

    In 2006, migrant rights demonstrations in the United States became important scenarios of Mexican identity. This work approaches this phenomenon by analyzing, from a historical perspective, the processes involved in ascription to this identity within the US nation-state project, from parameters of subordinated belonging. For this purpose, three axes of analysis are proposed: (1) incorporating the production of external political identities as a constituent aspect of the national community, ascribed to the nation-state political model; (2) recognizing the current role of colonial heritage; and (3) incorporating the interrelation between the consolidation of a market economy and the legal production of a precarious and expendable workforce. The article's main aim is to address «illegality» as a dynamic sociopolitical space, rather than as a legal status, from which to produce new formulas of active citizenship.

  16. Access to the decision-making process: opportunities for public involvement in the facility decommissioning process of the United States Nuclear Regulatory Commission

    International Nuclear Information System (INIS)

    Cameron, F.X.

    1996-01-01

    This paper discusses recent initiatives taken by the United States Nuclear Regulatory Commission (NRC) to effectively involve the public in decommissioning decisions. Initiatives discussed include the Commission's rulemaking to establish the radiological criteria for decommissioning, as well as public involvement methods that have been used on a site-by-site basis. As an example of public involvement, the NRC is currently in the process of developing generic rules on the radiological criteria for the decontamination and decommissioning of NRC-licensed sites. Not only was this proposed rule developed through an extensive and novel approach to public involvement, but it also establishes the basic provisions that will govern public involvement in future NRC decisions on the decommissioning of individual sites. The aim is to provide the public with timely information about all phases of the decommissioning process and with opportunities to express concerns and make recommendations to the NRC staff. The NRC recognizes the value and the necessity of effective public involvement in its regulatory activities and has initiated a number of changes to its regulatory program to accomplish this. From the NRC's perspective, it is much easier and less costly to incorporate these mechanisms for public involvement into the regulatory program early in the process, rather than to try to add them after considerable public controversy over an action has already been generated. The historical antecedents for the initiatives mentioned, as well as 'lessons learned' from prior experience, are also discussed. (author)

  17. Solvent refined coal (SRC) process. Flashing of SRC-II slurry in the vacuum column on Process Development Unit P-99. Interim report, February-June 1980

    Energy Technology Data Exchange (ETDEWEB)

    Gray, J. A.; Mathias, S. T.

    1980-10-01

    This report presents the results of 73 tests on the vacuum flash system of Process Development Unit P-99 performed during processing of three different coals: the second batch, fourth shipment (low-ash batch) of Powhatan No. 5 Mine (LR-27383), Powhatan No. 6 Mine (LR-27596) and Ireland Mine (LR-27987). The objective of this work was to obtain experimental data for use in confirming and improving the design of the vacuum distillation column for the 6000 ton/day SRC-II Demonstration Plant. The 900°F distillate content of the bottoms and the percent of feed flashed overhead were correlated with flash zone operating conditions for each coal, and the observed differences in performance were attributed to differences in the feed compositions. Retrogressive reactions appeared to be occurring in the 900°F+ pyridine-soluble material, leading to an increase in the quantity of pyridine-insoluble organic matter. Stream physical properties determined include specific gravity, viscosity and melting point. Elemental, distillation and solvent analyses were used to calculate component material balances. The Technology and Materials Department has used these results in a separate study comparing experimental K-values and vapor/liquid split with CHAMP computer program design predictions.
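
    As background on how K-values determine a vapor/liquid split (the quantity compared here against CHAMP design predictions), a textbook Rachford-Rice isothermal flash is sketched below. The feed composition and K-values are invented illustrative numbers, not data from PDU P-99.

      # Textbook Rachford-Rice flash: given feed mole fractions z and
      # equilibrium ratios K_i = y_i / x_i, solve for V/F by bisection.
      def rachford_rice(z, K, tol=1e-10):
          """Molar fraction of feed flashed to vapor, assuming the feed
          actually splits into two phases (0 < V/F < 1)."""
          def f(beta):
              return sum(zi * (Ki - 1.0) / (1.0 + beta * (Ki - 1.0))
                         for zi, Ki in zip(z, K))
          lo, hi = 0.0, 1.0
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if f(mid) > 0.0:        # f decreases monotonically in beta
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      z = [0.40, 0.35, 0.25]   # light / middle / heavy pseudo-components
      K = [6.0, 1.2, 0.05]     # equilibrium ratios at the flash T and P
      print(f"fraction of feed flashed overhead: {rachford_rice(z, K):.3f}")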

  18. The regulation in the unitization process in the petroleum and natural gas exploration in Brazil; A regulacao no processo de unitization na exploracao de petroleo e gas natural no Brasil

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez, Felipe Alvite; Silva, Moises Espindola [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Engenharia de Petroleo; Bone, Rosemarie Broeker [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Engenharia de Producao

    2008-07-01

    This paper presents and analyses the unitization process in petroleum and natural gas exploration and production in Brazil, focusing on the regulatory aspects under Petroleum Law 9478/97. Considering the deficiencies and gaps in the existing regulation with respect to unitization, this work presents and discusses those unresolved points and, in a concise way, presents international unitization cases, applying their resolutions to Brazil where possible.

  19. An Illustration of the Corrective Action Process, The Corrective Action Management Unit at Sandia National Laboratories/New Mexico

    International Nuclear Information System (INIS)

    Irwin, M.; Kwiecinski, D.

    2002-01-01

    Corrective Action Management Units (CAMUs) were established by the Environmental Protection Agency (EPA) to streamline the remediation of hazardous waste sites. Streamlining involved providing cost-saving measures for the treatment, storage, and safe containment of the wastes. To expedite cleanup and remove disincentives, EPA designed 40 CFR 264 Subpart S to be flexible. At the heart of this flexibility are the provisions for CAMUs and Temporary Units (TUs). CAMUs and TUs were created to remove cleanup disincentives resulting from other Resource Conservation and Recovery Act (RCRA) hazardous waste provisions -- specifically, RCRA land disposal restrictions (LDRs) and minimum technology requirements (MTRs). Although LDR and MTR provisions were not intended for remediation activities, LDRs and MTRs apply to corrective actions because hazardous wastes are generated. However, management of RCRA hazardous remediation wastes in a CAMU or TU is not subject to these stringent requirements. The CAMU at Sandia National Laboratories in Albuquerque, New Mexico (SNL/NM) was proposed through an interactive process involving the regulators (EPA and the New Mexico Environment Department), DOE, SNL/NM, and stakeholders. The CAMU at SNL/NM has been accepting waste from the nearby Chemical Waste Landfill remediation since January 1999. During this time, a number of unique techniques have been implemented to save costs, improve health and safety, and provide the best value and management practices. This presentation will take the audience through the corrective action process implemented at the CAMU facility, from the selection of the CAMU site to permitting and construction, waste management, waste treatment, and final waste placement. The presentation will highlight the key advantages that CAMUs and TUs offer in the corrective action process. These advantages include yielding a practical approach to regulatory compliance, expediting efficient remediation and site closure, and realizing

  20. Understanding micro-processes of institutionalization: stewardship contracting and national forest management

    Science.gov (United States)

    Cassandra Moseley; Susan Charnley

    2014-01-01

    This paper examines micro-processes of institutionalization, using the case of stewardship contracting within the US Forest Service. Our basic premise is that, until a new policy becomes an everyday practice among local actors, it will not become institutionalized at the macro-scale. We find that micro-processes of institutionalization are driven by a mixture of large-...