WorldWideScience

Sample records for macroscale unit processes

  1. Micro- and macroscale coefficients of friction of cementitious materials

    International Nuclear Information System (INIS)

    Lomboy, Gilson; Sundararajan, Sriram; Wang, Kejin

    2013-01-01

    Millions of metric tons of cementitious materials are produced, transported and used in construction each year. The ease or difficulty of handling cementitious materials is greatly influenced by the material friction properties. In the present study, the coefficients of friction of cementitious materials were measured at the microscale and macroscale. The materials tested were commercially-available Portland cement, Class C fly ash, and ground granulated blast furnace slag. At the microscale, the coefficient of friction was determined from the interaction forces between cementitious particles using an Atomic Force Microscope. At the macroscale, the coefficient of friction was determined from stresses on bulk cementitious materials under direct shear. The study indicated that the microscale coefficient of friction ranged from 0.020 to 0.059, and the macroscale coefficient of friction ranged from 0.56 to 0.75. The fly ash studied had the highest microscale coefficient of friction and the lowest macroscale coefficient of friction.

    Highlights:
    • Microscale (interparticle) coefficient of friction (COF) was determined with AFM.
    • Macroscale (bulk) COF was measured under direct shear.
    • Fly ash had the highest microscale COF and the lowest macroscale COF.
    • Portland cement against GGBFS had the lowest microscale COF.
    • Portland cement against Portland cement had the highest macroscale COF.
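
    The gap between the microscale values (0.020-0.059) and the macroscale values (0.56-0.75) is easiest to see in how each quantity is computed. Below is a minimal Python sketch of both calculations; the numbers and the linear-fit approach are illustrative assumptions, not the paper's data or exact procedure.

        import numpy as np

        # Hypothetical AFM data: lateral (friction) force vs. applied normal load, in nN.
        normal_load_nN = np.array([10.0, 20.0, 40.0, 60.0, 80.0])
        friction_force_nN = np.array([0.45, 0.95, 1.70, 2.55, 3.30])

        # Microscale COF: slope of friction force vs. load (F = mu*N + F0, where F0
        # absorbs the adhesion contribution between particles).
        mu_micro, adhesion_offset = np.polyfit(normal_load_nN, friction_force_nN, 1)

        # Macroscale COF from direct shear on the bulk powder: slope of shear stress
        # at failure vs. applied normal stress, in kPa.
        normal_stress_kPa = np.array([50.0, 100.0, 200.0])
        shear_stress_kPa = np.array([31.0, 63.0, 128.0])
        mu_macro = np.polyfit(normal_stress_kPa, shear_stress_kPa, 1)[0]

        print(f"microscale COF ~ {mu_micro:.3f}, macroscale COF ~ {mu_macro:.2f}")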

  2. Multi-unit Integration in Microfluidic Processes: Current Status and Future Horizons

    Directory of Open Access Journals (Sweden)

    Pratap R. Patnaik

    2011-07-01

    Microfluidic processes, mainly for biological and chemical applications, have expanded rapidly in recent years. While the initial focus was on single units, principally microreactors, technological and economic considerations have caused a shift to integrated microchips in which a number of microdevices function coherently. These integrated devices have many advantages over conventional macro-scale processes. However, the small scale of operation, complexities in the underlying physics and chemistry, and differences in the time constants of the participating units, in the interactions among them and in the outputs of interest make it difficult to design and optimize integrated microprocesses. These aspects are discussed here, current research and applications are reviewed, and possible future directions are considered.

  3. Scaling up: Assessing social impacts at the macro-scale

    International Nuclear Information System (INIS)

    Schirmer, Jacki

    2011-01-01

    Social impacts occur at various scales, from the micro-scale of the individual to the macro-scale of the community. Identifying the macro-scale social changes that result from an impacting event is a common goal of social impact assessment (SIA), but is challenging as multiple factors simultaneously influence social trends at any given time, and there are usually only a small number of cases available for examination. While some methods have been proposed for establishing the contribution of an impacting event to macro-scale social change, they remain relatively untested. This paper critically reviews methods recommended to assess macro-scale social impacts, and proposes and demonstrates a new approach. The 'scaling up' method involves developing a chain of logic linking change at the individual/site scale to the community scale. It enables a more problematised assessment of the likely contribution of an impacting event to macro-scale social change than previous approaches. The use of this approach in a recent study of change in dairy farming in south east Australia is described.

  4. Investigation of Micro- and Macro-Scale Transport Processes for Improved Fuel Cell Performance

    Energy Technology Data Exchange (ETDEWEB)

    Gu, Wenbin [General Motors LLC, Pontiac, MI (United States)

    2014-08-29

    This report documents the work performed by General Motors (GM) under Cooperative Agreement No. DE-EE0000470, “Investigation of Micro- and Macro-Scale Transport Processes for Improved Fuel Cell Performance,” in collaboration with Penn State University (PSU), the University of Tennessee Knoxville (UTK), Rochester Institute of Technology (RIT), and the University of Rochester (UR) via subcontracts. The overall objectives of the project are to investigate and synthesize fundamental understanding of transport phenomena at both the macro- and micro-scales for the development of a down-the-channel model that accounts for all transport domains in a broad operating space. GM, as the prime contractor, focused on cell-level experiments and modeling, and the universities, as subcontractors, worked toward a fundamental understanding of each component and associated interface.

  5. Characteristics of soil water retention curve at macro-scale

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Scale-adaptable hydrological models have attracted more and more attention in the hydrological modeling research community, and the constitutive relationship at the macro-scale is one of the most important issues, on which there has not yet been enough research. Taking a constitutive relationship of soil water movement, the soil water retention curve (SWRC), as an example, this study extends the definition of the SWRC at the micro-scale to the macro-scale, and aided by the Monte Carlo method we demonstrate that soil property and the spatial distribution of soil moisture greatly affect the features of the SWRC. Furthermore, we assume that the spatial distribution of soil moisture is the result of self-organization of climate, soil, ground water and soil water movement under the specific boundary conditions, and we also carry out numerical experiments of soil water movement in the vertical direction in order to explore the relationship between the SWRC at the macro-scale and the combinations of climate, soil, and groundwater. The results show that SWRCs at the macro-scale and micro-scale present totally different features, e.g., an essential hysteresis phenomenon which is exaggerated with increasing aridity index and rising groundwater table. Soil property plays an important role in the shape of the SWRC, which can even take a rectangular shape under drier conditions, and the power-function form of the SWRC widely adopted in hydrological models might need to be revised for most situations at the macro-scale.
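
    The "power-function form of the SWRC" that the abstract says may need revision at the macro-scale is typified by the Brooks-Corey relation. A small Python sketch under that common textbook form, with illustrative parameter values that are not from the study:

        import numpy as np

        def brooks_corey(psi, psi_b=0.1, lam=0.4, theta_r=0.05, theta_s=0.45):
            """Brooks-Corey power-function SWRC: water content vs. suction head psi (m).

            psi_b: air-entry (bubbling) pressure head; lam: pore-size index;
            theta_r/theta_s: residual and saturated volumetric water contents.
            """
            Se = np.where(psi > psi_b, (psi / psi_b) ** (-lam), 1.0)  # effective saturation
            return theta_r + Se * (theta_s - theta_r)

        suction = np.logspace(-2, 2, 5)  # suction heads from 0.01 m to 100 m
        print(brooks_corey(suction))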

  6. Cortical chemoarchitecture shapes macroscale effective functional connectivity patterns in macaque cerebral cortex

    NARCIS (Netherlands)

    Turk, Elise; Scholtens, Lianne H.; van den Heuvel, Martijn P.

    The mammalian cortex is a complex system of interconnected neurons at the microscale and interconnected areas at the macroscale, forming the infrastructure for local and global neural processing and information integration. While the effects of regional chemoarchitecture on local

  7. The correlation between gelatin macroscale differences and nanoparticle properties: providing insight into biopolymer variability.

    Science.gov (United States)

    Stevenson, André T; Jankus, Danny J; Tarshis, Max A; Whittington, Abby R

    2018-05-21

    From therapeutic delivery to sustainable packaging, manipulation of biopolymers into nanostructures imparts biocompatibility to numerous materials with minimal environmental pollution during processing. While biopolymers are appealing natural based materials, the lack of nanoparticle (NP) physicochemical consistency has decreased their nanoscale translation into actual products. Insights regarding the macroscale and nanoscale property variation of gelatin, one of the most common biopolymers already utilized in its bulk form, are presented. Novel correlations between macroscale and nanoscale properties were made by characterizing similar gelatin rigidities obtained from different manufacturers. Samples with significant differences in clarity, indicating sample purity, showed the largest deviations in NP diameter. Furthermore, a statistically significant positive correlation between macroscale molecular weight dispersity and NP diameter was determined. New theoretical calculations proposing the limited number of gelatin chains that can aggregate and subsequently be crosslinked for NP formation were presented as one possible reason to substantiate the correlation analysis. NP charge and crosslinking extent were also related to diameter. Lower gelatin sample molecular weight dispersities produced statistically smaller average diameters (<75 nm), and higher average electrostatic charges (∼30 mV) and crosslinking extents (∼95%), which were independent of gelatin rigidity, conclusions not previously reported in the literature. This study demonstrates that the molecular weight composition of the starting material is one significant factor affecting gelatin nanoscale properties and must be characterized prior to NP preparation. Identifying gelatin macroscale and nanoscale correlations offers a route toward greater physicochemical property control and reproducibility of new NP formulations for translation to industry.

  8. Macroscale tribological properties of fluorinated graphene

    Science.gov (United States)

    Matsumura, Kento; Chiashi, Shohei; Maruyama, Shigeo; Choi, Junho

    2018-02-01

    Because graphene is a carbon material with excellent mechanical characteristics, it is a promising candidate for ultrathin lubricating protective films for machine elements. The durability of graphene strongly depends on the number of layers and the load scale. For use in ultrathin lubricating protective films for machine elements, it is also necessary to maintain low friction and high durability under macroscale loads in the atmosphere. In this study, we modified the surfaces of both monolayer and multilayer graphene by fluorine plasma treatment and examined the friction properties and durability of the fluorinated graphene under macroscale loads. The durability of both monolayer and multilayer graphene was improved by the surface fluorination owing to the reduction of adhesion forces between the friction interfaces. This occurs because the carbon film containing fluorine is transferred to the friction-mating material, and thus friction acts between two carbon films containing fluorine. On the other hand, the friction coefficient of the multilayer graphene decreased from 0.20 to 0.15 with the fluorine plasma treatment, whereas that of the monolayer graphene increased from 0.21 to 0.27. It is considered that, in the monolayer graphene, the change of the surface structure had a stronger influence on the friction coefficient than in the multilayer graphene, and the friction coefficient increased mainly due to the increase in defects on the graphene surface caused by the fluorine plasma treatment.

  9. Derivation of a macroscale formulation for a class of nonlinear partial differential equations

    International Nuclear Information System (INIS)

    Pantelis, G.

    1995-05-01

    A macroscale formulation is constructed from a system of partial differential equations which govern the microscale dependent variables. The construction is based upon the requirement that the solutions of the macroscale partial differential equations satisfy, in some approximate sense, the system of partial differential equations associated with the microscale. These results are restricted to the class of nonlinear partial differential equations which can be expressed as polynomials of the dependent variables and their partial derivatives up to second order. A linear approximation of transformations of second order contact manifolds is employed. 6 refs
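
    Schematically (an assumed notation, not the paper's own), the construction concerns systems of the following form, where F is polynomial in the dependent variable and its partial derivatives up to second order, and the macroscale field U must satisfy the microscale system approximately:

        \begin{align*}
          F\bigl(x,\; u,\; \partial_i u,\; \partial_i \partial_j u\bigr) &= 0
              && \text{(microscale system, $F$ polynomial in $u$ and its derivatives)}\\
          F\bigl(x,\; U,\; \partial_i U,\; \partial_i \partial_j U\bigr) &= O(\varepsilon)
              && \text{(macroscale formulation, satisfied in an approximate sense)}
        \end{align*}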

  10. Macroscale and Nanoscale Morphology Evolution during in Situ Spray Coating of Titania Films for Perovskite Solar Cells.

    Science.gov (United States)

    Su, Bo; Caller-Guzman, Herbert A; Körstgens, Volker; Rui, Yichuan; Yao, Yuan; Saxena, Nitin; Santoro, Gonzalo; Roth, Stephan V; Müller-Buschbaum, Peter

    2017-12-20

    Mesoporous titania is a cheap and widely used material for photovoltaic applications. To enable large-scale fabrication and a controllable pore size, we combined a block copolymer-assisted sol-gel route with spray coating to fabricate titania films, in which the block copolymer polystyrene-block-poly(ethylene oxide) (PS-b-PEO) is used as a structure-directing template. Both the macroscale and the nanoscale are studied. The kinetics and thermodynamics of the spray deposition processes are simulated on the macroscale and show good agreement with the large-scale morphology of the spray-coated films obtained in practice. On the nanoscale, the structure evolution of the titania films is probed with in situ grazing incidence small-angle X-ray scattering (GISAXS) during the spray process. The changes of the PS domain size depend not only on micellization but also on solvent evaporation during the spray coating. Perovskite (CH3NH3PbI3) solar cells (PSCs) based on the sprayed titania film are fabricated, which showcases the suitability of spray-deposited titania films for PSCs.

  11. Micro- to macroscale perspectives on space plasmas

    International Nuclear Information System (INIS)

    Eastman, T.E.

    1993-01-01

    The Earth's magnetosphere is the most accessible of natural collisionless plasma environments; an astrophysical plasma "laboratory." Magnetospheric physics has been in an exploration phase since its origin 35 years ago, but new coordinated, multipoint observations, theory, modeling, and simulations are moving this highly interdisciplinary field of plasma science into a new phase of synthesis and understanding. Plasma systems are ones in which binary collisions are relatively negligible and collective behavior beyond the microscale emerges. Most readily accessible natural plasma systems are collisional, and nearest-neighbor classical interactions compete with longer-range plasma effects. Except for stars, however, most space plasmas are collisionless, and the effects of electrodynamic coupling dominate. Basic physical processes in such collisionless plasmas occur at micro-, meso-, and macroscales that are not merely reducible to one another in certain crucial ways, as illustrated by the global coupling of the Earth's magnetosphere and by the nonlinear dynamics of charged particle motion in the magnetotail. Such global coupling and coherence makes the geospace environment, the domain of solar-terrestrial science, the most highly coupled of all physical geospheres.

  12. Advancing Tissue Engineering: A Tale of Nano-, Micro-, and Macroscale Integration

    NARCIS (Netherlands)

    Leijten, Jeroen Christianus Hermanus; Rouwkema, Jeroen; Zhang, Y.S.; Nasajpour, A.; Dokmeci, M.R.; Khademhosseini, A.

    2016-01-01

    Tissue engineering has the potential to revolutionize the health care industry. Delivering on this promise requires the generation of efficient, controllable and predictable implants. The integration of nano- and microtechnologies into macroscale regenerative biomaterials plays an essential role in

  13. Quantum manifestation of systems on the macro-scale – the concept ...

    Indian Academy of Sciences (India)

    Keywords: transition amplitude; inelastic scattering; macro-scale quantum effects. Only fragments of the abstract survive in this record; they mention a large wavelength of ∼5 cm for typical parameters (electron energy ε ∼ 1 keV) and a role as the generator of the transition amplitude wave at its position.

  14. Macroscale particle simulation of externally driven magnetic reconnection

    International Nuclear Information System (INIS)

    Murakami, Sadayoshi; Sato, Tetsuya.

    1991-09-01

    Externally driven reconnection, assuming an anomalous particle collision model, is numerically studied by means of a 2.5D macroscale particle simulation code in which the field and particle motions are solved self-consistently. Explosive magnetic reconnection and energy conversion are observed as a result of slow shock formation. Electron and ion distribution functions exhibit large bulk acceleration and heating of the plasma. Simulation runs with different collision parameters suggest that the development of reconnection, particle acceleration and heating do not significantly depend on the parameters of the collision model. (author)

  15. Micro and Macroscale Drivers of Nutrient Concentrations in Urban Streams in South, Central and North America.

    Science.gov (United States)

    Loiselle, Steven A; Gasparini Fernandes Cunha, Davi; Shupe, Scott; Valiente, Elsa; Rocha, Luciana; Heasley, Eleanore; Belmont, Patricia Pérez; Baruch, Avinoam

    Global metrics of land cover and land use provide a fundamental basis to examine the spatial variability of human-induced impacts on freshwater ecosystems. However, microscale processes and site-specific conditions related to bank vegetation, pollution sources, adjacent land use and water uses can have important influences on ecosystem conditions, in particular in smaller tributary rivers. Compared to larger order rivers, these low-order streams and rivers are more numerous, yet often under-monitored. The present study explored the relationship of nutrient concentrations in 150 streams in 57 hydrological basins in South, Central and North America (Buenos Aires, Curitiba, São Paulo, Rio de Janeiro, Mexico City and Vancouver) with macroscale information available from global datasets and microscale data acquired by trained citizen scientists. Average sub-basin phosphate (P-PO4) concentrations were found to be well correlated with sub-basin attributes on both macro and micro scales, while the relationships between sub-basin attributes and nitrate (N-NO3) concentrations were limited. A phosphate threshold for eutrophic conditions (>0.1 mg L-1 P-PO4) was exceeded in basins where microscale point source discharge points (e.g., residential, industrial, urban/road) were identified in more than 86% of stream reaches monitored by citizen scientists. The presence of bankside vegetation covaried (rho = -0.53) with lower phosphate concentrations in the ecosystems studied. Macroscale information on nutrient loading allowed for a strong separation between basins with and without eutrophic conditions. Most importantly, combining the macroscale and microscale information increased our ability to explain sub-basin variability of P-PO4 concentrations. The identification of microscale point sources and bank vegetation conditions by citizen scientists provided important information that local authorities could use to improve their management of lower order river ecosystems.

  16. Effect of fiber geometry on macroscale friction of ordered low-density polyethylene nanofiber arrays.

    Science.gov (United States)

    Lee, Dae Ho; Kim, Yongkwan; Fearing, Ronald S; Maboudian, Roya

    2011-09-06

    Ordered low-density polyethylene (LDPE) nanofiber arrays are fabricated from silicon nanowire (SiNW) templates synthesized by a simple wet-chemical process based on metal-assisted electroless etching combined with colloidal lithography. The geometrical effect of nanofibrillar structures on their macroscale friction is investigated over a wide range of diameters and lengths under the same fiber density. The optimum geometry for contacting a smooth glass surface is presented with discussions on the compromise between fiber tip-contact area and fiber compliance. A friction design map is developed, which shows that the theoretical optimum design condition agrees well with the LDPE nanofiber geometries exhibiting high measured friction.

  17. Macroscale and microscale fracture toughness of microporous sintered Ag for applications in power electronic devices

    International Nuclear Information System (INIS)

    Chen, Chuantong; Nagao, Shijo; Suganuma, Katsuaki; Jiu, Jinting; Sugahara, Tohru; Zhang, Hao; Iwashige, Tomohito; Sugiura, Kazuhiko; Tsuruta, Kazuhiro

    2017-01-01

    The application of microporous sintered silver (Ag) as a bonding material to replace conventional die-bonding materials in power electronic devices has attracted considerable interest. Characterization of the mechanical properties of microporous Ag will enable its use in applications such as lead-free solder electronics and provide a fundamental understanding of its design principles. However, the material typically suffers from thermal and mechanical stress during its production, fabrication, and service. In this work, we have studied the effect of microporous Ag specimen size on fracture toughness from the microscale to the macroscale. A focused ion beam was used to fabricate 20-, 10- and 5-μm-wide microscale specimens, which were of the same order of magnitude as the pore networks in the microporous Ag. Micro-cantilever bending tests revealed that fracture toughness decreased as the specimen size decreased. Conventional middle-cracked tensile tests were performed to determine the fracture toughness of the macroscale specimens. The microscale and macroscale fracture toughness results showed a clear size effect, which is discussed in terms of both the deformation behavior of the crack tip and the influence of pore networks within Ag for different specimen sizes. Finite element model simulations showed that stress at the crack tip increased as the specimen size increased, which led to larger plastic deformation and more energy being consumed when the specimen fractured.

  18. Macroscale hydrologic modeling of ecologically relevant flow metrics

    Science.gov (United States)

    Wenger, Seth J.; Luce, Charles H.; Hamlet, Alan F.; Isaak, Daniel J.; Neville, Helen M.

    2010-09-01

    Stream hydrology strongly affects the structure of aquatic communities. Changes to air temperature and precipitation driven by increased greenhouse gas concentrations are shifting the timing and volume of streamflows, potentially affecting these communities. The variable infiltration capacity (VIC) macroscale hydrologic model has been employed at regional scales to describe and forecast hydrologic changes, but has been calibrated and applied mainly to large rivers. An important question is how well VIC runoff simulations serve to answer questions about hydrologic changes in smaller streams, which are important habitat for many fish species. To answer this question, we aggregated gridded VIC outputs within the drainage basins of 55 streamflow gages in the Pacific Northwest United States and compared modeled hydrographs and summary metrics to observations. For most streams, several ecologically relevant aspects of the hydrologic regime were accurately modeled, including center of flow timing, mean annual and summer flows, and the frequency of winter floods. Frequencies of high and low flows in the summer were not well predicted, however. Predictions were worse for sites with strong groundwater influence, and some sites showed errors that may result from limitations in the forcing climate data. Higher resolution (1/16th degree) modeling provided small improvements over lower resolution (1/8th degree). Despite some limitations, the VIC model appears capable of representing several ecologically relevant hydrologic characteristics in streams, making it a useful tool for understanding the effects of hydrology in delimiting species distributions and predicting the potential effects of climate shifts on aquatic organisms.
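
    One of the metrics named above, center of flow timing, is commonly defined as the flow-weighted mean day of the water year. A short Python sketch under that common definition; the toy hydrograph is invented, not gage or VIC data:

        import numpy as np

        def center_of_timing(daily_flow):
            """Center of flow timing (CT): flow-weighted mean day of the water year."""
            days = np.arange(1, len(daily_flow) + 1)
            return np.sum(days * daily_flow) / np.sum(daily_flow)

        # Illustrative snowmelt-dominated hydrograph peaking in late spring.
        t = np.arange(365)
        flow = 1.0 + 9.0 * np.exp(-0.5 * ((t - 230) / 25.0) ** 2)
        print(f"CT = day {center_of_timing(flow):.0f} of the water year")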

  19. Macroscale implicit electromagnetic particle simulation of magnetized plasmas

    International Nuclear Information System (INIS)

    Tanaka, Motohiko.

    1988-01-01

    An electromagnetic, multi-dimensional macroscale particle simulation code (MACROS) is presented which enables large time- and spatial-scale kinetic simulations of magnetized plasmas. Particle ions, finite-mass electrons with the guiding-center approximation, and a complete set of Maxwell equations are employed. Implicit field-particle coupled equations are derived in which a time-decentered (slightly backward) finite difference scheme is used to achieve stability for large time and spatial scales. It is shown analytically that the present simulation scheme suppresses high frequency electromagnetic waves and that it accurately reproduces low frequency waves in the plasma. These properties are verified by numerical examination of eigenmodes in a 2-D thermal equilibrium plasma and of the kinetic Alfven wave. (author)
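
    For orientation, a generic time-decentered update of the kind described, written here for the Ampere-Maxwell equation in Gaussian units; this is a schematic of decentered differencing in implicit particle codes, not the MACROS equations themselves:

        % Decentering parameter \theta: \theta = 1/2 is time-centered, while
        % \theta > 1/2 damps the high-frequency modes the abstract mentions.
        \begin{align*}
          \frac{\mathbf{E}^{n+1}-\mathbf{E}^{n}}{\Delta t}
            &= c\,\nabla\times\mathbf{B}^{\,n+\theta} - 4\pi\,\mathbf{J}^{\,n+\theta},\\
          \mathbf{A}^{\,n+\theta} &\equiv \theta\,\mathbf{A}^{n+1} + (1-\theta)\,\mathbf{A}^{n},
          \qquad \tfrac{1}{2}\le\theta\le 1 .
        \end{align*}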

  20. Data-Science Analysis of the Macro-scale Features Governing the Corrosion to Crack Transition in AA7050-T7451

    Science.gov (United States)

    Co, Noelle Easter C.; Brown, Donald E.; Burns, James T.

    2018-05-01

    This study applies data science approaches (random forest and logistic regression) to determine the extent to which macro-scale corrosion damage features govern the crack formation behavior in AA7050-T7451. Each corrosion morphology has a set of corresponding predictor variables (pit depth, volume, area, diameter, pit density, total fissure length, surface roughness metrics, etc.) describing the shape of the corrosion damage. The values of the predictor variables are obtained from white light interferometry, x-ray tomography, and scanning electron microscope imaging of the corrosion damage. A permutation test is employed to assess the significance of the logistic and random forest model predictions. Results indicate minimal relationship between the macro-scale corrosion feature predictor variables and fatigue crack initiation. These findings suggest that the macro-scale corrosion features and their interactions do not solely govern the crack formation behavior. While these results do not imply that the macro-features have no impact, they do suggest that additional parameters must be considered to rigorously inform the crack formation location.
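
    A compact Python sketch of the general recipe described, a classifier plus a label-permutation significance test, using scikit-learn; the feature matrix, labels, and AUC metric are stand-ins, not the paper's dataset or settings:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Stand-in data: rows = corrosion morphologies, columns = macro-scale predictors
        # (pit depth, volume, area, ...); y = whether a fatigue crack formed there (0/1).
        X = rng.normal(size=(120, 8))
        y = rng.integers(0, 2, size=120)

        def cv_auc(X, y):
            model = RandomForestClassifier(n_estimators=200, random_state=0)
            return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

        observed = cv_auc(X, y)

        # Permutation test: shuffle the labels to build a null distribution of AUC.
        null = np.array([cv_auc(X, rng.permutation(y)) for _ in range(100)])
        p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
        print(f"observed AUC = {observed:.3f}, permutation p = {p_value:.3f}")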

  1. Line-scan macro-scale Raman chemical imaging for authentication of powdered foods and ingredients

    Science.gov (United States)

    Adulteration and fraud for powdered foods and ingredients are rising food safety risks that threaten consumers’ health. In this study, a newly developed line-scan macro-scale Raman imaging system using a 5 W 785 nm line laser as excitation source was used to authenticate the food powders. The system...

  2. The statistical power to detect cross-scale interactions at macroscales

    Science.gov (United States)

    Wagner, Tyler; Fergus, C. Emi; Stow, Craig A.; Cheruvelil, Kendra S.; Soranno, Patricia A.

    2016-01-01

    Macroscale studies of ecological phenomena are increasingly common because stressors such as climate and land-use change operate at large spatial and temporal scales. Cross-scale interactions (CSIs), where ecological processes operating at one spatial or temporal scale interact with processes operating at another scale, have been documented in a variety of ecosystems and contribute to complex system dynamics. However, studies investigating CSIs are often dependent on compiling multiple data sets from different sources to create multithematic, multiscaled data sets, which results in structurally complex, and sometimes incomplete data sets. The statistical power to detect CSIs needs to be evaluated because of their importance and the challenge of quantifying CSIs using data sets with complex structures and missing observations. We studied this problem using a spatially hierarchical model that measures CSIs between regional agriculture and its effects on the relationship between lake nutrients and lake productivity. We used an existing large multithematic, multiscaled database, the LAke multi-scaled GeOSpatial and temporal database (LAGOS), to parameterize the power analysis simulations. We found that the power to detect CSIs was more strongly related to the number of regions in the study rather than the number of lakes nested within each region. CSI power analyses will not only help ecologists design large-scale studies aimed at detecting CSIs, but will also focus attention on CSI effect sizes and the degree to which they are ecologically relevant and detectable with large data sets.
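
    The power analysis described can be mimicked with a small Monte Carlo experiment: simulate lakes nested in regions, let a region-level covariate modify the lake-level nutrient-productivity slope, fit a mixed model, and count detections. A hedged Python sketch; the effect sizes, variances and model are invented simplifications, not LAGOS or the paper's specification:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)

        def simulate_power(n_regions, n_lakes, csi_effect=0.15, n_sims=100, alpha=0.05):
            """Fraction of simulated datasets in which the CSI term is detected."""
            hits = 0
            for _ in range(n_sims):
                region = np.repeat(np.arange(n_regions), n_lakes)
                agri = np.repeat(rng.normal(size=n_regions), n_lakes)  # regional agriculture
                tp = rng.normal(size=n_regions * n_lakes)              # lake nutrients
                u = np.repeat(rng.normal(scale=0.3, size=n_regions), n_lakes)  # region effects
                chl = 0.5 * tp + csi_effect * agri * tp + u + rng.normal(size=tp.size)
                df = pd.DataFrame(dict(chl=chl, tp=tp, agri=agri, region=region))
                fit = smf.mixedlm("chl ~ tp * agri", df, groups="region").fit(reml=False)
                hits += fit.pvalues["tp:agri"] < alpha
            return hits / n_sims

        # Consistent with the paper's finding, power grows faster with more regions
        # than with more lakes nested within each region.
        print(simulate_power(n_regions=30, n_lakes=20))
        print(simulate_power(n_regions=60, n_lakes=10))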

  3. Creating multithemed ecological regions for macroscale ecology: Testing a flexible, repeatable, and accessible clustering method

    Science.gov (United States)

    Cheruvelil, Kendra Spence; Yuan, Shuai; Webster, Katherine E.; Tan, Pang-Ning; Lapierre, Jean-Francois; Collins, Sarah M.; Fergus, C. Emi; Scott, Caren E.; Norton Henry, Emily; Soranno, Patricia A.; Filstrup, Christopher T.; Wagner, Tyler

    2017-01-01

    Understanding broad-scale ecological patterns and processes often involves accounting for regional-scale heterogeneity. A common way to do so is to include ecological regions in sampling schemes and empirical models. However, most existing ecological regions were developed for specific purposes, using a limited set of geospatial features and irreproducible methods. Our study purpose was to: (1) describe a method that takes advantage of recent computational advances and increased availability of regional and global data sets to create customizable and reproducible ecological regions, (2) make this algorithm available for use and modification by others studying different ecosystems, variables of interest, study extents, and macroscale ecology research questions, and (3) demonstrate the power of this approach for the research question—How well do these regions capture regional-scale variation in lake water quality? To achieve our purpose we: (1) used a spatially constrained spectral clustering algorithm that balances geospatial homogeneity and region contiguity to create ecological regions using multiple terrestrial, climatic, and freshwater geospatial data for 17 northeastern U.S. states (~1,800,000 km2); (2) identified which of the 52 geospatial features were most influential in creating the resulting 100 regions; and (3) tested the ability of these ecological regions to capture regional variation in water nutrients and clarity for ~6,000 lakes. We found that: (1) a combination of terrestrial, climatic, and freshwater geospatial features influenced region creation, suggesting that the oft-ignored freshwater landscape provides novel information on landscape variability not captured by traditionally used climate and terrestrial metrics; and (2) the delineated regions captured macroscale heterogeneity in ecosystem properties not included in region delineation—approximately 40% of the variation in total phosphorus and water clarity among lakes was at the regional
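
    One simple way to realize a spatially constrained spectral clustering, masking a feature-similarity kernel with a spatial adjacency graph before the spectral partitioning, is sketched below in Python. This illustrates the idea of balancing geospatial homogeneity against region contiguity; it is not the study's exact algorithm, data, or tuning:

        import numpy as np
        from sklearn.cluster import SpectralClustering
        from sklearn.metrics.pairwise import rbf_kernel
        from sklearn.neighbors import kneighbors_graph

        rng = np.random.default_rng(2)

        # Stand-in inputs: coordinates of map units and their standardized geospatial
        # features (terrestrial, climatic, and freshwater variables).
        coords = rng.uniform(size=(500, 2))
        features = rng.normal(size=(500, 12))

        # Feature similarity, masked by a k-nearest-neighbour spatial graph so that
        # only spatially adjacent units share affinity.
        similarity = rbf_kernel(features, gamma=1.0 / features.shape[1])
        adjacency = kneighbors_graph(coords, n_neighbors=8, include_self=True).toarray()
        affinity = similarity * np.maximum(adjacency, adjacency.T)  # keep it symmetric

        regions = SpectralClustering(n_clusters=10, affinity="precomputed",
                                     random_state=0).fit_predict(affinity)
        print(np.bincount(regions))  # sizes of the resulting regions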

  4. Modelling PM10 aerosol data from the Qalabotjha low-smoke fuels macro-scale experiment in South Africa

    CSIR Research Space (South Africa)

    Engelbrecht, JP

    2000-03-30

    Low-smoke fuels for combustion in cooking and heating appliances are being considered to mitigate human exposure to D-grade coal combustion emissions. In 1997, South Africa's Department of Minerals and Energy conducted a macro-scale experiment to test three brands of low-smoke fuel...

  5. Nondestructive chemical imaging of wood at the micro-scale: advanced technology to complement macro-scale evaluations

    Science.gov (United States)

    Barbara L. Illman; Julia Sedlmair; Miriam Unger; Carol Hirschmugl

    2013-01-01

    Chemical images aid understanding of wood properties, durability, and cell wall deconstruction for conversion of lignocellulose to biofuels, nanocellulose and other value-added chemicals in forest biorefineries. We describe here a new method for nondestructive chemical imaging of wood and wood-based materials at the micro-scale to complement macro-scale methods based...

  6. Macroscale porous carbonized polydopamine-modified cotton textile for application as electrode in microbial fuel cells

    Science.gov (United States)

    Zeng, Lizhen; Zhao, Shaofei; He, Miao

    2018-02-01

    The anode material is a crucial factor that significantly affects the cost and performance of microbial fuel cells (MFCs). In this study, a novel macroscale porous, biocompatible, highly conductive and low-cost electrode, a carbonized polydopamine-modified cotton textile (NC@CCT), is fabricated from cheap waste cotton textile via simple in situ polymerization and carbonization, and used as the anode of MFCs. Physical and chemical characterization shows that the macroscale porous and biocompatible NC@CCT electrode is coated with nitrogen-doped carbon nanoparticles and offers a large specific surface area (888.67 m2 g-1) for bacterial cell growth, which greatly increases the loading of bacterial cells and facilitates extracellular electron transfer (EET). As a result, the MFC equipped with the NC@CCT anode achieves a maximum power density of 931 ± 61 mW m-2, which is 80.5% higher than that of a commercial carbon felt anode (516 ± 27 mW m-2). Moreover, making full use of cheap waste cotton textiles can greatly reduce the cost of MFCs as well as the associated environmental pollution.

  7. Influence of Bubble-Bubble interactions on the macroscale circulation patterns in a bubbling gas-solid fluidized bed

    NARCIS (Netherlands)

    Laverman, J.A.; van Sint Annaland, M.; Kuipers, J.A.M.

    2007-01-01

    The macro-scale circulation patterns in the emulsion phase of a gas-solid fluidized bed in the bubbling regime have been studied with a 3D Discrete Bubble Model. It has been shown that bubble-bubble interactions strongly influence the extent of the solids circulation and the bubble size

  8. Dynamic Data-Driven Reduced-Order Models of Macroscale Quantities for the Prediction of Equilibrium System State for Multiphase Porous Medium Systems

    Science.gov (United States)

    Talbot, C.; McClure, J. E.; Armstrong, R. T.; Mostaghimi, P.; Hu, Y.; Miller, C. T.

    2017-12-01

    Microscale simulation of multiphase flow in realistic, highly-resolved porous medium systems of a sufficient size to support macroscale evaluation is computationally demanding. Such approaches can, however, reveal the dynamic, steady, and equilibrium states of a system. We evaluate methods to utilize dynamic data to reduce the cost associated with modeling a steady or equilibrium state. We construct data-driven models using extensions to dynamic mode decomposition (DMD) and its connections to Koopman Operator Theory. DMD and its variants comprise a class of equation-free methods for dimensionality reduction of time-dependent nonlinear dynamical systems. DMD furnishes an explicit reduced representation of system states in terms of spatiotemporally varying modes with time-dependent oscillation frequencies and amplitudes. We use DMD to predict the steady and equilibrium macroscale state of a realistic two-fluid porous medium system imaged using micro-computed tomography (µCT) and simulated using the lattice Boltzmann method (LBM). We apply Koopman DMD to direct numerical simulation data resulting from simulations of multiphase fluid flow through a 1440x1440x4320 section of a full 1600x1600x5280 realization of imaged sandstone. We determine a representative set of system observables via dimensionality reduction techniques including linear and kernel principal component analysis. We demonstrate how this subset of macroscale quantities furnishes a representation of the time-evolution of the system in terms of dynamic modes, and discuss the selection of a subset of DMD modes yielding the optimal reduced model, as well as the time-dependence of the error in the predicted equilibrium value of each macroscale quantity. Finally, we describe how the above procedure, modified to incorporate methods from compressed sensing and random projection techniques, may be used in an online fashion to facilitate adaptive time-stepping and parsimonious storage of system states over time.
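
    For reference, the core of exact DMD fits a linear operator between successive snapshot matrices through a truncated SVD; modes with eigenvalue magnitude near one carry the long-time (steady/equilibrium) content. A self-contained Python sketch on toy data, not the LBM output described:

        import numpy as np

        def dmd(X, Y, rank):
            """Exact DMD: fit Y ~ A X and return eigenvalues/modes of the reduced A.

            X, Y: snapshot matrices at times t_0..t_{m-1} and t_1..t_m (columns = states).
            """
            U, s, Vh = np.linalg.svd(X, full_matrices=False)
            U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
            A_tilde = U.conj().T @ Y @ Vh.conj().T / s   # reduced operator, rank x rank
            eigvals, W = np.linalg.eig(A_tilde)
            modes = (Y @ Vh.conj().T / s) @ W            # exact DMD modes
            return eigvals, modes

        # Toy snapshots: two decaying oscillators observed in 64 "macroscale" channels.
        t = np.linspace(0, 10, 101)
        latent = np.vstack([np.exp((-0.05 + 1j) * t), np.exp((-0.2 + 2.5j) * t)]).real
        snapshots = np.random.default_rng(3).normal(size=(64, 2)) @ latent

        lam, phi = dmd(snapshots[:, :-1], snapshots[:, 1:], rank=4)
        print(np.abs(lam))  # |lambda| < 1: decaying; |lambda| ~ 1: persistent modes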

  9. Micro- and macro-scale petrophysical characterization of potential reservoir units from the Northern Israel

    Science.gov (United States)

    Haruzi, Peleg; Halisch, Matthias; Katsman, Regina; Waldmann, Nicolas

    2016-04-01

    Lower Cretaceous sandstone serves as a hydrocarbon reservoir in several places around the world, potentially including the Hatira formation in the Golan Heights, northern Israel. The purpose of the current research is to characterize the petrophysical properties of these sandstone units. The study is carried out by two alternative methods: conventional macroscopic lab measurements, and CT-scanning, image processing and subsequent fluid mechanics simulations at the microscale, followed by upscaling to the conventional macroscopic rock parameters (porosity and permeability). The upscaled properties will be compared with those measured in the lab, and the best way to upscale the microscopic rock characteristics will be analyzed based on the models suggested in the literature. Proper characterization of the potential reservoir will provide the analytical parameters necessary for future experiments and modeling of macroscopic fluid flow behavior in the Lower Cretaceous sandstone.

  10. Plasma simulation by macroscale, electromagnetic particle code and its application to current-drive by relativistic electron beam injection

    International Nuclear Information System (INIS)

    Tanaka, M.; Sato, T.

    1985-01-01

    A new implicit macroscale electromagnetic particle simulation code (MARC), which allows a large scale length and time step in multiple dimensions, is described. Finite-mass electrons and ions are used with a relativistic version of the equation of motion. The electromagnetic fields are solved using a complete set of Maxwell equations. For time integration of the field equations, a decentered (backward) finite differencing scheme is employed with a predictor-corrector method for low noise and super-stability. It is shown both analytically and numerically that the present scheme efficiently suppresses high frequency electrostatic and electromagnetic waves in a plasma, and that it accurately reproduces low frequency waves such as ion acoustic waves, Alfven waves and fast magnetosonic waves. The numerical scheme has been coded in three dimensions for application to a new tokamak current-drive method based on relativistic electron beam injection. Some remarks on proper application of the macroscale code are presented in this paper.

  11. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    Energy Technology Data Exchange (ETDEWEB)

    Wilke, Jeremiah J [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Kenny, Joseph P. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debugging, running, and waiting for results on an actual system, a design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the Structural Simulation Toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics, like call graphs, to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.

  12. Judicial Process, Grade Eight. Resource Unit (Unit V).

    Science.gov (United States)

    Minnesota Univ., Minneapolis. Project Social Studies Curriculum Center.

    This resource unit, developed by the University of Minnesota's Project Social Studies, introduces eighth graders to the judicial process. The unit was designed with two major purposes in mind. First, it helps pupils understand judicial decision-making, and second, it provides for the study of the rights guaranteed by the federal Constitution. Both…

  13. From micro-scale 3D simulations to macro-scale model of periodic porous media

    Science.gov (United States)

    Crevacore, Eleonora; Tosco, Tiziana; Marchisio, Daniele; Sethi, Rajandrea; Messina, Francesca

    2015-04-01

    In environmental engineering, the transport of colloidal suspensions in porous media is studied to understand the fate of potentially harmful nano-particles and to design new remediation technologies. In this perspective, averaging techniques applied to micro-scale numerical simulations are a powerful tool to extrapolate accurate macro-scale models. Choosing two simplified packing configurations of soil grains and starting from a single elementary cell (module), it is possible to take advantage of the periodicity of the structures to reduce the computational cost of full 3D simulations. Steady-state flow simulations for an incompressible fluid in the laminar regime are implemented. Transport simulations are based on the pore-scale advection-diffusion equation, which can be enriched by introducing the Stokes velocity (to account for gravity) and the interception mechanism. Simulations are carried out on a domain composed of several elementary modules, which serve as control volumes in a finite volume method at the macro-scale. The periodicity of the medium implies the periodicity of the flow field, which is of great importance during the up-scaling procedure, allowing relevant simplifications. Micro-scale numerical data are treated in order to compute the mean concentrations (volume and area averages) and fluxes on each module. The simulation results are used to compare the micro-scale averaged equation to the integral form of the macroscopic one, making a distinction between those terms that can be computed exactly and those for which a closure is needed. Of particular interest is the investigation of the origin of macro-scale terms such as dispersion and tortuosity, trying to describe them with known micro-scale quantities. Traditionally, many simplifications are introduced to study colloidal transport, such as ultra-simplified geometries that usually account for a single collector. Gradual removal of such hypotheses leads to a
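
    The module averages referred to above are usually the standard volume averages of porous-media upscaling; stated here as a generic definition (an assumption about the notation, not quoted from the work):

        \begin{align*}
          \langle c \rangle^{f} &= \frac{1}{V_f}\int_{V_f} c \, dV
              && \text{(intrinsic average over the fluid volume $V_f$ of a module)}\\
          \langle c \rangle &= \frac{V_f}{V}\,\langle c \rangle^{f} = \phi\,\langle c \rangle^{f}
              && \text{(superficial average; $\phi$ is the module porosity)}
        \end{align*}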

  14. Predator-prey interactions as macro-scale drivers of species diversity in mammals

    DEFF Research Database (Denmark)

    Sandom, Christopher James; Sandel, Brody Steven; Dalby, Lars

    Background/Question/Methods: Understanding the importance of predator-prey interactions for species diversity is a central theme in ecology, with fundamental consequences for predicting the responses of ecosystems to land use and climate change. We assessed the relative support for different mechanistic drivers of mammal species richness at macro-scales for two trophic levels: predators and prey. To disentangle biotic (i.e. functional predator-prey interactions) from abiotic (i.e. environmental) and bottom-up from top-down determinants, we considered three hypotheses: 1) environmental factors that determine ecosystem productivity drive prey and predator richness (the productivity hypothesis: abiotic, bottom-up), 2) consumer richness is driven by resource diversity (the resource diversity hypothesis: biotic, bottom-up), and 3) consumers drive the richness of their prey (the top-down hypothesis: biotic, top-down).

  15. Investigation of scale effects and directionality dependence on friction and adhesion of human hair using AFM and macroscale friction test apparatus

    International Nuclear Information System (INIS)

    LaTorre, Carmen; Bhushan, Bharat

    2006-01-01

    Macroscale testing of human hair tribological properties has been widely used to aid in the development of better shampoos and conditioners. Recently, the literature has focused on using the atomic force microscope (AFM) to study surface roughness, coefficient of friction, adhesive force, and wear (tribological properties) on the nanoscale in order to increase understanding of how shampoos and conditioners interact with the hair cuticle. Since there are both similarities and differences between the tribological trends at the two scales, scale effects are recognized as an important aspect of studying the tribology of hair. However, no microscale tribological data for hair exist in the literature. This is unfortunate because many interactions in hair-skin, hair-comb, and hair-hair contact take place at microasperities ranging from a few μm to hundreds of μm. Thus, to bridge the gap between the macro- and nanoscale data, as well as to gain a full understanding of the mechanisms behind the trends, it is now worthwhile to look at hair tribology on the microscale. Presented in this paper are coefficient of friction and adhesive force data on various scales for virgin and chemically damaged hair, both with and without conditioner treatment. Macroscale coefficient of friction was determined using a traditional friction test apparatus. Microscale and nanoscale tribological characterization was performed with AFM tips of various radii. The nano-, micro-, and macroscale trends are compared and the mechanisms behind the scale effects are discussed. Since the coefficient of friction changes drastically (on any scale) depending on whether the direction of motion is along or against the cuticle scales, the directionality dependence and responsible mechanisms are also discussed.

  16. Monitoring and assessment of soil erosion at micro-scale and macro-scale in forests affected by fire damage in northern Iran.

    Science.gov (United States)

    Akbarzadeh, Ali; Ghorbani-Dashtaki, Shoja; Naderi-Khorasgani, Mehdi; Kerry, Ruth; Taghizadeh-Mehrjardi, Ruhollah

    2016-12-01

    Understanding the occurrence of erosion processes at large scales is very difficult without studying them at small scales. In this study, soil erosion parameters were investigated at the micro-scale and macro-scale in forests in northern Iran. Surface erosion and some vegetation attributes were measured at the watershed scale in 30 parcels of land, separated into 15 fire-affected (burned) forests and 15 original (unburned) forests adjacent to the burned sites. The soil erodibility factor and splash erosion were also determined at the micro-plot scale within each burned and unburned site. Furthermore, soil sampling and infiltration studies were carried out at 80 additional sites, as well as the 30 burned and unburned sites (a total of 110 points), to create a map of the soil erodibility factor at the regional scale. Maps of topography, rainfall, and cover-management were also determined for the study area. Maps of erosion risk and erosion risk potential were finally prepared for the study area using the Revised Universal Soil Loss Equation (RUSLE) procedure. Results indicated that destruction of the protective cover of forested areas by fire had significant effects on splash erosion and the soil erodibility factor at the micro-plot scale, and also on surface erosion, erosion risk, and erosion risk potential at the watershed scale. Moreover, the results showed that correlation coefficients between different variables at the micro-plot and watershed scales were positive and significant. Finally, assessment and monitoring of the erosion maps at the regional scale showed that the central and western parts of the study area were more susceptible to erosion than other parts due to more intense crop management, greater soil erodibility, and more rainfall. The relationships between erosion parameters and the most important vegetation attributes were also used to provide models with equations specific to the study region. The results of this
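
    For context, the RUSLE procedure mentioned combines six factors multiplicatively. This is the textbook form of the equation, not a result reported by the study:

        % A: predicted annual soil loss; R: rainfall erosivity; K: soil erodibility;
        % L, S: slope length and steepness factors; C: cover-management; P: support practice.
        A = R \cdot K \cdot L \cdot S \cdot C \cdot P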

  17. Modelling of meso- and macroscale river basins - a workshop held at Lauenburg; Modellierung in meso- bis makroskaligen Flusseinzugsgebieten - Tagungsband zum gleichnamigen Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Sutmoeller, J.; Raschke, E. (eds.) [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Atmosphaerenphysik

    2001-07-01

    During the past decade, measuring and modelling the global and regional processes that exchange energy and water in the Earth's climate system became a focus of hydrological and meteorological research. Besides climate research, many other applications will gain from this effort, such as weather forecasting, water management and agriculture. As large-scale weather and climate applications diversify into water-related issues such as water resources, reservoir management, and flood and drought forecasting, hydrologists and meteorologists are challenged to work across disciplines. The workshop 'Modelling of meso- and macroscale river basins' brought together various current aspects of this issue, ranging from coupled atmosphere-hydrology models to integrated river basin management to land use change. Recent results are introduced and summarised in this report. (orig.)

  18. The Executive Process, Grade Eight. Resource Unit (Unit III).

    Science.gov (United States)

    Minnesota Univ., Minneapolis. Project Social Studies Curriculum Center.

    This resource unit, developed by the University of Minnesota's Project Social Studies, introduces eighth graders to the executive process. The unit uses case studies of presidential decision making, such as the decision to drop the atomic bomb on Hiroshima, the Cuban Bay of Pigs and quarantine decisions, and the Little Rock decision. A case study of…

  19. Image processing unit with fall-back.

    NARCIS (Netherlands)

    2011-01-01

    An image processing unit (100, 200, 300) for computing a sequence of output images on the basis of a sequence of input images comprises: a motion estimation unit (102) for computing a motion vector field on the basis of the input images; a quality measurement unit (104) for computing a value of a

  20. Portable brine evaporator unit, process, and system

    Science.gov (United States)

    Hart, Paul John; Miller, Bruce G.; Wincek, Ronald T.; Decker, Glenn E.; Johnson, David K.

    2009-04-07

    The present invention discloses a comprehensive, efficient, and cost-effective portable evaporator unit, method, and system for the treatment of brine. The evaporator unit, method, and system require a pretreatment process that removes heavy metals, crude oil, and other contaminants in preparation for the evaporator unit. The pretreatment and the evaporator unit, method, and system process metals and brine at the site where they are generated (the well site), saving significant money for producers who can avoid present and future increases in transportation costs.

  1. Semi-automatic film processing unit

    International Nuclear Information System (INIS)

    Mohamad Annuar Assadat Husain; Abdul Aziz Bin Ramli; Mohd Khalid Matori

    2005-01-01

    The design concept applied in the development of a semi-automatic film processing unit needs creativity and user support in channelling the required information to select materials and an operating system that suit the design produced. Low cost and efficient operation are the challenges that must be met while keeping abreast of fast technological advancement. In producing this processing unit, a few elements need to be considered in order to produce a high quality image. Consistent movement and correct time coordination for developing and drying are elements which need to be controlled. Other elements which need serious attention are temperature, liquid density and the amount of time for the chemical liquids to react. Subsequent chemical reactions that take place will cause the liquid chemicals to age, and this will adversely affect the quality of the image produced. The unit is also equipped with a liquid chemical drainage system and a disposal chemical tank. This unit would be useful in GP clinics, especially in rural areas which practise manual developing and require low operational cost. (Author)

  2. Data Sorting Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2012-06-01

    Graphics processing units (GPUs) have been increasingly used for general-purpose computation in recent years. GPU-accelerated applications are found in both scientific and commercial domains. Sorting is considered one of the very important operations in many applications, so its efficient implementation is essential for overall application performance. This paper represents an effort to analyze and evaluate the implementations of representative sorting algorithms on graphics processing units. Three sorting algorithms (Quicksort, Merge sort, and Radix sort) were evaluated on the Compute Unified Device Architecture (CUDA) platform that is used to execute applications on NVIDIA graphics processing units. Algorithms were tested and evaluated using an automated test environment with input datasets of different characteristics. Finally, the results of this analysis are briefly discussed.
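
    As a minimal point of comparison, GPU sorting can be driven from Python through CuPy, which dispatches to Thrust-based sorting on NVIDIA GPUs under CUDA. This is an illustrative stand-in (assuming a CUDA-capable GPU and an installed CuPy), not the authors' benchmark code:

        import numpy as np
        import cupy as cp  # requires an NVIDIA GPU and the CUDA toolkit

        data_host = np.random.default_rng(4).integers(0, 2**31, size=1 << 20, dtype=np.int64)

        data_gpu = cp.asarray(data_host)     # copy to device
        sorted_gpu = cp.sort(data_gpu)       # sort executed on the GPU
        cp.cuda.Stream.null.synchronize()    # wait for the device to finish

        assert np.array_equal(cp.asnumpy(sorted_gpu), np.sort(data_host))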

  3. A novel low-power fluxgate sensor using a macroscale optimisation technique for space physics instrumentation

    Science.gov (United States)

    Dekoulis, G.; Honary, F.

    2007-05-01

    This paper describes the design of a novel low-power single-axis fluxgate sensor. Several soft magnetic alloy materials were considered, and the choice was based on the balance between maximum permeability and minimum saturation flux density. The sensor has been modelled using the Finite Integration Theory (FIT) method. The sensor was subjected to a custom macroscale optimisation technique that significantly reduced the power consumption, by a factor of 16. The results of the sensor optimisation will subsequently be used in the development of a cutting-edge ground-based magnetometer for the study of the complex solar wind-magnetospheric-ionospheric system.

  4. Metric-Resolution 2D River Modeling at the Macroscale: Computational Methods and Applications in a Braided River

    Directory of Open Access Journals (Sweden)

    Jochen eSchubert

    2015-11-01

    Metric-resolution digital terrain models (DTMs) of rivers now make it possible for multi-dimensional fluid mechanics models to be applied to characterize flow at fine scales that are relevant to studies of river morphology and ecological habitat, or microscales. These developments are important for managing rivers because of the potential to better understand system dynamics, anthropogenic impacts, and the consequences of proposed interventions. However, the data volumes and computational demands of microscale river modeling have largely constrained applications to small multiples of the channel width, or the mesoscale. This report presents computational methods to extend a microscale river model beyond the mesoscale to the macroscale, defined as large multiples of the channel width. A method of automated unstructured grid generation is presented that automatically clusters fine-resolution cells in areas of curvature (e.g., channel banks) and places relatively coarse cells in areas lacking topographic variability. This overcomes the need to manually generate breaklines to constrain the grid, which is painstaking at the mesoscale and virtually impossible at the macroscale. The method is applied to a braided river with an extremely complex channel network configuration and shown to yield an efficient fine-resolution model. The sensitivity of model output to grid design and resistance parameters is also examined as it relates to analysis of hydrology, hydraulic geometry and river habitats, and the findings reiterate the importance of model calibration and validation.

  5. Product- and Process Units in the CRITT Translation Process Research Database

    DEFF Research Database (Denmark)

    Carl, Michael

    The first version of the "Translation Process Research Database" (TPR DB v1.0) was released in August 2012, containing logging data of more than 400 translation and text production sessions. The current version of the TPR DB (v1.4) contains data from more than 940 sessions, which represents more than 300 hours of text production. The database provides the raw logging data, as well as tables of pre-processed product and processing units. The TPR-DB includes various types of simple and composed product and process units that are intended to support the analysis and modelling of human text production.

  6. Environmental drivers defining linkages among life-history traits: mechanistic insights from a semiterrestrial amphipod subjected to macroscale gradients.

    Science.gov (United States)

    Gómez, Julio; Barboza, Francisco R; Defeo, Omar

    2013-10-01

    Determining the existence of interconnected responses among life-history traits and identifying underlying environmental drivers are recognized as key goals for understanding the basis of phenotypic variability. We studied potentially interconnected responses among senescence, fecundity, embryo size, weight of brooding females, size at maturity and sex ratio in a semiterrestrial amphipod affected by macroscale gradients in beach morphodynamics and salinity. To this end, multiple modelling processes based on generalized additive mixed models were used to deal with the spatio-temporal structure of the data obtained at 10 beaches during 22 months. Salinity was the only nexus among life-history traits, suggesting that this physiological stressor influences the energy balance of the organisms. Different salinity scenarios determined shifts in the weight of brooding females and size at maturity, with consequences for the number and size of embryos, which in turn affected sex determination and sex ratio at the population level. Our work highlights the importance of analysing field data to find the variables and potential mechanisms that define concerted responses among traits, thereby defining life-history strategies.

  7. Comparing SMAP to Macro-scale and Hyper-resolution Land Surface Models over Continental U. S.

    Science.gov (United States)

    Pan, Ming; Cai, Xitian; Chaney, Nathaniel; Wood, Eric

    2016-04-01

    SMAP sensors collect moisture information in the top soil at spatial resolutions of ~40 km (radiometer) and ~1 to 3 km (radar, before its failure in July 2015). Such information is extremely valuable for understanding various terrestrial hydrologic processes and their implications for human life. At the same time, soil moisture is a joint consequence of numerous physical processes (precipitation, temperature, radiation, topography, crop/vegetation dynamics, soil properties, etc.) that happen at a wide range of scales, from tens of kilometers down to tens of meters. Therefore, a full and thorough analysis/exploration of SMAP data products calls for investigations at multiple spatial scales - from regional, to catchment, and to field scales. Here we first compare the SMAP retrievals to the Variable Infiltration Capacity (VIC) macro-scale land surface model simulations over the continental U. S. region at 3 km resolution. The forcing inputs to the model are merged/downscaled from a suite of best available data products including the NLDAS-2 forcing, Stage IV and Stage II precipitation, GOES Surface and Insolation Products, and fine elevation data. The near-real-time VIC simulation is intended to provide a source of large-scale comparisons at the active sensor resolution. Beyond the VIC model scale, we perform comparisons at 30 m resolution against the recently developed HydroBloks hyper-resolution land surface model over several densely gauged USDA experimental watersheds. Comparisons are also made against in-situ point-scale observations from various SMAP Cal/Val and field campaign sites.
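
    The record does not name a comparison metric; a standard choice in SMAP Cal/Val work of this kind is the unbiased root-mean-square error, which removes the mean bias between the satellite and reference soil moisture series:

        \mathrm{ubRMSE} = \sqrt{\mathbb{E}\!\left[\big((\theta_{\mathrm{SMAP}} - \bar{\theta}_{\mathrm{SMAP}}) - (\theta_{\mathrm{ref}} - \bar{\theta}_{\mathrm{ref}})\big)^{2}\right]} = \sqrt{\mathrm{RMSE}^{2} - \mathrm{bias}^{2}}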

  8. Development of interface technology between unit processes in E-Refining process

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S. H.; Lee, H. S.; Kim, J. G. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    The pyroprocess is composed mainly of four subprocesses: electrolytic reduction, electrorefining, electrowinning, and waste salt regeneration/solidification. The electrorefining process, one of the main processes of the pyroprocess used to recover useful elements from spent fuel, is under development by the Korea Atomic Energy Research Institute as a subprocess of the pyrochemical treatment of spent PWR fuel. The CERS (Continuous ElectroRefining System) is composed of unit processes such as an electrorefiner, a salt distiller, a melting furnace for the U-ingot, and a U-chlorinator (UCl{sub 3} making equipment), as shown in Fig. 1. In this study, the interface technology between unit processes in the E-Refining system is investigated and developed for the establishment of an integrated E-Refining operation system as part of the integrated pyroprocess.

  9. Bridging micro to macroscale fracture properties in highly heterogeneous brittle solids: weak pinning versus fingering

    Science.gov (United States)

    Vasoya, Manish; Lazarus, Véronique; Ponson, Laurent

    2016-10-01

    The effect of strong toughness heterogeneities on the macroscopic failure properties of brittle solids is investigated in the context of planar crack propagation. The basic mechanism at play is that the crack is locally slowed down or even trapped when encountering tougher material. The induced front deformation results in a selection of local toughness values that is reflected at larger scale in the material resistance. To unravel this complexity and bridge micro to macroscale in the failure of strongly heterogeneous media, we propose a homogenization procedure based on the introduction of two complementary macroscopic properties: an apparent toughness defined from the loading required to make the crack propagate, and an effective fracture energy defined from the rate of energy released per unit area of crack advance. The relationship between these homogenized properties and the features of the local toughness map is computed using an iterative perturbation method. This approach is applied to a circular crack pinned by a periodic array of obstacles invariant in the radial direction, which gives rise to two distinct propagation regimes: a weak pinning regime where the crack maintains a stationary shape after reaching an equilibrium position, and a fingering regime characterized by the continuous growth of localized regions of the front while the other parts remain trapped. Our approach successfully bridges micro to macroscopic failure properties in both cases and illustrates how small scale heterogeneities can drastically affect the overall failure response of brittle solids. On a broader perspective, we believe that our approach can be used as a powerful tool for the rational design of heterogeneous brittle solids and interfaces with tailored failure properties.
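
    The iterative perturbation method mentioned here builds on first-order crack-front perturbation theory. For orientation, the classical first-order result for a straight front (Rice, 1985), whose circular-crack analogue used in the paper has the same structure, relates the local stress intensity factor to the in-plane front perturbation δa(z); this standard relation is quoted here as background, not taken from the record:

        \frac{\delta K(z)}{K^{0}} = \frac{1}{2\pi}\,\mathrm{PV}\!\int \frac{\delta a(z') - \delta a(z)}{(z - z')^{2}}\,\mathrm{d}z'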

  10. An Integrated Computational Materials Engineering Method for Woven Carbon Fiber Composites Preforming Process

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Weizhao; Ren, Huaqing; Wang, Zequn; Liu, Wing K.; Chen, Wei; Zeng, Danielle; Su, Xuming; Cao, Jian

    2016-10-19

    An integrated computational materials engineering method is proposed in this paper for analyzing the design and preforming process of woven carbon fiber composites. The goal is to reduce the cost and time needed for the mass production of structural composites. It integrates simulation methods from the micro-scale to the macro-scale to capture the behavior of the composite material in the preforming process. In this way, the time-consuming and high-cost physical experiments and prototypes in the development of the manufacturing process can be circumvented. This method contains three parts: the micro-scale representative volume element (RVE) simulation to characterize the material; the metamodeling algorithm to generate the constitutive equations; and the macro-scale preforming simulation to predict the behavior of the composite material during forming. The results show the potential of this approach as guidance for the design of composite materials and their manufacturing process.

  11. 15 CFR 971.209 - Processing outside the United States.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Processing outside the United States... THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR COMMERCIAL RECOVERY PERMITS Applications Contents § 971.209 Processing outside the United States. (a) Except as provided in this section...

  12. 40 CFR 63.765 - Glycol dehydration unit process vent standards.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Glycol dehydration unit process vent... Facilities § 63.765 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart with an actual annual average natural gas flowrate equal to or...

  13. 40 CFR 63.1275 - Glycol dehydration unit process vent standards.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 11 2010-07-01 2010-07-01 true Glycol dehydration unit process vent... Facilities § 63.1275 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart with an actual annual average natural gas flowrate equal to or...

  14. Proton Testing of Advanced Stellar Compass Digital Processing Unit

    DEFF Research Database (Denmark)

    Thuesen, Gøsta; Denver, Troelz; Jørgensen, Finn E

    1999-01-01

    The Advanced Stellar Compass Digital Processing Unit was radiation tested with 300 MeV protons at the Proton Irradiation Facility (PIF), Paul Scherrer Institute, Switzerland.

  15. On the hazard rate process for imperfectly monitored multi-unit systems

    International Nuclear Information System (INIS)

    Barros, A.; Berenguer, C.; Grall, A.

    2005-01-01

    The aim of this paper is to present a stochastic model to characterize the failure distribution of multi-unit systems when the current state of the units is imperfectly monitored. The definition of the hazard rate process existing under perfect monitoring is extended to the realistic case where the unit failure times are not always detected (non-detection events). The observed hazard rate process defined in this way gives a better representation of the system behavior than the classical failure rate calculated without any information on the unit states, and than the hazard rate process based on perfect monitoring information. The quality of this representation is, however, conditioned by the monotonicity property of the process. This problem is discussed and illustrated with a practical example (two parallel units). The results obtained motivate the use of the observed hazard rate process to characterize the stochastic behavior of multi-unit systems and to optimize, for example, preventive maintenance policies
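
    A minimal Monte Carlo sketch of the idea for the two-parallel-unit example (all parameters hypothetical; the paper's treatment is analytical, not simulation-based): the system hazard at time t is estimated conditional on a monitoring report in which each failed unit goes undetected with some probability.

        import random

        def observed_hazard(n=200_000, lam=1.0, p_detect=0.8, t=1.0, dt=0.05):
            # Two parallel units with exponential lifetimes; the system fails
            # only when both units have failed. At time t a monitoring report
            # flags each failed unit with probability p_detect.
            at_risk = failures = 0
            for _ in range(n):
                t1, t2 = random.expovariate(lam), random.expovariate(lam)
                if max(t1, t2) <= t:
                    continue  # system already down at the inspection time
                reported_ok = all(tf > t or random.random() > p_detect
                                  for tf in (t1, t2))
                if reported_ok:  # condition on the (imperfect) observation
                    at_risk += 1
                    if max(t1, t2) <= t + dt:
                        failures += 1
            return failures / at_risk / dt

        print(observed_hazard())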

  16. On the hazard rate process for imperfectly monitored multi-unit systems

    Energy Technology Data Exchange (ETDEWEB)

    Barros, A. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)]. E-mail: anne.barros@utt.fr; Berenguer, C. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France); Grall, A. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)

    2005-12-01

    The aim of this paper is to present a stochastic model to characterize the failure distribution of multi-unit systems when the current state of the units is imperfectly monitored. The definition of the hazard rate process existing under perfect monitoring is extended to the realistic case where the unit failure times are not always detected (non-detection events). The observed hazard rate process defined in this way gives a better representation of the system behavior than the classical failure rate calculated without any information on the unit states, and than the hazard rate process based on perfect monitoring information. The quality of this representation is, however, conditioned by the monotonicity property of the process. This problem is discussed and illustrated with a practical example (two parallel units). The results obtained motivate the use of the observed hazard rate process to characterize the stochastic behavior of multi-unit systems and to optimize, for example, preventive maintenance policies.

  17. Understanding micro-processes of institutionalization: stewardship contracting and national forest management

    Science.gov (United States)

    Cassandra Moseley; Susan Charnley

    2014-01-01

    This paper examines micro-processes of institutionalization, using the case of stewardship contracting within the US Forest Service. Our basic premise is that, until a new policy becomes an everyday practice among local actors, it will not become institutionalized at the macro-scale. We find that micro-processes of institutionalization are driven by a mixture of large-...

  18. On Tour... Primary Hardwood Processing, Products and Recycling Unit

    Science.gov (United States)

    Philip A. Araman; Daniel L. Schmoldt

    1995-01-01

    Housed within the Department of Wood Science and Forest Products at Virginia Polytechnic Institute is a three-person USDA Forest Service research work unit (with one vacancy) devoted to hardwood processing and recycling research. Phil Araman is the project leader of this truly unique and productive unit, titled "Primary Hardwood Processing, Products and Recycling." The...

  19. Tomography system having an ultrahigh-speed processing unit

    International Nuclear Information System (INIS)

    Brunnett, C.J.; Gerth, V.W. Jr.

    1977-01-01

    A transverse section tomography system has an ultrahigh-speed data processing unit for performing back projection and updating. An x-ray scanner directs x-ray beams through a planar section of a subject from a sequence of orientations and positions. The data processing unit includes a scan storage section for retrievably storing a set of filtered scan signals in scan storage locations corresponding to predetermined beam orientations. An array storage section is provided for storing image signals as they are generated

  20. Control system design specification of advanced spent fuel management process units

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, S. H.; Kim, S. H.; Yoon, J. S

    2003-06-01

    In this study, the design specifications of the instrumentation and control system for advanced spent fuel management process units are presented. The advanced spent fuel management process consists of several process units, such as a slitting device, a dry pulverizing/mixing device, and a metallizer. In this study, the control and operation characteristics of the advanced spent fuel management mockup process devices and the process devices developed in 2001 and 2002 are analysed. An integral processing system for the unit process control signals is proposed, which improves operation efficiency, and a redundant PLC control system is constructed, which improves reliability. A control scheme is proposed for time-delayed systems that compensates for the control performance degradation caused by time delay. The control system design specification is presented for the advanced spent fuel management process units. These design specifications can be used effectively in the detailed design of the advanced spent fuel management process.
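
    The record does not name the compensation scheme; a classical choice for loops with a known transport delay is the Smith predictor, sketched below in discrete time for an assumed first-order plant (all numbers illustrative, not from the report).

        # Assumed plant: x[k+1] = a*x[k] + b*u[k], output measured d steps late.
        a, b, d, kp, r = 0.9, 0.1, 10, 1.5, 1.0
        x = xm = 0.0                     # plant state and delay-free model state
        plant_buf = [0.0] * d            # measurement delay line (plant)
        model_buf = [0.0] * d            # matching delay line (model)
        for k in range(300):
            y = plant_buf[0]             # what the sensor actually delivers
            # Smith predictor: act on the undelayed model output, corrected
            # by the mismatch between delayed measurement and delayed model.
            y_hat = xm + (y - model_buf[0])
            u = kp * (r - y_hat)
            plant_buf = plant_buf[1:] + [x]
            model_buf = model_buf[1:] + [xm]
            x = a * x + b * u
            xm = a * xm + b * u
        print(round(x, 2))               # settles smoothly despite the delay
                                         # (pure P control leaves an offset)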

  1. A FPGA-based signal processing unit for a GEM array detector

    International Nuclear Information System (INIS)

    Yen, W.W.; Chou, H.P.

    2013-06-01

    In the present study, a signal processing unit for a GEM one-dimensional array detector is presented to measure the trajectory of photoelectrons produced by cosmic X-rays. The present GEM array detector system has 16 signal channels. The front-end unit provides timing signals from trigger units and energy signals from charge-sensitive amplifiers. The prototype of the processing unit is implemented using commercial field-programmable gate array circuit boards. The FPGA-based system is linked to a personal computer for testing and data analysis. Tests using simulated signals indicated that the FPGA-based signal processing unit has good linearity and is flexible for parameter adjustment under various experimental conditions (authors)

  2. [The nursing process at a burns unit: an ethnographic study].

    Science.gov (United States)

    Rossi, L A; Casagrande, L D

    2001-01-01

    This ethnographic study aimed at understanding the cultural meaning that nursing professionals working at a Burns Unit attribute to the nursing process as well as at identifying the factors affecting the implementation of this methodology. Data were collected through participant observation and semi-structured interviews. The findings indicate that, to the nurses from the investigated unit, the nursing process seems to be identified as bureaucratic management. Some factors determining this perception are: the way in which the nursing process has been taught and interpreted, routine as a guideline for nursing activity, and knowledge and power in the life-world of the Burns Unit.

  3. Thermo-mechanical efficiency of the bimetallic strip heat engine at the macro-scale and micro-scale

    International Nuclear Information System (INIS)

    Arnaud, A; Boughaleb, J; Monfray, S; Boeuf, F; Skotnicki, T; Cugat, O

    2015-01-01

    Bimetallic strip heat engines are energy harvesters that exploit the thermo-mechanical properties of bistable bimetallic membranes to convert heat into mechanical energy. They thus represent a solution to transform low-grade heat into electrical energy if the bimetallic membrane is coupled with an electro-mechanical transducer. The simplicity of these devices allows us to consider their miniaturization using MEMS fabrication techniques. In order to design and optimize these devices at the macro-scale and micro-scale, this article proposes an explanation of the origin of the thermal snap-through by giving the expressions of the constitutive equations of composite beams. This allows us to evaluate the capability of bimetallic strips to convert heat into mechanical energy whatever their size is, and to give the theoretical thermo-mechanical efficiencies which can be obtained with these harvesters. (paper)
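
    The constitutive relation behind the thermal deflection reduces, for a free two-layer strip under a uniform temperature change, to Timoshenko's classical curvature formula (a standard result quoted here for orientation, not taken from the paper; the bistable snap-through additionally requires pre-stress):

        \kappa = \frac{6\,(\alpha_2 - \alpha_1)\,\Delta T\,(1+m)^{2}}{h\left[3(1+m)^{2} + (1+mn)\left(m^{2} + \dfrac{1}{mn}\right)\right]},
        \qquad m = \frac{t_1}{t_2},\quad n = \frac{E_1}{E_2},\quad h = t_1 + t_2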

  4. Delineating the Macroscale Areal Organization of the Macaque Cortex In Vivo

    Directory of Open Access Journals (Sweden)

    Ting Xu

    2018-04-01

    Full Text Available Summary: Complementing long-standing traditions centered on histology, fMRI approaches are rapidly maturing in delineating brain areal organization at the macroscale. The non-human primate (NHP provides the opportunity to overcome critical barriers in translational research. Here, we establish the data requirements for achieving reproducible and internally valid parcellations in individuals. We demonstrate that functional boundaries serve as a functional fingerprint of the individual animals and can be achieved under anesthesia or awake conditions (rest, naturalistic viewing, though differences between awake and anesthetized states precluded the detection of individual differences across states. Comparison of awake and anesthetized states suggested a more nuanced picture of changes in connectivity for higher-order association areas, as well as visual and motor cortex. These results establish feasibility and data requirements for the generation of reproducible individual-specific parcellations in NHPs, provide insights into the impact of scan state, and motivate efforts toward harmonizing protocols. : Noninvasive fMRI in macaques is an essential tool in translation research. Xu et al. establish the individual functional parcellation of the macaque cortex and demonstrate that brain organization is unique, reproducible, and valid, serving as a fingerprint for an individual macaque. Keywords: macaque, parcellation, cortical areas, gradient, functional connectivity

  5. Spatial variation in nutrient and water color effects on lake chlorophyll at macroscales

    Science.gov (United States)

    Fergus, C. Emi; Finley, Andrew O.; Soranno, Patricia A.; Wagner, Tyler

    2016-01-01

    positive effect such that a unit increase in water color resulted in a 2 μg/L increase in CHL and other locations where it had a negative effect such that a unit increase in water color resulted in a 2 μg/L decrease in CHL. In addition, the spatial scales that captured variation in TP and water color effects were different for our study lakes. Variation in TP–CHL relationships was observed at intermediate distances (~20 km) compared to variation in water color–CHL relationships that was observed at regional distances (~200 km). These results demonstrate that there are lake-to-lake differences in the effects of TP and water color on lake CHL and that this variation is spatially structured. Quantifying spatial structure in these relationships furthers our understanding of the variability in these relationships at macroscales and would improve model prediction of chlorophyll a to better meet lake management goals.

  6. High Input Voltage, Silicon Carbide Power Processing Unit Performance Demonstration

    Science.gov (United States)

    Bozak, Karin E.; Pinero, Luis R.; Scheidegger, Robert J.; Aulisio, Michael V.; Gonzalez, Marcelo C.; Birchenough, Arthur G.

    2015-01-01

    A silicon carbide brassboard power processing unit has been developed by the NASA Glenn Research Center in Cleveland, Ohio. The power processing unit operates from two sources: a nominal 300 Volt high voltage input bus and a nominal 28 Volt low voltage input bus. The design of the power processing unit includes four low voltage, low power auxiliary supplies, and two parallel 7.5 kilowatt (kW) discharge power supplies that are capable of providing up to 15 kilowatts of total power at 300 to 500 Volts (V) to the thruster. Additionally, the unit contains a housekeeping supply, high voltage input filter, low voltage input filter, and master control board, such that the complete brassboard unit is capable of operating a 12.5 kilowatt Hall effect thruster. The performance of the unit was characterized under both ambient and thermal vacuum test conditions, and the results demonstrate exceptional performance with full power efficiencies exceeding 97%. The unit was also tested with a 12.5 kW Hall effect thruster to verify compatibility and output filter specifications. With space-qualified silicon carbide or similar high voltage, high efficiency power devices, this would provide a design solution to address the need for high power electric propulsion systems.

  7. Evaluation of Micro- and Macro-Scale Petrophysical Characteristics of Lower Cretaceous Sandstone with Flow Modeling in µ-CT Imaged Geometry

    Science.gov (United States)

    Katsman, R.; Haruzi, P.; Waldmann, N.; Halisch, M.

    2017-12-01

    In this study, the petrophysical characteristics of rock samples from 3 successive outcrop layers of the Lower Cretaceous Hatira Formation sandstone in northern Israel were evaluated at micro- and macro-scales. The study was carried out by two complementary methods: conventional experimental measurements of porosity, pore size distribution and permeability; and 3D µCT imaging and modeling of single-phase flow in the real micro-scale sample geometry. The workflow included µCT scanning, image processing, image segmentation, and image analyses of the pore network, followed by fluid flow simulations at the pore scale. Upscaling the results of the micro-scale flow simulations yielded a macroscopic permeability tensor. Comparison of the upscaled and the experimentally measured rock properties demonstrated reasonable agreement. In addition, geometrical (pore size distribution, surface area and tortuosity) and topological (Euler characteristic) characteristics of the grains and of the pore network were evaluated at the micro-scale. Statistical analyses of the samples for estimating the anisotropy and inhomogeneity of the porous media were conducted, and the results agree with the anisotropy and inhomogeneity of the upscaled permeability tensor. Isotropic pore orientation of the primary inter-granular porosity was identified in all three samples, whereas the characteristics of the secondary porosity were affected by precipitated cement and clay matrix within the primary pore network. Results of this study provide micro- and macro-scale characteristics of the Lower Cretaceous sandstone that is used in different places over the world as a reservoir for petroleum production.
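
    Upscaling from the pore-scale simulation to a macroscopic permeability tensor rests on Darcy's law applied to the volume-averaged velocity field; in the usual form (standard practice, not spelled out in the record):

        \langle u_i \rangle = -\frac{k_{ij}}{\mu}\,\frac{\partial \langle p \rangle}{\partial x_j},
        \qquad k = \frac{\mu\,\langle u \rangle\,L}{\Delta p}\ \ \text{for a 1D column of length } L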

  8. Scale up risk of developing oil shale processing units

    International Nuclear Information System (INIS)

    Oepik, I.

    1991-01-01

    The experiences in oil shale processing in three large countries, China, the U.S.A. and the U.S.S.R., have demonstrated that the relative scale-up risk of developing oil shale processing units is related to the scale-up factor. Against the background of large programmes for developing the oil shale industry branch, i.e. the $30 billion investments in Colorado and Utah or the 50 million t/year oil shale processing planned in Estonia and the Leningrad Region in the late seventies, the absolute scope of the scale-up risk of developing single retorting plants seems justified. But under conditions of low crude oil prices, when large-scale development of the oil shale processing industry is stopped, the absolute scope of the scale-up risk is to be divided among a small number of units. Therefore, it is reasonable to build new commercial oil shale processing plants with a minimum scale-up risk. For example, in Estonia a new oil shale processing plant with gas combustion retorts projected to start in the early nineties will be equipped with four units of 1500 t/day enriched oil shale throughput each, designed with a scale-up factor M=1.5 and with a minimum scale-up risk of only r=2.5-4.5%. The oil shale retorting unit for the PAMA plant in Israel [1] is planned to be developed in three steps, also with minimum scale-up risk: feasibility studies in Colorado with Israel's shale at a Paraho 250 t/day retort and other tests, a demonstration retort of 700 t/day and M=2.8 in Israel, and commercial retorts in the early nineties with a capacity of about 1000 t/day and M=1.4. The scale-up risk of the PAMA project, r=2-4%, is approximately the same as that in Estonia. Knowledge of the scope of the scale-up risk of developing oil shale processing retorts assists in the calculation of production costs when erecting new units. (author). 9 refs., 2 tabs

  9. 32 CFR 516.12 - Service of civil process outside the United States.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 3 2010-07-01 2010-07-01 true Service of civil process outside the United... AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.12 Service of civil process outside the United States. (a) Process of foreign courts. In foreign countries service of process...

  10. Iterative Methods for MPC on Graphical Processing Units

    DEFF Research Database (Denmark)

    Gade-Nielsen, Nicolai Fog; Jørgensen, John Bagterp; Dammann, Bernd

    2012-01-01

    The high floating-point performance and memory bandwidth of Graphical Processing Units (GPUs) make them ideal for a large number of computations which often arise in scientific computing, such as matrix operations. GPUs achieve this performance by utilizing massive parallelism, which requires ree... ...as to avoid the use of dense matrices, which may be too large for the limited memory capacity of current graphics cards.

  11. Macrosystems ecology: novel methods and new understanding of multi-scale patterns and processes

    Science.gov (United States)

    Songlin Fei; Qinfeng Guo; Kevin Potter

    2016-01-01

    As the global biomes are increasingly threatened by human activities, an understanding of macroscale patterns and processes is pressingly needed for effective management and policy making. Macrosystems ecology, which studies multiscale ecological patterns and processes, has gained growing interest in the research community. However, as a relatively new field in...

  12. 32 CFR 516.10 - Service of civil process within the United States.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 3 2010-07-01 2010-07-01 true Service of civil process within the United States... CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.10 Service of civil process within the United States. (a) Policy. DA officials will not prevent or evade the service or process in...

  13. Analysis of Unit Process Cost for an Engineering-Scale Pyroprocess Facility Using a Process Costing Method in Korea

    Directory of Open Access Journals (Sweden)

    Sungki Kim

    2015-08-01

    Full Text Available Pyroprocessing, which is a dry recycling method, converts spent nuclear fuel into U (uranium)/TRU (transuranium) metal ingots in a high-temperature molten salt phase. This paper provides the unit process cost of a pyroprocess facility that can process up to 10 tons of pyroprocessing product per year by utilizing the process costing method. Toward this end, the pyroprocess was classified into four kinds of unit processes: pretreatment, electrochemical reduction, electrorefining and electrowinning. The unit process cost was calculated by classifying the cost consumed at each process into raw material and conversion costs. The unit process costs of pretreatment, electrochemical reduction, electrorefining and electrowinning were calculated as 195 US$/kgU-TRU, 310 US$/kgU-TRU, 215 US$/kgU-TRU and 231 US$/kgU-TRU, respectively. Finally, the total pyroprocess cost was calculated as 951 US$/kgU-TRU. In addition, the cost driver for the raw material cost was identified as the cost of Li3PO4, needed for the LiCl-KCl purification process, and of platinum as an anode electrode in the electrochemical reduction process.
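
    The reported total is the sum of the four unit process costs, e.g.:

        # Unit process costs from the record (US$/kgU-TRU).
        unit_costs = {"pretreatment": 195, "electrochemical reduction": 310,
                      "electrorefining": 215, "electrowinning": 231}
        print(sum(unit_costs.values()))  # 951 US$/kgU-TRU, the reported total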

  14. Macro-scale turbulence modelling for flows in porous media

    International Nuclear Information System (INIS)

    Pinson, F.

    2006-03-01

    This work deals with the macroscopic modeling of turbulence in porous media. It concerns heat exchangers and nuclear reactors as well as urban flows, etc. The objective of this study is to describe, in a homogenized way by means of a spatial averaging operator, turbulent flows in a solid matrix. In addition to this first operator, a statistical averaging operator makes it possible to handle the pseudo-random character of turbulence. The successive application of both operators allows us to derive the balance equations for the kind of flows under study. Two major issues are then highlighted: the modeling of dispersion induced by the solid matrix, and turbulence modeling at a macroscopic scale (Reynolds tensor and turbulent dispersion). To this aim, we lean on the local modeling of turbulence and more precisely on the k-ε RANS models. The methodology of the dispersion study, derived thanks to the volume averaging theory, is extended to turbulent flows. Its application includes the simulation, at a microscopic scale, of turbulent flows within a representative elementary volume of the porous medium. Applied to channel flows, this analysis shows that even in the turbulent regime, dispersion remains one of the dominating phenomena within the macro-scale modeling framework. A two-scale analysis of the flow allows us to understand the dominating role of the drag force in the kinetic energy transfers between scales. Transfers between the mean part and the turbulent part of the flow are formally derived. This description significantly improves our understanding of the issue of macroscopic modeling of turbulence and leads us to define the sub-filter production and the wake dissipation. A ⟨k⟩f-⟨ε⟩f-⟨εw⟩f model is derived. It is based on three balance equations: for the turbulent kinetic energy, the viscous dissipation and the wake dissipation. Furthermore, a dynamical predictor for the friction coefficient is proposed. This model is then successfully applied to the study of

  15. An new MHD/kinetic model for exploring energetic particle production in macro-scale systems

    Science.gov (United States)

    Drake, J. F.; Swisdak, M.; Dahlin, J. T.

    2017-12-01

    A novel MHD/kinetic model is being developed to explore magnetic reconnection and particle energization in macro-scale systems such as the solar corona and the outer heliosphere. The model blends the MHD description with a macro-particle description. The rationale for this model is based on the recent discovery that energetic particle production during magnetic reconnection is controlled by Fermi reflection and Betatron acceleration and not parallel electric fields. Since the former mechanisms are not dependent on kinetic scales such as the Debye length and the electron and ion inertial scales, a model that sheds these scales is sufficient for describing particle acceleration in macro-systems. Our MHD/kinetic model includes macroparticles laid out on an MHD grid that are evolved with the MHD fields. Crucially, the feedback of the energetic component on the MHD fluid is included in the dynamics. Thus, the energy of the total system, the MHD fluid plus the energetic component, is conserved. The system has no kinetic scales and can therefore be implemented to model energetic particle production in macro-systems with none of the constraints associated with a PIC model. Tests of the new model in simple geometries will be presented and potential applications will be discussed.

  16. 32 CFR 516.9 - Service of criminal process within the United States.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 3 2010-07-01 2010-07-01 true Service of criminal process within the United... OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.9 Service of criminal process within the United States. (a) Surrender of personnel. Guidance for surrender of military personnel...

  17. Ultrasonic and mechanical soil washing processes for the remediation of heavy-metal-contaminated soil

    Science.gov (United States)

    Kim, Seulgi; Lee, Wontae; Son, Younggyu

    2016-07-01

    An ultrasonic/mechanical soil washing process was investigated and compared with an ultrasonic-only process and a mechanical-only process using a relatively large lab-scale sonoreactor. Higher removal efficiencies were observed in the combined process for 0.1 and 0.3 M HCl washing liquids. This was due to the combined effects of macroscale removal over the whole slurry by mechanical mixing and microscale removal in the limited zone of the slurry by cavitational action.

  18. 15 CFR 971.427 - Processing outside the United States.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Processing outside the United States... THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR COMMERCIAL RECOVERY PERMITS Issuance/Transfer: Terms, Conditions and Restrictions Terms, Conditions and Restrictions § 971.427 Processing...

  19. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for general phased array radar on NVIDIA GPUs (Graphics Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open-source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. Through this analysis, it is demonstrated that real-time GPGPU (General-Purpose GPU) processing of array radar data is possible with relatively low-cost commercial GPUs.
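
    As a flavour of such a chain, the sketch below performs FFT-based pulse compression and a slow-time Doppler FFT in CuPy, whose NumPy-like calls run on the GPU via cuFFT; the paper's actual chain is CUDA-based with cuBLAS/cuFFT, so this is an analogy with invented dimensions and waveform, not its code.

        import cupy as cp

        n_pulses, n_samples = 64, 4096
        rng = cp.random.default_rng(0)
        echoes = rng.standard_normal((n_pulses, n_samples)) \
               + 1j * rng.standard_normal((n_pulses, n_samples))
        chirp = cp.exp(1j * cp.pi * cp.linspace(0.0, 1.0, 128) ** 2)  # assumed waveform
        H = cp.conj(cp.fft.fft(chirp, n_samples))          # matched-filter spectrum
        compressed = cp.fft.ifft(cp.fft.fft(echoes, axis=1) * H, axis=1)  # fast time
        doppler = cp.fft.fftshift(cp.fft.fft(compressed, axis=0), axes=0) # slow time
        print(float(cp.abs(doppler).max()))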

  20. Ising Processing Units: Potential and Challenges for Discrete Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Coffrin, Carleton James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nagarajan, Harsha [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bent, Russell Whitford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-05

    The recent emergence of novel computational devices, such as adiabatic quantum computers, CMOS annealers, and optical parametric oscillators, presents new opportunities for hybrid-optimization algorithms that leverage these kinds of specialized hardware. In this work, we propose the idea of an Ising processing unit as a computational abstraction for these emerging tools. Challenges involved in using and benchmarking these devices are presented, and open-source software tools are proposed to address some of these challenges. The proposed benchmarking tools and methodology are demonstrated by conducting a baseline study of established solution methods against a D-Wave 2X adiabatic quantum computer, one example of a commercially available Ising processing unit.
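
    For context, a classical baseline for such studies can be as simple as single-spin-flip simulated annealing on the Ising objective; a minimal Python sketch on a random instance (problem size and schedule invented for illustration):

        import math, random

        n = 20
        J = {(i, j): random.choice([-1, 1])            # random +/-1 couplings
             for i in range(n) for j in range(i + 1, n)}
        s = [random.choice([-1, 1]) for _ in range(n)]

        def energy(spins):
            return sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

        T = 2.0
        while T > 0.01:
            i = random.randrange(n)
            # Energy change if spin i is flipped.
            dE = -2 * s[i] * sum(J[min(i, j), max(i, j)] * s[j]
                                 for j in range(n) if j != i)
            if dE < 0 or random.random() < math.exp(-dE / T):
                s[i] = -s[i]
            T *= 0.999

        print(energy(s))   # a low-energy (not necessarily optimal) configuration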

  1. Conceptual Design for the Pilot-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J.; Meier, David E.; Tingey, Joel M.; Casella, Amanda J.; Delegard, Calvin H.; Edwards, Matthew K.; Jones, Susan A.; Rapko, Brian M.

    2014-08-05

    This report describes a conceptual design for a pilot-scale capability to produce plutonium oxide for use as exercise and reference materials, and for use in identifying and validating nuclear forensics signatures associated with plutonium production. This capability is referred to as the Pilot-scale Plutonium oxide Processing Unit (P3U), and it will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including plutonium dioxide (PuO2) dissolution, purification of the Pu by ion exchange, precipitation, and conversion to oxide by calcination.

  2. Study of AFM-based nanometric cutting process using molecular dynamics

    International Nuclear Information System (INIS)

    Zhu Pengzhe; Hu Yuanzhong; Ma Tianbao; Wang Hui

    2010-01-01

    Three-dimensional molecular dynamics (MD) simulations are conducted to investigate the atomic force microscope (AFM)-based nanometric cutting process of copper using a diamond tool. The effects of tool geometry, cutting depth, cutting velocity and bulk temperature are studied. It is found that the tool geometry has a significant effect on the cutting resistance. The friction coefficient (cutting resistance) on the nanoscale decreases with the increase of tool angle, as predicted by the macroscale theory. However, the friction coefficients on the nanoscale are bigger than those on the macroscale. The simulation results show that a bigger cutting depth results in more material deformation and a larger chip volume, thus leading to a bigger cutting force and a bigger normal force. It is also observed that a higher cutting velocity results in a larger chip volume in front of the tool and bigger cutting and normal forces. The chip volume in front of the tool increases while the cutting force and normal force decrease with the increase of bulk temperature.

  3. Radiative heat transfer exceeding the blackbody limit between macroscale planar surfaces separated by a nanosize vacuum gap

    Science.gov (United States)

    Bernardi, Michael P.; Milovich, Daniel; Francoeur, Mathieu

    2016-09-01

    Using Rytov's fluctuational electrodynamics framework, Polder and Van Hove predicted that radiative heat transfer between planar surfaces separated by a vacuum gap smaller than the thermal wavelength exceeds the blackbody limit due to tunnelling of evanescent modes. This finding has led to the conceptualization of systems capitalizing on evanescent modes such as thermophotovoltaic converters and thermal rectifiers. Their development is, however, limited by the lack of devices enabling radiative transfer between macroscale planar surfaces separated by a nanosize vacuum gap. Here we measure radiative heat transfer for large temperature differences (~120 K) using a custom-fabricated device in which the gap separating two 5 × 5 mm2 intrinsic silicon planar surfaces is modulated from 3,500 to 150 nm. A substantial enhancement over the blackbody limit by a factor of 8.4 is reported for a 150-nm-thick gap. Our device paves the way for the establishment of novel evanescent wave-based systems.
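
    For reference, the blackbody limit exceeded here is the far-field exchange between ideal black parallel plates, so the reported enhancement corresponds to a flux of about 8.4 times this value at the 150 nm gap:

        q_{\mathrm{BB}} = \sigma\left(T_1^{4} - T_2^{4}\right)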

  4. The process of implementation of emergency care units in Brazil.

    Science.gov (United States)

    O'Dwyer, Gisele; Konder, Mariana Teixeira; Reciputti, Luciano Pereira; Lopes, Mônica Guimarães Macau; Agostinho, Danielle Fernandes; Alves, Gabriel Farias

    2017-12-11

    To analyze the process of implementation of emergency care units in Brazil. We carried out a documentary analysis, with interviews with twenty-four state urgency coordinators and a panel of experts. We analyzed issues related to policy background and trajectory, the players involved in the implementation, the expansion process, advances, limits, and implementation difficulties, and state coordination capacity. We used the analysis of strategic conduct from Giddens' theory of structuration as the theoretical framework. Emergency care units have been implemented since 2007, initially in the Southeast region, and 446 emergency care units were present in all Brazilian regions in 2016. Currently, 620 emergency care units are under construction, which indicates an expectation of expansion. Federal funding was a strong driver of the implementation. The states have planned their emergency care units, but direct negotiation between municipalities and the Union has contributed to the significant number of emergency care units that have been built but do not operate. In relation to the urgency network, there is tension with hospitals because of the lack of beds in the country, which generates hospitalizations in the emergency care units. The management of emergency care units is predominantly municipal, and most emergency care units are located outside the capitals and classified as Size III. The main challenges identified were under-funding and difficulty in recruiting physicians. The emergency care unit has the merit of having technological resources and being architecturally differentiated, but it will only succeed within an urgency network. Federal induction has generated contradictory responses, since not all states consider the emergency care unit a priority. The strengthening of state management has been identified as a challenge for the implementation of the urgency network.

  5. The process of implementation of emergency care units in Brazil

    Directory of Open Access Journals (Sweden)

    Gisele O'Dwyer

    2017-12-01

    Full Text Available ABSTRACT OBJECTIVE To analyze the process of implementation of emergency care units in Brazil. METHODS We carried out a documentary analysis, with interviews with twenty-four state urgency coordinators and a panel of experts. We analyzed issues related to policy background and trajectory, the players involved in the implementation, the expansion process, advances, limits, and implementation difficulties, and state coordination capacity. We used the analysis of strategic conduct from Giddens' theory of structuration as the theoretical framework. RESULTS Emergency care units have been implemented since 2007, initially in the Southeast region, and 446 emergency care units were present in all Brazilian regions in 2016. Currently, 620 emergency care units are under construction, which indicates an expectation of expansion. Federal funding was a strong driver of the implementation. The states have planned their emergency care units, but direct negotiation between municipalities and the Union has contributed to the significant number of emergency care units that have been built but do not operate. In relation to the urgency network, there is tension with hospitals because of the lack of beds in the country, which generates hospitalizations in the emergency care units. The management of emergency care units is predominantly municipal, and most of the emergency care units are located outside the capitals and classified as Size III. The main challenges identified were: under-funding and difficulty in recruiting physicians. CONCLUSIONS The emergency care unit has the merit of having technological resources and being architecturally differentiated, but it will only succeed within an urgency network. Federal induction has generated contradictory responses, since not all states consider the emergency care unit a priority. The strengthening of the state management has been identified as a challenge for the implementation of the

  6. Molecular and macro-scale analysis of enzyme-crosslinked silk hydrogels for rational biomaterial design.

    Science.gov (United States)

    McGill, Meghan; Coburn, Jeannine M; Partlow, Benjamin P; Mu, Xuan; Kaplan, David L

    2017-11-01

    Silk fibroin-based hydrogels have exciting applications in tissue engineering and therapeutic molecule delivery; however, their utility is dependent on their diffusive properties. The present study describes a molecular and macro-scale investigation of enzymatically-crosslinked silk fibroin hydrogels, and demonstrates that these systems have tunable crosslink density and diffusivity. We developed a liquid chromatography tandem mass spectrometry (LC-MS/MS) method to assess the quantity and order of covalent tyrosine crosslinks in the hydrogels. This analysis revealed between 28 and 56% conversion of tyrosine to dityrosine, which was dependent on the silk concentration and reactant concentration. The crosslink density was then correlated with storage modulus, revealing that both crosslinking and protein concentration influenced the mechanical properties of the hydrogels. The diffusive properties of the bulk material were studied by fluorescence recovery after photobleaching (FRAP), which revealed a non-linear relationship between silk concentration and diffusivity. As a result of this work, a model for synthesizing hydrogels with known crosslink densities and diffusive properties has been established, enabling the rational design of silk hydrogels for biomedical applications. Hydrogels from naturally-derived silk polymers offer versatile opportunities in the biomedical field; however, their design has largely been an empirical process. We present a fundamental study of the crosslink density, storage modulus, and diffusion behavior of enzymatically-crosslinked silk hydrogels to better inform scaffold design. These studies revealed unexpected non-linear trends in the crosslink density and diffusivity of silk hydrogels with respect to protein concentration and crosslink reagent concentration. This work demonstrates the tunable diffusivity and crosslinking in silk fibroin hydrogels, and enables the rational design of biomaterials. Further, the characterization methods

  7. Macroscale patterns in body size of intertidal crustaceans provide insights on climate change effects

    Science.gov (United States)

    Dugan, Jenifer E.; Hubbard, David M.; Contreras, Heraldo; Duarte, Cristian; Acuña, Emilio; Schoeman, David S.

    2017-01-01

    Predicting responses of coastal ecosystems to altered sea surface temperatures (SST) associated with global climate change requires knowledge of the demographic responses of individual species. Body size is an excellent metric because it scales strongly with growth and fecundity for many ectotherms. These attributes can underpin demographic as well as community and ecosystem level processes, providing valuable insights into the responses of vulnerable coastal ecosystems to changing climate. We investigated contemporary macroscale patterns in body size among widely distributed crustaceans that comprise the majority of intertidal abundance and biomass of sandy beach ecosystems of the eastern Pacific coasts of Chile and California, USA. We focused on ecologically important species representing different tidal zones, trophic guilds and developmental modes, including a high-shore macroalga-consuming talitrid amphipod (Orchestoidea tuberculata), two mid-shore scavenging cirolanid isopods (Excirolana braziliensis and E. hirsuticauda), and a low-shore suspension-feeding hippid crab (Emerita analoga) with an amphitropical distribution. Significant latitudinal patterns in body size were observed for all species in Chile (21°-42°S), with similar but steeper patterns for Emerita analoga in California (32°-41°N). Sea surface temperature was a strong predictor of body size (-4% to -35% per °C) in all species. Beach characteristics were subsidiary predictors of body size. Alterations in ocean temperatures of even a few degrees associated with global climate change are likely to affect body sizes of important intertidal ectotherms, with consequences for population demography, life history, community structure, trophic interactions, food-webs, and indirect effects such as ecosystem function. The consistency of results for body size and temperature across species with different life histories, feeding modes, ecological roles, and microhabitats inhabiting a single widespread coastal

  8. Reflector antenna analysis using physical optics on Graphics Processing Units

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd

    2014-01-01

    The Physical Optics approximation is a widely used asymptotic method for calculating the scattering from electrically large bodies. It requires significant computational work and little memory, and is thus well suited for application on a Graphics Processing Unit. Here, we investigate the performance...

  9. Assessment of Process Capability: the case of Soft Drinks Processing Unit

    Science.gov (United States)

    Sri Yogi, Kottala

    2018-03-01

    Process capability studies have a significant impact in investigating process variation, which is important in achieving product quality characteristics. Capability indices measure the inherent variability of a process and thus help to improve process performance radically. The main objective of this paper is to understand whether the process is capable of producing within specification at a soft drinks processing unit, one of the premier brands marketed in India. A few selected critical parameters in soft drinks processing were considered for this study: concentration of gas volume, concentration of brix, and torque of the crock. Some relevant statistical parameters were assessed from a process capability indices perspective: short-term capability and long-term capability. For the assessment we used real-time data from a soft drinks bottling company located in the state of Chhattisgarh, India. Our results suggest reasons for variations in the process, which were validated using ANOVA; we also estimated a Taguchi cost function and the associated monetary waste, which shall be used by the organization to improve process parameters. This research work has substantially benefitted the organization in understanding the variations of the selected critical parameters for achieving zero rejection.
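
    The short- and long-term capability assessment rests on the standard indices; a small Python sketch with hypothetical brix readings and specification limits (the plant's actual data are not reproduced in the record):

        import statistics

        def capability(samples, lsl, usl):
            # Cp ignores centring; Cpk penalises a mean shifted toward a limit.
            mu, sigma = statistics.mean(samples), statistics.stdev(samples)
            cp = (usl - lsl) / (6 * sigma)
            cpk = min(usl - mu, mu - lsl) / (3 * sigma)
            return cp, cpk

        brix = [10.2, 10.4, 10.3, 10.5, 10.1, 10.3, 10.4, 10.2]  # hypothetical
        print(capability(brix, lsl=9.8, usl=10.8))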

  10. Implementation and adaptation of a macro-scale methodology to calculate direct economic losses

    Science.gov (United States)

    Natho, Stephanie; Thieken, Annegret

    2017-04-01

    As one of the 195 member countries of the United Nations, Germany signed the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR). With this, though voluntary and non-binding, Germany agreed to report on achievements in reducing disaster impacts. Among other targets, the SFDRR aims at reducing direct economic losses in relation to the global gross domestic product by 2030 - but how can this be measured without a standardized approach? The United Nations Office for Disaster Risk Reduction (UNISDR) has hence proposed a methodology to estimate direct economic losses per event and country on the basis of the number of damaged or destroyed items in different sectors. The method is based on experience from developing countries. However, its applicability in industrial countries has not been investigated so far. Therefore, this study presents the first implementation of this approach in Germany to test its applicability to the costliest natural hazards and suggests adaptations. The approach proposed by UNISDR considers assets in the sectors agriculture, industry, commerce, housing, and infrastructure, the latter covering roads and medical and educational facilities. The asset values are estimated on the basis of sector- and event-specific numbers of affected items, sector-specific mean sizes per item, their standardized construction costs per square meter, and a loss ratio of 25%. The methodology was tested for the three costliest natural hazard types in Germany, i.e. floods, storms and hail storms, considering 13 case studies on the federal or state scale between 1984 and 2016. No complete calculation of all sectors necessary to describe the total direct economic loss was possible, owing to incomplete documentation. Therefore, the method was tested sector-wise. Three new modules were developed to better adapt the methodology to German conditions, covering private transport (cars), forestry and paved roads. Unpaved roads, in contrast, were integrated into the agricultural and
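
    The UNISDR estimate described above multiplies, per sector, the number of affected items by a mean size, a standardized construction cost per square metre, and the 25% loss ratio; a short sketch (housing-sector numbers invented for illustration):

        LOSS_RATIO = 0.25

        def sector_loss(n_items, mean_size_m2, cost_per_m2, loss_ratio=LOSS_RATIO):
            # Direct economic loss for one sector and one event.
            return n_items * mean_size_m2 * cost_per_m2 * loss_ratio

        print(sector_loss(n_items=1200, mean_size_m2=90, cost_per_m2=1500))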

  11. Testing a model of componential processing of multi-symbol numbers-evidence from measurement units.

    Science.gov (United States)

    Huber, Stefan; Bahnmueller, Julia; Klein, Elise; Moeller, Korbinian

    2015-10-01

    Research on numerical cognition has addressed the processing of nonsymbolic quantities and symbolic digits extensively. However, magnitude processing of measurement units is still a neglected topic in numerical cognition research. Hence, we investigated the processing of measurement units to evaluate whether typical effects of multi-digit number processing such as the compatibility effect, the string length congruity effect, and the distance effect are also present for measurement units. In three experiments, participants had to single out the larger one of two physical quantities (e.g., lengths). In Experiment 1, the compatibility of number and measurement unit (compatible: 3 mm_6 cm with 3 mm) as well as string length congruity (congruent: 1 m_2 km with m 2 characters) were manipulated. We observed reliable compatibility effects with prolonged reaction times (RT) for incompatible trials. Moreover, a string length congruity effect was present in RT with longer RT for incongruent trials. Experiments 2 and 3 served as control experiments showing that compatibility effects persist when controlling for holistic distance and that a distance effect for measurement units exists. Our findings indicate that numbers and measurement units are processed in a componential manner and thus highlight that processing characteristics of multi-digit numbers generalize to measurement units. Thereby, our data lend further support to the recently proposed generalized model of componential multi-symbol number processing.

  12. CALCULATION PECULIARITIES OF RE-PROCESSED ROAD COVERING UNIT COST

    Directory of Open Access Journals (Sweden)

    Dilyara Kyazymovna Izmaylova

    2017-09-01

    Full Text Available The article considers the economic expediency of applying non-waste technology to road covering repair and restoration. The conditions for processing asphalt concrete at plants are determined. A cost-change analysis of asphalt granulate is carried out, taking into account the conditions of transportation and preproduction processing. An example is given of calculating the expense of preparing one conventional unit of asphalt-concrete mixture volume with and without processing.

  13. Micromagnetic simulations using Graphics Processing Units

    International Nuclear Information System (INIS)

    Lopez-Diaz, L; Aurelio, D; Torres, L; Martinez, E; Hernandez-Lopez, M A; Gomez, J; Alejos, O; Carpentieri, M; Finocchio, G; Consolo, G

    2012-01-01

    The methodology for adapting a standard micromagnetic code to run on graphics processing units (GPUs) and exploit the potential for parallel calculations of this platform is discussed. GPMagnet, a general purpose finite-difference GPU-based micromagnetic tool, is used as an example. Speed-up factors of two orders of magnitude can be achieved with GPMagnet with respect to a serial code. This allows for running extensive simulations, nearly inaccessible with a standard micromagnetic solver, at reasonable computational times. (topical review)

  14. Formalizing the Process of Constructing Chains of Lexical Units

    Directory of Open Access Journals (Sweden)

    Grigorij Chetverikov

    2015-06-01

    Full Text Available The paper investigates mathematical aspects of describing the construction of chains of lexical units on the basis of finite-predicate algebra. An analysis of the construction peculiarities is carried out, and the method of finding the power of a linear logical transformation is applied to removing characteristic words of a dictionary entry. An analysis and the perspectives of the results of the study are provided.

  15. Processing and microstructural characterization of B4C-Al cermets

    International Nuclear Information System (INIS)

    Halverson, D.C.; Pyzik, A.J.; Aksay, I.A.

    1985-01-01

    Reaction thermodynamics and wetting studies were employed to evaluate boron carbide-aluminum cermets. Wetting phenomena and interfacial reactions are characterized using "macroscale" and "microscale" techniques. Macroscale evaluation involved aluminum sessile drop studies on boron carbide substrates. Microscale evaluation involved the fabrication of actual cermet microstructures and their characterization through SEM, X-ray diffraction, metallography, and electron microprobe. Contact-angle measurements and interfacial-reaction products are reported.

  16. Instruction Set Architectures for Quantum Processing Units

    OpenAIRE

    Britt, Keith A.; Humble, Travis S.

    2017-01-01

    Progress in quantum computing hardware raises questions about how these devices can be controlled, programmed, and integrated with existing computational workflows. We briefly describe several prominent quantum computational models, their associated quantum processing units (QPUs), and the adoption of these devices as accelerators within high-performance computing systems. Emphasizing the interface to the QPU, we analyze instruction set architectures based on reduced and complex instruction s...

  17. Developing maintenance technologies for FBR's heat exchanger units by advanced laser processing

    International Nuclear Information System (INIS)

    Nishimura, Akihiko; Shimada, Yukihiro

    2011-01-01

    Laser processing technologies were developed for the maintenance of FBR heat exchanger units. Ultrashort-pulse laser processing fabricated a fiber Bragg grating sensor for seismic monitoring, and fiber laser welding with a newly developed robot system repairs cracks on the inner wall of heat exchanger tubes. Safe operation of the heat exchanger units will be improved by the advanced laser processing technologies, which are expected to be applied to the maintenance of next-generation FBRs. (author)

  18. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Rath, N., E-mail: Nikolaus@rath.org; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q. [Department of Applied Physics and Applied Mathematics, Columbia University, 500 W 120th St, New York, New York 10027 (United States); Kato, S. [Department of Information Engineering, Nagoya University, Nagoya (Japan)

    2014-04-15

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  19. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    International Nuclear Information System (INIS)

    Rath, N.; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q.; Kato, S.

    2014-01-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules

  20. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    The aim was to develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.

  1. Stochastic Analysis of a Queue Length Model Using a Graphics Processing Unit

    Czech Academy of Sciences Publication Activity Database

    Přikryl, Jan; Kocijan, J.

    2012-01-01

    Roč. 5, č. 2 (2012), s. 55-62 ISSN 1802-971X R&D Projects: GA MŠk(CZ) MEB091015 Institutional support: RVO:67985556 Keywords : graphics processing unit * GPU * Monte Carlo simulation * computer simulation * modeling Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/2012/AS/prikryl-stochastic analysis of a queue length model using a graphics processing unit.pdf

  2. Psychiatry training in the United Kingdom--part 2: the training process.

    Science.gov (United States)

    Christodoulou, N; Kasiakogia, K

    2015-01-01

    In the second part of this diptych, we shall deal with psychiatric training in the United Kingdom in detail, and we will compare it--wherever this is meaningful--with the equivalent system in Greece. As explained in the first part of the paper, due to the recently increased emigration of Greek psychiatrists and psychiatric trainees, and the fact that the United Kingdom is a popular destination, it has become necessary to inform those aspiring to train in the United Kingdom of the system and the circumstances they should expect to encounter. This paper principally describes the structure of the United Kingdom's psychiatric training system, including the different stages trainees progress through and their respective requirements and processes. Specifically, specialty and subspecialty options are described and explained, special paths in training are analysed, and the notions of "special interest day" and the optional "Out of programme experience" schemes are explained. Furthermore, detailed information is offered on the pivotal points of each of the stages of the training process, with special care to explain the important differences and similarities between the systems in Greece and the United Kingdom. Special attention is given to The Royal College of Psychiatrists' Membership Exams (MRCPsych) because they are the only exams towards completing specialisation in Psychiatry in the United Kingdom. Also, the educational culture of progressing according to a set curriculum, of utilising diverse means of professional development, of empowering the trainees' autonomy by allowing initiative-based development and of applying peer supervision as a tool for professional development is stressed. We conclude that psychiatric training in the United Kingdom differs substantially from that of Greece in both structure and process. There are various differences such as pure psychiatric training in the United Kingdom versus neurological and medical modules in Greece, in

  3. Undergraduate Game Degree Programs in the United Kingdom and United States: A Comparison of the Curriculum Planning Process

    Science.gov (United States)

    McGill, Monica M.

    2010-01-01

    Digital games are marketed, mass-produced, and consumed by an increasing number of people and the game industry is only expected to grow. In response, post-secondary institutions in the United Kingdom (UK) and the United States (US) have started to create game degree programs. Though curriculum theorists provide insight into the process of…

  4. Use of general purpose graphics processing units with MODFLOW

    Science.gov (United States)

    Hughes, Joseph D.; White, Jeremy T.

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
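
    For readers unfamiliar with the storage scheme named above, the sketch below shows a matrix-vector product over a compressed sparse row (CSR) matrix, the core operation inside a conjugate gradient solver; the 3x3 example matrix is made up, and this is a serial illustration of what the GPGPU code parallelizes.

        import numpy as np

        # y = A @ x with A stored as CSR (nonzero values, their column indices,
        # and row-start pointers).
        def csr_matvec(data, indices, indptr, x):
            y = np.zeros(len(indptr) - 1)
            for row in range(len(y)):
                start, end = indptr[row], indptr[row + 1]
                y[row] = np.dot(data[start:end], x[indices[start:end]])
            return y

        # A = [[4, 0, 1],
        #      [0, 3, 0],
        #      [1, 0, 2]]
        data = np.array([4.0, 1.0, 3.0, 1.0, 2.0])
        indices = np.array([0, 2, 1, 0, 2])
        indptr = np.array([0, 2, 3, 5])
        print(csr_matvec(data, indices, indptr, np.array([1.0, 1.0, 1.0])))  # [5. 3. 3.]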

  5. Tomography system having an ultrahigh speed processing unit

    International Nuclear Information System (INIS)

    Cox, J.P. Jr.; Gerth, V.W. Jr.

    1977-01-01

    A transverse section tomography system has an ultrahigh-speed data processing unit for performing back projection and updating. An x-ray scanner directs x-ray beams through a planar section of a subject from a sequence of orientations and positions. The scanner includes a movably supported radiation detector for detecting the intensity of the beams of radiation after they pass through the subject

  6. The First Prototype for the FastTracker Processing Unit

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in ATLAS events up to Phase II instantaneous luminosities (5×10^34 cm^-2 s^-1) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low-consumption units for the far future, essential to increase the efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generall...

  7. Quality Improvement Process in a Large Intensive Care Unit: Structure and Outcomes.

    Science.gov (United States)

    Reddy, Anita J; Guzman, Jorge A

    2016-11-01

    Quality improvement in the health care setting is a complex process, and even more so in the critical care environment. The development of intensive care unit process measures and quality improvement strategies are associated with improved outcomes, but should be individualized to each medical center as structure and culture can differ from institution to institution. The purpose of this report is to describe the structure of quality improvement processes within a large medical intensive care unit while using examples of the study institution's successes and challenges in the areas of stat antibiotic administration, reduction in blood product waste, central line-associated bloodstream infections, and medication errors.

  8. Accelerating Malware Detection via a Graphics Processing Unit

    Science.gov (United States)

    2010-09-01

    ...operating systems for the future [Szo05]. The PE format is an updated version of the common object file format (COFF) [Mic06]. Microsoft released a new... [NAs02]. These alerts can be costly in terms of time and resources for individuals and organizations to investigate each misidentified file [YWL07] [Vak10]

  9. Alternative Procedure of Heat Integration Technique Election between Two Unit Processes to Improve Energy Saving

    Science.gov (United States)

    Santi, S. S.; Renanto; Altway, A.

    2018-01-01

    The energy use system in a production process, in this case heat exchanger networks (HENs), is one element that plays a role in the smoothness and sustainability of the industry itself. Optimizing heat exchanger networks built from process streams can have a major effect on the economic value of an industry as a whole, so solving design problems with heat integration becomes an important requirement. In a plant, heat integration can be carried out internally or in combination between process units. However, determining a suitable heat integration technique requires long calculations and much time. In this paper, we propose an alternative procedure for determining the heat integration technique by investigating 6 hypothetical units using a Pinch Analysis approach, with the energy target and the total annual cost target as objective functions. The six hypothetical units consist of units A, B, C, D, E, and F, where each unit has a different location of the process streams relative to the pinch temperature. The result is a potential heat integration (ΔH') formula that trims the conventional procedure from 7 steps to just 3. The preferred heat integration technique is then determined by calculating the potential heat integration (ΔH') between the hypothetical process units. The calculations were implemented in the Matlab programming language.

  10. Application of Contact Mode AFM to Manufacturing Processes

    Science.gov (United States)

    Giordano, Michael A.; Schmid, Steven R.

    A review of the application of contact mode atomic force microscopy (AFM) to manufacturing processes is presented. A brief introduction to common experimental techniques including hardness, scratch, and wear testing is presented, with a discussion of challenges in the extension of manufacturing-scale investigations to the AFM. Differences between the macro- and nanoscale tests are discussed, including indentation size effects and their importance in the simulation of processes such as grinding. The basics of lubrication theory are presented and friction force microscopy is introduced as a method of investigating metal forming lubrication on the nano- and microscales that directly simulates tooling/workpiece asperity interactions. These concepts are followed by a discussion of their application to macroscale industrial manufacturing processes, and direct correlations are made.

  11. Graphics processing unit based computation for NDE applications

    Science.gov (United States)

    Nahas, C. A.; Rajagopal, Prabhu; Balasubramaniam, Krishnan; Krishnamurthy, C. V.

    2012-05-01

    Advances in parallel processing in recent years are helping to reduce the cost of numerical simulation. Breakthroughs in Graphics Processing Unit (GPU) based computation now offer the prospect of further drastic improvements. The introduction of 'compute unified device architecture' (CUDA) by NVIDIA (the global technology company based in Santa Clara, California, USA) has made programming GPUs for general-purpose computing accessible to the average programmer. Here we use CUDA to develop parallel finite difference schemes applicable to two problems of interest to the NDE community, namely heat diffusion and elastic wave propagation. The implementations are two-dimensional. The performance improvement of the GPU implementation over a serial CPU implementation is then discussed.
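
    As a minimal serial counterpart of the first problem mentioned above, the sketch below advances a 2-D heat diffusion field by one explicit finite-difference step; the grid size, diffusivity, and boundary handling are assumptions (a CUDA version would assign one thread per grid point).

        import numpy as np

        # One forward-Euler step of dT/dt = alpha * laplacian(T) on a uniform grid.
        def diffuse(T, alpha, dt, dx):
            lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                   np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
            Tn = T + alpha * dt * lap
            Tn[0, :] = Tn[-1, :] = Tn[:, 0] = Tn[:, -1] = 0.0  # fixed-temperature edges
            return Tn

        T = np.zeros((64, 64))
        T[32, 32] = 100.0  # initial hot spot
        for _ in range(100):
            T = diffuse(T, alpha=1.0, dt=0.2, dx=1.0)  # stable while dt <= dx**2 / (4 * alpha)
        print(T.max())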

  12. Architectural design of heterogeneous metallic nanocrystals--principles and processes.

    Science.gov (United States)

    Yu, Yue; Zhang, Qingbo; Yao, Qiaofeng; Xie, Jianping; Lee, Jim Yang

    2014-12-16

    CONSPECTUS: Heterogeneous metal nanocrystals (HMNCs) are a natural extension of simple metal nanocrystals (NCs), but as a research topic, they have been much less explored until recently. HMNCs are formed by integrating metal NCs of different compositions into a common entity, similar to the way atoms are bonded to form molecules. HMNCs can be built to exhibit an unprecedented architectural diversity and complexity by programming the arrangement of the NC building blocks ("unit NCs"). The architectural engineering of HMNCs involves the design and fabrication of the architecture-determining elements (ADEs), i.e., unit NCs with precise control of shape and size, and their relative positions in the design. Similar to molecular engineering, where structural diversity is used to create more property variations for application explorations, the architectural engineering of HMNCs can similarly increase the utility of metal NCs by offering a suite of properties to support multifunctionality in applications. The architectural engineering of HMNCs calls for processes and operations that can execute the design. Some enabling technologies already exist in the form of classical micro- and macroscale fabrication techniques, such as masking and etching. These processes, when used singly or in combination, are fully capable of fabricating nanoscopic objects. What is needed is a detailed understanding of the engineering control of ADEs and the translation of these principles into actual processes. For simplicity of execution, these processes should be integrated into a common reaction system and yet retain independence of control. The key to architectural diversity is therefore the independent controllability of each ADE in the design blueprint. The right chemical tools must be applied under the right circumstances in order to achieve the desired outcome. In this Account, after a short illustration of the infinite possibility of combining different ADEs to create HMNC design

  13. A comparative study of the nanoscale and macroscale tribological attributes of alumina and stainless steel surfaces immersed in aqueous suspensions of positively or negatively charged nanodiamonds

    Science.gov (United States)

    Curtis, Colin K; Marek, Antonin; Smirnov, Alex I

    2017-01-01

    This article reports a comparative study of the nanoscale and macroscale tribological attributes of alumina and stainless steel surfaces immersed in aqueous suspensions of positively (hydroxylated) or negatively (carboxylated) charged nanodiamonds (ND). Immersion in −ND suspensions resulted in a decrease in the macroscopic friction coefficients to values in the range 0.05–0.1 for both stainless steel and alumina, while +ND suspensions yielded an increase in friction for stainless steel contacts but little to no increase for alumina contacts. Quartz crystal microbalance (QCM), atomic force microscopy (AFM) and scanning electron microscopy (SEM) measurements were employed to assess nanoparticle uptake, surface polishing, and resistance to solid–liquid interfacial shear motion. The QCM studies revealed abrupt changes to the surfaces of both alumina and stainless steel upon injection of –ND into the surrounding water environment that are consistent with strong attachment of NDs and/or chemical changes to the surfaces. AFM images of the surfaces indicated slight increases in the surface roughness upon an exposure to both +ND and −ND suspensions. A suggested mechanism for these observations is that carboxylated −NDs from aqueous suspensions are forming robust lubricious deposits on stainless and alumina surfaces that enable gliding of the surfaces through the −ND suspensions with relatively low resistance to shear. In contrast, +ND suspensions are failing to improve tribological performance for either of the surfaces and may have abraded existing protective boundary layers in the case of stainless steel contacts. This study therefore reveals atomic scale details associated with systems that exhibit starkly different macroscale tribological properties, enabling future efforts to predict and design complex lubricant interfaces. PMID:29046852

  14. A comparative study of the nanoscale and macroscale tribological attributes of alumina and stainless steel surfaces immersed in aqueous suspensions of positively or negatively charged nanodiamonds

    Directory of Open Access Journals (Sweden)

    Colin K. Curtis

    2017-09-01

    Full Text Available This article reports a comparative study of the nanoscale and macroscale tribological attributes of alumina and stainless steel surfaces immersed in aqueous suspensions of positively (hydroxylated) or negatively (carboxylated) charged nanodiamonds (ND). Immersion in −ND suspensions resulted in a decrease in the macroscopic friction coefficients to values in the range 0.05–0.1 for both stainless steel and alumina, while +ND suspensions yielded an increase in friction for stainless steel contacts but little to no increase for alumina contacts. Quartz crystal microbalance (QCM), atomic force microscopy (AFM) and scanning electron microscopy (SEM) measurements were employed to assess nanoparticle uptake, surface polishing, and resistance to solid–liquid interfacial shear motion. The QCM studies revealed abrupt changes to the surfaces of both alumina and stainless steel upon injection of −ND into the surrounding water environment that are consistent with strong attachment of NDs and/or chemical changes to the surfaces. AFM images of the surfaces indicated slight increases in the surface roughness upon an exposure to both +ND and −ND suspensions. A suggested mechanism for these observations is that carboxylated −NDs from aqueous suspensions are forming robust lubricious deposits on stainless and alumina surfaces that enable gliding of the surfaces through the −ND suspensions with relatively low resistance to shear. In contrast, +ND suspensions are failing to improve tribological performance for either of the surfaces and may have abraded existing protective boundary layers in the case of stainless steel contacts. This study therefore reveals atomic scale details associated with systems that exhibit starkly different macroscale tribological properties, enabling future efforts to predict and design complex lubricant interfaces.

  15. Study of automatic boat loading unit and horizontal sintering process of uranium dioxide pellet

    International Nuclear Information System (INIS)

    He Zhongjing; Chen Yu; Yao Dengfeng; Wang Youliang; Shu Binhua; Wu Genjiu

    2014-01-01

    Sintering is a key process in the manufacture of nuclear fuel UO_2 pellets. In our factory, a continuous high-temperature sintering furnace is used for the sintering process. During the sintering of green pellets, the furnace, the boat, and the stacking arrangement can influence the quality of the final product. In this text, on the basis of earlier process research, the automatic boat loading unit and the horizontal sintering process are studied successively. The results show that the physical and chemical properties of the products manufactured with the automatic boat loading unit and horizontal sintering process completely meet the technique requirements, and that the system is reliable and continuous. (authors)

  16. Computerized nursing process in the Intensive Care Unit: ergonomics and usability

    OpenAIRE

    Almeida,Sônia Regina Wagner de; Sasso,Grace Teresinha Marcon Dal; Barra,Daniela Couto Carvalho

    2016-01-01

    OBJECTIVE Analyzing the ergonomics and usability criteria of the Computerized Nursing Process based on the International Classification for Nursing Practice in the Intensive Care Unit according to the International Organization for Standardization (ISO). METHOD A quantitative, quasi-experimental, before-and-after study with a sample of 16 participants performed in an Intensive Care Unit. Data collection was performed through the application of five simulated clinical cases and an evalua...

  17. 32 CFR 516.11 - Service of criminal process outside the United States.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 3 2010-07-01 2010-07-01 true Service of criminal process outside the United... AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.11 Service of... status of forces agreements, govern the service of criminal process of foreign courts and the surrender...

  18. Automated processing of whole blood units: operational value and in vitro quality of final blood components.

    Science.gov (United States)

    Jurado, Marisa; Algora, Manuel; Garcia-Sanchez, Félix; Vico, Santiago; Rodriguez, Eva; Perez, Sonia; Barbolla, Luz

    2012-01-01

    The Community Transfusion Centre in Madrid currently processes whole blood using a conventional procedure (Compomat, Fresenius) followed by automated processing of buffy coats with the OrbiSac system (CaridianBCT). The Atreus 3C system (CaridianBCT) automates the production of red blood cells, plasma and an interim platelet unit from a whole blood unit. Interim platelet units are pooled to produce a transfusable platelet unit. In this study the Atreus 3C system was evaluated and compared to the routine method with regard to product quality and operational value. Over a 5-week period 810 whole blood units were processed using the Atreus 3C system. The attributes of the automated process were compared to those of the routine method by assessing productivity, space, equipment and staffing requirements. The data obtained were evaluated in order to estimate the impact of implementing the Atreus 3C system in the routine setting of the blood centre. Yields and the in vitro quality of the final blood components processed with the two systems were evaluated and compared. The Atreus 3C system enabled higher throughput while requiring less space and employee time by decreasing the amount of equipment and processing time per unit of whole blood processed. Whole blood units processed on the Atreus 3C system gave a higher platelet yield, a similar amount of red blood cells and a smaller volume of plasma. These results support the conclusion that the Atreus 3C system produces blood components meeting quality requirements while providing a high operational efficiency. Implementation of the Atreus 3C system could result in a large organisational improvement.

  19. Accelerating Molecular Dynamic Simulation on Graphics Processing Units

    Science.gov (United States)

    Friedrichs, Mark S.; Eastman, Peter; Vaidyanathan, Vishal; Houston, Mike; Legrand, Scott; Beberg, Adam L.; Ensign, Daniel L.; Bruns, Christopher M.; Pande, Vijay S.

    2009-01-01

    We describe a complete implementation of all-atom protein molecular dynamics running entirely on a graphics processing unit (GPU), including all standard force field terms, integration, constraints, and implicit solvent. We discuss the design of our algorithms and important optimizations needed to fully take advantage of a GPU. We evaluate its performance, and show that it can be more than 700 times faster than a conventional implementation running on a single CPU core. PMID:19191337

  20. Minimization of entropy production in separate and connected process units

    Energy Technology Data Exchange (ETDEWEB)

    Roesjorde, Audun

    2004-08-01

    The objective of this thesis was to further develop a methodology for minimizing the entropy production of single and connected chemical process units. When chemical process equipment is designed and operated at the lowest entropy production possible, the energy efficiency of the equipment is enhanced. We found for single process units that the entropy production could be reduced by up to 20-40%, given the degrees of freedom in the optimization. In processes, our results indicated that even bigger reductions were possible. The states of minimum entropy production were studied and important parameters for obtaining significant reductions in the entropy production were identified. From both sustainability and economic viewpoints, knowledge of energy-efficient design and operation is important. In some of the systems we studied, nonequilibrium thermodynamics was used to model the entropy production. In Chapter 2, we gave a brief introduction to different industrial applications of nonequilibrium thermodynamics. The link between local transport phenomena and overall system description makes nonequilibrium thermodynamics a useful tool for understanding the design of chemical process units. We developed the methodology of minimization of entropy production in several steps. First, we analyzed and optimized the entropy production of single units: two alternative concepts of adiabatic distillation, diabatic and heat-integrated distillation, were analyzed and optimized in Chapters 3 to 5. In diabatic distillation, heat exchange is allowed along the column, and it is this feature that increases the energy efficiency of the distillation column. In Chapter 3, we found how a given area of heat transfer should be optimally distributed among the trays in a column separating a mixture of propylene and propane. The results showed that heat exchange was most important on the trays close to the reboiler and condenser. In Chapters 4 and 5, we studied how the entropy

  1. PREMATH: a Precious-Material Holdup Estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.; Bruns, D.D.

    1982-01-01

    A computer program, PREMATH (Precious Material Holdup Estimator), has been developed to permit inventory estimation in vessels involved in unit operations and chemical processes. This program has been implemented in an operating nuclear fuel processing plant. PREMATH's purpose is to provide steady-state composition estimates for material residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are determined for and cataloged in container-oriented files. The estimated compositions represent material collected in applicable vessels - including consideration for material previously acknowledged in these vessels. The program utilizes process measurements and simple material balance models to estimate material holdups and distribution within unit operations. During simulated run testing, PREMATH-estimated inventories typically produced material balances within 7% of the associated measured material balances for uranium and within 16% of the associated measured material balances for thorium (a less valuable material than uranium) during steady-state process operation
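
    The material-balance idea behind the estimator can be reduced to a one-line bookkeeping rule, sketched below; the function name and the batch figures are hypothetical and are not taken from PREMATH itself.

        # Steady-state holdup estimate from process measurements:
        # holdup = initial inventory + total measured in - total measured out.
        def estimate_holdup(inflows_kg, outflows_kg, initial_holdup_kg=0.0):
            return initial_holdup_kg + sum(inflows_kg) - sum(outflows_kg)

        measured_in = [12.4, 11.8, 12.1]   # kg uranium transferred into the vessel
        measured_out = [11.9, 12.0, 11.7]  # kg uranium transferred out
        print(f"estimated holdup: {estimate_holdup(measured_in, measured_out):.2f} kg")
        # estimated holdup: 0.70 kg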

  2. NUMATH: a nuclear-material-holdup estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.

    1981-01-01

    A computer program, NUMATH (Nuclear Material Holdup Estimator), has been developed to permit inventory estimation in vessels involved in unit operations and chemical processes. This program has been implemented in an operating nuclear fuel processing plant. NUMATH's purpose is to provide steady-state composition estimates for material residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are determined for and cataloged in container-oriented files. The estimated compositions represent material collected in applicable vessels - including consideration for material previously acknowledged in these vessels. The program utilizes process measurements and simple material balance models to estimate material holdups and distribution within unit operations. During simulated run testing, NUMATH-estimated inventories typically produced material balances within 7% of the associated measured material balances for uranium and within 16% of the associated measured material balance for thorium during steady-state process operation

  3. FEATURES OF THE SOCIO-POLITICAL PROCESS IN THE UNITED STATES

    Directory of Open Access Journals (Sweden)

    Tatyana Evgenevna Beydina

    2017-06-01

    Full Text Available The subject of this article is the study of the political and social development of the USA at the present stage. There are four stages in the American tradition of studying political processes. The first stage is connected with the substantiation of the executive, legislative and judicial branches of the political system (works of F. Pollack and R. Sili). The second includes behavioral studies of politics; besides studying political processes, Charles Merriam studied their similarities and differences. The third stage is characterized by studies of political systems (the works of T. Parsons, D. Easton, R. Aron, G. Almond and K. Deutsch). The fourth stage is characterized by the problems of superpower and the democratization of systems (S. Huntington, Zb. Brzezinski). American social processes were qualified by R. Park, P. Sorokin and E. Giddens. The work concentrates on explaining the social and political processes of the US separately while reflecting the unity of American social-political reality. The academic novelty consists in substantiating the concept of the US social-political process and characterizing its features. The US social-political process is characterized by two channels: soft power and aggression. Soft power appears in the dominance of the US economy. The main results of the research are the features of the socio-political process in the United States. Purpose: to systematize the definition of the social-political process of the USA and assess the line of its study within the American political tradition. Methodology: the article uses methods such as systems analysis, comparison, historical analysis and structural-functional analysis. Results: an analysis of the dynamics of the social and political processes of the United States was made. Practical implications: it is expedient to apply the results in international relations theory and practice.

  4. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
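
    The two steps moved to the GPU in this study can be written compactly on the CPU, as the sketch below does with a matrix product for the spatial filter and a Yule-Walker autoregressive spectrum per channel; the identity filter weights, model order, and signal shapes are placeholders, not the study's parameters.

        import numpy as np

        def ar_psd(x, order=16, nfreq=128):
            """PSD of one channel from Yule-Walker AR coefficients."""
            x = x - x.mean()
            r = np.correlate(x, x, "full")[len(x) - 1:] / len(x)   # autocorrelation r[0..]
            R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
            a = np.linalg.solve(R, r[1:order + 1])                 # AR coefficients
            sigma2 = r[0] - a @ r[1:order + 1]                     # driving-noise power
            w = np.linspace(0.0, np.pi, nfreq)
            denom = np.abs(1.0 - np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ a)
            return sigma2 / denom**2

        channels, samples = 8, 250
        raw = np.random.randn(channels, samples)   # stand-in for recorded data
        W = np.eye(channels)                       # placeholder spatial-filter weights
        filtered = W @ raw                         # spatial filter = matrix-matrix multiply
        psd = np.array([ar_psd(ch) for ch in filtered])
        print(psd.shape)                           # (8, 128)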

  5. Design, manufacturing and commissioning of mobile unit for EDF (Dow Chemical process)

    International Nuclear Information System (INIS)

    Cangini, D.; Cordier, J.P.; PEC Engineering, Osny, France)

    1985-01-01

    To process their spent ion exchange resins and liquid wastes, EDF ordered from PEC a mobile unit using the DOW CHEMICAL binder. This paper presents EDF's design requirements as well as the new French regulation for waste embedding. The mobile unit was started in January 1983 and commissioned successfully in January 1985 at EDF's TRICASTIN power plant.

  6. Software Graphics Processing Unit (sGPU) for Deep Space Applications

    Science.gov (United States)

    McCabe, Mary; Salazar, George; Steele, Glen

    2015-01-01

    A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) comprised of commercial-equivalent radiation hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.

  7. Model-based analysis of high shear wet granulation from batch to continuous processes in pharmaceutical production - A critical review

    DEFF Research Database (Denmark)

    Kumar, Ashish; Gernaey, Krist; De Beer, Thomas

    2013-01-01

    of the developments, the review focuses on the twin-screw granulator as a device for continuous HSWG and attempts to critically evaluate the current process. As a result, a set of open research questions are identified. These questions need to be answered in the future in order to fill the knowledge gap...... that currently exists both at micro- and macro-scale, and which is currently limiting the further development of the process to its full potential in pharmaceutical applications....

  8. Partial wave analysis using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Berger, Niklaus; Liu Beijiang; Wang Jike, E-mail: nberger@ihep.ac.c [Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Lu, Shijingshan, 100049 Beijing (China)

    2010-04-01

    Partial wave analysis is an important tool for determining resonance properties in hadron spectroscopy. For large data samples however, the un-binned likelihood fits employed are computationally very expensive. At the Beijing Spectrometer (BES) III experiment, an increase in statistics compared to earlier experiments of up to two orders of magnitude is expected. In order to allow for a timely analysis of these datasets, additional computing power with short turnover times has to be made available. It turns out that graphics processing units (GPUs) originally developed for 3D computer games have an architecture of massively parallel single instruction multiple data floating point units that is almost ideally suited for the algorithms employed in partial wave analysis. We have implemented a framework for tensor manipulation and partial wave fits called GPUPWA. The user writes a program in pure C++ whilst the GPUPWA classes handle computations on the GPU, memory transfers, caching and other technical details. In conjunction with a recent graphics processor, the framework provides a speed-up of the partial wave fit by more than two orders of magnitude compared to legacy FORTRAN code.

  9. The ATLAS Fast TracKer Processing Units

    CERN Document Server

    Krizka, Karol; The ATLAS collaboration

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction by the time the High Level Trigger starts. The Fast Tracker can process incoming data from the whole inner detector at the full first-level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first set comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card, while the second set is the Second Stage board. The associative memories perform the pattern matching, looking for correlations within the incoming data compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application-specific integrated circuits, called associative memory chips. The Auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a...

  10. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
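
    Since the article uses the all-pairs distance as its running example, a vectorized CPU version is sketched below using the standard expansion ||xi - xj||^2 = ||xi||^2 + ||xj||^2 - 2 xi.xj, which is also how GPU implementations commonly phrase it; the random dataset is an assumption.

        import numpy as np

        def all_pairs_distance(X):
            """D[i, j] = Euclidean distance between rows i and j of an (n, d) matrix."""
            sq = np.sum(X**2, axis=1)
            d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
            return np.sqrt(np.maximum(d2, 0.0))  # clamp tiny negative round-off

        X = np.random.rand(1000, 32)
        D = all_pairs_distance(X)
        print(D.shape, D[0, 0])  # (1000, 1000) 0.0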

  11. Graphics Processing Units for HEP trigger systems

    International Nuclear Information System (INIS)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.

    2016-01-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerator in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPU for synchronous low level trigger, focusing on CERN NA62 experiment trigger system. The use of GPU in higher level trigger system is also briefly considered.

  12. Graphics Processing Units for HEP trigger systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R. [INFN Sezione di Roma “Tor Vergata”, Via della Ricerca Scientifica 1, 00133 Roma (Italy); Bauce, M. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Biagioni, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Chiozzi, S.; Cotta Ramusino, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Fantechi, R. [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); CERN, Geneve (Switzerland); Fiorini, M. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Giagu, S. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Gianoli, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Lamanna, G., E-mail: gianluca.lamanna@cern.ch [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN Laboratori Nazionali di Frascati, Via Enrico Fermi 40, 00044 Frascati (Roma) (Italy); Lonardo, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Messina, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); and others

    2016-07-11

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerator in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPU for synchronous low level trigger, focusing on CERN NA62 experiment trigger system. The use of GPU in higher level trigger system is also briefly considered.

  13. Process Improvement to Enhance Quality in a Large Volume Labor and Birth Unit.

    Science.gov (United States)

    Bell, Ashley M; Bohannon, Jessica; Porthouse, Lisa; Thompson, Heather; Vago, Tony

    The goal of the perinatal team at Mercy Hospital St. Louis is to provide a quality patient experience during labor and birth. After the move to a new labor and birth unit in 2013, the team recognized many of the routines and practices needed to be modified based on different demands. The Lean process was used to plan and implement required changes. This technique was chosen because it is based on feedback from clinicians, teamwork, strategizing, and immediate evaluation and implementation of common sense solutions. Through rapid improvement events, presence of leaders in the work environment, and daily huddles, team member engagement and communication were enhanced. The process allowed for team members to offer ideas, test these ideas, and evaluate results, all within a rapid time frame. For 9 months, frontline clinicians met monthly for a weeklong rapid improvement event to create better experiences for childbearing women and those who provide their care, using Lean concepts. At the end of each week, an implementation plan and metrics were developed to help ensure sustainment. The issues that were the focus of these process improvements included on-time initiation of scheduled cases such as induction of labor and cesarean birth, timely and efficient assessment and triage disposition, postanesthesia care and immediate newborn care completed within approximately 2 hours, transfer from the labor unit to the mother baby unit, and emergency transfers to the main operating room and intensive care unit. On-time case initiation for labor induction and cesarean birth improved, length of stay in obstetric triage decreased, postanesthesia recovery care was reorganized to be completed within the expected 2-hour standard time frame, and emergency transfers to the main hospital operating room and intensive care units were standardized and enhanced for efficiency and safety. Participants were pleased with the process improvements and quality outcomes. Working together as a team

  14. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Full Text Available Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware display a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  15. Application of ion-exchange unit in uranium extraction process in China (to be continued)

    International Nuclear Information System (INIS)

    Gong Chuanwen

    2004-01-01

    The application conditions of five different ion exchange units in uranium milling plants and in the wastewater treatment plants of uranium mines in China are introduced, including working parameters, existing problems and improvements. The advantages and disadvantages of these units are reviewed briefly. The procedure points to be followed in selecting an ion exchange unit in engineering design are recommended. Primary views are presented on the application prospects of some ion exchange units in the uranium extraction process in China.

  16. Unit Process Wetlands for Removal of Trace Organic Contaminants and Pathogens from Municipal Wastewater Effluents

    Science.gov (United States)

    Jasper, Justin T.; Nguyen, Mi T.; Jones, Zackary L.; Ismail, Niveen S.; Sedlak, David L.; Sharp, Jonathan O.; Luthy, Richard G.; Horne, Alex J.; Nelson, Kara L.

    2013-01-01

    Treatment wetlands have become an attractive option for the removal of nutrients from municipal wastewater effluents due to their low energy requirements and operational costs, as well as the ancillary benefits they provide, including creating aesthetically appealing spaces and wildlife habitats. Treatment wetlands also hold promise as a means of removing other wastewater-derived contaminants, such as trace organic contaminants and pathogens. However, concerns about variations in treatment efficacy of these pollutants, coupled with an incomplete mechanistic understanding of their removal in wetlands, hinder the widespread adoption of constructed wetlands for these two classes of contaminants. A better understanding is needed so that wetlands as a unit process can be designed for their removal, with individual wetland cells optimized for the removal of specific contaminants, and connected in series or integrated with other engineered or natural treatment processes. In this article, removal mechanisms of trace organic contaminants and pathogens are reviewed, including sorption and sedimentation, biotransformation and predation, photolysis and photoinactivation, and remaining knowledge gaps are identified. In addition, suggestions are provided for how these treatment mechanisms can be enhanced in commonly employed unit process wetland cells or how they might be harnessed in novel unit process cells. It is hoped that application of the unit process concept to a wider range of contaminants will lead to more widespread application of wetland treatment trains as components of urban water infrastructure in the United States and around the globe. PMID:23983451

  18. Unitized Stiffened Composite Textile Panels: Manufacturing, Characterization, Experiments, and Analysis

    Science.gov (United States)

    Kosztowny, Cyrus Joseph Robert

    Use of carbon fiber textiles in complex manufacturing methods creates new implementations of structural components by increasing performance, lowering manufacturing costs, and making composites more attractive across industry. Advantages of textile composites include high area output, ease of handling during the manufacturing process, lower production costs per material used resulting from automation, and streamlined post-manufacturing assembly, because significantly more complex geometries such as stiffened shell structures can be manufactured with fewer pieces. One significant challenge with stiffened composite structures is stiffener separation under compression: under axial compression loading, catastrophic structural failure due to stiffeners separating from the shell skin has frequently been observed. Characterizing stiffener separation behavior is often costly, both computationally and experimentally. The objectives of this research are to demonstrate that unitized stiffened textile composite panels can be manufactured to produce quality test specimens, that existing characterization techniques applied to state-of-the-art high-performance composites provide valuable information for modeling such structures, that the unitized structure concept successfully removes stiffener separation as a primary structural failure mode, and that modeling textile material failure modes is sufficient to accurately capture postbuckling and final failure responses of the stiffened structures. The stiffened panels in this study take the integrally stiffened concept to the extent that the stiffeners and skin are manufactured at the same time, as one single piece, and from the same composite textile layers. Stiffener separation is shown to be removed as a primary structural failure mode for unitized stiffened composite textile panels loaded under axial compression well into the postbuckling regime. Instead of stiffener separation, a material damaging and

  19. Pre-design safety analyses of cesium ion-exchange compact processing unit

    International Nuclear Information System (INIS)

    Richmond, W.G.; Ballinger, M.Y.

    1993-11-01

    This report describes an innovative radioactive waste pretreatment concept. This cost-effective, highly flexible processing approach is based on the use of Compact Processing Units (CPUs) to treat highly radioactive tank wastes in proximity to the tanks themselves. The units will be designed to treat tank wastes at rates from 8 to 20 liters per minute and have the capacity to remove cesium, and ultimately other radionuclides, from 4,000 cubic meters of waste per year. This new concept is being integrated into Hanford's tank farm management plans by a team of PNL and Westinghouse Hanford Company scientists and engineers. The first CPU to be designed and deployed will be used to remove cesium from Hanford double-shell tank (DST) supernatant waste. Separating Cs from the waste would be a major step toward lowering the radioactivity in the bulk of the waste, allowing it to be disposed of as a low-level solid waste form (e.g., grout), while concentrating the more highly radioactive material for processing as high-level solid waste.

  20. Methodology for systematic analysis and improvement of manufacturing unit process life-cycle inventory (UPLCI)—CO2PE! initiative (cooperative effort on process emissions in manufacturing). Part 1: Methodology description

    DEFF Research Database (Denmark)

    Kellens, Karel; Dewulf, Wim; Overcash, Michael

    2012-01-01

    This report proposes a life-cycle analysis (LCA)-oriented methodology for systematic inventory analysis of the use phase of manufacturing unit processes, providing unit process datasets to be used in life-cycle inventory (LCI) databases and libraries. The methodology has been developed to support the provision of high-quality data for LCA studies of products using these unit process datasets for the manufacturing processes, as well as the in-depth analysis of individual manufacturing unit processes. In addition, the accruing availability of data for a range of similar machines (same process, different machines) supports energy and resource efficiency improvements of the manufacturing unit process. To ensure optimal reproducibility and applicability, documentation guidelines for data and metadata are included in both approaches. Guidance on definition of functional unit and reference flow, as well as on determination of system boundaries, is also provided.

  1. Unit operation in food manufacturing and processing. Shokuhin seizo/kako ni okeru tan'i sosa

    Energy Technology Data Exchange (ETDEWEB)

    Matsuno, R. (Kyoto Univ., Kyoto (Japan). Faculty of Agriculture)

    1993-09-05

    Processed foods must be produced in mass quantities, cheaply and safely, and should suit the delicate tastes of human beings. Food tastes are affected by human attitudes and by the surrounding environment. These factors are reflected in unit operations in food manufacturing and processing, and it is clear that there are many technical difficulties. The characteristics of unit operations for food manufacturing and processing are that food materials are multicomponent systems and, moreover, that very small amounts of aroma components, taste components, vitamins, physiologically active materials and so on are more important than the main components, so that models centered on the most abundant component are inapplicable. The purpose of unit operations in food manufacturing and processing is to produce properties of matter matching the human senses, and therefore many problems remain unsolved. The development of analytical technology also influences manufacturing and processing technology. Consequently, food manufacturing and processing technology must be based on broad science. It is necessary to develop unit operations with an understanding of the mutual effects between food and the human body.

  2. Low cost solar array project production process and equipment task. A Module Experimental Process System Development Unit (MEPSDU)

    Science.gov (United States)

    1981-01-01

    Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.

  3. Modelling of a Naphtha Recovery Unit (NRU with Implications for Process Optimization

    Directory of Open Access Journals (Sweden)

    Jiawei Du

    2018-06-01

    Full Text Available The naphtha recovery unit (NRU) is an integral part of the processes used in the oil sands industry for bitumen extraction. The principal role of the NRU is to recover naphtha from the tailings for reuse in this process. This process is energy-intensive, and environmental guidelines for naphtha recovery must be met. Steady-state models for the NRU system are developed in this paper using two different approaches. The first approach is a statistical, data-based modelling approach in which linear regression models have been developed using Minitab® from plant data collected during a performance test. The second approach involves the development of a first-principles model in Aspen Plus® based on the NRU process flow diagram. A novel refinement to this latter model, called “withdraw and remix”, is proposed based on comparing actual plant data to model predictions around the two units used to separate water and naphtha. The models developed in this paper suggest some interesting ideas for further optimization of the process, in that it may be possible to achieve the required naphtha recovery using less energy. More plant tests are required to validate these ideas.
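
    The data-based half of this approach can be sketched with ordinary least squares: regress a recovery-related output on plant measurements. The variables and numbers below are hypothetical placeholders, not the Minitab® models from the paper.

    ```python
    import numpy as np

    # Hypothetical plant-test data: columns = steam rate, feed temperature, tailings flow
    X = np.array([[1.0, 80.0, 5.0],
                  [1.2, 85.0, 5.5],
                  [0.9, 78.0, 4.8],
                  [1.1, 83.0, 5.2]])
    y = np.array([0.72, 0.80, 0.68, 0.76])  # measured naphtha recovery fraction

    A = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares fit
    print("intercept and coefficients:", coef.round(4))
    print("fitted recovery:", (A @ coef).round(3))
    ```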

  4. Graphics processing units in bioinformatics, computational biology and systems biology.

    Science.gov (United States)

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2017-09-01

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), thereby limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.

  5. Process quality in the Trade Finance unit from the perspective of corporate banking employees

    OpenAIRE

    Mikkola, Henri

    2013-01-01

    This thesis examines the quality of the processes in the Trade Finance unit of Pohjola Bank, from the perspective of the corporate banking employees at Helsinki OP Bank. The Trade Finance unit provides methods of payment for foreign trade. Such services are intended for companies and the perspective investigated in this thesis is that of corporate banking employees. The purpose of this thesis is to define the quality of the processes and to develop solutions for difficulties discovered. The q...

  6. Modeling PM10 gravimetric data from the Qalabotjha low-smoke fuels macro-scale experiment in South Africa

    International Nuclear Information System (INIS)

    Engelbrecht, J.P.; Swanepoel, L.; Zunckel, M.; Chow, J.C.

    1998-01-01

    D-grade domestic coal is widely used for household cooking and heating by the poorer urban communities in South Africa. The smoke from the combustion of this coal has had a severe impact on the health of communities living in the rural townships and cities. To alleviate this escalating problem, the Department of Minerals and Energy of South Africa evaluated low-smoke fuels as an alternative source of energy. The technical and social implications of such fuels were investigated in the course of the Qalabotjha Low-Smoke Fuels Macro-Scale Experiment. Three low-smoke fuels (Chartech, African Fine Carbon (AFC) and Flame Africa) were tested in Qalabotjha over a 10- to 20-day period. This paper presents results from a PM10 TEOM continuous monitor at the Clinic site in Qalabotjha over this monitoring period. Both fuel type and wind were found to have an effect on the air particulate concentrations. An exponential model that incorporates both these variables is proposed. This model allows all measured particulate concentrations to be recalculated to zero-wind values. From the analysis of variance (ANOVA) calculations on the zero-wind concentrations, it is concluded that the combustion of low-smoke fuels made a significant improvement to the air quality in Qalabotjha over the period when these fuels were used.
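
    One plausible reading of the proposed model is sketched below, under the assumption that concentration decays exponentially with wind speed, so that each measurement can be back-corrected to its zero-wind value; the functional form and all numbers are illustrative, not taken from the paper.

    ```python
    import numpy as np

    # Hypothetical paired observations: wind speed w (m/s), PM10 concentration c (ug/m3)
    w = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
    c = np.array([95.0, 80.0, 60.0, 42.0, 30.0])

    # Fit c = c0 * exp(b * w) by linear regression in log space (b < 0 expected)
    b, log_c0 = np.polyfit(w, np.log(c), 1)

    c_zero_wind = c * np.exp(-b * w)  # divide out the fitted wind attenuation
    print(f"fitted zero-wind concentration: {np.exp(log_c0):.1f}")
    print("measurements corrected to zero wind:", c_zero_wind.round(1))
    ```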

  7. Biodiversity of indigenous staphylococci of naturally fermented dry sausages and manufacturing environments of small-scale processing units.

    Science.gov (United States)

    Leroy, Sabine; Giammarinaro, Philippe; Chacornac, Jean-Paul; Lebert, Isabelle; Talon, Régine

    2010-04-01

    The staphylococcal community of the environments of nine French small-scale processing units and of their naturally fermented meat products was identified by analyzing 676 isolates. Fifteen species were accurately identified using validated molecular methods. The three prevalent species were Staphylococcus equorum (58.4%), Staphylococcus saprophyticus (15.7%) and Staphylococcus xylosus (9.3%). S. equorum was isolated in all the processing units in similar proportions in meat and environmental samples. S. saprophyticus was also isolated in all the processing units, with a higher percentage in environmental samples. S. xylosus was present sporadically in the processing units and its prevalence was higher in meat samples. The genetic diversity of the strains within the three species isolated from one processing unit was studied by PFGE and revealed high diversity for S. equorum and S. saprophyticus in both the environmental and the meat isolates. The genetic diversity remained high through the manufacturing steps. A small percentage of the strains of the two species shared the two ecological niches. These results highlight that some strains, probably introduced by the meat, persist in the manufacturing environment, while other strains are more adapted to the meat products.

  8. Effective parameters, effective processes: From porous flow physics to in situ remediation technology

    International Nuclear Information System (INIS)

    Pruess, K.

    1995-06-01

    This paper examines the conceptualization of multiphase flow processes on the macroscale, as needed in field applications. It emphasizes that upscaling from the pore level will in general not only introduce effective parameters but will also give rise to "effective processes," i.e., the emergence of new physical effects that may not have a microscopic counterpart. "Phase dispersion" is discussed as an example of an effective process for the migration and remediation of non-aqueous phase liquid (NAPL) contaminants in heterogeneous media. An approximate space-and-time scaling invariance is derived for gravity-driven liquid flow in unsaturated two-dimensional porous media (fractures). Issues for future experimental and theoretical work are identified.

  9. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    Full Text Available In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we developed a custom-tailored benchmark suite. We analyzed the obtained experimental results regarding: the comparison of execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the influence of data type; and the influence of the binary operator.
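
    The tree-based halving pattern that such CUDA reductions optimize can be mirrored on the CPU; the sketch below reproduces the algorithmic structure (independent pairwise partial sums per step, one GPU thread each), not the shared-memory and warp-level optimizations the paper benchmarks.

    ```python
    def tree_reduce_sum(values):
        """Pairwise (tree) reduction: the access pattern a GPU kernel parallelizes."""
        buf = [float(v) for v in values]
        n = len(buf)
        while n > 1:
            stride = (n + 1) // 2
            for i in range(n - stride):   # each addition is independent of the others
                buf[i] += buf[i + stride]
            n = stride
        return buf[0]

    print(tree_reduce_sum(range(1, 101)))  # 5050.0
    ```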

  10. Investigations on Temperature Fields during Laser Beam Melting by Means of Process Monitoring and Multiscale Process Modelling

    Directory of Open Access Journals (Sweden)

    J. Schilp

    2014-07-01

    Full Text Available Process monitoring and modelling can contribute to fostering the industrial relevance of additive manufacturing. Process-related temperature gradients and thermal inhomogeneities cause residual stresses and distortions and influence the microstructure. Variations in wall thickness can cause heat accumulations. These occur predominantly in filigree part areas and can be detected by utilizing off-axis thermographic monitoring during the manufacturing process. In addition, numerical simulation models on the scale of whole parts can enable an analysis of temperature fields upstream of the build process. In a microscale domain, modelling of several exposed single hatches allows temperature investigations at high spatial and temporal resolution. Within this paper, FEM-based micro- and macroscale modelling approaches as well as an experimental setup for thermographic monitoring are introduced. By discussing and comparing experimental data with simulation results in terms of temperature distributions, both the potential of numerical approaches and the complexity of determining suitable, computation-time-efficient process models are demonstrated. This paper contributes to the vision of adjusting the transient temperature field during manufacturing in order to improve the resulting part quality by simulation-based process design upstream of the build process and by inline process monitoring.

  11. A Block-Asynchronous Relaxation Method for Graphics Processing Units

    OpenAIRE

    Anzt, H.; Dongarra, J.; Heuveline, Vincent; Tomov, S.

    2011-01-01

    In this paper, we analyze the potential of asynchronous relaxation methods on Graphics Processing Units (GPUs). For this purpose, we developed a set of asynchronous iteration algorithms in CUDA and compared them with a parallel implementation of synchronous relaxation methods on CPU-based systems. For a set of test matrices taken from the University of Florida Matrix Collection we monitor the convergence behavior, the average iteration time and the total time-to-solution. Analyzing the r...
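
    A CPU sketch of the idea under study, under the assumption of a block-wise relaxation: blocks of unknowns are updated in arbitrary order using whatever values are currently available, with no synchronization barrier between sweeps. The matrix, block size and sweep count are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, block = 8, 2
    A = rng.random((n, n)) + n * np.eye(n)   # diagonally dominant, so relaxation converges
    b = rng.random(n)
    x = np.zeros(n)
    d = np.diag(A)

    for sweep in range(50):
        # "asynchronous" flavor: visit blocks in a random order, always reading the
        # most recent x values instead of waiting for a completed synchronous sweep
        for start in rng.permutation(np.arange(0, n, block)):
            idx = slice(start, start + block)
            x[idx] = (b[idx] - A[idx, :] @ x + d[idx] * x[idx]) / d[idx]

    print("residual norm:", np.linalg.norm(A @ x - b))
    ```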

  12. Genome-Wide Mapping of Transcriptional Regulation and Metabolism Describes Information-Processing Units in Escherichia coli

    Directory of Open Access Journals (Sweden)

    Daniela Ledezma-Tejeida

    2017-08-01

    Full Text Available In the face of changes in their environment, bacteria adjust gene expression levels and produce appropriate responses. The individual layers of this process have been widely studied: the transcriptional regulatory network describes the regulatory interactions that produce changes in the metabolic network, both of which are coordinated by the signaling network, but the interplay between them has never been described in a systematic fashion. Here, we formalize the process of detection and processing of environmental information mediated by individual transcription factors (TFs), utilizing a concept termed genetic sensory response units (GENSOR units), which are composed of four components: (1) a signal, (2) signal transduction, (3) a genetic switch, and (4) a response. We used experimentally validated data sets from two databases to assemble a GENSOR unit for each of the 189 local TFs of Escherichia coli K-12 contained in the RegulonDB database. Further analysis suggested that feedback is a common occurrence in signal processing, and there is a gradient of functional complexity in the response mediated by each TF, as opposed to a one regulator/one pathway rule. Finally, we provide examples of other GENSOR unit applications, such as hypothesis generation, detailed description of cellular decision making, and elucidation of indirect regulatory mechanisms.

  13. Process control and product evaluation in micro molding using a screwless/two-plunger injection unit

    DEFF Research Database (Denmark)

    Tosello, Guido; Hansen, Hans Nørgaard; Dormann, B.

    2010-01-01

    A newly developed μ-injection molding machine equipped with a screwless/two-plunger injection unit has been employed to mould miniaturized dog-bone-shaped specimens in polyoxymethylene, and its process capability and robustness have been analyzed. The influence of process parameters on μ-injection molding was investigated using the Design of Experiments technique. Injection pressure and piston stroke speed, as well as part weight and dimensions, were considered as quality factors over a wide range of process parameters. Experimental results obtained under different processing conditions were
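
    A minimal sketch of the Design of Experiments setup described: a two-level full factorial over injection pressure and piston stroke speed (the levels are hypothetical), producing the run table against which quality factors such as part weight and dimensions would be recorded.

    ```python
    from itertools import product

    # Hypothetical low/high levels for two process parameters
    factors = {
        "injection_pressure_bar": (400, 800),
        "piston_stroke_speed_mm_s": (50, 200),
    }

    for run, levels in enumerate(product(*factors.values()), start=1):
        settings = dict(zip(factors, levels))
        print(f"run {run}: {settings}")  # mould specimens, then record weight/dimensions
    ```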

  14. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening processed on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the features of the different memory types, an improved scheme is developed that exploits shared memory in the GPU instead of global memory, further increasing efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
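
    The per-pixel arithmetic being accelerated is compact: convolve with the 4-neighbour Laplacian stencil and subtract the result from the original. A plain-NumPy CPU sketch follows; the paper's contribution lies in mapping exactly this independent per-pixel work onto CUDA threads and shared memory.

    ```python
    import numpy as np

    def laplacian_sharpen(img, strength=1.0):
        """Sharpen via img - strength * Laplacian(img), 4-neighbour stencil."""
        f = img.astype(np.float64)
        lap = np.zeros_like(f)
        lap[1:-1, 1:-1] = (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
                           - 4.0 * f[1:-1, 1:-1])
        out = f - strength * lap          # subtracting the Laplacian boosts edges
        return np.clip(out, 0, 255).astype(img.dtype)

    # every output pixel depends only on its neighbourhood, hence the natural GPU mapping
    ```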

  15. Modelling spring flood in the area of the Upper Volga basin

    Directory of Open Access Journals (Sweden)

    M. Helms

    2006-01-01

    Full Text Available Integrated river-basin management for the Volga river requires understanding and modelling of the flow process in its macro-scale tributary catchments. Using the example of the Kostroma catchment (16 000 km2), a method combining existing hydrologic simulation tools was developed that allows operational modelling even when data are scarce. Emphasis was placed on the simulation of three processes: snow cover development using a snow-compaction model, runoff generation using a conceptual approach with parameters for seasonal antecedent moisture conditions, and runoff concentration using a regionalised unit hydrograph approach. Based on this method, specific regional characteristics of the precipitation-runoff process were identified, in particular a distinct threshold behaviour of runoff generation in catchments with clay-rich soils. With a plausible overall parameterisation of the tools involved, spring flood events could be successfully simulated. The present paper mainly focuses on the simulation of a 16-year sample of snowmelt events in a meso-scale catchment. An example of regionalised simulation within the scope of the modelling system "Flussgebietsmodell" shows the capabilities of the developed method for application in macro-scale tributary catchments of the Upper Volga basin.
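
    The regionalised unit hydrograph step amounts to a discrete convolution of effective rainfall with the unit hydrograph ordinates; the ordinates and rainfall series below are illustrative, not regionalised values from the study.

    ```python
    import numpy as np

    uh = np.array([0.1, 0.3, 0.35, 0.15, 0.1])  # unit hydrograph ordinates (sum to 1)
    eff_rain = np.array([0.0, 5.0, 12.0, 3.0])  # effective rainfall per time step (mm)

    runoff = np.convolve(eff_rain, uh)          # runoff concentration at the outlet
    print("direct runoff hydrograph:", runoff.round(2))
    ```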

  16. Evolution of the Power Processing Units Architecture for Electric Propulsion at CRISA

    Science.gov (United States)

    Palencia, J.; de la Cruz, F.; Wallace, N.

    2008-09-01

    Since 2002, the team formed by EADS Astrium CRISA, Astrium GmbH Friedrichshafen, and QinetiQ has participated in several flight programs where electric propulsion based on Kaufman-type ion thrusters is the baseline concept. In 2002, CRISA won the contract for the development of the Ion Propulsion Control Unit (IPCU) for GOCE. This unit, together with the T5 thruster by QinetiQ, provides near-perfect atmospheric drag compensation, offering thrust levels in the range of 1 to 20 mN. By the end of 2003, CRISA started the adaptation of the IPCU concept to the QinetiQ T6 ion thruster for the Alphabus program. This paper shows how the Power Processing Unit design evolved over time, including the current developments.

  17. Controllable unit concept as applied to a hypothetical tritium process

    International Nuclear Information System (INIS)

    Seabaugh, P.W.; Sellers, D.E.; Woltermann, H.A.; Boh, D.R.; Miles, J.C.; Fushimi, F.C.

    1976-01-01

    A methodology (controllable unit accountability) is described that identifies controlling errors for corrective action; locates areas and time frames of suspected diversions; defines time and sensitivity limits of diversion flags; defines the time frame in which pass-through quantities of accountable material, and by inference SNM, remain controllable; and provides a basis for identifying the incremental costs associated with purely safeguards considerations. The concept provides a rationale by which measurement variability and specific safeguards criteria can be converted into a numerical value that represents the degree of control or improvement attainable with a specific measurement system or combination of systems. Currently the methodology is being applied to a high-throughput, mixed-oxide fuel fabrication process. The process described is merely used to illustrate a procedure that can be applied to other, more pertinent processes.

  18. Status Report from the United Kingdom [Processing of Low-Grade Uranium Ores

    Energy Technology Data Exchange (ETDEWEB)

    North, A A [Warren Spring Laboratory, Stevenage, Herts. (United Kingdom)

    1967-06-15

    The invitation to present this status report could have been taken literally as a request for information on experience gained in the actual processing of low-grade uranium ores in the United Kingdom, in which case there would have been very little to report; however, the invitation naturally was considered to be a request for a report on the experience gained by the United Kingdom of the processing of uranium ores. Low-grade uranium ores are not treated in the United Kingdom simply because the country does not possess any known significant deposits of uranium ore. It is of interest to record the fact that during the nineteenth century mesothermal vein deposits associated with Hercynian granite were worked at South Terras, Cornwall, and ore that contained approximately 100 tons of uranium oxide was exported to Germany. Now only some 20 tons of contained uranium oxide remain at South Terras; also in Cornwall there is a small number of other vein deposits that each hold about five tons of uranium. Small lodes of uranium ore have been located in the southern uplands of Scotland; in North Wales, lower palaeozoic black shales contain only 50 to 80 parts per million of uranium oxide, and a slightly lower-grade carbonaceous shale is found near the base of the millstone grit that occurs in the north of England. Thus the experience gained by the United Kingdom has been of the treatment of uranium ores that occur abroad.

  19. ENTREPRENEURIAL OPPORTUNITIES IN FOOD PROCESSING UNITS (WITH SPECIAL REFERENCES TO BYADGI RED CHILLI COLD STORAGE UNITS IN THE KARNATAKA STATE

    Directory of Open Access Journals (Sweden)

    P. ISHWARA

    2010-01-01

    Full Text Available After the green revolution, we are now ushering in the evergreen revolution in the country; food processing is an evergreen activity and the key to the agricultural sector. In this paper an attempt has been made to study the workings of food processing units, with special reference to red chilli cold storage units in the Byadgi district of Karnataka State. Byadgi has been famous for red chilli since the days of antiquity. The vast and extensive market yard in Byadgi taluk is famous as the second largest red chilli dealing market in the country. However, the most common and recurring problem faced by the farmer is the inability to store enough red chilli from one harvest to another: red chilli that was locally abundant for only a short period of time had to be stored against times of scarcity. In recent years, owing to oleoresin, demand for red chilli has grown in other countries such as Sri Lanka, Bangladesh, America, Europe, Nepal, Indonesia and Mexico. The study reveals that all the cold storage units of the study area use the vapour-compression refrigeration method. All entrepreneurs are satisfied with their turnover and profit and are in a good economic position. Even though average turnover and profits have increased, a few units have shown a negligible decrease in turnover and profit; this is due to competition from the increasing number of cold storages and from earlier-established units. The cold storages of the study area store red chilli, chilli seeds, chilli powder, tamarind, jeera, dania, turmeric, sunflower, ginger, channa, flower seeds and so on, but 80 per cent of each cold storage is filled with red chilli, owing to the vast and extensive red chilli market yard in Byadgi. There is no business without problems; in the same way, the entrepreneurs chosen for the study are facing a few problems in their business, such as skilled labour, technical and management

  20. Computerized nursing process in the Intensive Care Unit: ergonomics and usability

    Directory of Open Access Journals (Sweden)

    Sônia Regina Wagner de Almeida

    Full Text Available OBJECTIVE: Analyzing the ergonomics and usability criteria of the Computerized Nursing Process based on the International Classification for Nursing Practice in the Intensive Care Unit, according to International Organization for Standardization (ISO) standards. METHOD: A quantitative, quasi-experimental, before-and-after study with a sample of 16 participants, performed in an Intensive Care Unit. Data collection was performed through the application of five simulated clinical cases and an evaluation instrument. Data analysis was performed by descriptive and inferential statistics. RESULTS: The organization, content and technical criteria were considered "excellent", and the interface criteria were considered "very good", obtaining means of 4.54, 4.60, 4.64 and 4.39, respectively. The analyzed standards obtained means above 4.0, being considered "very good" by the participants. CONCLUSION: The Computerized Nursing Process met ergonomic and usability standards according to the standards set by ISO.

  1. Design of the Laboratory-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Meier, David E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Casella, Amanda J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Delegard, Calvin H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Edwards, Matthew K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Orton, Robert D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rapko, Brian M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Smart, John E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report describes a design for a laboratory-scale capability to produce plutonium oxide (PuO2) for use in identifying and validating nuclear forensics signatures associated with plutonium production, as well as for use as exercise and reference materials. This capability will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including PuO2 dissolution, purification of the Pu by ion exchange, precipitation, and re-conversion to PuO2 by calcination.

  2. High-Performance Pseudo-Random Number Generation on Graphics Processing Units

    OpenAIRE

    Nandapalan, Nimalan; Brent, Richard P.; Murray, Lawrence M.; Rendell, Alistair

    2011-01-01

    This work considers the deployment of pseudo-random number generators (PRNGs) on graphics processing units (GPUs), developing an approach based on the xorgens generator to rapidly produce pseudo-random numbers of high statistical quality. The chosen algorithm has configurable state size and period, making it ideal for tuning to the GPU architecture. We present a comparison of both speed and statistical quality with other common parallel, GPU-based PRNGs, demonstrating favourable performance o...
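
    Brent's xorgens generators belong to the xorshift family; the minimal 64-bit xorshift below shows the core shift-and-XOR recurrence, though xorgens itself uses a larger, configurable state and different shift constants.

    ```python
    MASK64 = (1 << 64) - 1

    def xorshift64(seed):
        """Marsaglia-style 64-bit xorshift; yields pseudo-random 64-bit integers."""
        x = seed & MASK64
        while True:
            x ^= (x << 13) & MASK64
            x ^= x >> 7
            x ^= (x << 17) & MASK64
            yield x

    gen = xorshift64(88172645463325252)
    print([next(gen) % 100 for _ in range(5)])
    ```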

  3. Analysis of the overall energy intensity of alumina refinery process using unit process energy intensity and product ratio method

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Liru; Aye, Lu [International Technologies Center (IDTC), Department of Civil and Environmental Engineering,The University of Melbourne, Vic. 3010 (Australia); Lu, Zhongwu [Institute of Materials and Metallurgy, Northeastern University, Shenyang 110004 (China); Zhang, Peihong [Department of Municipal and Environmental Engineering, Shenyang Architecture University, Shenyang 110168 (China)

    2006-07-15

    Alumina refining is an energy-intensive industry. Traditional energy-saving methods employed have been single-equipment-oriented. Based on the two concepts of 'energy carrier' and 'system', this paper presents a method that analyzes the effects of unit process energy intensity (e) and product ratio (p) on the overall energy intensity of alumina. The important conclusion drawn from this method is that it is necessary to decrease both the unit process energy intensities and the product ratios in order to decrease the overall energy intensity of alumina, which may be taken as a future policy for energy saving. As a case study, the overall energy intensity of the Chinese Zhengzhou alumina refinery plant, which uses the Bayer-sinter combined method, was analyzed for the period between 1995 and 2000. The result shows that the overall energy intensity of alumina in this plant decreased by 7.36 GJ/t-Al2O3 over this period; 49% of the total energy saving is due to direct energy saving, and 51% is due to indirect energy saving. The emphasis in this paper is on decreasing the product ratios of high-energy-consumption unit processes, such as evaporation, slurry sintering, aluminium trihydrate calcining and desilication. Energy savings can be made (1) by increasing the proportion of Bayer and indirect digestion, (2) by increasing the grade of ore by ore dressing or importing some rich gibbsite and (3) by promoting technological advancement. (author)
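
    The method's central identity, overall intensity as the product-ratio-weighted sum of unit process intensities (E = Σ pᵢ·eᵢ), invites a small worked example. The numbers are invented; the direct/indirect split mirrors the paper's attribution of savings to changes in e versus changes in p.

    ```python
    # Hypothetical unit processes: name -> (product ratio p [t/t-Al2O3], intensity e [GJ/t])
    before = {"digestion": (1.0, 3.0), "evaporation": (2.5, 2.0), "calcining": (1.0, 1.5)}
    after  = {"digestion": (1.0, 2.8), "evaporation": (2.0, 1.9), "calcining": (1.0, 1.4)}

    E0 = sum(p * e for p, e in before.values())
    E1 = sum(p * e for p, e in after.values())

    # Exact decomposition: E0 - E1 = sum p0*(e0-e1)  [direct]  + sum (p0-p1)*e1  [indirect]
    direct = sum(before[k][0] * (before[k][1] - after[k][1]) for k in before)
    indirect = sum((before[k][0] - after[k][0]) * after[k][1] for k in before)

    print(f"overall intensity: {E0:.2f} -> {E1:.2f} GJ/t-Al2O3")
    print(f"direct saving: {direct:.2f} GJ/t, indirect saving: {indirect:.2f} GJ/t")
    ```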

  4. A low-cost system for graphical process monitoring with colour video symbol display units

    International Nuclear Information System (INIS)

    Grauer, H.; Jarsch, V.; Mueller, W.

    1977-01-01

    A system for computer-controlled graphic process supervision using colour symbol video displays is described. It has the following characteristics: a compact unit with no external memory for image storage; a problem-oriented, simple descriptive approach to the process program; no restriction on the graphical representation of process variables; and computer and display independence, achieved by the implementation of colours and parameterized code creation for the display. (WB) [de]

  5. Snow cover setting-up dates in the north of Eurasia: relations and feedback to the macro-scale atmospheric circulation

    Directory of Open Access Journals (Sweden)

    V. V. Popova

    2014-01-01

    Full Text Available Variations of snow cover onset dates in 1950–2008, based on daily snow depth data collected at first-order meteorological stations of the former USSR and compiled at the Russian Institute of Hydrometeorological Information, are analyzed in order to reveal climatic norms, relations with the macro-scale atmospheric circulation, and the influence of snow cover anomalies on the strengthening/weakening of the westerlies, based on observational data as well as on results of simulation using the model Planet Simulator. Patterns of mean snow cover onset dates and their correlation with the temperature of the Northern Hemisphere extra-tropical land, presented in Fig. 1, show that the most sensible changes, observed in the last decade, are caused by the temperature trend since the 1990s. For the greater part of the studied territory, variations of snow cover onset dates may be explained by the circulation indices in terms of the Northern Hemisphere teleconnection patterns: Scand, EA–WR, WP and NAO (Fig. 2). The role of the Scand and EA–WR patterns (see Fig. 2, а, в, г) is recognized as the most significant. Changes of snow cover extent calculated on the basis of snow cover onset dates over the territory of Russia, and over its western and eastern parts as well, for the second decade of October (Fig. 3) demonstrate a significant difference in variability between the eastern and western regions. The eastern part of the territory differs essentially by lower year-to-year and long-term variations, in contrast to the western part, which is characterized by high variance including long-term tendencies: an increase in 1950–70 and decreases in 1970–80 and during the last six years. Nevertheless, relations between snow cover anomalies and the Arctic Oscillation (AO) index appear to be significant exclusively for the eastern part of the territory. At the same time, the negative linear correlation revealed between snow extent and AO index changes during 1950–2008 varies from statistically insignificant values (in 1950–70 and 1996–2008) to coefficient

  6. Integrating post-Newtonian equations on graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Herrmann, Frank; Tiglio, Manuel [Department of Physics, Center for Fundamental Physics, and Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Silberholz, John [Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Bellone, Matias [Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Cordoba 5000 (Argentina); Guerberoff, Gustavo, E-mail: tiglio@umd.ed [Facultad de Ingenieria, Instituto de Matematica y Estadistica ' Prof. Ing. Rafael Laguardia' , Universidad de la Republica, Montevideo (Uruguay)

    2010-02-07

    We report on early results of a numerical and statistical study of binary black hole inspirals. The two black holes are evolved using post-Newtonian approximations starting with initially randomly distributed spin vectors. We characterize certain aspects of the distribution shortly before merger. In particular we note the uniform distribution of black hole spin vector dot products shortly before merger and a high correlation between the initial and final black hole spin vector dot products in the equal-mass, maximally spinning case. More than 300 million simulations were performed on graphics processing units, and we demonstrate a speed-up of a factor 50 over a more conventional CPU implementation. (fast track communication)

  7. Modeling process-structure-property relationships for additive manufacturing

    Science.gov (United States)

    Yan, Wentao; Lin, Stephen; Kafka, Orion L.; Yu, Cheng; Liu, Zeliang; Lian, Yanping; Wolff, Sarah; Cao, Jian; Wagner, Gregory J.; Liu, Wing Kam

    2018-02-01

    This paper presents our latest work on comprehensive modeling of process-structure-property relationships for additive manufacturing (AM) materials, including using data-mining techniques to close the cycle of design-predict-optimize. To illustrate the process-structure relationship, the multi-scale multi-physics process modeling starts from the micro-scale to establish a mechanistic heat source model, proceeds to the meso-scale models of individual powder particle evolution, and finally to the macro-scale model to simulate the fabrication process of a complex product. To link structure and properties, a high-efficiency mechanistic model, self-consistent clustering analysis, is developed to capture a variety of material responses. The model incorporates factors such as voids, phase composition, inclusions, and grain structures, which are the differentiating features of AM metals. Furthermore, we propose data-mining as an effective solution for novel rapid design and optimization, motivated by the numerous influencing factors in the AM process. We believe this paper will provide a roadmap to advance AM fundamental understanding and guide the monitoring and advanced diagnostics of AM processing.

  8. Transport phenomena in fuel cells : from microscale to macroscale

    Energy Technology Data Exchange (ETDEWEB)

    Djilali, N. [Victoria Univ., BC (Canada). Dept. of Mechanical Engineering]|[Victoria Univ., BC (Canada). Inst. for Integrated Energy Systems

    2006-07-01

    Proton Exchange Membrane (PEM) fuel cells rely on an array of thermofluid transport processes for the regulated supply of reactant gases and the removal of by-product heat and water. Flows are characterized by a broad range of length and time scales that take place in conjunction with reaction kinetics in a variety of regimes and structures. This paper examined some of the challenges related to computational fluid dynamics (CFD) modelling of PEM fuel cell transport phenomena. An overview of the main features, components and operation of PEM fuel cells was followed by a discussion of the various strategies used for component modelling of the electrolyte membrane; the gas diffusion layer; microporous layer; and flow channels. A review of integrated CFD models for PEM fuel cells included the coupling of electrochemical thermal and fluid transport with 3-D unit cell simulations; air-breathing micro-structured fuel cells; and stack level modelling. Physical models for modelling of transport at the micro-scale were also discussed. Results of the review indicated that the treatment of electrochemical reactions in a PEM fuel cell currently combines classical reaction kinetics with solutions procedures to resolve charged species transport, which may lead to thermodynamically inconsistent solutions for more complex systems. Proper representation of the surface coverage of all the chemical species at all reaction sites is needed, and secondary reactions such as platinum (Pt) dissolution and oxidation must be accounted for in order to model and understand degradation mechanisms in fuel cells. While progress has been made in CFD-based modelling of fuel cells, functional and predictive capabilities remain a challenge because of fundamental modelling and material characterization deficiencies in ionic and water transport in polymer membranes; 2-phase transport in porous gas diffusion electrodes and gas flow channels; inadequate macroscopic modelling and resolution of catalyst

  9. The impact of a lean rounding process in a pediatric intensive care unit.

    Science.gov (United States)

    Vats, Atul; Goin, Kristin H; Villarreal, Monica C; Yilmaz, Tuba; Fortenberry, James D; Keskinocak, Pinar

    2012-02-01

    Poor workflow associated with physician rounding can produce inefficiencies that decrease time for essential activities, delay clinical decisions, and reduce staff and patient satisfaction. Workflow and provider resources were not optimized when a pediatric intensive care unit increased by 22,000 square feet (to 33,000) and by nine beds (to 30). Lean methods (focusing on essential processes) and scenario analysis were used to develop and implement a patient-centric standardized rounding process, which we hypothesize would lead to improved rounding efficiency, decrease required physician resources, improve satisfaction, and enhance throughput. Human factors techniques and statistical tools were used to collect and analyze observational data for 11 rounding events before and 12 rounding events after process redesign. Actions included: 1) recording rounding events, times, and patient interactions and classifying them as essential, nonessential, or nonvalue added; 2) comparing rounding duration and time per patient to determine the impact on efficiency; 3) analyzing discharge orders for timeliness; 4) conducting staff surveys to assess improvements in communication and care coordination; and 5) analyzing customer satisfaction data to evaluate impact on patient experience. Thirty-bed pediatric intensive care unit in a children's hospital with academic affiliation. Eight attending pediatric intensivists and their physician rounding teams. Eight attending physician-led teams were observed for 11 rounding events before and 12 rounding events after implementation of a standardized lean rounding process focusing on essential processes. Total rounding time decreased significantly (157 ± 35 mins before vs. 121 ± 20 mins after), through a reduction in time spent on nonessential (53 ± 30 vs. 9 ± 6 mins) activities. The previous process required three attending physicians for an average of 157 mins (7.55 attending physician man-hours), while the new process required two

  10. The AMchip04 and the Processing Unit Prototype for the FastTracker

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L; Volpi, G

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in ATLAS events up to Phase II instantaneous luminosities (5×10^34 cm^-2 s^-1) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low-consumption units for the far future, essential to increase the efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generall...

  11. Security central processing unit applications in the protection of nuclear facilities

    International Nuclear Information System (INIS)

    Goetzke, R.E.

    1987-01-01

    New or upgraded electronic security systems protecting nuclear facilities or complexes will be heavily computer dependent. Proper planning for new systems and the employment of new state-of-the-art 32 bit processors in the processing of subsystem reports are key elements in effective security systems. The processing of subsystem reports represents only a small segment of system overhead. In selecting a security system to meet the current and future needs for nuclear security applications the central processing unit (CPU) applied in the system architecture is the critical element in system performance. New 32 bit technology eliminates the need for program overlays while providing system programmers with well documented program tools to develop effective systems to operate in all phases of nuclear security applications

  12. Test results of the signal processing and amplifier unit for the emittance measurement system

    International Nuclear Information System (INIS)

    Stawiszynski, L.; Schneider, S.

    1984-01-01

    The signal processing and amplifier unit for the emittance measurement system is the unit with which the beam current on the harp-wires and the slit is measured and converted to a digital output. Temperature effects are very critical at low currents and the purpose of the test measurements described in this report was mainly to establish the accuracy and repeatability of the measurements under the influence of temperature variations

  13. Discrete-Event Execution Alternatives on General Purpose Graphical Processing Units

    International Nuclear Information System (INIS)

    Perumalla, Kalyan S.

    2006-01-01

    Graphics cards, traditionally designed as accelerators for computer graphics, have evolved to support more general-purpose computation. General Purpose Graphical Processing Units (GPGPUs) are now being used as highly efficient, cost-effective platforms for executing certain simulation applications. While most of these applications belong to the category of time-stepped simulations, little is known about the applicability of GPGPUs to discrete event simulation (DES). Here, we identify some of the issues and challenges that the GPGPU stream-based interface raises for DES, and present some possible approaches to moving DES to GPGPUs. Initial performance results on simulation of a diffusion process show that DES-style execution on GPGPU runs faster than DES on CPU and also significantly faster than time-stepped simulations on either CPU or GPGPU.
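
    The contrast drawn here is between time-stepped advance and event-driven advance; a minimal CPU event-queue sketch of the latter (the pattern that maps awkwardly onto a stream-based GPGPU interface) is shown below, with event tags chosen purely for illustration.

    ```python
    import heapq

    def run_des(horizon):
        """Minimal discrete-event loop: time jumps from event to event, no fixed step."""
        events = [(0.0, "source")]            # priority queue of (timestamp, event tag)
        while events:
            t, tag = heapq.heappop(events)    # always process the earliest pending event
            if t > horizon:
                break
            print(f"t={t:.2f}: handling {tag}")
            if tag == "source":               # handling an event may schedule new ones
                heapq.heappush(events, (t + 1.7, "source"))
                heapq.heappush(events, (t + 0.4, "diffusion-hop"))

    run_des(horizon=5.0)
    ```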

  14. Calculation of the real states of Ignalina NPP Unit 1 and Unit 2 RBMK-1500 reactors in the verification process of QUABOX/CUBBOX code

    International Nuclear Information System (INIS)

    Bubelis, E.; Pabarcius, R.; Demcenko, M.

    2001-01-01

    Calculations of the main neutron-physical characteristics of the RBMK-1500 reactors of Ignalina NPP Unit 1 and Unit 2 were performed, taking real reactor core states as the basis for these calculations. Comparison of the calculation results obtained using the QUABOX/CUBBOX code with experimental data, and with the calculation results obtained using the STEPAN code, showed that all the main neutron-physical characteristics of the reactors of Unit 1 and Unit 2 of Ignalina NPP are within the safe deviation range of the analyzed parameters, and that the reactors of Ignalina NPP, during the process of reactor core composition change, are operated in a safe and stable manner. (author)

  15. Orthographic units in the absence of visual processing: Evidence from sublexical structure in braille.

    Science.gov (United States)

    Fischer-Baum, Simon; Englebretson, Robert

    2016-08-01

    Reading relies on the recognition of units larger than single letters and smaller than whole words. Previous research has linked sublexical structures in reading to properties of the visual system, specifically the parallel processing of letters that the visual system enables. But whether the visual system is essential for this to happen, or whether the recognition of sublexical structures may emerge by other means, is an open question. To address this question, we investigate braille, a writing system that relies exclusively on the tactile rather than the visual modality. We provide experimental evidence demonstrating that adult readers of (English) braille are sensitive to sublexical units. Contrary to prior assumptions in the braille research literature, we find strong evidence that braille readers do indeed access sublexical structure, namely the processing of multi-cell contractions as single orthographic units and the recognition of morphemes within morphologically complex words. Therefore, we conclude that the recognition of sublexical structure is not exclusively tied to the visual system. However, our findings also suggest that there are aspects of morphological processing on which braille and print readers differ, and that these differences may, crucially, be related to reading using the tactile rather than the visual sensory modality. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    Science.gov (United States)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  17. FamSeq: a variant calling program for family-based sequencing data using graphics processing units.

    Directory of Open Access Journals (Sweden)

    Gang Peng

    2014-10-01

    Full Text Available Various algorithms have been developed for variant calling using next-generation sequencing data, and various methods have been applied to reduce the associated false positive and false negative rates. Few variant calling programs, however, utilize the pedigree information when the family-based sequencing data are available. Here, we present a program, FamSeq, which reduces both false positive and false negative rates by incorporating the pedigree information from the Mendelian genetic model into variant calling. To accommodate variations in data complexity, FamSeq consists of four distinct implementations of the Mendelian genetic model: the Bayesian network algorithm, a graphics processing unit version of the Bayesian network algorithm, the Elston-Stewart algorithm and the Markov chain Monte Carlo algorithm. To make the software efficient and applicable to large families, we parallelized the Bayesian network algorithm that copes with pedigrees with inbreeding loops without losing calculation precision on an NVIDIA graphics processing unit. In order to compare the difference in the four methods, we applied FamSeq to pedigree sequencing data with family sizes that varied from 7 to 12. When there is no inbreeding loop in the pedigree, the Elston-Stewart algorithm gives analytical results in a short time. If there are inbreeding loops in the pedigree, we recommend the Bayesian network method, which provides exact answers. To improve the computing speed of the Bayesian network method, we parallelized the computation on a graphics processing unit. This allowed the Bayesian network method to process the whole genome sequencing data of a family of 12 individuals within two days, which was a 10-fold time reduction compared to the time required for this computation on a central processing unit.

  18. Modeling of yield and environmental impact categories in tea processing units based on artificial neural networks.

    Science.gov (United States)

    Khanali, Majid; Mobli, Hossein; Hosseinzadeh-Bandbafha, Homa

    2017-12-01

    In this study, an artificial neural network (ANN) model was developed for predicting the yield and life cycle environmental impacts based on the energy inputs required in the processing of black tea, green tea, and oolong tea in Guilan province of Iran. A life cycle assessment (LCA) approach was used to investigate the environmental impact categories of processed tea based on the cradle-to-gate approach, i.e., from the production of input materials to the gate of the tea processing units, i.e., packaged tea. Thus, all the tea processing operations such as withering, rolling, fermentation, drying, and packaging were considered in the analysis. The initial data were obtained from tea processing units, while the required data about the background system were extracted from the EcoInvent 2.2 database. LCA results indicated that diesel fuel and the corrugated paper box used in the drying and packaging operations, respectively, were the main hotspots. The black tea processing unit caused the highest pollution among the three processing units. Three feed-forward back-propagation ANN models, based on the Levenberg-Marquardt training algorithm, with two hidden layers of sigmoid activation functions and a linear transfer function in the output layer, were applied to the three types of processed tea. The neural networks were developed based on the energy equivalents of eight different input parameters (energy equivalents of fresh tea leaves, human labor, diesel fuel, electricity, adhesive, carton, corrugated paper box, and transportation) and 11 output parameters (yield, global warming, abiotic depletion, acidification, eutrophication, ozone layer depletion, human toxicity, freshwater aquatic ecotoxicity, marine aquatic ecotoxicity, terrestrial ecotoxicity, and photochemical oxidation). The results showed that the developed ANN models, with R² values in the range of 0.878 to 0.990, had excellent performance in predicting all the output variables based on the inputs. Energy consumption for
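
    For readers who want the shape of such a model, the sketch below builds a comparable feed-forward regressor: eight inputs, two sigmoid hidden layers, and a linear output layer. scikit-learn's MLPRegressor and the hidden-layer sizes are assumptions (scikit-learn offers L-BFGS/Adam rather than the paper's Levenberg-Marquardt training), and the data are random placeholders.

        # Stand-in for the paper's ANN: 8 energy inputs -> 11 outputs.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.random((120, 8))    # energy equivalents of the 8 inputs (placeholder)
        Y = rng.random((120, 11))   # yield + 10 environmental impact categories

        model = MLPRegressor(hidden_layer_sizes=(16, 8),  # two hidden layers, sizes assumed
                             activation="logistic",       # sigmoid activation
                             solver="lbfgs", max_iter=2000, random_state=0)
        model.fit(X, Y)                                   # output layer is linear by default
        print("training R^2:", model.score(X, Y))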

  19. Future evolution of the Fast TracKer (FTK) processing unit

    CERN Document Server

    Gentsos, C; The ATLAS collaboration; Giannetti, P; Magalotti, D; Nikolaidis, S

    2014-01-01

    The Fast Tracker (FTK) processor [1] for the ATLAS experiment has a computing core made of 128 Processing Units that reconstruct tracks in the silicon detector in a ~100 μs deep pipeline. The track parameter resolution provided by FTK enables the HLT trigger to efficiently identify and reconstruct significant samples of fermionic Higgs decays. Data processing speed is achieved with custom VLSI pattern recognition, linearized track fitting executed inside modern FPGAs, pipelining, and parallel processing. One large FPGA executes full-resolution track fitting inside low-resolution candidate tracks found by a set of 16 custom ASIC devices, called Associative Memories (AM chips) [2]. The FTK dual structure, based on the cooperation of dedicated VLSI AM and programmable FPGAs, is maintained to achieve further technological performance, miniaturization and integration of the current state-of-the-art prototypes. This makes it possible to fully exploit new applications within and outside the High Energy Physics field. We plan t...

  20. Water Use in the United States Energy System: A National Assessment and Unit Process Inventory of Water Consumption and Withdrawals.

    Science.gov (United States)

    Grubert, Emily; Sanders, Kelly T

    2018-06-05

    The United States (US) energy system is a large water user, but the nature of that use is poorly understood. To support resource comanagement and fill this noted gap in the literature, this work presents detailed estimates for US-based water consumption and withdrawals for the US energy system as of 2014, including both intensity values and the first known estimate of total water consumption and withdrawal by the US energy system. We address 126 unit processes, many of which are new additions to the literature, differentiated among 17 fuel cycles, five life cycle stages, three water source categories, and four levels of water quality. Overall coverage is about 99% of commercially traded US primary energy consumption with detailed energy flows by unit process. Energy-related water consumption, or water removed from its source and not directly returned, accounts for about 10% of both total and freshwater US water consumption. Major consumers include biofuels (via irrigation), oil (via deep well injection, usually of nonfreshwater), and hydropower (via evaporation and seepage). The US energy system also accounts for about 40% of both total and freshwater US water withdrawals, i.e., water removed from its source regardless of fate. About 70% of withdrawals are associated with the once-through cooling systems of approximately 300 steam cycle power plants that produce about 25% of US electricity.
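
    The roll-up arithmetic behind such an inventory is straightforward to sketch: multiply each unit process's energy flow by its water intensity and sum. The process names and figures below are hypothetical placeholders, not values from the paper.

        # Aggregate unit-process water consumption to a system total.
        unit_processes = [
            # (name, energy flow in EJ/yr, consumption intensity in m^3/GJ) -- illustrative
            ("crop irrigation (biofuel)", 1.2, 25.0),
            ("deep well injection (oil)", 18.0, 0.5),
            ("once-through cooling (steam plant)", 9.0, 0.1),
        ]

        total_km3 = sum(
            flow_ej * 1e9 * intensity / 1e9  # EJ -> GJ gives m^3; 1e9 m^3 = 1 km^3
            for _, flow_ej, intensity in unit_processes
        )
        print(f"total water consumption: {total_km3:.1f} km^3/yr")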

  1. Use of a tangential filtration unit for processing liquid waste from nuclear laundries

    International Nuclear Information System (INIS)

    Augustin, X.; Buzonniere, A. de; Barnier, H.

    1993-01-01

    Nuclear laundries produce large quantities of weakly contaminated effluents charged with insoluble and soluble products. In collaboration with the CEA, TECHNICATOME has developed an ultrafiltration process for liquid waste from nuclear laundries, associated with prior insolubilization of the radiochemical activity. This 'seeded ultrafiltration' process is based on the use of decloggable mineral filter media and combines very high separation efficiency with long membrane life. The efficiency of the tangential filtration unit, which has been processing effluents from the Cadarache Nuclear Research Center (CEA-France) nuclear laundry since mid-1988, has been confirmed on several sites.

  2. Miniaturized Power Processing Unit Study: A Cubesat Electric Propulsion Technology Enabler Project

    Science.gov (United States)

    Ghassemieh, Shakib M.

    2014-01-01

    This study evaluates High Voltage Power Processing Unit (PPU) technology and the driving requirements necessary to enable Microfluidic Electric Propulsion technology research and development by NASA and university partners. This study provides an overview of state-of-the-art PPU technology, with recommendations for technology demonstration projects and missions for NASA to pursue.

  3. Graphics Processing Unit Accelerated Hirsch-Fye Quantum Monte Carlo

    Science.gov (United States)

    Moore, Conrad; Abu Asal, Sameer; Rajagoplan, Kaushik; Poliakoff, David; Caprino, Joseph; Tomko, Karen; Thakur, Bhupender; Yang, Shuxiang; Moreno, Juana; Jarrell, Mark

    2012-02-01

    In Dynamical Mean Field Theory and its cluster extensions, such as the Dynamic Cluster Algorithm, the bottleneck of the algorithm is solving the self-consistency equations with an impurity solver. Hirsch-Fye Quantum Monte Carlo is one of the most commonly used impurity and cluster solvers. This work implements optimizations of the algorithm, such as enabling large data re-use, suitable for the Graphics Processing Unit (GPU) architecture. The GPU's sheer number of concurrent parallel computations and its large bandwidth to many shared memories exploit the inherent parallelism in the Green function update and measurement routines, and can substantially improve the efficiency of the Hirsch-Fye impurity solver.

  4. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to a traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1 000x1 000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from the norm when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
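
    Whatever the backend, the core of LSA is a truncated SVD of the term-document matrix. The sketch below uses NumPy on random placeholder data to show the computation that the CUBLAS-based GPU path accelerates.

        # Rank-k LSA reconstruction of a term-document matrix.
        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.random((1000, 1000))          # term-document matrix (placeholder)

        U, s, Vt = np.linalg.svd(A, full_matrices=False)

        k = 100                               # latent dimensions retained
        A_k = (U[:, :k] * s[:k]) @ Vt[:k]     # best rank-k approximation

        print("relative error:", np.linalg.norm(A - A_k) / np.linalg.norm(A))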

  5. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    Science.gov (United States)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implementing real-time signal processing algorithms for general surveillance radar based on NVIDIA graphics processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as the CUDA basic linear algebra subroutines and the CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for NVIDIA GPUs. For more advanced adaptive processing algorithms, such as adaptive pulse compression, customized kernel optimization is needed and is investigated here. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing acceleration. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense-and-avoid radar, and aerospace surveillance radar.
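
    The non-adaptive baseline these GPU libraries accelerate is frequency-domain matched filtering. The sketch below compresses a synthetic linear-FM echo with NumPy FFTs; the chirp parameters and noise level are illustrative assumptions.

        # Pulse compression = correlate the received signal with the reference chirp.
        import numpy as np

        fs, T, B = 1e6, 1e-3, 100e3                   # sample rate, pulse width, bandwidth
        t = np.arange(int(fs * T)) / fs
        chirp = np.exp(1j * np.pi * (B / T) * t**2)   # linear-FM reference pulse

        rx = np.zeros(4096, dtype=complex)            # synthetic received signal
        rx[500:500 + chirp.size] += 0.5 * chirp       # echo delayed by 500 samples
        rx += 0.01 * (np.random.randn(rx.size) + 1j * np.random.randn(rx.size))

        H = np.conj(np.fft.fft(chirp, rx.size))       # matched filter spectrum
        compressed = np.fft.ifft(np.fft.fft(rx) * H)
        print("peak at sample", int(np.argmax(np.abs(compressed))))  # ~500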

  6. Coal conversion process by the United Power Plants of Westphalia

    Energy Technology Data Exchange (ETDEWEB)

    1974-08-01

    The coal conversion process used by the United Power Plants of Westphalia and its possible applications are described. In this process, the crushed and predried coal is degassed and partly gasified in a gas generator, during which time the sulfur present in the coal is converted into hydrogen sulfide, which together with the carbon dioxide is subsequently washed out and possibly utilized or marketed. The residual coke together with the ashes and tar is then sent to the melting chamber of the steam generator where the ashes are removed. After desulfurization, the purified gas is fed into an external circuit and/or to a gas turbine for electricity generation. The raw gas from the gas generator can be directly used as fuel in a conventional power plant. The calorific value of the purified gas varies from 3200 to 3500 kcal/cu m. The purified gas can be used as reducing agent, heating gas, as raw material for various chemical processes, or be conveyed via pipelines to remote areas for electricity generation. The conversion process has the advantages of increased economy of electricity generation with desulfurization, of additional gas generation, and, in long-term prospects, of the use of the waste heat from high-temperature nuclear reactors for this process.

  7. Nuclear safety inspection in treatment process for SG heat exchange tubes deficiency of unit 1, TNPS

    International Nuclear Information System (INIS)

    Zhang Chunming; Song Chenxiu; Zhao Pengyu; Hou Wei

    2006-01-01

    This paper describes the treatment process for the steam generator (SG) heat-exchange tube deficiency at Unit 1 of TNPS, the nuclear safety inspection performed by the Northern Regional Office during the treatment of the deficiency, and the further inspection performed after the deficiency had been treated. (authors)

  8. Ultra-processed food consumption in children from a Basic Health Unit.

    Science.gov (United States)

    Sparrenberger, Karen; Friedrich, Roberta Roggia; Schiffner, Mariana Dihl; Schuch, Ilaine; Wagner, Mário Bernardes

    2015-01-01

    To evaluate the contribution of ultra-processed food (UPF) to the dietary consumption of children treated at a Basic Health Unit and the associated factors. Cross-sectional study carried out with a convenience sample of 204 children, aged 2-10 years old, in Southern Brazil. Children's food intake was assessed using a 24-h recall questionnaire. Food items were classified as minimally processed, processed for culinary use, and ultra-processed. A semi-structured questionnaire was applied to collect socio-demographic and anthropometric variables. Overweight in children was classified using a Z score >2 for children younger than 5 and a Z score >+1 for those aged between 5 and 10 years, using the body mass index for age. Overweight frequency was 34% (95% CI: 28-41%). Mean energy consumption was 1672.3 kcal/day, with 47% (95% CI: 45-49%) coming from ultra-processed food. In the multiple linear regression model, maternal education (r=0.23; p=0.001) and child age (r=0.40; p<0.001) were associated with UPF consumption. Copyright © Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.

  9. 40 CFR Appendix XIII to Part 266 - Mercury Bearing Wastes That May Be Processed in Exempt Mercury Recovery Units

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 26 2010-07-01 2010-07-01 false Mercury Bearing Wastes That May Be Processed in Exempt Mercury Recovery Units XIII Appendix XIII to Part 266 Protection of Environment... XIII to Part 266—Mercury Bearing Wastes That May Be Processed in Exempt Mercury Recovery Units These...

  10. Advanced spent fuel processing technologies for the United States GNEP programme

    International Nuclear Information System (INIS)

    Laidler, J.J.

    2007-01-01

    Spent fuel processing technologies for future advanced nuclear fuel cycles are being developed under the scope of the Global Nuclear Energy Partnership (GNEP). This effort seeks to make available for future deployment a fissile material recycling system that does not involve the separation of pure plutonium from spent fuel. In the nuclear system proposed by the United States under the GNEP initiative, light water reactor spent fuel is treated by means of a solvent extraction process that involves a group extraction of transuranic elements. The recovered transuranics are recycled as fuel material for advanced burner reactors, which can lead in the long term to fast reactors with conversion ratios greater than unity, helping to assure the sustainability of nuclear power systems. Both aqueous and pyrochemical methods are being considered for fast reactor spent fuel processing in the current US development programme. (author)

  11. Congestion estimation technique in the optical network unit registration process.

    Science.gov (United States)

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.
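
    The trade-off the congestion estimate feeds can be seen in a toy slotted model: if N unregistered ONUs each answer in one of W discrete slots, an ONU registers only when no other ONU picks its slot. The slotting and the numbers below are illustrative assumptions, not the paper's scheme.

        # Expected collision-free fraction versus quiet-window size.
        def expected_success_ratio(n_onus, window_slots):
            """P(a given ONU's slot is chosen by no other ONU)."""
            return (1.0 - 1.0 / window_slots) ** (n_onus - 1)

        for w in (8, 16, 32, 64):
            ratio = expected_success_ratio(n_onus=16, window_slots=w)
            print(f"window = {w:3d} slots -> expected success ratio {ratio:.2f}")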

  12. Using Systems Theory to Examine Patient and Nurse Structures, Processes, and Outcomes in Centralized and Decentralized Units.

    Science.gov (United States)

    Real, Kevin; Fay, Lindsey; Isaacs, Kathy; Carll-White, Allison; Schadler, Aric

    2018-01-01

    This study utilizes systems theory to understand how changes to physical design structures impact communication processes and patient and staff design-related outcomes. Many scholars and researchers have noted the importance of communication and teamwork for patient care quality. Few studies have examined changes to nursing station design within a systems theory framework. This study employed a multimethod, before-and-after, quasi-experimental research design. Nurses completed surveys in centralized units and later in decentralized units (N = 26 pre, N = 51 post). Patients completed surveys (N = 62 pre) in centralized units and later in decentralized units (N = 49 post). Surveys included quantitative measures and qualitative open-ended responses. Patients preferred the decentralized units because of larger single-occupancy rooms, greater privacy/confidentiality, and overall satisfaction with design. Nurses had a more complex response. Nurses approved of the patient rooms, unit environment, and noise levels in decentralized units. However, they reported reduced access to support spaces, lower levels of team/mentoring communication, and less satisfaction with design than in centralized units. Qualitative findings supported these results. Nurses were more positive about centralized units and patients were more positive toward decentralized units. The results of this study suggest a need to understand how system components operate in concert. A major contribution of this study is the inclusion of patient satisfaction with design, an important yet overlooked factor in patient satisfaction. Healthcare design researchers and practitioners may consider how changing system interdependencies can lead to unexpected changes to communication processes and system outcomes in complex systems.

  13. All-optical quantum computing with a hybrid solid-state processing unit

    International Nuclear Information System (INIS)

    Pei Pei; Zhang Fengyang; Li Chong; Song Heshan

    2011-01-01

    We develop an architecture of a hybrid quantum solid-state processing unit for universal quantum computing. The architecture allows distant and nonidentical solid-state qubits in distinct physical systems to interact and work collaboratively. All the quantum computing procedures are controlled by optical methods using classical fields and cavity QED. Our methods have the prominent advantage of insensitivity to dissipation, benefiting from the virtual excitation of the subsystems. Moreover, quantum nondemolition measurements and state transfer for the solid-state qubits are proposed. The architecture opens promising perspectives for implementing scalable quantum computation in the broader sense that different solid-state systems can merge and be integrated into one quantum processor afterward.

  14. [Variations in the diagnostic confirmation process between breast cancer mass screening units].

    Science.gov (United States)

    Natal, Carmen; Fernández-Somoano, Ana; Torá-Rocamora, Isabel; Tardón, Adonina; Castells, Xavier

    2016-01-01

    To analyse variations in the diagnostic confirmation process between screening units, variations in the outcome of each episode and the relationship between the use of the different diagnostic confirmation tests and the lesion detection rate. Observational study of variability of the standardised use of diagnostic and lesion detection tests in 34 breast cancer mass screening units participating in early-detection programmes in three Spanish regions from 2002-2011. The diagnostic test variation ratio in percentiles 25-75 ranged from 1.68 (further appointments) to 3.39 (fine-needle aspiration). The variation ratios in detection rates of benign lesions, ductal carcinoma in situ and invasive cancer were 2.79, 1.99 and 1.36, respectively. A positive relationship between rates of testing and detection rates was found with fine-needle aspiration-benign lesions (R²: 0.53), fine-needle aspiration-invasive carcinoma (R²: 0.28), core biopsy-benign lesions (R²: 0.64), core biopsy-ductal carcinoma in situ (R²: 0.61) and core biopsy-invasive carcinoma (R²: 0.48). Variation in the use of invasive tests between the breast cancer screening units participating in early-detection programmes was found to be significantly higher than variation in lesion detection. Units which conducted more fine-needle aspiration tests had higher benign lesion detection rates, while units that conducted more core biopsies detected more benign lesions and cancer. Copyright © 2016 SESPAS. Published by Elsevier España. All rights reserved.

  15. Performance Recognition for Sulphur Flotation Process Based on Froth Texture Unit Distribution

    Directory of Open Access Journals (Sweden)

    Mingfang He

    2013-01-01

    Full Text Available As an important indicator of flotation performance, froth texture is believed to be related to the operational condition in the sulphur flotation process. A novel fault detection method based on froth texture unit distribution (TUD) is proposed to recognize the fault condition of sulphur flotation in real time. The froth texture unit number is calculated based on the texture spectrum, and the probability density function (PDF) of the froth texture unit number is defined as the texture unit distribution, which can describe the actual textural features more accurately than the grey-level dependence matrix approach. As the type of the froth TUD is unknown, a nonparametric kernel estimation method based on a fixed kernel basis is proposed, which overcomes the difficulty that different TUDs obtained under various conditions cannot be compared when the traditional varying kernel basis is used. By transforming the nonparametric description into dynamic kernel weight vectors, a principal component analysis (PCA) model is established to reduce the dimensionality of the vectors. Then a threshold criterion determined by the TQ statistic based on the PCA model is proposed to realize the performance recognition. The industrial application results show that accurate performance recognition of froth flotation can be achieved by using the proposed method.
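
    The monitoring step can be pictured with a minimal PCA residual test: project each kernel-weight vector onto the retained principal components and flag samples whose residual statistic exceeds a training-set limit. This sketch uses a Q-type (squared prediction error) statistic with an empirical percentile threshold as a stand-in for the paper's statistic, and random placeholder data.

        # PCA-based fault flagging on kernel weight vectors (illustrative).
        import numpy as np

        rng = np.random.default_rng(2)
        X_normal = rng.normal(size=(200, 20))    # training vectors under normal operation

        mu = X_normal.mean(axis=0)
        _, _, Vt = np.linalg.svd(X_normal - mu, full_matrices=False)
        P = Vt[:5].T                             # retain 5 principal components

        def q_statistic(x):
            r = (x - mu) - P @ (P.T @ (x - mu))  # residual outside the PCA subspace
            return float(r @ r)

        q_limit = np.percentile([q_statistic(x) for x in X_normal], 99)
        x_new = rng.normal(size=20) + 3.0        # shifted sample mimicking a fault
        print("fault" if q_statistic(x_new) > q_limit else "normal")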

  16. Silicon Carbide (SiC) Power Processing Unit (PPU) for Hall Effect Thrusters, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In this SBIR project, APEI, Inc. is proposing to develop a high efficiency, rad-hard 3.8 kW silicon carbide (SiC) Power Processing Unit (PPU) for Hall Effect...

  17. Co-occurrence of Photochemical and Microbiological Transformation Processes in Open-Water Unit Process Wetlands.

    Science.gov (United States)

    Prasse, Carsten; Wenk, Jannis; Jasper, Justin T; Ternes, Thomas A; Sedlak, David L

    2015-12-15

    The fate of anthropogenic trace organic contaminants in surface waters can be complex due to the occurrence of multiple parallel and consecutive transformation processes. In this study, the removal of five antiviral drugs (abacavir, acyclovir, emtricitabine, lamivudine and zidovudine) via both bio- and phototransformation processes, was investigated in laboratory microcosm experiments simulating an open-water unit process wetland receiving municipal wastewater effluent. Phototransformation was the main removal mechanism for abacavir, zidovudine, and emtricitabine, with half-lives (t1/2,photo) in wetland water of 1.6, 7.6, and 25 h, respectively. In contrast, removal of acyclovir and lamivudine was mainly attributable to slower microbial processes (t1/2,bio = 74 and 120 h, respectively). Identification of transformation products revealed that bio- and phototransformation reactions took place at different moieties. For abacavir and zidovudine, rapid transformation was attributable to high reactivity of the cyclopropylamine and azido moieties, respectively. Despite substantial differences in kinetics of different antiviral drugs, biotransformation reactions mainly involved oxidation of hydroxyl groups to the corresponding carboxylic acids. Phototransformation rates of parent antiviral drugs and their biotransformation products were similar, indicating that prior exposure to microorganisms (e.g., in a wastewater treatment plant or a vegetated wetland) would not affect the rate of transformation of the part of the molecule susceptible to phototransformation. However, phototransformation strongly affected the rates of biotransformation of the hydroxyl groups, which in some cases resulted in greater persistence of phototransformation products.
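
    The kinetics arithmetic implied by these half-lives is first order: k = ln 2 / t1/2, and parallel pathways add their rate constants. The sketch below uses the abstract's photolysis half-life for abacavir plus a hypothetical parallel biotransformation half-life to show how a combined half-life follows.

        # First-order decay bookkeeping for parallel removal pathways.
        import math

        def k_from_half_life(t_half_h):
            """Rate constant (1/h) from a half-life (h): k = ln 2 / t_half."""
            return math.log(2) / t_half_h

        k_photo = k_from_half_life(1.6)    # abacavir phototransformation (abstract)
        k_bio = k_from_half_life(100.0)    # hypothetical parallel biotransformation

        t_half_total = math.log(2) / (k_photo + k_bio)      # pathways add: k_tot = k1 + k2
        print(f"combined half-life: {t_half_total:.2f} h")  # ~1.57 h, photolysis dominates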

  18. Sodium content of popular commercially processed and restaurant foods in the United States

    Science.gov (United States)

    Nutrient Data Laboratory (NDL) of the U.S. Department of Agriculture (USDA) in close collaboration with U.S. Center for Disease Control and Prevention is monitoring the sodium content of commercially processed and restaurant foods in the United States. The main purpose of this manuscript is to prov...

  19. Integration of Satellite, Global Reanalysis Data and Macroscale Hydrological Model for Drought Assessment in Sub-Tropical Region of India

    Science.gov (United States)

    Pandey, V.; Srivastava, P. K.

    2018-04-01

    Change in soil moisture regime is highly relevant for agricultural drought, which can be best analyzed in terms of the Soil Moisture Deficit Index (SMDI). A macroscale hydrological model, Variable Infiltration Capacity (VIC), was used to simulate the hydro-climatological fluxes, including evapotranspiration, runoff, and soil moisture storage, to reconstruct the severity and duration of agricultural drought over a semi-arid region of India. The simulations in VIC were performed at 0.25° spatial resolution by using a set of meteorological forcing data, soil parameters, and Land Use Land Cover (LULC) and vegetation parameters. For calibration and validation, soil parameters were obtained from the National Bureau of Soil Survey and Land Use Planning (NBSSLUP) and ESA's Climate Change Initiative soil moisture (CCI-SM) data were used, respectively. The analysis of the results demonstrates that most of the study region (> 80%), especially the central-northern part, is affected by drought conditions. The years 2001, 2002, 2007, 2008 and 2009 were highly affected by agricultural drought. Owing to high average and maximum temperatures, we observed higher soil evaporation, which significantly reduces the surface soil moisture; in addition, high topographic variation, coarse soil texture and moderate-to-high wind speeds enhanced the drying of the upper soil layer, producing strongly negative SMDI values over the study area. These findings can also serve as a template in terms of daily time-step data, simulation period lengths, the various hydro-climatological outputs, and the use of a suitable hydrological model.

  20. INTEGRATION OF SATELLITE, GLOBAL REANALYSIS DATA AND MACROSCALE HYDROLOGICAL MODEL FOR DROUGHT ASSESSMENT IN SUB-TROPICAL REGION OF INDIA

    Directory of Open Access Journals (Sweden)

    V. Pandey

    2018-04-01

    Full Text Available Change in soil moisture regime is highly relevant for agricultural drought, which can be best analyzed in terms of the Soil Moisture Deficit Index (SMDI). A macroscale hydrological model, Variable Infiltration Capacity (VIC), was used to simulate the hydro-climatological fluxes, including evapotranspiration, runoff, and soil moisture storage, to reconstruct the severity and duration of agricultural drought over a semi-arid region of India. The simulations in VIC were performed at 0.25° spatial resolution by using a set of meteorological forcing data, soil parameters, and Land Use Land Cover (LULC) and vegetation parameters. For calibration and validation, soil parameters were obtained from the National Bureau of Soil Survey and Land Use Planning (NBSSLUP) and ESA's Climate Change Initiative soil moisture (CCI-SM) data were used, respectively. The analysis of the results demonstrates that most of the study region (> 80%), especially the central-northern part, is affected by drought conditions. The years 2001, 2002, 2007, 2008 and 2009 were highly affected by agricultural drought. Owing to high average and maximum temperatures, we observed higher soil evaporation, which significantly reduces the surface soil moisture; in addition, high topographic variation, coarse soil texture and moderate-to-high wind speeds enhanced the drying of the upper soil layer, producing strongly negative SMDI values over the study area. These findings can also serve as a template in terms of daily time-step data, simulation period lengths, the various hydro-climatological outputs, and the use of a suitable hydrological model.
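
    For concreteness, one commonly used SMDI formulation (Narasimhan and Srinivasan, 2005) computes a soil-water deficit relative to the long-term median, minimum and maximum, then accumulates it recursively. The sketch below implements that formulation on placeholder numbers; the paper's exact VIC-based implementation may differ.

        # Soil Moisture Deficit Index, one common formulation (illustrative).
        import numpy as np

        def smdi(sw, sw_median, sw_min, sw_max):
            """sw: soil-moisture series; climatology stats are long-term values."""
            sd = np.where(
                sw <= sw_median,
                (sw - sw_median) / (sw_median - sw_min) * 100.0,  # deficit (negative)
                (sw - sw_median) / (sw_max - sw_median) * 100.0,  # surplus (positive)
            )
            out = np.empty_like(sd)
            prev = 0.0
            for i, sd_i in enumerate(sd):
                prev = 0.5 * prev + sd_i / 50.0   # recursive accumulation over time steps
                out[i] = prev
            return out  # roughly -4 (extreme drought) to +4 (extremely wet)

        print(smdi(np.array([22.0, 18.0, 15.0]), sw_median=25.0, sw_min=10.0, sw_max=40.0))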

  1. Steady electrodiffusion in hydrogel-colloid composites: macroscale properties from microscale electrokinetics

    Directory of Open Access Journals (Sweden)

    Reghan J. Hill

    2010-03-01

    Full Text Available A rigorous microscale electrokinetic model for hydrogel-colloid composites is adopted to compute macroscale profiles of electrolyte concentration, electrostatic potential, and hydrostatic pressure across membranes that separate electrolytes with different concentrations. The membranes are uncharged polymeric hydrogels in which charged spherical colloidal particles are immobilized and randomly dispersed with a low solid volume fraction. Bulk membrane characteristics and performance are calculated from a continuum microscale electrokinetic model (Hill 2006b, c). The computations undertaken in this paper quantify the streaming and membrane potentials. For the membrane potential, increasing the volume fraction of negatively charged inclusions decreases the differential electrostatic potential across the membrane under conditions where there is zero convective flow and zero electrical current. With low electrolyte concentration and highly charged nanoparticles, the membrane potential is very sensitive to the particle volume fraction. Accordingly, the membrane potential - and changes brought about by the inclusion size, charge and concentration - could be a useful experimental diagnostic to complement more recent applications of the microscale electrokinetic model for electrical microrheology and electroacoustics (Hill and Ostoja-Starzewski 2008, Wang and Hill 2008).

  2. Silicon Carbide (SiC) Power Processing Unit (PPU) for Hall Effect Thrusters, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — In this SBIR project, APEI, Inc. is proposing to develop a high efficiency, rad-hard 3.8 kW silicon carbide (SiC) power supply for the Power Processing Unit (PPU) of...

  3. HTS current lead units prepared by the TFA-MOD processed YBCO coated conductors

    International Nuclear Information System (INIS)

    Shiohara, K.; Sakai, S.; Ishii, Y.; Yamada, Y.; Tachikawa, K.; Koizumi, T.; Aoki, Y.; Hasegawa, T.; Tamura, H.; Mito, T.

    2010-01-01

    Two superconducting current lead units have been prepared using ten coated conductors of the Tri-Fluoro-Acetate Metal Organic Deposition (TFA-MOD) processed Y1Ba2Cu3O7-δ (YBCO) type, with a critical current (Ic) of about 170 A at 77 K in self-field. The coated conductors are 5 mm in width, 190 mm in length and about 120 μm in overall thickness. The 1.5 μm thick superconducting YBCO layer was synthesized through the TFA-MOD process on Hastelloy C-276 substrate tape with two buffer oxide layers of Gd2Zr2O7 and CeO2. Five YBCO coated conductors are attached on a 1 mm thick Glass Fiber Reinforced Plastic (GFRP) board and soldered to Cu caps at both ends. We prepared two 500 A-class current lead units. A DC transport current of 800 A was stably applied at 77 K without any voltage generation in all coated conductors. The voltage between the two Cu caps increased linearly with increasing applied current, and was about 350 μV at 500 A in both current lead units. According to the estimated values of the heat leakage from 77 K to 4.2 K, the heat leakage for a current lead unit was 46.5 mW. We successfully reduced the heat leakage thanks to the improved transport current performance (Ic), a thinner Ag layer in the YBCO coated conductor, and the use of the GFRP board for reinforcement instead of the stainless steel board used in the previous study. A DC transport current of 1400 A was stably applied when the two current lead units were joined in parallel. The sum of the heat leakages from 77 K to 4.2 K for the combined current lead units was 93 mW. In comparison with conventional gas-cooled Cu current leads, it may be noted that the heat leakage of this current lead is about one order of magnitude smaller than that of the Cu current lead.

  4. Grey water treatment by a continuous process of an electrocoagulation unit and a submerged membrane bioreactor system

    KAUST Repository

    Bani-Melhem, Khalid; Smith, Edward

    2012-01-01

    This paper presents the performance of an integrated process consisting of an electro-coagulation (EC) unit and a submerged membrane bioreactor (SMBR) technology for grey water treatment. For comparison purposes, another SMBR process without

  5. Development of diagnostic process for abnormal conditions of Ulchin units 1 and 2

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hyun Soo; Kwak, Jeong Keun; Yun, Jung Hyun; Kim, Jong Hyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2012-10-15

    Diagnosis of abnormal conditions during operation is one of the difficult tasks for nuclear power plant operators. Operators may have trouble handling abnormal conditions for various reasons, such as (1) the large number of alarms (around 2,000 in each of Ulchin units 1 and 2) and the occurrence of multiple simultaneous alarms, (2) the occurrence of the same alarms in different abnormal conditions, and (3) the large number of Abnormal Operating Procedures (AOPs). For these reasons, the first diagnosis of abnormal conditions largely relies on the operator's experience and pattern recognition, a difficulty that is amplified for inexperienced operators. This paper suggests an approach to developing an optimal diagnostic process for the appropriate selection of AOPs by using the Elimination by Aspects (EBA) method. The EBA method is a heuristic followed by decision makers during a process of sequential choice, and it constitutes a good balance between the cost of a decision and its quality. At each stage of the decision, the individual eliminates all options not having a given expected attribute, until only one option remains. This approach is applied to the steam generator level control system abnormal procedure for Ulchin units 1 and 2. The result indicates that the EBA method is applicable to the development of an optimal process for the diagnosis of abnormal conditions.
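
    The EBA heuristic itself is compact enough to sketch: walk through the aspects (e.g., observed alarms) in order of importance and keep only the procedures exhibiting each one, until a single AOP remains. The procedure names and aspects below are hypothetical, not taken from the Ulchin procedures.

        # Elimination by Aspects over candidate abnormal operating procedures.
        def eba_select(options, aspects):
            """options: {name: set of aspects}; aspects: ordered by importance."""
            remaining = dict(options)
            for aspect in aspects:
                filtered = {k: v for k, v in remaining.items() if aspect in v}
                if filtered:               # skip an aspect that would eliminate everything
                    remaining = filtered
                if len(remaining) == 1:
                    break
            return list(remaining)

        aops = {
            "AOP-SG-level":  {"SG level low", "feedwater flow deviation"},
            "AOP-condenser": {"condenser vacuum low"},
            "AOP-feedwater": {"feedwater flow deviation", "pump trip alarm"},
        }
        print(eba_select(aops, ["feedwater flow deviation", "SG level low"]))
        # -> ['AOP-SG-level']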

  6. Graphics Processing Unit Enhanced Parallel Document Flocking Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; ST Charles, Jesse Lee [ORNL

    2010-01-01

    Analyzing and clustering documents is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. One limitation of this method of document clustering is its complexity, O(n²). As the number of documents grows, it becomes increasingly difficult to generate results in a reasonable amount of time. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. In this paper, we have conducted research to exploit this architecture and apply its strengths to the flocking-based document clustering problem. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce GPU. Performance gains ranged from thirty-six to nearly sixty times improvement of the GPU over the CPU implementation.

  7. Comparison of ultrafiltration and dissolved air flotation efficiencies in industrial units during the papermaking process

    OpenAIRE

    Monte Lara, Concepción; Ordóñez Sanz, Ruth; Hermosilla Redondo, Daphne; Sánchez González, Mónica; Blanco Suárez, Ángeles

    2011-01-01

    The efficiency of an ultrafiltration unit has been studied and compared with a dissolved air flotation system to obtain water of suitable quality for reuse in the process. The study was done at a paper mill producing lightweight coated paper and newsprint from 100% recovered paper. Efficiency was analysed in terms of removal of turbidity, cationic demand, total and dissolved chemical oxygen demand, hardness, sulphates and microstickies. Moreover, the performance of the ultrafiltration unit an...

  8. Exploring the decision-making process in the delivery of physiotherapy in a stroke unit.

    Science.gov (United States)

    McGlinchey, Mark P; Davenport, Sally

    2015-01-01

    The aim of this study was to explore the decision-making process in the delivery of physiotherapy in a stroke unit. A focused ethnographical approach involving semi-structured interviews and observations of clinical practice was used. A purposive sample of seven neurophysiotherapists and four patients participated in semi-structured interviews. From this group, three neurophysiotherapists and four patients were involved in observation of practice. Data from interviews and observations were analysed to generate themes. Three themes were identified: planning the ideal physiotherapy delivery, the reality of physiotherapy delivery and involvement in the decision-making process. Physiotherapists used a variety of clinical reasoning strategies and considered many factors to influence their decision-making in the planning and delivery of physiotherapy post-stroke. These factors included the therapist's clinical experience, patient's presentation and response to therapy, prioritisation, organisational constraints and compliance with organisational practice. All physiotherapists highlighted the importance to involve patients in planning and delivering their physiotherapy. However, there were varying levels of patient involvement observed in this process. The study has generated insight into the reality of decision-making in the planning and delivery of physiotherapy post-stroke. Further research involving other stroke units is required to gain a greater understanding of this aspect of physiotherapy. Implications for Rehabilitation Physiotherapists need to consider multiple patient, therapist and organisational factors when planning and delivering physiotherapy in a stroke unit. Physiotherapists should continually reflect upon how they provide physiotherapy, with respect to the duration, frequency and time of day sessions are delivered, in order to guide current and future physiotherapy delivery. As patients may demonstrate varying levels of participation in deciding and

  9. The ATLAS Fast Tracker Processing Units - track finding and fitting

    CERN Document Server

    The ATLAS collaboration; Alison, John; Ancu, Lucian Stefan; Andreani, Alessandro; Annovi, Alberto; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Bogdan, Mircea Arghir; Bryant, Patrick; Calabro, Domenico; Citraro, Saverio; Crescioli, Francesco; Dell'Orso, Mauro; Donati, Simone; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna; Greco, Virginia; Horyn, Lesya Anna; Iovene, Alessandro; Kalaitzidis, Panagiotis; Kim, Young-Kee; Kimura, Naoki; Kordas, Kostantinos; Kubota, Takashi; Lanza, Agostino; Liberali, Valentino; Luciano, Pierluigi; Magnin, Betty; Sakellariou, Andreas; Sampsonidis, Dimitrios; Saxon, James; Shojaii, Seyed Ruhollah; Sotiropoulou, Calliope Louisa; Stabile, Alberto; Swiatlowski, Maximilian; Volpi, Guido; Zou, Rui; Shochet, Mel

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction by the time the High Level Trigger starts. The Fast Tracker can process incoming data from the whole inner detector at the full first level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first set comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card, while the second set is the Second Stage board. The associative memories perform the pattern matching, looking for correlations within the incoming data compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application specific integrated circuits, called associative memory chips. The Auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a...

  10. Design of Biochemical Oxidation Process Engineering Unit for Treatment of Organic Radioactive Liquid Waste

    International Nuclear Information System (INIS)

    Zainus Salimin; Endang Nuraeni; Mirawaty; Tarigan, Cerdas

    2010-01-01

    Organic radioactive liquid waste from the nuclear industry consists of detergent waste from nuclear laundries, 30% TBP-kerosene solvent waste from the purification or recovery of uranium from process failures of nuclear fuel fabrication, and solvent waste containing D2EHPA, TOPO, and kerosene from the purification of phosphoric acid. The waste is hazardous and toxic, having low pH, high COD and BOD, and also low radioactivity. Biochemical oxidation is an effective method for detoxification of the organic waste and decontamination of radionuclides by biosorption. The products of the process are sludges and a non-radioactive supernatant. The existing radioactive waste treatment facilities in Serpong cannot be used for treatment of these organic wastes. A biochemical oxidation process engineering unit for continuous treatment of organic radioactive liquid waste with a capacity of 1.6 L/h has been designed and constructed. The equipment of the process unit consists of a storage tank of 100 L capacity for nutrient solution, two storage tanks of 100 L capacity each for liquid waste, an oxidation reactor of 120 L, a settling tank of 50 L capacity, a storage tank of 55 L capacity for sludge, and a storage tank of 50 L capacity for supernatant. The solution in reactor R-01 is dosed with bacteria and nutrients and aerated using two different aerators until biochemical oxidation occurs. The sludge from reactor R-01 is recirculated to settling tank R-02; in reverse operation the biological sludge settles and the supernatant overflows. (author)

  11. From bentonite powder to engineered barrier units - an industrial process

    International Nuclear Information System (INIS)

    Gatabin, Claude; Guyot, Jean-Luc; Resnikow, Serge; Bosgiraud, Jean-Michel; Londe, Louis; Seidler, Wolf

    2008-01-01

    In the framework of the ESDRED Project, a consortium, called GME, dealt with the study and development of all required industrial processes for the fabrication of scale-1 buffer rings and discs, as well as all related means for transporting and handling the rings, the assembly in 4-unit sets, the packaging of buffer-ring assemblies, and all associated procedures. In 2006, a 100-t mould was built in order to compact in a few hours 12 rings and two discs measuring 2.3 m in diameter and 0.5 m in height, and weighing 4 t each. The ring-handling, assembly and transport means were tested successfully in 2007. (author)

  12. 78 FR 1260 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2013-01-08

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: Prevailing Wage Rates for Certain Occupations Processed Under H-2A Special Procedures AGENCY: Employment and Training Administration, Labor...

  13. Monte Carlo MP2 on Many Graphical Processing Units.

    Science.gov (United States)

    Doran, Alexander E; Hirata, So

    2016-10-11

    In the Monte Carlo second-order many-body perturbation (MC-MP2) method, the long sum-of-product matrix expression of the MP2 energy, whose literal evaluation may be poorly scalable, is recast into a single high-dimensional integral of functions of electron pair coordinates, which is evaluated by the scalable method of Monte Carlo integration. The sampling efficiency is further accelerated by the redundant-walker algorithm, which allows a maximal reuse of electron pairs. Here, a multitude of graphical processing units (GPUs) offers a uniquely ideal platform to expose multilevel parallelism: fine-grain data-parallelism for the redundant-walker algorithm in which millions of threads compute and share orbital amplitudes on each GPU; coarse-grain instruction-parallelism for near-independent Monte Carlo integrations on many GPUs with few and infrequent interprocessor communications. While the efficiency boost by the redundant-walker algorithm on central processing units (CPUs) grows linearly with the number of electron pairs and tends to saturate when the latter exceeds the number of orbitals, on a GPU it grows quadratically before it increases linearly and then eventually saturates at a much larger number of pairs. This is because the orbital constructions are nearly perfectly parallelized on a GPU and thus completed in a near-constant time regardless of the number of pairs. In consequence, an MC-MP2/cc-pVDZ calculation of a benzene dimer is 2700 times faster on 256 GPUs (using 2048 electron pairs) than on two CPUs, each with 8 cores (which can use only up to 256 pairs effectively). We also numerically determine that the cost to achieve a given relative statistical uncertainty in an MC-MP2 energy increases as O(n³) or better with system size n, which may be compared with the O(n⁵) scaling of the conventional implementation of deterministic MP2. We thus establish the scalability of MC-MP2 with both system and computer sizes.
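
    The statistical behaviour MC-MP2 exploits can be seen in miniature: a Monte Carlo estimate of a high-dimensional integral carries a standard error that shrinks as 1/sqrt(N) in the sample count, regardless of how samples are distributed over processors. The integrand below is a toy, not the MP2 kernel.

        # Monte Carlo integration with its 1/sqrt(N) standard error.
        import numpy as np

        rng = np.random.default_rng(3)

        def mc_integral(n_samples, dim=6):
            """Estimate the integral of exp(-|x|^2) over the unit hypercube."""
            x = rng.random((n_samples, dim))
            f = np.exp(-np.sum(x**2, axis=1))
            return f.mean(), f.std(ddof=1) / np.sqrt(n_samples)

        for n in (10_000, 100_000, 1_000_000):
            est, err = mc_integral(n)
            print(f"N={n:>9,d}  estimate={est:.5f}  stderr={err:.5f}")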

  14. Effect of hybrid fiber reinforcement on the cracking process in fiber reinforced cementitious composites

    DEFF Research Database (Denmark)

    Pereira, Eduardo B.; Fischer, Gregor; Barros, Joaquim A.O.

    2012-01-01

    The simultaneous use of different types of fibers as reinforcement in cementitious matrix composites is typically motivated by the underlying principle of a multi-scale nature of the cracking processes in fiber reinforced cementitious composites. It has been hypothesized that while undergoing … tensile deformations in the composite, the fibers with different geometrical and mechanical properties restrain the propagation and further development of cracking at different scales from the micro- to the macro-scale. The optimized design of the fiber reinforcing systems requires the objective … materials is carried out by assessing directly their tensile stress-crack opening behavior. The efficiency of hybrid fiber reinforcements and the multi-scale nature of cracking processes are discussed based on the experimental results obtained, as well as the micro-mechanisms underlying the contribution…

  15. Developing a Comprehensive Model of Intensive Care Unit Processes: Concept of Operations.

    Science.gov (United States)

    Romig, Mark; Tropello, Steven P; Dwyer, Cindy; Wyskiel, Rhonda M; Ravitz, Alan; Benson, John; Gropper, Michael A; Pronovost, Peter J; Sapirstein, Adam

    2015-04-23

    This study aimed to use a systems engineering approach to improve performance and stakeholder engagement in the intensive care unit to reduce several different patient harms. We developed a conceptual framework or concept of operations (ConOps) to analyze different types of harm that included 4 steps as follows: risk assessment, appropriate therapies, monitoring and feedback, as well as patient and family communications. This framework used a transdisciplinary approach to inventory the tasks and work flows required to eliminate 7 common types of harm experienced by patients in the intensive care unit. The inventory gathered both implicit and explicit information about how the system works or should work and converted the information into a detailed specification that clinicians could understand and use. Using the ConOps document, we created highly detailed work flow models to reduce harm and offer an example of its application to deep venous thrombosis. In the deep venous thrombosis model, we identified tasks that were synergistic across different types of harm. We will use a system of systems approach to integrate the variety of subsystems and coordinate processes across multiple types of harm to reduce the duplication of tasks. Through this process, we expect to improve efficiency and demonstrate synergistic interactions that ultimately can be applied across the spectrum of potential patient harms and patient locations. Engineering health care to be highly reliable will first require an understanding of the processes and work flows that comprise patient care. The ConOps strategy provided a framework for building complex systems to reduce patient harm.

  16. Beowulf Distributed Processing and the United States Geological Survey

    Science.gov (United States)

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing

  17. Electromagnetic compatibility of tools and automated process control systems of NPP units

    International Nuclear Information System (INIS)

    Alpeev, A.S.

    1994-01-01

    Problems of electromagnetic compatibility of tools and automated process control subsystems in NPP units are discussed. It is emphasized that, at the stage of developing the request for proposal for each APC subsystem, special attention should be paid to the electromagnetic situation in the specific room and to the requirements on the quality of the functions performed by the system. In addition, requirements for electromagnetic compatibility tests at the work stations should be formulated, and mock-ups of the subsystems should be tested.

  18. United States Department of Energy Integrated Manufacturing & Processing Predoctoral Fellowships. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Petrochenkov, M.

    2003-03-31

    The objective of the program was threefold: to create a pool of PhDs trained in the integrated approach to manufacturing and processing, to promote academic interest in the field, and to attract talented professionals to this challenging area of engineering. It was anticipated that the program would result in the creation of new manufacturing methods that would contribute to improved energy efficiency, to better utilization of scarce resources, and to less degradation of the environment. Emphasis in the competition was on integrated systems of manufacturing and the integration of product design with manufacturing processes. Research addressed such related areas as aspects of unit operations, tooling and equipment, intelligent sensors, and manufacturing systems as they related to product design.

  19. 78 FR 19019 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2013-03-28

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: Prevailing Wage Rates for Certain Occupations Processed Under H-2A Special Procedures; Correction and Rescission AGENCY: Employment and Training...

  20. Energy audit and conservation opportunities for pyroprocessing unit of a typical dry process cement plant

    International Nuclear Information System (INIS)

    Kabir, G.; Abubakar, A.I.; El-Nafaty, U.A.

    2010-01-01

    The cement production process is highly energy- and cost-intensive. The cement plant operates 8784 h per year to produce 640,809 tonnes of clinker. To achieve an effective and efficient energy management scheme, a thermal energy audit analysis was applied to the pyroprocessing unit of the cement plant. Fuel combustion generates the bulk of the thermal energy for the process, amounting to 95.48% (4164.02 kJ/kg clinker) of the total thermal energy input. The thermal efficiency of the unit stands at 41%, below the 50-54% achieved in modern plants. The exhaust gas and kiln shell heat losses are significant, amounting to 27.9% and 11.97% of the total heat input, respectively. To enhance the energy performance of the unit, heat loss conservation systems were considered: a waste heat recovery steam generator (WHRSG) and a secondary kiln shell were studied. Power and thermal energy savings of 42.88 MWh/year and 5.30 MW, respectively, can be achieved. The financial benefits of the conservation methods are substantial. An environmental benefit of a 14.10% reduction in greenhouse gas (GHG) emissions could be achieved.

  1. Energy audit and conservation opportunities for pyroprocessing unit of a typical dry process cement plant

    Energy Technology Data Exchange (ETDEWEB)

    Kabir, G.; Abubakar, A.I.; El-Nafaty, U.A. [Chemical Engineering Programme, Abubakar Tafawa Balewa University, P. M. B. 0248, Bauchi (Nigeria)

    2010-03-15

    The cement production process is highly energy- and cost-intensive. The cement plant operates 8784 h per year to produce 640,809 tonnes of clinker. To achieve an effective and efficient energy management scheme, a thermal energy audit analysis was applied to the pyroprocessing unit of the cement plant. Fuel combustion generates the bulk of the thermal energy for the process, amounting to 95.48% (4164.02 kJ/kg clinker) of the total thermal energy input. The thermal efficiency of the unit stands at 41%, below the 50-54% achieved in modern plants. The exhaust gas and kiln shell heat losses are significant, amounting to 27.9% and 11.97% of the total heat input, respectively. To enhance the energy performance of the unit, heat loss conservation systems were considered: a waste heat recovery steam generator (WHRSG) and a secondary kiln shell were studied. Power and thermal energy savings of 42.88 MWh/year and 5.30 MW, respectively, can be achieved. The financial benefits of the conservation methods are substantial. An environmental benefit of a 14.10% reduction in greenhouse gas (GHG) emissions could be achieved. (author)
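
    The audit's headline numbers can be checked with simple heat-balance bookkeeping: the fuel share implies the total input, and each loss stream is a fraction of that input. The sketch below recasts the abstract's percentages; the residual term lumps together all streams the abstract does not itemize.

        # Heat-balance bookkeeping from the audit's quoted figures.
        fuel_heat = 4164.02                 # kJ/kg clinker from fuel combustion
        total_input = fuel_heat / 0.9548    # fuel is 95.48% of the total input

        losses = {
            "exhaust gases": 0.279 * total_input,
            "kiln shell":    0.1197 * total_input,
        }
        useful = 0.41 * total_input         # 41% thermal efficiency quoted above

        print(f"total input ~ {total_input:.0f} kJ/kg clinker")
        for name, q in losses.items():
            print(f"{name:13s} ~ {q:.0f} kJ/kg ({q / total_input:.1%})")
        residual = total_input - useful - sum(losses.values())
        print(f"other streams ~ {residual:.0f} kJ/kg")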

  2. Effect of unit size on thermal fatigue behavior of hot work steel repaired by a biomimetic laser remelting process

    Science.gov (United States)

    Cong, Dalong; Li, Zhongsheng; He, Qingbing; Chen, Dajun; Chen, Hanbin; Yang, Jiuzhou; Zhang, Peng; Zhou, Hong

    2018-01-01

    AISI H13 hot work steel with fatigue cracks was repaired by a biomimetic laser remelting (BLR) process in the form of lattice units of different sizes. Detailed microstructural studies and microhardness tests were carried out on the units. The studies revealed a mixed microstructure of martensite, retained austenite and carbide particles with ultrafine grain size within the units. BLR samples with defect-free units exhibited superior thermal fatigue resistance due to microstructure strengthening and to crack-tip blunting and blocking mechanisms. In addition, the effects of unit size on the thermal fatigue resistance of BLR samples are discussed.

  3. Process Control System of a 500-MW Unit of the Reftinskaya State District Power Plant

    International Nuclear Information System (INIS)

    Grekhov, L. L.; Bilenko, V. A.; Derkach, N. N.; Galperina, A. I.; Strukov, A. P.

    2002-01-01

    The results of installing a process control system developed by the Interavtomatika Company (Moscow) for a 500-MW pulverized-coal power unit, based on the Teleperm ME and OM650 equipment of the Siemens Company, are described. The system provides a fundamentally new level of automation and monitor-based process control, comparable with foreign counterparts, while fully retaining the domestic peripheral equipment. During the 4.5 years of operation of the process control system, its intricate control and data-processing algorithms have proved their operational integrity.

  4. 43 CFR 429.37 - Does interest accrue on monies owed to the United States during my appeal process?

    Science.gov (United States)

    2010-10-01

    ... United States during my appeal process? 429.37 Section 429.37 Public Lands: Interior Regulations Relating... States during my appeal process? Except for any period in the appeal process during which a stay is then... decision to OHA, or during judicial review of final agency action. ...

  5. Gas-centrifuge unit and centrifugal process for isotope separation

    International Nuclear Information System (INIS)

    Stark, T.M.

    1979-01-01

    An invention involving a process and apparatus for isotope-separation applications such as uranium-isotope enrichment is disclosed which employs cascades of gas centrifuges. A preferred apparatus relates to an isotope-enrichment unit which includes a first group of cascades of gas centrifuges and an auxiliary cascade. Each cascade has an input, a light-fraction output, and a heavy-fraction output for separating a gaseous-mixture feed including a compound of a light nuclear isotope and a compound of a heavy nuclear isotope into light and heavy fractions respectively enriched and depleted in the light isotope. The cascades of the first group have at least one enriching stage and at least one stripping stage. The unit further includes means for introducing a gaseous-mixture feedstock into each input of the first group of cascades, means for withdrawing at least a portion of a product fraction from the light-fraction outputs of the first group of cascades, and means for withdrawing at least a portion of a waste fraction from the heavy-fraction outputs of the first group of cascades. The isotope-enrichment unit also includes a means for conveying a gaseous mixture from a light-fraction output of a first cascade included in the first group to the input of the auxiliary cascade so that at least a portion of a light gaseous-mixture fraction produced by the first group of cascades is further separated into a light and a heavy fraction by the auxiliary cascade. At least a portion of a product fraction is withdrawn from the light-fraction output of the auxiliary cascade. If the light-fraction output of the first cascade and the heavy-fraction output of the auxiliary cascade are reciprocal outputs, the concentration of the light isotope in the heavy fraction produced by the auxiliary cascade essentially equals the concentration of the light isotope in the gaseous-mixture feedstock.
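
    The cascade plumbing described above is, at its core, a set of splitter mass balances. As an aside (not part of the patent disclosure), a minimal Python sketch of the overall balance for a single cascade follows; the isotope fractions used are illustrative natural/enriched/tails values for uranium, not figures from the disclosure.

        # Overall cascade mass balance: F = P + W and F*xF = P*xP + W*xW (illustrative).
        def product_cut(x_feed, x_product, x_waste):
            """Fraction of the feed stream that leaves as product, from isotope fractions."""
            return (x_feed - x_waste) / (x_product - x_waste)

        cut = product_cut(x_feed=0.00711, x_product=0.035, x_waste=0.0025)  # hypothetical values
        print(f"product fraction of feed: {cut:.4f}")  # ~0.142, i.e. ~7 kg of feed per kg of product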

  6. Intra- versus inter-site macroscale variation in biogeochemical properties along a paddy soil chronosequence

    Directory of Open Access Journals (Sweden)

    C. Mueller-Niggemann

    2012-03-01

    In order to assess the intrinsic heterogeneity of paddy soils, a set of biogeochemical soil parameters was investigated in five field replicates of seven paddy fields (50, 100, 300, 500, 700, 1000, and 2000 yr of wetland rice cultivation), one flooded paddy nursery, one tidal wetland (TW), and one freshwater site (FW) from a coastal area at Hangzhou Bay, Zhejiang Province, China. All soils evolved from a marine tidal flat substrate due to land reclamation. Based on their properties, the biogeochemical parameters were differentiated into (i) a group behaving conservatively (TC, TOC, TN, TS, magnetic susceptibility, soil lightness and colour parameters, δ13C, δ15N, lipids and n-alkanes) and (ii) one encompassing more labile properties or fast-cycling components (Nmic, Cmic, nitrate, ammonium, DON and DOC). The macroscale heterogeneity in paddy soils was assessed by evaluating intra- versus inter-site spatial variability of biogeochemical properties using statistical data analysis (descriptive, explorative and non-parametric). Results show that the intrinsic heterogeneity of paddy soil organic and minerogenic components per field is smaller than between study sites. The coefficient of variation (CV) values of conservative parameters varied in a low range (10% to 20%), decreasing from younger towards older paddy soils. This indicates a declining variability of soil biogeochemical properties in longer used cropping sites according to progress in soil evolution. A generally higher variation of CV values (>20–40%) observed for labile parameters implies a need for substantially higher sampling frequency when investigating these as compared to more conservative parameters. Since the representativeness of the sampling strategy could be sufficiently demonstrated, an investigation of long-term carbon accumulation/sequestration trends in topsoils of the 2000 yr paddy chronosequence under wetland rice cultivation

  7. Public debates - key issue in the environmental licensing process for the completion of the Cernavoda NPP Unit 2

    International Nuclear Information System (INIS)

    Rotaru, Ioan; Jelev, Adrian

    2003-01-01

    SN 'NUCLEARELECTRICA' S.A., the owner of the Cernavoda NPP, organized in 2001 several public consultations on the environmental impact of the completion of Cernavoda NPP Unit 2, as required by Romanian environmental law as part of project approval. Public consultations on the environmental assessment for the completion of Cernavoda NPP Unit 2 took place between August 15 and September 21, 2001, in accordance with the provisions of Law No. 137/95 and Order No. 125/96. This paper addresses the environmental licensing process of the Cernavoda 2 NPP, covering Romanian environmental legislation, the harmonization of national environmental legislation with that of the European Union, Romanian legislative requirements, the information distributed to the public, and the issues raised and their follow-up. The public consultation process described fulfils all Romanian requirements for carrying out meaningful consultation with the relevant stakeholders. The process also satisfies the requirements of EDC (Export Development Corporation - Canada) for public consultation and disclosure with relevant stakeholders in the host country. SNN is fully committed to consulting as necessary with relevant stakeholders throughout the construction and operation of the Project. Concerns of the public have been taken into account during the operation of Unit 1 and will continue to be addressed during the Unit 2 Project.

  8. Usability of computerized nursing process from the ICNP® in intensive care units

    Directory of Open Access Journals (Sweden)

    Daniela Couto Carvalho Barra

    2015-04-01

    OBJECTIVE To analyze the usability of the Computerized Nursing Process (CNP) based on ICNP® 1.0 in Intensive Care Units, in accordance with the criteria established by the standards of the International Organization for Standardization and the Brazilian Association of Technical Standards. METHOD This is a before-and-after semi-experimental quantitative study, with a sample of 34 participants (nurses, professors and systems programmers), carried out in three Intensive Care Units. RESULTS The evaluated criteria (use, content and interface) showed that the CNP meets usability criteria, as it integrates a logical data structure, clinical assessment, diagnostics and nursing interventions. CONCLUSION The CNP is a source of information and knowledge that provides nurses with new ways of learning in intensive care, offering complete, comprehensive and detailed content supported by current, relevant data and scientific research for nursing practice.

  9. Proposals for the Negotiation Process on the United Nations Global Compact for Migration

    Directory of Open Access Journals (Sweden)

    Victor Genina

    2017-09-01

    • builds a cooperation-oriented, peer-review mechanism to review migration policies. The paper has been conceived as an input for those who will take part in the negotiation of the global compact for migration, as well as those who will closely follow those negotiations. Thus, the paper assumes a level of knowledge of how international migration has been addressed within the United Nations during the last several years and of the complexities of these negotiation processes. The author took part in different UN negotiation processes on international migration from 2004 to 2013. The paper is primarily based on this experience.[4] [1] G.A. Res. 71/1, ¶ 21 (Sept. 19, 2016). [2] G.A. Res. 68/4 (Oct. 3, 2013). [3] A mixed flow, according to UNHCR (n.d.), is a migratory flow comprising both asylum seekers and migrants: “Migrants and refugees increasingly make use of the same routes and means of transport to get to an overseas destination.” [4] During that period, the author was a staff member of the Mexican delegation to the United Nations, both in Geneva and New York.

  10. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort was explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.

  11. Closed-cycle process of coke-cooling water in delayed coking unit

    International Nuclear Information System (INIS)

    Zhou, P.; Bai, Z.S.; Yang, Q.; Ma, J.; Wang, H.L.

    2008-01-01

    Combined processes are commonly used to treat coke-cooling wastewater. These include cold coke-cut water, dilution of coke-cooling water, chemical deodorization of oily water, high-speed centrifugal separation, de-oiling and deodorization by coke adsorption, and open natural cooling. However, because of water and volatile evaporation losses, open treatments are not suitable for processing high-sulphur heavy oil. This paper proposed a closed-cycle process to solve the wastewater treatment problem. The process is based on the characteristics of coke-cooling water, such as rapid parametric variation, oil-water-coke emulsification and steam-water mixing. The paper discussed the material characteristics and the general idea of the study. The closed-cycle separation and utilization process for coke-cooling water was presented along with a process flow diagram. Several applications were presented, including a picture of hydrocyclones for pollutant separation and a picture of equipment for pollutant separation and component regeneration. The results showed that good performance has been achieved since the coke-cooling water system was put into production in 2004. The recycling ratios for the components of the coke-cooling water were 100 per cent, and air quality in the operating area reached the requirements of the national operating-site and health standards. Calibration results of the demonstration unit were presented. It was concluded that, since the devices went into operation, production has been normal and stable. The operation is simple, flexible, adjustable and reliable, with significant economic and environmental benefits. 10 refs., 2 tabs., 3 figs

  12. Simulation of operational processes in hospital emergency units as lean healthcare tool

    Directory of Open Access Journals (Sweden)

    Andreia Macedo Gomes

    2017-07-01

    Recently, the Lean philosophy has gained importance due to a competitive environment that increases the need to reduce costs. Lean practices and tools have been applied to manufacturing, services, supply chains and startups, and the next frontier is healthcare. Most lean techniques can be easily adapted to health organizations. Therefore, this paper summarizes Lean practices and tools that are already being applied in health organizations. Among the numerous techniques and lean tools used, this research highlights simulation. In order to understand the use of simulation as a Lean Healthcare tool, this research analyzes, through the simulation technique, the operational dynamics of the service process of a fictitious hospital emergency unit. Initially, a systematic review of the literature on the practices and tools of Lean Healthcare was carried out to identify the main techniques practiced. The review identified simulation as the sixth most cited tool in the literature. Subsequently, a service model of an emergency unit was simulated in the Arena software. As a main result, the attendants in the model showed a degree of idleness and are therefore able to absorb greater demand. Finally, it was verified that the emergency room is the process with the longest service time and the greatest overload.
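
    For readers without access to Arena, the core of such a study can be prototyped in a few lines. The sketch below is a toy single-server emergency queue driven by the Lindley recursion, with made-up arrival and service rates; it illustrates the discrete-event idea only and is not the model built in the study.

        import random

        def mean_wait(n_patients, arrival_rate, service_rate, seed=1):
            """Average wait in a single-server queue via the Lindley recursion."""
            rng = random.Random(seed)
            wait, total = 0.0, 0.0
            for _ in range(n_patients):
                total += wait
                service = rng.expovariate(service_rate)   # treatment time
                gap = rng.expovariate(arrival_rate)       # time until the next arrival
                wait = max(0.0, wait + service - gap)     # W_{n+1} = max(0, W_n + S_n - A_{n+1})
            return total / n_patients

        print(f"mean wait: {mean_wait(100000, arrival_rate=0.8, service_rate=1.0):.2f} time units")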

  13. ECO LOGIC INTERNATIONAL GAS-PHASE CHEMICAL REDUCTION PROCESS - THE THERMAL DESORPTION UNIT - APPLICATIONS ANALYSIS REPORT

    Science.gov (United States)

    ELI ECO Logic International, Inc.'s Thermal Desorption Unit (TDU) is specifically designed for use with Eco Logic's Gas Phase Chemical Reduction Process. The technology uses an externally heated bath of molten tin in a hydrogen atmosphere to desorb hazardous organic compounds fro...

  14. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Bach, Matthias

    2014-07-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto a Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120 GFLOPS. D - the most compute intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  15. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    International Nuclear Information System (INIS)

    Bach, Matthias

    2014-01-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto a Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120 GFLOPS. D - the most compute intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  16. Process engineering design of pathological waste incinerator with an integrated combustion gases treatment unit.

    Science.gov (United States)

    Shaaban, A F

    2007-06-25

    Management of the medical wastes generated at different hospitals in Egypt is considered a highly serious problem. The sources and quantities of regulated medical wastes have been thoroughly surveyed and estimated (75 t/day from governmental hospitals in Cairo). From the collected data it was concluded that the most appropriate incinerator capacity is 150 kg/h. The objective of this work is to develop the process engineering design of an integrated unit that is technically and economically capable of incinerating medical wastes and treating the combustion gases. The unit consists of (i) an incineration unit (INC-1) with an operating temperature of 1100 °C at 300% excess air, (ii) a combustion-gas cooler (HE-1) generating 35 m³/h of hot water at 75 °C, (iii) a dust filter (DF-1) capable of reducing particulates to 10-20 mg/Nm³, (iv) gas scrubbers (GS-1,2) for removing acidic gases, (v) a multi-tube fixed-bed catalytic converter (CC-1) to maintain the level of dioxins and furans below 0.1 ng/Nm³, and (vi) an induced-draft suction fan system (SF-1) that can handle 6500 Nm³/h at 250 °C. The residence times of the combustion gases in the ignition, mixing and combustion chambers were found to be 2 s, 0.25 s and 0.75 s, respectively. This ensures both thorough homogenization of the combustion gases and complete destruction of harmful constituents of the refuse. The adequate engineering design of the individual process equipment results in competitive fixed and operating investments. The incineration unit has proved its high operating efficiency through measurements of the pollutant levels vented to the open atmosphere, which were found to conform with the maximum allowable limits specified in Law No. 4/1994 issued by the Egyptian Environmental Affairs Agency (EEAA) and the European standards.

  17. Investigation of the Dynamic Melting Process in a Thermal Energy Storage Unit Using a Helical Coil Heat Exchanger

    Directory of Open Access Journals (Sweden)

    Xun Yang

    2017-08-01

    In this study, the dynamic melting process of a phase change material (PCM) in a vertical cylindrical tube-in-tank thermal energy storage (TES) unit was investigated through numerical simulations and experimental measurements. To ensure good heat exchange performance, a concentric helical coil was inserted into the TES unit to pipe the heat transfer fluid (HTF). A numerical model using the computational fluid dynamics (CFD) approach was developed based on the enthalpy-porosity method to simulate the unsteady melting process, including temperature and liquid fraction variations. Temperature measurements using evenly spaced thermocouples were conducted, and the temperature variation at three locations inside the TES unit was recorded. The effects of the HTF inlet parameters were investigated by parametric studies with different temperatures and flow rates. Reasonably good agreement was achieved between the numerical predictions and the temperature measurements, confirming the accuracy of the numerical simulation. The numerical results showed the significance of the buoyancy effect for the dynamic melting process. The TES performance of the system was very sensitive to the HTF inlet temperature; by contrast, no apparent influence was found when changing the HTF flow rate. This study provides a comprehensive approach to investigating the heat exchange process of a TES system using PCM.
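
    For reference, the enthalpy-porosity method named above is commonly written in the Voller-Prakash form below; the abstract gives no constants, so the mushy-zone constant C and the small regularization number ε are generic placeholders, not values from the paper.

        f_l = \begin{cases} 0, & T < T_s \\ (T - T_s)/(T_l - T_s), & T_s \le T \le T_l \\ 1, & T > T_l \end{cases}
        \qquad
        \mathbf{S} = -\,C\,\frac{(1 - f_l)^2}{f_l^3 + \epsilon}\,\mathbf{u}

    The liquid fraction f_l ramps between the solidus T_s and liquidus T_l, and the sink term S added to the momentum equation suppresses the velocity u in cells that are still solid, which is what lets a single set of equations track the moving melt front.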

  18. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Background: The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results: This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high-end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion: MUMmerGPU is a low-cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.

  19. 78 FR 18234 - Service of Process on Manufacturers; Manufacturers Importing Electronic Products Into the United...

    Science.gov (United States)

    2013-03-26

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 1005 [Docket No. FDA-2007-N-0091; (formerly 2007N-0104)] Service of Process on Manufacturers; Manufacturers Importing Electronic Products Into the United States; Agent Designation; Change of Address AGENCY: Food and Drug...

  20. Prototype design of singles processing unit for the small animal PET

    Science.gov (United States)

    Deng, P.; Zhao, L.; Lu, J.; Li, B.; Dong, R.; Liu, S.; An, Q.

    2018-05-01

    Positron Emission Tomography (PET) is an advanced clinical diagnostic imaging technique in nuclear medicine. Small animal PET is increasingly used for studying animal models of disease, new drugs and new therapies. A prototype Singles Processing Unit (SPU) for a small animal PET system was designed to obtain time, energy, and position information. The energy and position are in fact calculated through high-precision charge measurement, based on amplification, shaping, A/D conversion and area calculation in the digital signal processing domain. Analysis and simulations were also conducted to optimize the key parameters in the system design. Initial tests indicate that the charge and time precision are better than 3‰ FWHM and 350 ps FWHM respectively, while the position resolution is better than 3.5‰ FWHM. Combined tests of the SPU prototype with the PET detector indicate that the system time precision is better than 2.5 ns, while the flood map and energy spectra agreed well with expectations.
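
    The area-calculation step described above amounts to baseline subtraction followed by numerical integration of the digitized pulse. A minimal NumPy sketch of that idea follows; the window sizes and sampling interval are hypothetical, not taken from the SPU design.

        import numpy as np

        def pulse_charge(samples, n_baseline=20, dt_ns=4.0):
            """Integrate a digitized pulse: subtract the pre-trigger baseline, then sum."""
            baseline = samples[:n_baseline].mean()   # estimate baseline from pre-trigger samples
            corrected = samples - baseline
            return corrected.sum() * dt_ns           # area ~ charge (ADC counts x ns)

        # Example: a synthetic triangular pulse riding on a constant baseline.
        pulse = np.concatenate([np.full(20, 100.0), 100.0 + np.r_[0:50:5, 50:0:-5]])
        print(f"charge: {pulse_charge(pulse):.1f} ADC*ns")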

  1. Analysis of possible designs of processing units with radial plasma flows

    Science.gov (United States)

    Kolesnik, V. V.; Zaitsev, S. V.; Vashilin, V. S.; Limarenko, M. V.; Prochorenkov, D. S.

    2018-03-01

    Analysis of plasma-ion methods of obtaining thin-film coatings shows that their development follows the path of increasing use of sputter deposition processes, which make it possible to obtain multicomponent coatings with a varying percentage of particular components. One of the methods that allows multicomponent coatings with virtually any composition of elementary components to be formed is coating deposition using quasi-magnetron sputtering systems [1]. This requires the creation of an axial magnetic field of a defined configuration with a flux density in the range of 0.01-0.1 T [2]. In order to compare and analyze various configurations of processing-unit magnetic systems, the following dependencies must be obtained: the dependence of the magnetic core cross-section on the input power to the inductors, the distribution of magnetic induction within the equatorial plane in the corresponding sections, and the distribution of the magnetic induction value in the area of the cathode target.
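
    As a quick plausibility check (not part of the cited analysis), the long-solenoid approximation B ≈ μ₀nI shows which winding densities and currents land in the quoted 0.01-0.1 T window; a real quasi-magnetron coil design still needs the full field maps described above.

        import numpy as np

        MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

        def solenoid_field(turns_per_m, current_a):
            """On-axis field of a long solenoid, B = mu0 * n * I (edge effects ignored)."""
            return MU0 * turns_per_m * current_a

        # Hypothetical winding: 2000 turns/m carrying 20 A.
        print(f"B = {solenoid_field(2000, 20):.3f} T")  # ~0.050 T, inside the 0.01-0.1 T range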

  2. Impact of memory bottleneck on the performance of graphics processing units

    Science.gov (United States)

    Son, Dong Oh; Choi, Hong Jun; Kim, Jong Myon; Kim, Cheol Hong

    2015-12-01

    Recent graphics processing units (GPUs) can process general-purpose applications as well as graphics applications with the help of various user-friendly application programming interfaces (APIs) supported by GPU vendors. Unfortunately, utilizing the hardware resources of a GPU efficiently is a challenging problem, since the GPU architecture is totally different from the traditional CPU architecture. To solve this problem, many studies have focused on techniques for improving system performance using GPUs. In this work, we analyze GPU performance while varying GPU parameters such as the number of cores and the clock frequency. According to our simulations, GPU performance can be improved by 125.8% and 16.2% on average as the number of cores and the clock frequency increase, respectively. However, performance saturates when memory bottleneck problems occur due to the volume of data requests to the memory. GPU performance can be improved further as the memory bottleneck is reduced by changing GPU parameters dynamically.
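
    The saturation behaviour described above is often summarized with the roofline model (my framing, not the authors'): attainable throughput is capped by either the compute peak or the product of memory bandwidth and arithmetic intensity, whichever is lower. A small sketch with hypothetical hardware numbers:

        def attainable_gflops(peak_gflops, bandwidth_gb_s, intensity_flop_per_byte):
            """Roofline model: performance is the lesser of the compute and memory ceilings."""
            return min(peak_gflops, bandwidth_gb_s * intensity_flop_per_byte)

        # Hypothetical GPU: 4000 GFLOP/s peak, 320 GB/s memory bandwidth.
        for ai in (0.5, 2.0, 8.0, 32.0):
            print(f"intensity {ai:5.1f} flop/byte -> {attainable_gflops(4000, 320, ai):6.0f} GFLOP/s")

    Below roughly 12.5 flop/byte the hypothetical device is bandwidth-bound, so adding cores or clock speed stops helping, which is exactly the saturation the record reports.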

  3. The ATLAS Fast Tracker Processing Units - input and output data preparation

    CERN Document Server

    Bolz, Arthur Eugen; The ATLAS collaboration

    2016-01-01

    The ATLAS Fast Tracker is a hardware processor built to reconstruct tracks at a rate of up to 100 kHz and provide them to the high-level trigger system. The Fast Tracker will allow the trigger to utilize tracking information from the entire detector at an earlier event-selection stage than ever before, allowing for more efficient event rejection. The connections of the system to the detector read-outs and to the high-level trigger computing farms are made through custom boards implementing the Advanced Telecommunications Computing Architecture standard. The input is processed by the Input Mezzanine and Data Formatter boards, designed to receive and sort the data coming from the Pixel and Semiconductor Tracker. The Fast Tracker to Level-2 Interface Card connects the system to the computing farm. The Input Mezzanines are 128 boards, performing clustering, placed on the 32 Data Formatter mother boards that sort the information into the 64 logical regions required by the downstream processing units. This necessitat...

  4. Model of a programmable quantum processing unit based on a quantum transistor effect

    Science.gov (United States)

    Ablayev, Farid; Andrianov, Sergey; Fetisov, Danila; Moiseev, Sergey; Terentyev, Alexandr; Urmanchev, Andrey; Vasiliev, Alexander

    2018-02-01

    In this paper we propose a model of a programmable quantum processing device realizable with existing nano-photonic technologies. It can be viewed as a basis for new high performance hardware architectures. Protocols for physical implementation of device on the controlled photon transfer and atomic transitions are presented. These protocols are designed for executing basic single-qubit and multi-qubit gates forming a universal set. We analyze the possible operation of this quantum computer scheme. Then we formalize the physical architecture by a mathematical model of a Quantum Processing Unit (QPU), which we use as a basis for the Quantum Programming Framework. This framework makes it possible to perform universal quantum computations in a multitasking environment.
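
    To make the notion of a universal gate set concrete, a minimal state-vector sketch follows; it applies standard single-qubit gates by tensor contraction and is purely illustrative of QPU-style gate execution, not the photonic protocol proposed in the paper.

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)         # Hadamard gate
        T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])  # T (pi/8) gate

        def apply_1q(state, gate, target, n_qubits):
            """Apply a single-qubit gate to `target` of an n-qubit state vector."""
            psi = state.reshape([2] * n_qubits)
            psi = np.tensordot(gate, psi, axes=([1], [target]))  # contract with the target axis
            psi = np.moveaxis(psi, 0, target)                    # restore qubit ordering
            return psi.reshape(-1)

        # |00> -> H on qubit 0 gives (|00> + |10>)/sqrt(2).
        psi0 = np.zeros(4, dtype=complex); psi0[0] = 1.0
        print(np.round(apply_1q(psi0, H, target=0, n_qubits=2), 3))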

  5. Influence of unit operations on the levels of polyacetylenes in minimally processed carrots and parsnips: An industrial trial.

    Science.gov (United States)

    Koidis, Anastasios; Rawson, Ashish; Tuohy, Maria; Brunton, Nigel

    2012-06-01

    Carrots and parsnips are often consumed as minimally processed, ready-to-eat convenience foods and contain, in minor quantities, bioactive aliphatic C17-polyacetylenes (falcarinol, falcarindiol, falcarindiol-3-acetate). Their retention during minimal processing was evaluated in an industrial trial. Carrots and parsnips were prepared in four different forms (disc cutting, baton cutting, cubing and shredding), and samples were taken at every point of the processing line. The unit operations were peeling, cutting and washing with chlorinated water; retention during 7 days of storage was also evaluated. The results showed that the initial unit operations (mainly peeling) influence polyacetylene retention. This was attributed to the high polyacetylene content of the peels. In most cases, when washing was performed after cutting, lower retention was observed, possibly due to leakage from the tissue damage that occurred in the cutting step. The relatively high retention during storage indicates high plant matrix stability. Comparing the behaviour of polyacetylenes in the two vegetables during storage, the results showed that they were slightly better retained in parsnips than in carrots. Unit operations, and especially abrasive peeling, might need further optimisation to make them gentler and minimise bioactive losses. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Process methods and levels of automation of wood pallet repair in the United States

    Science.gov (United States)

    Jonghun Park; Laszlo Horvath; Robert J. Bush

    2016-01-01

    This study documented the current status of wood pallet repair in the United States by identifying the types of processing and equipment used in repair operations from an automation perspective. The wood pallet repair firms included in the study received an average of approximately 1.28 million cores (i.e., used pallets) for recovery in 2012. A majority of the cores...

  7. Remote Maintenance Design Guide for Compact Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Draper, J.V.

    2000-07-13

    Oak Ridge National Laboratory (ORNL) Robotics and Process Systems (RPSD) personnel have extensive experience working with remotely operated and maintained systems. These systems require expert knowledge in teleoperation, human factors, telerobotics, and other robotic devices so that remote equipment may be manipulated, operated, serviced, surveyed, and moved about in a hazardous environment. The RPSD staff has a wealth of experience in this area, including knowledge in the broad topics of human factors, modular electronics, modular mechanical systems, hardware design, and specialized tooling. Examples of projects that illustrate and highlight RPSD's unique experience in remote systems design and application include the following: (1) design of a remote shear and remote dissolver systems in support of U.S. Department of Energy (DOE) fuel recycling research and nuclear power missions; (2) building remotely operated mobile systems for metrology and characterizing hazardous facilities in support of remote operations within those facilities; (3) construction of modular robotic arms, including the Laboratory Telerobotic Manipulator, which was designed for the National Aeronautics and Space Administration (NASA) and the Advanced ServoManipulator, which was designed for the DOE; (4) design of remotely operated laboratories, including chemical analysis and biochemical processing laboratories; (5) construction of remote systems for environmental clean up and characterization, including underwater, buried waste, underground storage tank (UST) and decontamination and dismantlement (D&D) applications. Remote maintenance has played a significant role in fuel reprocessing because of combined chemical and radiological contamination. Furthermore, remote maintenance is expected to play a strong role in future waste remediation. The compact processing units (CPUs) being designed for use in underground waste storage tank remediation are examples of improvements in systems

  8. Investigation of The regularities of the process and development of method of management of technological line operation within the process of mass raw mate-rials supply in terms of dynamics of inbound traffic of unit trains

    Directory of Open Access Journals (Sweden)

    Катерина Ігорівна Сізова

    2015-03-01

    Large-scale sinter plants at metallurgical enterprises incorporate highly productive transport-and-handling complexes (THCs) that receive and process bulk iron-bearing raw materials. Such THCs as a rule include unloading facilities and a freight railway station. The central part of a THC is a technological line that carries out the reception and unloading of unit trains with raw materials. The technological line consists of transport and freight modules. The latter plays the leading role and, in its turn, consists of rotary car dumpers and conveyor belts. This module is a deterministic system that carries out preparation and unloading operations; its processing capacity is set in accordance with the manufacturing capacity of the sinter plant. The research has shown that under existing operating conditions, characterized by an “arrhythmia” in the interaction between external transport and production, the technological line of the THC functions inefficiently: only 18-20% of inbound unit trains are processed within the set standard time. It was determined that the duration of the cycle of processing an inbound unit train can act as a regulator, given the stochastic intervals between inbound unit trains with raw materials on the one hand and the deterministic unloading system on the other. Evaluating the interdependence between these factors therefore allows the duration of the cycle of processing of inbound unit trains to be determined. Based on the results of the study, a method of logistical management of the processing of inbound unit trains was offered, in which the real duration of processing an inbound unit train is taken as the regulated value. The regulation process implies regular evaluation and comparison of these values and, taking into account various disturbances, decision-making concerning adaptation of the functioning of the technological line. According to the offered principles
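
    The "stochastic arrivals, deterministic unloading" situation described above matches the classic M/D/1 queue if one additionally assumes Poisson train arrivals (my modelling assumption, not stated in the record). The Pollaczek-Khinchine result then gives the mean wait per train directly:

        def md1_mean_wait(arrival_rate, unload_time):
            """Mean queueing delay in an M/D/1 queue (Pollaczek-Khinchine, deterministic service)."""
            rho = arrival_rate * unload_time  # utilization of the unloading line
            if rho >= 1.0:
                raise ValueError("unstable: trains arrive faster than they can be unloaded")
            return rho * unload_time / (2.0 * (1.0 - rho))

        # Hypothetical figures: 0.4 trains/h arriving, 2.0 h to unload one train (rho = 0.8).
        print(f"mean wait per train: {md1_mean_wait(0.4, 2.0):.2f} h")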

  9. Roll and roll-to-roll process scaling through development of a compact flexo unit for printing of back electrodes

    DEFF Research Database (Denmark)

    Dam, Henrik Friis; Andersen, Thomas Rieks; Madsen, Morten Vesterager

    2015-01-01

    some of the most critical steps in the scaling process. We describe the development of such a machine that comprise web guiding, tension control and surface treatment in a compact desk size that is easily moved around and also detail the development of a small cassette based flexographic unit for back...... electrode printing that is parsimonious in terms of ink usage and more gentle than laboratory scale flexo units where the foil transport is either driven by the flexo unit or the flexo unit is driven by the foil transport. We demonstrate fully operational flexible polymer solar cell manufacture using...

  10. Integration Process for the Habitat Demonstration Unit

    Science.gov (United States)

    Gill, Tracy; Merbitz, Jerad; Kennedy, Kriss; Tri, Terry; Howe, A. Scott

    2010-01-01

    The Habitat Demonstration Unit (HDU) is an experimental exploration habitat technology and architecture test platform designed for analog demonstration activities. The HDU project has required a team to integrate a variety of contributions from NASA centers and outside collaborators, and poses a challenge in integrating these disparate efforts into a cohesive architecture. To complete the development of the HDU from conception in June 2009 to rollout for operations in July 2010, a cohesive integration strategy has been developed to integrate the various systems of HDU and the payloads, such as the Geology Lab, that those systems will support. The utilization of interface design standards and uniquely tailored reviews has allowed for an accelerated design process. Scheduled activities include early fit-checks and the utilization of a Habitat avionics test bed prior to equipment installation into HDU. A coordinated effort to utilize modeling and simulation systems has aided in design and integration concept development. Modeling tools have been effective in hardware systems layout, cable routing and length estimation, and human factors analysis. Decision processes on the shell development, including the assembly sequence and the transportation, have been fleshed out early on HDU to maximize the efficiency of both integration and field operations. Incremental test operations leading up to an integrated systems test allow for an orderly systems test program. The HDU will begin its journey as an emulation of a Pressurized Excursion Module (PEM) for 2010 field testing and then may evolve to a Pressurized Core Module (PCM) for 2011 and later field tests, depending on agency architecture decisions. The HDU deployment will vary slightly from current lunar architecture plans to include developmental hardware and software items and additional systems called opportunities for technology demonstration. One of the HDU challenges has been designing to be prepared for the integration of

  11. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed

    2012-08-20

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  12. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed; Anciaux-Sedrakian, Ani; Rozanska, Xavier; Klahr, Diego; Guignon, Thomas; Fleurat-Lessard, Paul

    2012-01-01

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  13. Lightweight concrete masonry units based on processed granulate of corn cob as aggregate

    Directory of Open Access Journals (Sweden)

    Faustino, J.

    2015-06-01

    Research work was performed to assess the potential application of processed granulate of corn cob (PCC) as an alternative lightweight aggregate in the manufacturing process of lightweight concrete masonry units (CMU). CMU-PCC were therefore prepared in a factory using a typical lightweight concrete mixture for non-structural purposes. Additionally, lightweight concrete masonry units based on a currently applied lightweight aggregate, expanded clay (CMU-EC), were also manufactured. The experimental work yielded a set of results suggesting that the proposed building product presents interesting material properties in the masonry wall context. The unit is therefore promising for both interior and exterior applications. This conclusion is all the more relevant considering that corn cob is an agricultural waste product.

  14. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tamascelli, Dario; Dambrosio, Francesco Saverio [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, 20133 Milano (Italy); Conte, Riccardo [Department of Chemistry and Cherry L. Emerson Center for Scientific Computation, Emory University, Atlanta, Georgia 30322 (United States); Ceotto, Michele, E-mail: michele.ceotto@unimi.it [Dipartimento di Chimica, Università degli Studi di Milano, via Golgi 19, 20133 Milano (Italy)

    2014-05-07

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details of the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered, and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20) versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W), and the critical issues related to the GPU implementation, are discussed. The resulting reduction in computational time and power consumption is significant, and semiclassical GPU calculations are shown to be environmentally friendly.

  15. A data processing unit (DPU) for a satellite-borne charge composition experiment

    International Nuclear Information System (INIS)

    Koga, R.; Blake, J.B.; Chenette, D.L.; Fennell, J.F.; Imamoto, S.S.; Katz, N.; King, C.G.

    1985-01-01

    A data processing unit (DPU) for use with a charge composition experiment to be flown aboard the VIKING auroral research satellite is described. The function of this experiment is to measure the mass, charge state, energy, and pitch-angle distribution of ions in the earth's high-altitude magnetosphere in the energy range from 50 keV/q to 300 keV/q. In order to be compatible with the spacecraft telemetry limitations, raw sensor data are processed in the DPU using on-board composition analysis and scalar compression. The design of this DPU is such that it can be readily adapted to a variety of space composition experiments. Special attention was given to the effect of the radiation environment in orbit, since a microprocessor and a relatively large number of random access memories (RAMs) comprise a considerable portion of the DPU.

  16. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    International Nuclear Information System (INIS)

    Arbanas, G.; Dunn, M.E.; Wiarda, D.

    2011-01-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The 235U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)

  17. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    Energy Technology Data Exchange (ETDEWEB)

    Arbanas, G.; Dunn, M.E.; Wiarda, D., E-mail: arbanasg@ornl.gov, E-mail: dunnme@ornl.gov, E-mail: wiardada@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2011-07-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The 235U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)
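
    The speed-up quoted in the two records above rests on replacing a hand-written triple loop with a vendor-optimized GEMM call. The sketch below times the same idea on a smaller matrix pair with NumPy, which dispatches to an optimized BLAS; swapping in a GPU array library such as CuPy is a suggestion about the reader's setup, not something used by SAMMY itself.

        import time
        import numpy as np

        # Smaller stand-in for the 16,000 x 20,000 RPCM factors mentioned above.
        A = np.random.rand(2000, 2500)
        B = np.random.rand(2500, 2000)

        t0 = time.perf_counter()
        C = A @ B                     # dispatched to the optimized BLAS gemm routine
        elapsed = time.perf_counter() - t0

        gflops = 2 * A.shape[0] * A.shape[1] * B.shape[1] / elapsed / 1e9
        print(f"{elapsed:.3f} s, ~{gflops:.1f} GFLOP/s")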

  18. Efficient particle-in-cell simulation of auroral plasma phenomena using a CUDA enabled graphics processing unit

    Science.gov (United States)

    Sewell, Stephen

    This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphic Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general purpose graphic processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphic processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
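
    Particle sorting pays off in the interpolation (charge deposition) phase because particles in the same cell touch the same grid nodes. The NumPy fragment below sketches that idea on the CPU with linear (cloud-in-cell) weighting; it illustrates the technique only and is not code from the thesis.

        import numpy as np

        def deposit_charge(x, q, dx, n_cells):
            """1D cloud-in-cell charge deposition with cell-sorted particles."""
            cell = np.floor(x / dx).astype(int)
            order = np.argsort(cell)           # sort particles by cell index for memory locality
            x, q, cell = x[order], q[order], cell[order]
            w = x / dx - cell                  # fractional position inside the cell
            rho = np.zeros(n_cells + 1)
            np.add.at(rho, cell, q * (1 - w))  # weight to the left grid node
            np.add.at(rho, cell + 1, q * w)    # weight to the right grid node
            return rho / dx

        rng = np.random.default_rng(0)
        x = rng.uniform(0.0, 1.0, 10000)       # particle positions in a unit domain
        print(deposit_charge(x, np.ones_like(x), dx=0.1, n_cells=10)[:3])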

  19. Methodology for systematic analysis and improvement of manufacturing unit process life cycle inventory (UPLCI) CO2PE! initiative (cooperative effort on process emissions in manufacturing). Part 2: case studies

    DEFF Research Database (Denmark)

    Kellens, Karel; Dewulf, Wim; Overcash, Michael

    2012-01-01

    This report presents two case studies, one for the screening approach and one for the in-depth approach, demonstrating the application of the life cycle assessment-oriented methodology for systematic inventory analysis of the machine tool use phase of manufacturing unit processes. The screening approach is based on industrial data and engineering calculations for energy use and material loss, and is illustrated by means of a case study of a drilling process. The in-depth approach, which leads to more accurate LCI data, also supports the identification of potential for environmental improvement based on the in-depth analysis of individual manufacturing unit processes. The two case studies illustrate the applicability of the methodology.

  20. Microstructure devices for process intensification: Influence of manufacturing tolerances and design

    International Nuclear Information System (INIS)

    Brandner, Juergen J.

    2013-01-01

    Process intensification by miniaturization is a common goal in several fields of technology. Starting from the manufacturing of electronic devices, miniaturization, with its accompanying opportunities and problems, has also gained interest in chemistry and chemical process engineering. While the integration of enhanced functions, e.g. integrated sensors and actuators, is still under consideration, miniaturization itself has been realized in all material classes, namely metals, ceramics and polymers. The first devices were manufactured by scaling down macro-scale devices. However, manufacturing tolerances, material properties and design have a much larger influence on the process than at the macro scale. Many of the devices built like their macro-scale counterparts work properly, but could likely be optimized to a certain extent by adjusting the design and manufacturing tolerances to the special demands of miniaturization. Thus, some consideration should be given to the design and production of devices for micro process engineering, so as to provide devices that show reproducible and controllable process behavior. The aim of the following publication is to show the importance of considering manufacturing tolerances and dimensions, as well as the design of microstructures, in order to avoid negative influences and optimize the process characteristics of miniaturized devices. Some examples are shown to explain the considerations presented here.

  1. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occur within the porous adsorbent.
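
    As an illustration of that model-selection step (the abstract names no specific equations), one common pairing is a transport-dispersive column balance with a Langmuir adsorption isotherm:

        \frac{\partial c}{\partial t} + \frac{1-\varepsilon}{\varepsilon}\,\frac{\partial q}{\partial t} + u\,\frac{\partial c}{\partial z} = D_{ax}\,\frac{\partial^{2} c}{\partial z^{2}},
        \qquad
        q^{*} = \frac{q_{\max}\,K\,c}{1 + K\,c}

    Here c and q are the mobile- and stationary-phase concentrations, ε the bed porosity, u the interstitial velocity, D_ax the axial dispersion coefficient, and q* the equilibrium loading; other isotherms or lumped-rate models may be substituted depending on the system.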

  2. Acceleration of the OpenFOAM-based MHD solver using graphics processing units

    International Nuclear Information System (INIS)

    He, Qingyun; Chen, Hongli; Feng, Jingchao

    2015-01-01

    Highlights: • A 3D PISO-MHD solver was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. • A consistent and conservative scheme is used in the code, validated with three basic benchmarks in rectangular and round ducts. • Parallelized CPU and GPU acceleration were compared against a single-core CPU for MHD and non-MHD problems. • Different preconditioners for the MHD solver were compared, and the results showed that the AMG method is better for these calculations. - Abstract: The pressure-implicit with splitting of operators (PISO) magnetohydrodynamics (MHD) solver for the coupled Navier–Stokes and Maxwell equations was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. The solver is developed on the open source code OpenFOAM and is based on a consistent and conservative scheme suitable for simulating MHD flow under the strong magnetic field of a fusion liquid metal blanket, with structured or unstructured meshes. We verified the validity of the implementation on several standard cases, including benchmark I (the Shercliff and Hunt cases), benchmark II (fully developed circular pipe MHD flow) and benchmark III (the KIT experimental case). The computational performance of the GPU implementation was examined by comparing its double precision run times with those of essentially the same algorithms and meshes on CPU. The results showed that a GPU (GTX 770) can outperform a server-class 4-core, 8-thread CPU (Intel Core i7-4770k) by a factor of at least 2.
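
    For readers unfamiliar with the model, the inductionless (low magnetic Reynolds number) MHD system that such blanket solvers typically discretize can be stated as follows; this is the generic formulation, not a transcription of the authors' code:

        \nabla \cdot \mathbf{u} = 0, \qquad
        \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
          = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u}
            + \frac{1}{\rho}\,\mathbf{J} \times \mathbf{B},

        \mathbf{J} = \sigma\left(-\nabla \varphi + \mathbf{u} \times \mathbf{B}\right), \qquad
        \nabla \cdot \mathbf{J} = 0 \;\Rightarrow\;
        \nabla^{2}\varphi = \nabla \cdot (\mathbf{u} \times \mathbf{B}),

    where the Poisson equation for the electric potential is the step for which the choice of preconditioner (e.g. AMG) matters most.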

  4. Relationship between water quality and macro-scale parameters (land use, erosion, geology, and population density) in the Siminehrood River Basin.

    Science.gov (United States)

    Bostanmaneshrad, Farshid; Partani, Sadegh; Noori, Roohollah; Nachtnebel, Hans-Peter; Berndtsson, Ronny; Adamowski, Jan Franklin

    2018-10-15

    To date, few studies have investigated the simultaneous effects of macro-scale parameters (MSPs) such as land use, population density, geology, and erosion layers on micro-scale water quality variables (MSWQVs). This research evaluated the relationship between MSPs and MSWQVs in the Siminehrood River Basin, Iran, and investigated the importance of water particle travel time (hydrological distance) to this relationship. The MSWQVs included 13 physicochemical and biochemical parameters observed at 15 stations during three seasons. Primary screening was performed by applying three multivariate statistical analyses (Pearson's correlation, cluster and discriminant analyses) to seven series of observed data: three separate seasonal data sets, three two-season data sets, and the aggregated three-season data. Coupled data (pairs of MSWQVs and MSPs) repeated in at least two of the three statistical analyses were selected for final screening. The primary screening results demonstrated significant relationships between land use and phosphorus, total solids and turbidity; between erosion levels and electrical conductivity; and between erosion and total solids. Furthermore, water particle travel time effects were considered through three geographical pattern definitions of distance for each MSP, using two weighting methods. To identify the MSP factors that affect MSWQVs, a multivariate linear regression analysis was employed, and preliminary equations estimating MSWQVs were developed. The preliminary equations were then modified into adaptive equations to obtain the final models. The final models indicated that a new metric, referred to as hydrological distance, provided better MSWQV estimation and water quality prediction than the National Sanitation Foundation Water Quality Index.

  5. Implementation of RLS-based Adaptive Filters on nVIDIA GeForce Graphics Processing Unit

    OpenAIRE

    Hirano, Akihiro; Nakayama, Kenji

    2011-01-01

    This paper presents an efficient implementation of RLS-based adaptive filters with a large number of taps on an nVIDIA GeForce graphics processing unit (GPU) with the CUDA software development environment. Modifying the order and the combination of calculations reduces the number of accesses to slow off-chip memory. Assigning tasks to multiple threads also takes memory access order into account. For a 4096-tap case, the GPU program is almost three times faster than a CPU program.
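
    For context, the per-sample recursion that an RLS adaptive filter with weight vector w and inverse correlation matrix P evaluates (and whose matrix-vector products dominate the memory traffic discussed above) is, in standard notation with forgetting factor \lambda:

        \mathbf{k}(n) = \frac{\mathbf{P}(n-1)\,\mathbf{x}(n)}
                             {\lambda + \mathbf{x}^{T}(n)\,\mathbf{P}(n-1)\,\mathbf{x}(n)}, \qquad
        e(n) = d(n) - \mathbf{w}^{T}(n-1)\,\mathbf{x}(n),

        \mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{k}(n)\,e(n), \qquad
        \mathbf{P}(n) = \lambda^{-1}\left[\mathbf{P}(n-1)
                        - \mathbf{k}(n)\,\mathbf{x}^{T}(n)\,\mathbf{P}(n-1)\right].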

  6. Feasibility study and concepts for use of compact process units to treat Hanford tank wastes

    Energy Technology Data Exchange (ETDEWEB)

    Collins, E.D.; Bond, W.D.; Campbell, D.O.; Harrington, F.E.; Malkemus, D.W.; Peishel, F.L.; Yarbro, O.O.

    1994-06-01

    A team of experienced radiochemical design engineers and chemists was assembled at Oak Ridge National Laboratory (ORNL) at the request of the Underground Storage Tank Integrated Demonstration (USTID) Program to evaluate the feasibility and perform a conceptual study of options for the use of compact processing units (CPUs), located at the Hanford, Washington, waste tank sites, to accomplish extensive pretreatment of the tank wastes using the clean-option concept. The scope of the ORNL study included an evaluation of the constraints of the various chemical process operations that may be employed and the constraints of necessary supporting operations. The latter include equipment maintenance and replacement, process control methods, product and by-product storage, and waste disposal.

  8. Real-time processing for full-range Fourier-domain optical-coherence tomography with zero-filling interpolation using multiple graphic processing units.

    Science.gov (United States)

    Watanabe, Yuuki; Maeno, Seiya; Aoshima, Kenji; Hasegawa, Haruyuki; Koseki, Hitoshi

    2010-09-01

    The real-time display of full-range, 2048 axial pixel × 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated. The required speed was achieved by using dual graphics processing units (GPUs) with many stream processors to realize highly parallel processing. We used a zero-filling technique, including a forward Fourier transform, zero padding to increase the axial data-array size to 8192, an inverse Fourier transform back to the spectral domain, a linear interpolation from wavelength to wavenumber, a lateral Hilbert transform to obtain the complex spectrum, a Fourier transform to obtain the axial profiles, and a log scaling. The data-transfer time of the frame grabber was 15.73 ms, and the processing time, which includes the data transfer between the GPU memory and the host computer, was 14.75 ms, for a total time shorter than the 36.70 ms frame-interval time using a line-scan CCD camera operated at 27.9 kHz. That is, our OCT system achieved a processed-image display rate of 27.23 frames/s.
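
    The processing chain enumerated above maps naturally onto batched cuFFT calls. The fragment below sketches the transform sequence for one frame; plan sizes and buffer names are illustrative assumptions, and the zero-padding, interpolation, and Hilbert steps are left as custom kernels.

        #include <cufft.h>

        // Illustrative transform chain for one FD-OCT frame of `lines` spectra,
        // each `n` samples long, zero-filled to `nPad` (e.g. 2048 -> 8192).
        void processFrame(cufftComplex* d_spec, // n x lines input spectra
                          cufftComplex* d_pad,  // nPad x lines zero-filled buffer
                          int n, int nPad, int lines) {
            cufftHandle planN, planPad;
            cufftPlan1d(&planN,   n,    CUFFT_C2C, lines);
            cufftPlan1d(&planPad, nPad, CUFFT_C2C, lines);

            cufftExecC2C(planN, d_spec, d_spec, CUFFT_FORWARD);   // 1. forward FFT
            // 2. zero padding into d_pad (custom kernel, not shown)
            cufftExecC2C(planPad, d_pad, d_pad, CUFFT_INVERSE);   // 3. back to spectral domain
            // 4. wavelength-to-wavenumber linear interpolation (custom kernel)
            // 5. lateral Hilbert transform to build the complex spectrum (custom)
            cufftExecC2C(planPad, d_pad, d_pad, CUFFT_FORWARD);   // 6. depth profiles
            // 7. log scaling for display (custom kernel)

            cufftDestroy(planN);
            cufftDestroy(planPad);
        }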

  9. Understanding the Development of Minimum Unit Pricing of Alcohol in Scotland: A Qualitative Study of the Policy Process

    Science.gov (United States)

    Katikireddi, Srinivasa Vittal; Hilton, Shona; Bonell, Chris; Bond, Lyndal

    2014-01-01

    Background: Minimum unit pricing of alcohol is a novel public health policy with the potential to improve population health and reduce health inequalities. Theories of the policy process may help to understand the development of policy innovation and in turn identify lessons for future public health research and practice. This study aims to explain minimum unit pricing’s development by taking a ‘multiple-lenses’ approach to understanding the policy process. In particular, we apply three perspectives of the policy process (Kingdon’s multiple streams, Punctuated-Equilibrium Theory, Multi-Level Governance) to understand how and why minimum unit pricing has developed in Scotland and describe implications for efforts to develop evidence-informed policymaking. Methods: Semi-structured interviews were conducted with policy actors (politicians, civil servants, academics, advocates, industry representatives) involved in the development of MUP (n = 36). Interviewees were asked about the policy process and the role of evidence in policy development. Data from two other sources (a review of policy documents and an analysis of evidence submission documents to the Scottish Parliament) were used for triangulation. Findings: The three perspectives provide complementary understandings of the policy process. Evidence has played an important role in presenting the policy issue of alcohol as a problem requiring action. Scotland-specific data and a change in the policy ‘image’ to a population-based problem contributed to making alcohol-related harms a priority for action. The limited powers of the Scottish Government help explain the type of price intervention pursued, while distinct aspects of the Scottish political climate favoured the pursuit of price-based interventions. Conclusions: Evidence has played a crucial but complex role in the development of an innovative policy. Utilising different political science theories helps explain different aspects of the policy process.

  11. FINAL INTERIM REPORT VERIFICATION SURVEY ACTIVITIES IN FINAL STATUS SURVEY UNITS 7, 8, 9, 10, 11, 13 and 14 AT THE SEPARATIONS PROCESS RESEARCH UNIT, NISKAYUNA, NEW YORK

    International Nuclear Information System (INIS)

    Jadick, M.G.

    2010-01-01

    The Separations Process Research Unit (SPRU) facilities were constructed in the late 1940s to research the chemical separation of plutonium and uranium. SPRU operated between February 1950 and October 1953. The research activities ceased following the successful development of the reduction/oxidation and plutonium/uranium extraction processes that were subsequently used by the Hanford and the Savannah River sites.

  12. Carbon-14 immobilization via the Ba(OH)2·8H2O process

    International Nuclear Information System (INIS)

    Haag, G.L.; Nehls, J.W. Jr.; Young, G.C.

    1982-01-01

    The airborne release of 14C from various nuclear facilities has been identified as a potential biohazard due to the long half-life of 14C (5730 yrs) and the ease with which it may be assimilated into the biosphere. At Oak Ridge National Laboratory, technology is under development, as part of the Airborne Waste Management Program, for the removal and immobilization of this radionuclide. Prior studies have indicated that the 14C will likely exist in the oxidized form as CO2 and will contribute slightly to the bulk CO2 concentration of the gas stream, which is airlike in nature (approx. 330 ppmv CO2). The technology under development utilizes the CO2-Ba(OH)2·8H2O gas-solid reaction, with the mode of gas-solid contacting being a fixed bed. The product, BaCO3, possesses excellent thermal and chemical stability, prerequisites for the long-term disposal of nuclear wastes. For optimal process operation, studies have indicated that an operating window of adequate size does exist. When operating within the window, high CO2 removal efficiency (effluent concentrations 99%), and an acceptable pressure drop across the bed (3 kPa/m at 13 cm/s superficial velocity) are possible. This paper addresses three areas of experimental investigation: (1) micro-scale studies on 150-mg samples to provide information concerning surface properties, kinetics, and equilibrium vapor pressures; (2) macro-scale studies on large fixed beds (4.2 kg reactant) to determine the effects of humidity, temperature, and gas flow rate upon bed pressure drop and CO2 breakthrough; and (3) the design, construction, and initial operation of a pilot unit capable of continuously processing a 34 m^3/h (20 ft^3/min) air-based gas stream.

  13. The AMchip04 and the processing unit prototype for the FastTracker

    International Nuclear Information System (INIS)

    Andreani, A; Alberti, F; Stabile, A; Annovi, A; Beretta, M; Volpi, G; Bogdan, M; Shochet, M; Tang, J; Tompkins, L; Citterio, M; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As experiment complexity, accelerator backgrounds, and luminosity increase, increasingly complex and exclusive event selection is needed. We present the first prototype of a new Processing Unit (PU), the core of the FastTracker processor (FTK). FTK is a real-time tracking device for the ATLAS experiment's trigger upgrade. The computing power of the PU is such that a few hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV/c in ATLAS events up to Phase II instantaneous luminosities (3 × 10^34 cm^-2 s^-1), with an event input rate of 100 kHz and a latency below a hundred microseconds. The PU provides massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generally referred to as the "combinatorial challenge", is solved by the Associative Memory (AM) technology, which exploits parallelism to the maximum extent; it compares the event to all pre-calculated "expectations" or "patterns" (pattern matching) simultaneously, looking for candidate tracks called "roads". This approach reduces the typically exponential complexity of CPU-based algorithms to linear behavior. Pattern recognition is completed by the time the data are loaded into the AM devices. We report on the design of the first Processing Unit prototypes. The design had to address the most challenging aspects of this technology: a huge number of detector clusters ("hits") must be distributed at high rate with very large fan-out to all patterns (10 million patterns will be located on 128 chips placed on a single board), and a huge number of roads must be collected and sent back to the FTK post-pattern-recognition functions. A network of high-speed serial links is used to solve the data distribution problem.

  14. Theoretical and experimental study of a small unit for solar desalination using flashing process

    International Nuclear Information System (INIS)

    Nafey, A. Safwat; Mohamad, M.A.; El-Helaby, S.O.; Sharaf, M.A.

    2007-01-01

    A small unit for water desalination by solar energy and a flash evaporation process is investigated. The system is built at the Faculty of Petroleum and Mining Engineering at Suez, Egypt. The system consists of a solar water heater (flat plate solar collector) working as a brine heater and a vertical flash unit that is attached to a condenser/preheater unit. In this work, the system is investigated theoretically and experimentally under different real environmental conditions along the Julian days of one year (2005). A mathematical model is developed to calculate the productivity of the system under different operating conditions. The BIRD model is used to predict the solar insolation instantaneously. Also, the solar insolation is measured by a highly sensitive digital pyranometer. A comparison between the theoretical and experimental results is performed. The average accumulative productivity of the system in November, December and January ranged between 1.04 and 1.45 kg/day/m^2. The average summer productivity ranged between 5.44 and 7 kg/day/m^2 in July and August and between 4.2 and 5 kg/day/m^2 in June.
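
    As a rough plausibility check on such figures, the distillate yield of a single flash stage can be estimated from an energy balance: the sensible heat released when brine flashes from the top brine temperature T_b down to the flash temperature T_f supplies the latent heat of the vapour, so

        m_{d} \;\approx\; m_{b}\,\frac{c_{p}\,(T_{b} - T_{f})}{h_{fg}},

    where m_b is the brine flow, c_p its specific heat and h_fg the latent heat of vaporization at the flash temperature.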

  16. High performance direct gravitational N-body simulations on graphics processing units II: An implementation in CUDA

    NARCIS (Netherlands)

    Belleman, R.G.; Bédorf, J.; Portegies Zwart, S.F.

    2008-01-01

    We present the results of gravitational direct N-body simulations using the graphics processing unit (GPU) on a commercial NVIDIA GeForce 8800GTX designed for gaming computers. The force evaluation of the N-body problem is implemented in "Compute Unified Device Architecture" (CUDA), using the GPU to accelerate the computation.
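
    The kernel at the heart of such codes is the O(N^2) pairwise force evaluation. A minimal CUDA version with Plummer softening is sketched below; it is an illustration (shared-memory tiling, the key optimization in this setting, is omitted), not the authors' implementation.

        // Each thread accumulates the gravitational acceleration on one body.
        // pos[j] = (x, y, z, mass); eps2 is the Plummer softening squared.
        __global__ void forces(const float4* pos, float4* acc, int n, float eps2) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            float4 pi = pos[i];
            float ax = 0.f, ay = 0.f, az = 0.f;
            for (int j = 0; j < n; ++j) {
                float4 pj = pos[j];
                float dx = pj.x - pi.x, dy = pj.y - pi.y, dz = pj.z - pi.z;
                float r2  = dx * dx + dy * dy + dz * dz + eps2;
                float inv = rsqrtf(r2);
                float f   = pj.w * inv * inv * inv;   // m_j / r^3
                ax += f * dx; ay += f * dy; az += f * dz;
            }
            acc[i] = make_float4(ax, ay, az, 0.f);
        }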

  17. Startup of Pumping Units in Process Water Supplies with Cooling Towers at Thermal and Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Berlin, V. V., E-mail: vberlin@rinet.ru; Murav’ev, O. A., E-mail: muraviov1954@mail.ru; Golubev, A. V., E-mail: electronik@inbox.ru [National Research University “Moscow State University of Civil Engineering,” (Russian Federation)

    2017-03-15

    Aspects of the startup of pumping units in the cooling and process water supply systems for thermal and nuclear power plants with cooling towers, the startup stages, and the limits imposed on the extreme parameters during transients are discussed.

  18. 77 FR 13635 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2012-03-07

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2012 Allowable Charges for Agricultural Workers' Meals and Travel Subsistence Reimbursement, Including Lodging AGENCY: Employment and Training...

  19. 77 FR 12882 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2012-03-02

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2012 Allowable Charges for Agricultural Workers' Meals and Travel Subsistence Reimbursement, Including Lodging AGENCY: Employment and Training...

  20. 78 FR 15741 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2013-03-12

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2013 Allowable Charges for Agricultural Workers' Meals and Travel Subsistence Reimbursement, Including Lodging AGENCY: Employment and Training...

  1. 76 FR 11286 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2011-03-01

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2011 Adverse Effect Wage Rates, Allowable Charges for Agricultural Workers' Meals, and Maximum Travel Subsistence Reimbursement AGENCY...

  2. Optimization of the coherence function estimation for multi-core central processing unit

    Science.gov (United States)

    Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.

    2017-02-01

    The paper considers the use of parallel processing on a multi-core central processing unit for optimization of the coherence function evaluation arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for evaluating the function for signals represented by digital samples. The algorithm is analyzed for its software implementation and computational problems. Optimization measures are described, including algorithmic, architecture and compiler optimization, and their results are assessed for multi-core processors from different manufacturers. Thus, the speedup of parallel execution with respect to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show the comparatively high efficiency of the optimization measures taken. In particular, acceleration indicators and average CPU utilization were significantly improved, showing a high degree of parallelism in the constructed calculating functions. The developed software underwent state registration and will be used as part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
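
    For reference, the magnitude-squared coherence that the software evaluates is defined from the auto- and cross-spectral densities of the two signals as

        C_{xy}(f) = \frac{\left| S_{xy}(f) \right|^{2}}{S_{xx}(f)\,S_{yy}(f)},
        \qquad 0 \le C_{xy}(f) \le 1,

    so every frequency bin requires three Welch-type averaged spectral estimates, which is what makes the evaluation worth parallelizing across cores.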

  3. A systematic review evaluating the role of nurses and processes for delivering early mobility interventions in the intensive care unit.

    Science.gov (United States)

    Krupp, Anna; Steege, Linsey; King, Barbara

    2018-04-19

    To investigate the processes used for delivering early mobility interventions to adult intensive care unit patients in research and quality improvement studies, and the role of nurses in early mobility interventions. A systematic review was conducted. The electronic databases PubMED, CINAHL, PEDro, and Cochrane were searched for studies published from 2000 to June 2017 that implemented an early mobility intervention in adult intensive care units. Included studies involved progression to ambulation as a component of the intervention, included the role of the nurse in preparing for or delivering the intervention, and reported at least one patient or organisational outcome measure. The Systems Engineering Initiative for Patient Safety (SEIPS) model, a framework for understanding structure, processes, and healthcare outcomes, was used to evaluate studies. Twenty-five studies were included in the final review, comprising randomised controlled trials and prospective, retrospective, or mixed designs. A range of processes to support the delivery of early mobility were found, including forming interdisciplinary teams, increasing mobility staff, mobility protocols, interdisciplinary education, champions, communication, and feedback. Variation exists in the process of delivering early mobility in the intensive care unit. In particular, further rigorous studies are needed to better understand the role of nurses in implementing early mobility to maintain a patient's functional status.

  4. FAST CALCULATION OF THE LOMB-SCARGLE PERIODOGRAM USING GRAPHICS PROCESSING UNITS

    International Nuclear Information System (INIS)

    Townsend, R. H. D.

    2010-01-01

    I introduce a new code for fast calculation of the Lomb-Scargle periodogram that leverages the computing power of graphics processing units (GPUs). After establishing a background to the newly emergent field of GPU computing, I discuss the code design and narrate key parts of its source. Benchmarking calculations indicate no significant differences in accuracy compared to an equivalent CPU-based code. However, the differences in performance are pronounced; running on a low-end GPU, the code can match eight CPU cores, and on a high-end GPU it is faster by a factor approaching 30. Applications of the code include analysis of long photometric time series obtained by ongoing satellite missions and upcoming ground-based monitoring facilities, and Monte Carlo simulation of periodogram statistical properties.
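
    The quantity being accelerated is the classical normalized periodogram for an unevenly sampled, mean-subtracted series x_j = x(t_j):

        P(\omega) = \frac{1}{2}\left\{
          \frac{\left[\sum_{j} x_{j}\cos\omega(t_{j}-\tau)\right]^{2}}
               {\sum_{j} \cos^{2}\omega(t_{j}-\tau)}
        + \frac{\left[\sum_{j} x_{j}\sin\omega(t_{j}-\tau)\right]^{2}}
               {\sum_{j} \sin^{2}\omega(t_{j}-\tau)}\right\},
        \qquad
        \tan 2\omega\tau = \frac{\sum_{j}\sin 2\omega t_{j}}{\sum_{j}\cos 2\omega t_{j}},

    and the frequencies are mutually independent, so each one maps naturally onto its own GPU thread.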

  5. Methods of radioactive waste processing and disposal in the United Kingdom

    International Nuclear Information System (INIS)

    Tolstykh, V.D.

    1983-01-01

    The results of investigations into radioactive waste processing and disposal in the United Kingdom are discussed. Methods for the solidification of metal and graphite radioactive wastes and of radioactive slime from the Magnox reactors are described. Specifications of different installations used for radioactive waste disposal are given. Climatic and geological conditions in the United Kingdom are such that any deep waste storage will lie below the groundwater level, so dissolution and transport by groundwater will inevitably result in radionuclide mobility. In this connection, an extended program of investigations into the three main aspects of the disposal problem, namely radionuclide release in storage, groundwater transport and radionuclide migration, is being carried out. The program is divided into two parts. The first part deals with the retrieval of hydrological and geochemical data on geological formations and the development of the specialized methods of investigation that are necessary for identifying sites for final waste disposal. The second part comprises theoretical and laboratory investigations into the processes of radionuclide transport in the 'storage-geological formation' system. It is concluded that vitrification based on borosilicate glass is the most advanced method of radioactive waste solidification.

  6. 78 FR 1259 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2013-01-08

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2013 Adverse Effect Wage Rates AGENCY: Employment and Training Administration, Department of Labor. ACTION: Notice. SUMMARY: The Employment and...

  7. 76 FR 79711 - Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United...

    Science.gov (United States)

    2011-12-22

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2012 Adverse Effect Wage Rates AGENCY: Employment and Training Administration, Department of Labor. ACTION: Notice. SUMMARY: The Employment and...

  8. Morphology study of thoracic transverse processes and its significance in pedicle-rib unit screw fixation.

    Science.gov (United States)

    Cui, Xin-gang; Cai, Jin-fang; Sun, Jian-min; Jiang, Zhen-song

    2015-03-01

    The thoracic transverse process is an important anatomic structure of the spine. Several anatomic studies have investigated the structures adjacent to the thoracic transverse process, but the morphology of the thoracic transverse processes themselves has not been described. The purpose of this cadaveric study was to investigate the morphology of the thoracic transverse processes and to provide a morphological basis for the pedicle-rib unit (extrapedicular) screw fixation method. Forty-five adult dehydrated skeletons (T1-T10) were included in this study. The length, width, thickness, and tilt angles (upward and backward) of the thoracic transverse processes were measured, and the data were analyzed statistically. On the basis of the morphometric study, 5 fresh cadavers were used to place screws from the transverse processes into the vertebral bodies of the thoracic spine; placement was then assessed by direct inspection and on computed tomography scans. The lengths of the thoracic transverse processes were between 16.63±1.59 and 18.10±1.95 mm; the longest was at T7 and the shortest at T10. The widths were between 11.68±0.80 and 12.87±1.48 mm; the widest was at T3 and the narrowest at T7. The thicknesses were between 7.86±1.24 and 10.78±1.35 mm; the thickest was at T1 and the thinnest at T7. The upward tilt angles were between 24.9±3.1 and 3.0±1.56 degrees; the maximal upward tilt angle was at T1 and the minimal at T7. The upward tilt angles of T1 and T2 were obviously different from those of the other thoracic transverse processes (P<0.05). The backward tilt angles of the thoracic transverse processes gradually increased from 24.5±2.91 degrees at T1 to 64.5±5.12 degrees at T10 and were significantly different between each other, except between T5 and T6. In the validation study, all screws were successfully placed from the transverse processes into the vertebrae of the thoracic spine. The length, width, and

  9. Dynamic wavefront creation for processing units using a hybrid compactor

    Energy Technology Data Exchange (ETDEWEB)

    Puthoor, Sooraj; Beckmann, Bradford M.; Yudanov, Dmitri

    2018-02-20

    A method, a non-transitory computer readable medium, and a processor for repacking dynamic wavefronts during program code execution on a processing unit, each dynamic wavefront including multiple threads, are presented. If a branch instruction is detected, a determination is made whether all wavefronts following the same control path in the program code have reached a compaction point, which is the branch instruction. If no branch instruction is detected in executing the program code, a determination is made whether all wavefronts following the same control path have reached a reconvergence point, which is the beginning of a program code segment to be executed by both the taken branch and the not-taken branch from a previous branch instruction. The dynamic wavefronts are repacked with all threads that follow the same control path, if all wavefronts following the same control path have reached the branch instruction or the reconvergence point.

  10. The fundamental units, processes and patterns of evolution, and the Tree of Life conundrum

    Directory of Open Access Journals (Sweden)

    Wolf Yuri I

    2009-09-01

    Background: The elucidation of the dominant role of horizontal gene transfer (HGT) in the evolution of prokaryotes led to a severe crisis of the Tree of Life (TOL) concept and intense debates on this subject. Concept: Prompted by the crisis of the TOL, we attempt to define the primary units and the fundamental patterns and processes of evolution. We posit that replication of the genetic material is the singular fundamental biological process and that replication with an error rate below a certain threshold both enables and necessitates evolution by drift and selection. Starting from this proposition, we outline a general concept of evolution that consists of three major precepts. 1. The primary agency of evolution consists of Fundamental Units of Evolution (FUEs), that is, units of genetic material that possess a substantial degree of evolutionary independence. The FUEs include both bona fide selfish elements such as viruses, viroids, transposons, and plasmids, which encode some of the information required for their own replication, and regular genes that possess quasi-independence owing to their distinct selective value, which provides for their transfer between ensembles of FUEs (genomes) and preferential replication along with the rest of the recipient genome. 2. The history of replication of a genetic element without recombination is isomorphously represented by a directed tree graph (an arborescence, in the language of graph theory). Recombination within a FUE is common between very closely related sequences, where homologous recombination is feasible, but becomes negligible over longer evolutionary distances. In contrast, shuffling of FUEs occurs at all evolutionary distances. Thus, a tree is a natural representation of the evolution of an individual FUE on the macro scale, but not of an ensemble of FUEs such as a genome. 3. The history of life is properly represented by the "forest" of evolutionary trees for individual FUEs (Forest of Life, or

  11. General purpose graphics-processing-unit implementation of cosmological domain wall network evolution.

    Science.gov (United States)

    Correia, J R C C C; Martins, C J A P

    2017-10-01

    Topological defects unavoidably form at symmetry breaking phase transitions in the early universe. To probe the parameter space of theoretical models and set tighter experimental constraints (exploiting the recent advances in astrophysical observations), one requires more and more demanding simulations, and therefore more hardware resources and computation time. Improving the speed and efficiency of existing codes is essential. Here we present a general purpose graphics-processing-unit implementation of the canonical Press-Ryden-Spergel algorithm for the evolution of cosmological domain wall networks. This is ported to the Open Computing Language standard, and as a consequence significant speedups are achieved both in two-dimensional (2D) and 3D simulations.
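
    The Press-Ryden-Spergel scheme mentioned above evolves a scalar field \phi on a fixed comoving grid with a modified damping term. A common statement of the equation of motion in conformal time \eta, with \alpha and \beta the PRS parameters chosen to keep the wall thickness fixed in comoving coordinates (quoted from the general literature, not transcribed from the authors' code), is

        \frac{\partial^{2}\phi}{\partial\eta^{2}}
        + \alpha\,\frac{d\ln a}{d\eta}\,\frac{\partial\phi}{\partial\eta}
        = \nabla^{2}\phi - a^{\beta}\,\frac{\partial V}{\partial\phi}.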

  13. An Application of Graphics Processing Units to Geosimulation of Collective Crowd Behaviour

    Directory of Open Access Journals (Sweden)

    Cjoskāns Jānis

    2017-12-01

    The goal of the paper is to assess ways of improving the computational performance and efficiency of collective crowd behaviour simulation by using parallel computing methods implemented on a graphics processing unit (GPU). To perform an experimental evaluation of the benefits of parallel computing, a new GPU-based simulator prototype is proposed and its runtime performance is analysed. Based on practical examples of pedestrian dynamics geosimulation, the obtained performance measurements are compared to several other available multiagent simulation tools to determine the efficiency of the proposed simulator, as well as to provide generic guidelines for efficiency improvements in the parallel simulation of collective crowd behaviour.

  14. Solution of relativistic quantum optics problems using clusters of graphical processing units

    Energy Technology Data Exchange (ETDEWEB)

    Gordon, D.F., E-mail: daviel.gordon@nrl.navy.mil; Hafizi, B.; Helle, M.H.

    2014-06-15

    Numerical solution of relativistic quantum optics problems requires high performance computing due to the rapid oscillations in a relativistic wavefunction. Clusters of graphical processing units are used to accelerate the computation of a time-dependent relativistic wavefunction in an arbitrary external potential. The stationary states in a Coulomb potential and uniform magnetic field are determined analytically and numerically, so that they can be used as initial conditions in fully time-dependent calculations. Relativistic energy levels in extreme magnetic fields are recovered as a means of validation. The relativistic ionization rate is computed for an ion illuminated by a laser field near the usual barrier suppression threshold, and the ionizing wavefunction is displayed.

  15. The regulation in the unitization process in the petroleum and natural gas exploration in Brazil; A regulacao no processo de unitization na exploracao de petroleo e gas natural no Brasil

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez, Felipe Alvite; Silva, Moises Espindola [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Engenharia de Petroleo; Bone, Rosemarie Broeker [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Engenharia de Producao

    2008-07-01

    This paper presents and analyses the unitization process in the exploration and production of petroleum and natural gas in Brazil, focusing on the regulatory aspects under Petroleum Law 9478/97. Considering the deficiencies and gaps of the existing regulation with respect to unitization, this work presents and discusses those unresolved points and, concisely, presents international unitization cases, applying their resolutions to Brazil where possible.

  16. Research into the influence of spatial variability and scale on the parameterization of hydrological processes

    Science.gov (United States)

    Wood, Eric F.

    1993-01-01

    The objectives of the research were as follows: (1) Extend the Representative Elementary Area (REA) concept, first proposed and developed in Wood et al. (1988), to the water balance fluxes of the interstorm period (redistribution, evapotranspiration and baseflow) necessary for the analysis of long-term water balance processes. (2) Derive spatially averaged water balance model equations for spatially variable soil, topography and vegetation over a range of climates. This is a necessary step toward our goal of deriving consistent hydrologic results up to the GCM grid scales necessary for global climate modeling. (3) Apply the above macroscale water balance equations with remotely sensed data and begin to explore the feasibility of parameterizing the water balance constitutive equations at GCM grid scale.

  17. Carbon-14 immobilization via the Ba(OH)2·8H2O process

    International Nuclear Information System (INIS)

    Haag, G.L.; Nehls, J.W. Jr.; Young, G.C.

    1983-03-01

    The airborne release of 14C from various nuclear facilities has been identified as a potential biohazard due to the long half-life of 14C (5730 y) and the ease with which it may be assimilated into the biosphere. At ORNL, technology has been developed for the removal and immobilization of this radionuclide. Prior studies have indicated that 14C will likely exist in the oxidized form as CO2 and will contribute slightly to the bulk CO2 concentration of the gas stream, which is airlike in nature (approx. 330 ppmv CO2). The technology that has been developed utilizes the CO2-Ba(OH)2·8H2O gas-solid reaction, with the mode of gas-solid contacting being a fixed bed. The product, BaCO3, possesses excellent thermal and chemical stability, prerequisites for the long-term disposal of nuclear wastes. For optimal process operation, studies have indicated that an operating window of adequate size does exist. When operating within the window, high CO2 removal efficiency (effluent concentrations 99%), and an acceptable pressure drop across the bed (3 kPa/m at a superficial velocity of 13 cm/s) are possible. This paper addresses three areas of experimental investigation: (1) microscale studies on 150-mg samples to provide information concerning surface properties, kinetics, and equilibrium vapor pressures; (2) macroscale studies on large fixed beds (4.2 kg of reactant) to determine the effects of humidity, temperature, and gas flow rate upon bed pressure drop and CO2 breakthrough; and (3) design, construction, and initial operation of a pilot unit capable of continuously processing a 34-m^3/h (20-ft^3/min) air-based gas stream.

  18. Multiscale Modeling of Carbon Fiber Reinforced Polymer (CFRP) for Integrated Computational Materials Engineering Process

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Jiaying; Liang, Biao; Zhang, Weizhao; Liu, Zeliang; Cheng, Puikei; Bostanabad, Ramin; Cao, Jian; Chen, Wei; Liu, Wing Kam; Su, Xuming; Zeng, Danielle; Zhao, John

    2017-10-23

    In this work, a multiscale modeling framework for CFRP is introduced to study the hierarchical structure of CFRP. Four distinct scales are defined: nanoscale, microscale, mesoscale, and macroscale. Information at lower scales can be passed to higher scales, which is beneficial for studying the effect of constituents on the mechanical properties of a macroscale part. This bottom-up modeling approach enables a better understanding of CFRP from the finest details. The current study focuses on the microscale and mesoscale, where representative volume elements (RVEs) are used to model the material's properties. At the microscale, a unidirectional CFRP (UD) RVE is used to study the properties of the UD material. The UD RVE can be modeled with different volumetric fractions to account for the non-uniform fiber distribution in a CFRP part. Such consideration is important in modeling uncertainties at the microscale level; currently, we identified volumetric fraction as the only uncertainty parameter in the UD RVE. To measure the effective material properties of the UD RVE, periodic boundary conditions (PBC) are applied to ensure convergence of the obtained properties. The UD properties are used directly in the mesoscale woven RVE modeling, where each yarn is assumed to have the same properties as the UD. Within the woven RVE there are many potential uncertainty parameters to consider for physical modeling of CFRP; currently, we consider fiber misalignment within a yarn and the angle between warp and weft yarns. PBC are applied to the woven RVE to calculate its effective material properties. The effects of the uncertainties are investigated quantitatively by a Gaussian process. Preliminary results of the UD and woven studies are analyzed for the efficacy of the RVE modeling. This work is considered the foundation for future multiscale modeling framework development in the ICME project.
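
    The effective-property extraction that both RVE levels rely on is volume averaging under periodic boundary conditions: the homogenized stiffness is defined by relating the volume-averaged stress and strain over the RVE domain V,

        \langle\boldsymbol{\sigma}\rangle = \frac{1}{V}\int_{V}\boldsymbol{\sigma}\,dV, \qquad
        \langle\boldsymbol{\varepsilon}\rangle = \frac{1}{V}\int_{V}\boldsymbol{\varepsilon}\,dV, \qquad
        \langle\boldsymbol{\sigma}\rangle = \mathbb{C}^{\mathrm{eff}} : \langle\boldsymbol{\varepsilon}\rangle,

    with one RVE load case per independent macroscopic strain component.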

  19. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    Science.gov (United States)

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging, where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
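
    The speckle-variance contrast computed in such pipelines is the interframe intensity variance over the N structural frames (here n = 4) acquired at the same location,

        SV_{jk} = \frac{1}{N}\sum_{i=1}^{N}\left(I_{ijk} - \overline{I}_{jk}\right)^{2},
        \qquad
        \overline{I}_{jk} = \frac{1}{N}\sum_{i=1}^{N} I_{ijk},

    where i indexes frames and (j, k) the pixel; moving scatterers such as blood decorrelate between frames and therefore show high variance, which is why bulk-motion registration before this step matters.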

  20. Reproducibility of Mammography Units, Film Processing and Quality Imaging

    International Nuclear Information System (INIS)

    Gaona, Enrique

    2003-01-01

    The purpose of this study was to carry out an exploratory survey of quality control problems in mammography and processor units, as a diagnosis of the current situation of mammography facilities. Measurements of reproducibility, optical density, optical difference and gamma index are included. Breast cancer is the most frequently diagnosed cancer and the second leading cause of cancer death among women in the Mexican Republic. Mammography is a radiographic examination specially designed for detecting breast pathology. We found that the reproducibility problems of the AEC are smaller than those of the processor units, because almost all processors fall outside the acceptable variation limits, which can affect mammographic image quality and the dose to the breast. Only four mammography units met the minimum score established by the ACR and FDA for the phantom image.

  1. Process and unit for gasification of combustible material. Verfahren und Aggregat zur Vergasung brennbaren Gutes

    Energy Technology Data Exchange (ETDEWEB)

    Linneborn, J

    1987-05-21

    The invention refers to a process for the gasification of solid combustible material in a moving bed, and to a unit in which this process can be carried out. The material to be gasified comprises finely divided material such as ground fossil coal and all organic substances such as wood, straw, and the husks and shells of fruit, to which sewage sludge can be added. According to the invention, the new process can be carried out in a closed duct moved by vibration or shaking, in which the material, or the ash produced, moves from one end to the other under suitable vibration and comes into contact with round heat sources largely resistant to friction. This achieves rapid gasification of the material (at about 1000 °C) by convection and radiation.

  2. Computing the Density Matrix in Electronic Structure Theory on Graphics Processing Units.

    Science.gov (United States)

    Cawkwell, M J; Sanville, E J; Mniszewski, S M; Niklasson, Anders M N

    2012-11-13

    The self-consistent solution of a Schrödinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. Historically, this step was tackled via diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix-matrix multiplications. We demonstrate that, owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performance in double and single precision arithmetic of a hybrid GPU/central processing unit (CPU) implementation and a full GPU implementation of the SP2 algorithm exceeds that of a CPU-only implementation of the SP2 algorithm and of traditional matrix diagonalization when the dimensions of the matrices exceed about 2000 × 2000. Padding schemes for arrays allocated in GPU memory that optimize the performance of the cuBLAS implementations of the level 3 BLAS DGEMM and SGEMM subroutines for generalized matrix-matrix multiplications are described in detail. Analysis of the relative performance of the hybrid CPU/GPU and full GPU implementations indicates that the transfer of arrays between the GPU and CPU constitutes only a small fraction of the total computation time. The errors measured in the self-consistent density matrices computed using the SP2 algorithm are generally smaller than those measured in matrices computed via diagonalization. Furthermore, the errors in the density matrices computed using the SP2 algorithm do not exhibit any dependence on system size, whereas the errors increase linearly with the number of orbitals when diagonalization is employed.
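
    The SP2 recursion evaluated with those GEMM calls starts from the Hamiltonian rescaled so that its spectrum lies in [0, 1] and purifies it toward an idempotent density matrix D. A common statement of the iteration, assuming a known spectral interval [\varepsilon_{\min}, \varepsilon_{\max}] and N_e occupied orbitals, is

        X_{0} = \frac{\varepsilon_{\max} I - H}{\varepsilon_{\max} - \varepsilon_{\min}}, \qquad
        X_{n+1} =
        \begin{cases}
          X_{n}^{2}, & \operatorname{Tr} X_{n} > N_{e},\\
          2X_{n} - X_{n}^{2}, & \operatorname{Tr} X_{n} \le N_{e},
        \end{cases}
        \qquad X_{n} \rightarrow D,

    so each step is one matrix-matrix multiplication plus a trace, exactly the pattern that maps onto DGEMM/SGEMM.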

  3. Real time 3D structural and Doppler OCT imaging on graphics processing units

    Science.gov (United States)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2013-03-01

    In this report, the application of graphics processing unit (GPU) programming for real-time 3D Fourier domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of flows in capillary vessels, is presented. Generally, the time needed to process FdOCT data on the main processor of the computer (CPU) constitutes the main limitation for real-time imaging. Employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem. Taking advantage of them for massively parallel data processing allows for real-time imaging in FdOCT. The presented software for structural and Doppler OCT allows for complete processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. 3D imaging in the same mode, for volume data built of 220 × 100 A-scans, is performed at a rate of about 8 frames per second. In this paper, the software architecture, the organization of threads and the optimizations applied are shown. For illustration, screen shots recorded during real-time imaging of a phantom (a homogeneous water solution of Intralipid in a glass capillary) and of the human eye in vivo are presented.

  4. Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.

    Science.gov (United States)

    Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray

    2017-07-11

    Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as widely used tools for modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers on the ever-improving graphics processing units (GPU) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved, given that single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over the standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of the different linear PBE solvers on GPU, with the diagonal format best suited for our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and an integrated grid-stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.
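
    The linear systems in question arise from discretizing the linearized PBE on a regular grid; in Gaussian units it reads

        \nabla \cdot \left[\epsilon(\mathbf{r})\,\nabla\phi(\mathbf{r})\right]
        - \bar{\kappa}^{2}(\mathbf{r})\,\phi(\mathbf{r}) = -4\pi\rho(\mathbf{r}),

    and the seven-point finite-difference stencil produces the banded symmetric matrices for which the diagonal storage format and the Jacobi preconditioner M = diag(A) discussed above are natural choices.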

  5. Pseudo-random number generators for Monte Carlo simulations on ATI Graphics Processing Units

    Science.gov (United States)

    Demchik, Vadim

    2011-03-01

    Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPU). The performance results of the realized generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX and Mersenne Twister (MT19937)) on CPU and GPU are discussed. The obtained speed-up factor is hundreds of times in comparison with CPU. The RANLUX generator is found to be the most appropriate for use on GPU in Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is presented.
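
    As an illustration of how lightweight such generators are per thread, here is a minimal CUDA device implementation of Marsaglia's XOR128 (xorshift) generator of the kind benchmarked above; the per-thread seeding scheme is an assumption made for the sketch, not the paper's.

        // Marsaglia's xor128: period 2^128 - 1, four 32-bit words of state.
        struct Xor128 { unsigned int x, y, z, w; };

        __device__ unsigned int xor128_next(Xor128& s) {
            unsigned int t = s.x ^ (s.x << 11);
            s.x = s.y; s.y = s.z; s.z = s.w;
            s.w = s.w ^ (s.w >> 19) ^ (t ^ (t >> 8));
            return s.w;
        }

        // Fill `out` with uniform floats in [0, 1); one generator per thread.
        __global__ void fill_uniform(float* out, int n, unsigned int seed) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            // Simple per-thread seeding (illustrative only; production codes
            // use sequence-splitting schemes to guarantee independent streams).
            Xor128 s = { 123456789u ^ seed, 362436069u + i,
                         521288629u ^ (i * 2654435761u), 88675123u };
            out[i] = xor128_next(s) * (1.0f / 4294967296.0f);
        }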

  6. Initial Assessment of Parallelization of Monte Carlo Calculation using Graphics Processing Units

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Joo, Han Gyu

    2009-01-01

    Monte Carlo (MC) simulation is an effective tool for calculating neutron transport in complex geometries. However, because Monte Carlo simulates each neutron history one by one, the computing time becomes very long when enough neutrons are used for high precision. Accordingly, methods that reduce the computing time are required. Monte Carlo codes are well suited to parallel computation because each neutron history is simulated independently; until now, however, parallelization has been done using multiple CPUs. Driven by the global demand for high-quality 3D graphics, the Graphics Processing Unit (GPU) has developed into a highly parallel, multi-core processor, and this parallel processing capability becomes available to engineering computing once a suitable interface is provided. Recently, NVIDIA introduced CUDA(TM), a general-purpose parallel computing architecture: a software environment that allows developers to program the GPU using C/C++ or other languages. In this work, a GPU-based Monte Carlo code is developed and an initial assessment of its parallel performance is presented.
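
    The independence of neutron histories is what makes the method map naturally onto GPU threads; the toy fixed-source sketch below (not the authors' code) makes the point with one vectorized "history per array element" calculation:

        import numpy as np

        def uncollided_fraction(n, sigma_t=1.0, thickness=3.0, seed=1):
            # Sample each neutron's distance to first collision independently.
            rng = np.random.default_rng(seed)
            flight = rng.exponential(1.0 / sigma_t, size=n)
            return float(np.mean(flight > thickness))

        print(uncollided_fraction(10_000_000))  # ~ exp(-3) = 0.0498 for a 3 mfp slab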

  7. Methodologies to maximize olefins in process unit of COMPERJ; Metodologias para maximizacao de olefinas nas unidades de processamento do COMPERJ

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Maria Clara de C. dos; Seidl, Peter R.; Guimaraes, Maria Jose O.C. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Escola de Quimica

    2008-07-01

    With the growth of the national and worldwide economy, there has been a considerable increase in demand for polyolefins, requiring an increase in the production of basic petrochemicals (primarily ethylene and propylene). Because the national oil is heavy and poor in light derivatives, investments are needed in processes for converting heavy fractions, with the intent of maximizing the production of these olefins and of alternative raw materials for obtaining these petrochemicals. The alternatives studied were the expansion of the petrochemical core, changes in the refinery processing units, and the construction of COMPERJ, the latter being an example of an alternative that could change the current scenario. The work simulates the process units of COMPERJ in order to evaluate which solutions, such as COMPERJ, can best meet the growing polyolefin market. (author)

  8. Macrotransport processes: Brownian tracers as stochastic averagers in effective medium theories of heterogeneous media

    International Nuclear Information System (INIS)

    Brenner, H.

    1991-01-01

    Macrotransport processes (generalized Taylor dispersion phenomena) constitute coarse-grained descriptions of comparable convective-diffusive-reactive microtransport processes, the latter supposed governed by microscale linear constitutive equations and boundary conditions, but characterized by spatially nonuniform phenomenological coefficients. Following a brief review of existing applications of the theory, the author focuses, by way of background information, upon the original (and now classical) Taylor-Aris dispersion problem, involving the combined convective and molecular diffusive transport of a point-size Brownian solute molecule (tracer) suspended in a Poiseuille solvent flow within a circular tube. A series of elementary generalizations of this prototype problem to chromatographic-like solute transport processes in tubes is used to illustrate some novel statistical-physical features. These examples emphasize the fact that a solute molecule may, on average, move axially down the tube at a different mean velocity (either larger or smaller) than that of a solvent molecule. Moreover, this solute molecule may suffer axial dispersion about its mean velocity at a rate greatly exceeding that attributable to its axial molecular diffusion alone. Such chromatographic anomalies represent novel macroscale non-linearities originating from physicochemical interactions between spatially inhomogeneous convective-diffusive-reactive microtransport processes.
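
    For context, the classical Taylor-Aris result referred to above (standard textbook material, not quoted in the abstract) gives the effective axial dispersivity of a tracer in Poiseuille flow through a circular tube of radius a, mean velocity \bar{u}, and molecular diffusivity D_m as

        D_{eff} = D_m + \frac{a^2 \bar{u}^2}{48 D_m},

    so dispersion about the mean velocity can greatly exceed axial molecular diffusion whenever the Peclet number a\bar{u}/D_m is large.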

  9. Processing techniques for data from the Kuosheng Unit 1 shakedown safety-relief-valve tests

    International Nuclear Information System (INIS)

    McCauley, E.W.; Rompel, S.L.; Weaver, H.J.; Altenbach, T.J.

    1982-08-01

    This report describes techniques developed at the Lawrence Livermore National Laboratory, Livermore, CA, for processing original data from the Taiwan Power Company's Kuosheng MKIII Unit 1 safety relief valve shakedown tests conducted in April/May 1981. The computer codes used, TPSORT, TPPLOT, and TPPSD, form a special evaluation system that takes the data from its original packed binary form to ordered, calibrated ASCII transducer files and then produces time-history plots, numerical output files, and spectral analyses. These data processing techniques provide a convenient means of independently examining and analyzing a unique data base on steam condensation phenomena in the MARKIII wetwell, and they are applicable to the treatment of similar, but perhaps differently structured, experimental data sets.
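
    The TPPSD-style spectral step can be sketched with standard tools; the sampling rate and test signal below are hypothetical, not taken from the Kuosheng data:

        import numpy as np
        from scipy.signal import welch

        fs = 10_000                                   # hypothetical sampling rate, Hz
        t = np.arange(0, 1.0, 1 / fs)
        rng = np.random.default_rng(0)
        x = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.normal(size=t.size)

        f, psd = welch(x, fs=fs, nperseg=4096)        # power spectral density
        print(f[np.argmax(psd)])                      # dominant frequency ~ 440 Hz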

  10. State-Level Comparison of Processes and Timelines for Distributed Photovoltaic Interconnection in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Ardani, K. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Davidson, C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Nobler, E. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-01-01

    This report presents results from an analysis of distributed photovoltaic (PV) interconnection and deployment processes in the United States. Using data from more than 30,000 residential (up to 10 kilowatts) and small commercial (10-50 kilowatts) PV systems, installed from 2012 to 2014, we assess the range in project completion timelines nationally (across 87 utilities in 16 states) and in five states with active solar markets (Arizona, California, New Jersey, New York, and Colorado).

  11. Silicon-Carbide Power MOSFET Performance in High Efficiency Boost Power Processing Unit for Extreme Environments

    Science.gov (United States)

    Ikpe, Stanley A.; Lauenstein, Jean-Marie; Carr, Gregory A.; Hunter, Don; Ludwig, Lawrence L.; Wood, William; Del Castillo, Linda Y.; Fitzpatrick, Fred; Chen, Yuan

    2016-01-01

    Silicon-Carbide device technology has generated much interest in recent years. With superior thermal performance, power ratings, and potential switching frequencies relative to its Silicon counterpart, Silicon-Carbide offers greater possibilities for high-power switching applications in extreme environments. In particular, the maturing process technology of Silicon-Carbide Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs) has produced a plethora of commercially available, power-dense, low on-state-resistance devices capable of switching at high frequencies. A novel hard-switched power processing unit (PPU) is implemented utilizing Silicon-Carbide power devices. Accelerated life data are captured and assessed in conjunction with a damage accumulation model of gate-oxide and drain-source junction lifetime to evaluate potential system performance in high-temperature environments.
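
    The abstract does not give the damage model's form; temperature-accelerated life models are, however, commonly Arrhenius-type with Miner's-rule accumulation, and the sketch below shows only that generic pattern, with every number assumed:

        import math

        K_B = 8.617e-5  # Boltzmann constant, eV/K

        def accel_factor(t_use, t_stress, ea=0.7):
            """Arrhenius acceleration of a stress temperature over use (kelvin)."""
            return math.exp((ea / K_B) * (1.0 / t_use - 1.0 / t_stress))

        use_temp = 398.15          # assumed use temperature, K
        life_at_use = 1.0e6        # assumed characteristic life at use temperature, h
        stress_hours = {473.15: 500.0, 448.15: 1000.0}   # stress T (K) -> hours

        # Miner's rule: each stress interval consumes a fraction of the total life.
        damage = sum(h * accel_factor(use_temp, T) / life_at_use
                     for T, h in stress_hours.items())
        print(round(damage, 4))    # failure is predicted when damage reaches 1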

  12. Research on the pyrolysis of hardwood in an entrained bed process development unit

    Energy Technology Data Exchange (ETDEWEB)

    Kovac, R.J.; Gorton, C.W.; Knight, J.A.; Newman, C.J.; O' Neil, D.J. (Georgia Inst. of Tech., Atlanta, GA (United States). Research Inst.)

    1991-08-01

    An atmospheric flash pyrolysis process, the Georgia Tech Entrained Flow Pyrolysis Process, for the production of liquid biofuels from oak hardwood is described. Development of the process began with bench-scale studies and a conceptual design in the 1978-1981 timeframe; its development and successful demonstration through research on the pyrolysis of hardwood in an entrained-bed process development unit (PDU) in the period 1982-1989 is presented. Oil yields (dry basis) of up to 60% were achieved in the 1.5-ton-per-day PDU, far exceeding the initial target/forecast of 40%. Experimental data from over forty runs under steady-state conditions, supported by material and energy balances with near-100% closures, were used to establish a process model indicating that oil yields well in excess of 60% (dry basis) can be achieved in a commercial reactor. The experimental results demonstrate a gross product thermal efficiency of 94% and a net product thermal efficiency of 72% or more, the highest values yet achieved with a large-scale biomass liquefaction process. A conceptual manufacturing process and an economic analysis for liquid biofuel production at 60% oil yield in a 200-TPD commercial plant are reported. The plant appears to be profitable at contemporary fuel costs of $21/barrel oil-equivalent. Total capital investment is estimated at under $2.5 million; a rate of return on investment of 39.4% and a payout period of 2.1 years have been estimated. The manufacturing cost of the combustible pyrolysis oil is $2.70 per gigajoule. 20 figs., 87 tabs.

  13. AN APPROACH TO EFFICIENT FEM SIMULATIONS ON GRAPHICS PROCESSING UNITS USING CUDA

    Directory of Open Access Journals (Sweden)

    Björn Nutti

    2014-04-01

    The paper presents a highly efficient way of simulating the dynamic behavior of deformable objects by means of the finite element method (FEM), with computations performed on Graphics Processing Units (GPUs). The presented implementation reduces memory-access bottlenecks by grouping the necessary data per node pair, in contrast to the classical per-element arrangement; this avoids access patterns that are ill-suited to the GPU memory architecture. Furthermore, the implementation takes advantage of the underlying sparse-block-matrix structure, and it is demonstrated how potential bottlenecks in the algorithm can be avoided. To achieve plausible deformation behavior under large local rotations, the objects are modeled by means of a simplified co-rotational FEM formulation.

  14. The United States nuclear regulatory commission license renewal process

    International Nuclear Information System (INIS)

    Holian, B.E.

    2009-01-01

    The United States (U.S.) Nuclear Regulatory Commission (NRC) license renewal process establishes the technical and administrative requirements for the renewal of operating power plant licenses. Reactor operating licenses were originally issued for 40 years and are allowed to be renewed. The review process for license renewal applications (LRAs) provides continued assurance that the level of safety provided by an applicant's current licensing basis is maintained for the period of extended operation. The license renewal review focuses on passive, long-lived structures and components of the plant that are subject to the effects of aging. The applicant must demonstrate that programs are in place to manage those aging effects. The review also verifies that analyses based on the current operating term have been evaluated and shown to be valid for the period of extended operation. The NRC has renewed the licenses for 52 reactors at 30 plant sites; each applicant requested, and was granted, an extension of 20 years. Applications to renew the licenses of 20 additional reactors at 13 plant sites are under review. As license renewal is voluntary, the decision to seek license renewal and the timing of the application rest with the licensee; however, the NRC expects that, over time, essentially all U.S. operating reactors will request license renewal. In 2009, the U.S. has 4 plants entering their 41st year of operation. The U.S. nuclear industry has expressed interest in 'life beyond 60', that is, requesting approval of a second renewal period, and U.S. regulations allow for subsequent license renewals. The NRC is working with the U.S. Department of Energy (DOE) on research related to light water reactor sustainability. (author)

  15. High-Throughput Characterization of Porous Materials Using Graphics Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jihan; Martin, Richard L.; Rübel, Oliver; Haranczyk, Maciej; Smit, Berend

    2012-05-08

    We have developed a high-throughput graphics processing unit (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations, where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH4 and CO2) and the material's framework atoms. Using a parallel flood-fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than those considered in earlier studies. For structures selected by such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
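
    A serial sketch of the Widom-insertion step (the paper's GPU code works on precomputed energy grids with inaccessible regions blocked; the geometry, parameters, and units below are placeholders):

        import numpy as np

        def widom_boltzmann_average(framework, box, eps, sigma, beta,
                                    n_insert=100_000, seed=0):
            """Henry coefficient is proportional to <exp(-beta * U_insertion)>."""
            rng = np.random.default_rng(seed)
            total = 0.0
            for _ in range(n_insert):
                probe = rng.uniform(0.0, box, size=3)      # random trial position
                d = framework - probe
                d -= box * np.round(d / box)               # minimum-image convention
                r2 = np.einsum("ij,ij->i", d, d)
                sr6 = (sigma * sigma / r2) ** 3
                total += np.exp(-beta * np.sum(4.0 * eps * (sr6 * sr6 - sr6)))
            return total / n_insert

        framework = np.random.default_rng(1).uniform(0.0, 20.0, size=(100, 3))
        print(widom_boltzmann_average(framework, 20.0, eps=0.8, sigma=3.4,
                                      beta=0.4, n_insert=10_000))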

  16. Social processes underlying acculturation: a study of drinking behavior among immigrant Latinos in the Northeast United States

    Science.gov (United States)

    LEE, CHRISTINA S.; LÓPEZ, STEVEN REGESER; COBLY, SUZANNE M.; TEJADA, MONICA; GARCÍA-COLL, CYNTHIA; SMITH, MARCIA

    2010-01-01

    Study Goals To identify social processes that underlie the relationship of acculturation and heavy drinking behavior among Latinos who have immigrated to the Northeast United States of America (USA). Method Community-based recruitment strategies were used to identify 36 Latinos who reported heavy drinking. Participants were 48% female, 23 to 56 years of age, and were from South or Central America (39%) and the Caribbean (24%). Six focus groups were audiotaped and transcribed. Results Content analyses indicated that the social context of drinking is different in the participants’ countries of origin and in the United States. In Latin America, alcohol consumption was part of everyday living (being with friends and family). Nostalgia and isolation reflected some of the reasons for drinking in the USA. Results suggest that drinking in the Northeastern United States (US) is related to Latinos’ adaptation to a new sociocultural environment. Knowledge of the shifting social contexts of drinking can inform health interventions. PMID:20376331

  17. The role of personnel marketing in the process of building corporate social responsibility strategy of a scientific unit

    Directory of Open Access Journals (Sweden)

    Sylwia Jarosławska-Sobór

    2015-09-01

    The goal of this article is to discuss the significance of human capital in the process of building a corporate social responsibility (CSR) strategy and the role of personnel marketing in that process. A dynamically changing social environment has enforced a new way of looking at non-material resources: organizations have understood that human capital and social competences have a significant impact on creating an organization's value, generating profits, and gaining competitive advantage in the 21st century. Personnel marketing is now a key element in implementing the CSR concept and building the value of contemporary organizations, especially such unique organizations as scientific units. The article discusses the basic values regarded as crucial by the Central Mining Institute (GIG) in the context of their significance for the paradigm of social responsibility. The analysis was carried out on the basis of GIG's experience in developing strategic CSR, which takes into consideration the specific character of the Institute as a scientific unit.

  18. Nanoscale multireference quantum chemistry: full configuration interaction on graphical processing units.

    Science.gov (United States)

    Fales, B Scott; Levine, Benjamin G

    2015-10-13

    Methods based on a full configuration interaction (FCI) expansion in an active space of orbitals are widely used for modeling chemical phenomena such as bond breaking, multiply excited states, and conical intersections in small-to-medium-sized molecules, but these phenomena occur in systems of all sizes. To scale such calculations up to the nanoscale, we have developed an implementation of FCI in which electron repulsion integral transformation and several of the more expensive steps in σ vector formation are performed on graphical processing unit (GPU) hardware. When applied to a 1.7 × 1.4 × 1.4 nm silicon nanoparticle (Si72H64) described with the polarized, all-electron 6-31G** basis set, our implementation can solve for the ground state of the 16-active-electron/16-active-orbital CASCI Hamiltonian (more than 100,000,000 configurations) in 39 min on a single NVidia K40 GPU.
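
    The quoted configuration count can be checked directly: a (16-electron, 16-orbital) CASCI expansion has C(16,8) alpha strings times C(16,8) beta strings:

        from math import comb

        n_dets = comb(16, 8) ** 2   # alpha strings x beta strings
        print(f"{n_dets:,}")        # 165,636,900 -> "more than 100,000,000"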

  19. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    Science.gov (United States)

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
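
    A hedged NumPy sketch of the frame arithmetic described above; a search over axial/lateral shifts of up to 2 pixels produces exactly the 25 candidate angiographic images mentioned in the abstract, though the 2-pixel range is our assumption:

        import numpy as np

        def angio(f1, f2, max_shift=2):
            """Squared difference of sequential frames; the bulk-tissue-motion
            shift is the pixel shift minimizing the summed difference image."""
            best, best_sum = None, np.inf
            for dz in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    diff = (f1 - np.roll(f2, (dz, dx), axis=(0, 1))) ** 2
                    s = float(diff.sum())
                    if s < best_sum:
                        best_sum, best = s, diff
            return best

        rng = np.random.default_rng(0)
        a, b = rng.random((2, 256, 256))
        img = angio(a, b)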

  20. Gravity driven and in situ fractional crystallization processes in the Centre Hill complex, Abitibi Subprovince, Canada: Evidence from bilaterally-paired cyclic units

    Science.gov (United States)

    Thériault, R. D.; Fowler, A. D.

    1996-12-01

    The formation of layers in mafic intrusions has been explained by various processes, making it the subject of much controversy. The concept that layering originates from gravitational settling of crystals has been superseded in recent years by models involving in situ fractional crystallization. Here we present evidence from the Centre Hill complex that both processes may be operative simultaneously within the same intrusion. The Centre Hill complex is part of the Munro Lake sill, an Archean layered mafic intrusion emplaced in volcanic rocks of the Abitibi Subprovince. The Centre Hill complex comprises the following lithostratigraphic units: six lower cyclic units of peridotite and clinopyroxenite; a middle unit of leucogabbro; six upper cyclic units of branching-textured gabbro (BTG) and clotted-textured gabbro (CTG), the uppermost of these units being overlain by a marginal zone of fine-grained gabbro. The cyclic units of peridotite/clinopyroxenite and BTG/CTG are interpreted to have formed concurrently through fractional crystallization, associated with periodic replenishment of magma to the chamber. The units of peridotite and clinopyroxenite formed by gravitational accumulation of crystals that grew under the roof. The cyclic units of BTG and CTG formed along the upper margin of the sill by two different mechanisms: (1) layers of BTG crystallized in situ along an inward-growing roof and (2) layers of CTG formed by accumulation of buoyant plagioclase crystals. The layers of BTG are characterized by branching pseudomorphs after fayalite up to 50 cm in length that extend away from the upper margin. The original branching crystals are interpreted to have grown from stagnant intercumulus melt in a high thermal gradient resulting from the injection of new magma to the chamber.

  1. Fast ray-tracing of human eye optics on Graphics Processing Units.

    Science.gov (United States)

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

    We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye, which makes them less effective for modeling vision disorders associated with abnormal shapes of the ocular structures, shapes that are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can handle ocular structures of arbitrary shape, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays, and a stable retinal image can be generated within minutes. We simulated depth of field, accommodation, chromatic aberrations, and astigmatism and its correction. We also show an application of the technique to patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
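
    At the core of any such simulator is refraction at each ocular interface; the vector form of Snell's law below is textbook material, not code from the paper:

        import numpy as np

        def refract(d, n, eta):
            """Refract unit direction d at a surface with unit normal n; eta = n1/n2.
            Returns the refracted direction, or None for total internal reflection."""
            cos_i = -float(np.dot(d, n))
            k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
            if k < 0.0:
                return None
            return eta * d + (eta * cos_i - np.sqrt(k)) * n

        # Air-to-cornea example (n ~ 1.376), ray 30 degrees off the normal:
        d = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
        print(refract(d, np.array([0.0, 0.0, 1.0]), 1.0 / 1.376))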

  2. Sensitivity of Austempering Heat Treatment of Ductile Irons to Changes in Process Parameters

    Science.gov (United States)

    Boccardo, A. D.; Dardati, P. M.; Godoy, L. A.; Celentano, D. J.

    2018-03-01

    Austempered ductile iron (ADI) is frequently obtained by means of a three-step austempering heat treatment. The parameters of this process play a crucial role in the microstructure of the final product. This paper considers the influence of some process parameters (i.e., the initial microstructure of the ductile iron and the thermal cycle) on key features of the heat treatment, such as the minimum required times for austenitization and austempering and the microstructure of the final product. A computational simulation of the austempering heat treatment is reported in this work, which accounts for coupled thermo-metallurgical behavior in terms of the evolution of temperature at the scale of the part being investigated (the macroscale) and the evolution of phases at the scale of the microconstituents (the microscale). The paper focuses on the sensitivity of the process by looking at a sensitivity index and scatter plots; the sensitivity indices are determined using a technique based on the variance of the output. The results of this study indicate that both the initial microstructure and the thermal cycle parameters play a key role in the production of ADI. The work also provides a guideline to help select values of the process parameters appropriate for obtaining parts with a required microstructural characteristic.
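
    The variance-based index used in the study has the generic form S_i = Var(E[Y|X_i]) / Var(Y); the toy model and binning estimator below only illustrate that definition and are not the paper's ADI model:

        import numpy as np

        rng = np.random.default_rng(0)
        x1, x2 = rng.random(200_000), rng.random(200_000)
        y = 4.0 * x1 + np.sin(2.0 * np.pi * x2)     # invented stand-in model

        # S1 = Var(E[Y | X1]) / Var(Y), with E[Y | X1] estimated by binning X1.
        edges = np.linspace(0.0, 1.0, 51)
        idx = np.digitize(x1, edges) - 1
        cond_mean = np.array([y[idx == k].mean() for k in range(50)])
        print(round(cond_mean.var() / y.var(), 2))  # ~0.73 for this toy model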

  4. Accelerating cardiac bidomain simulations using graphics processing units.

    Science.gov (United States)

    Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G

    2012-08-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.

  5. Processes and patterns of interaction as units of selection: An introduction to ITSNTS thinking.

    Science.gov (United States)

    Doolittle, W Ford; Inkpen, S Andrew

    2018-04-17

    Many practicing biologists accept that nothing in their discipline makes sense except in the light of evolution, and that natural selection is evolution's principal sense-maker. But what natural selection actually is (a force or a statistical outcome, for example) and the levels of the biological hierarchy (genes, organisms, species, or even ecosystems) at which it operates directly are still actively disputed among philosophers and theoretical biologists. Most formulations of evolution by natural selection emphasize the differential reproduction of entities at one or the other of these levels. Some also recognize differential persistence, but in either case the focus is on lineages of material things: even species can be thought of as spatiotemporally restricted, if dispersed, physical beings. Few consider, as "units of selection" in their own right, the processes implemented by genes, cells, species, or communities. "It's the song not the singer" (ITSNTS) theory does that, also claiming that evolution by natural selection of processes is more easily understood and explained as differential persistence than as differential reproduction. ITSNTS was formulated as a response to the observation that the collective functions of microbial communities (the songs) are more stably conserved and ecologically relevant than are the taxa that implement them (the singers). It aims to serve as a useful corrective to claims that "holobionts" (microbes and their animal or plant hosts) are aggregate "units of selection," claims that often conflate meanings of that latter term. But ITSNTS also seems broadly applicable, for example, to the evolution of global biogeochemical cycles and the definition of ecosystem function.

  6. Research Regarding the Anticorosiv Protection of Atmospheric and Vacuum Distillation Unit that Process Crude Oil

    Directory of Open Access Journals (Sweden)

    M. Morosanu

    2011-12-01

    Due to their high boiling temperatures, organic acids are present in the hotter sections of the metal equipment of atmospheric and vacuum distillation units and drive increased corrosion in furnace tubes, transfer lines, metal equipment within the distillation columns, etc. In order to protect the metal equipment of atmospheric and vacuum distillation units against acid corrosion, the authors investigated a solution that combines corrosion inhibitors with the selection of construction materials. For this purpose, we tested the inhibitor PET 1441, which contains a dialkyl phosphate, and an inhibitor based on a phosphate ester. In this case a phosphorous complex forms on the metal surface that withstands high temperature and high fluid velocity. An initial shock dose is injected to form the passive layer and achieve 90% protection, and a dose of 20 ppm is then maintained to ensure continued protection. The anticorrosion protection, namely the inhibition efficiency, is checked by testing samples made from different steels.

  7. [Work process and workers' health in a food and nutrition unit: prescribed versus actual work].

    Science.gov (United States)

    Colares, Luciléia Granhen Tavares; Freitas, Carlos Machado de

    2007-12-01

    This study focuses on the relationship between the work process in a food and nutrition unit and workers' health, in the words of the participants themselves. Direct observation, a semi-structured interview, and focus groups were used to collect the data. The reference was the dialogue between human ergonomics and work psychodynamics. The results showed that work organization in the study unit represents a routine activity, the requirements of which in terms of the work situation are based on criteria set by the institution. Variability in the activities is influenced mainly by the available equipment, instruments, and materials, thereby generating improvisation in meal production that produces both a physical and psychological cost for workers. Dissatisfaction during the performance of tasks results mainly from the supervisory style and relationship to immediate superiors. Workers themselves proposed changes in the work organization, based on greater dialogue and trust between supervisors and the workforce. Finally, the study identifies the need for an intervention that encourages workers' participation as agents of change.

  8. Sono-leather technology with ultrasound: a boon for unit operations in leather processing - review of our research work at Central Leather Research Institute (CLRI), India.

    Science.gov (United States)

    Sivakumar, Venkatasubramanian; Swaminathan, Gopalaraman; Rao, Paruchuri Gangadhar; Ramasami, Thirumalachari

    2009-01-01

    Ultrasound is a sound wave with a frequency above the human audible range of 16 Hz to 16 kHz. In recent years, numerous unit operations involving physical as well as chemical processes have been reported to be enhanced by ultrasonic irradiation, with benefits such as improved process efficiency, reduced process time, operation under milder conditions, and avoidance of some toxic chemicals for cleaner processing. The important point is that ultrasonic irradiation is activation by a physical method rather than by chemical entities. Detailed studies have been made of unit operations in leather processing, such as diffusion-rate enhancement through the porous leather matrix, cleaning, degreasing, tanning, dyeing, fatliquoring, oil-water emulsification, and solid-liquid tannin extraction from vegetable tanning materials, as well as precipitation reactions in wastewater treatment. The fundamental mechanism in these processes is ultrasonic cavitation in the liquid medium; in addition, some process-specific mechanisms enhance particular operations. For instance, possible real-time reversible pore-size changes during ultrasound propagation through the skin/leather matrix could be a reason for the diffusion-rate enhancement in leather processing, as reported for the first time. Exhaustive scientific research in this area has been carried out by our group in the Chemical Engineering Division of CLRI, and most of these benefits have been demonstrated in publications in peer-reviewed international journals. The overall results indicate about a 2-5-fold increase in process efficiency due to ultrasound under the given process conditions for various unit operations, with additional benefits. Scale-up studies are underway to convert these concepts into viable larger-scale operations.

  10. The application of projected conjugate gradient solvers on graphical processing units

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory; Renaut, Rosemary [ARIZONA STATE UNIV.

    2011-01-26

    Graphical processing units introduce the capability for large-scale computation at the desktop. The numerical results presented verify that the efficiency and accuracy of basic linear algebra subroutines of all levels are comparable when implemented in CUDA and in Jacket, but experimental results demonstrate that the level-three subroutines offer the greatest potential for improving the efficiency of basic numerical algorithms. We consider the solution of a set of linear equations with multiple right-hand sides using Krylov subspace-based solvers; for this case it is more efficient to use a block implementation of the conjugate gradient algorithm, implemented here in Jacket, than to solve each system independently. Furthermore, including projection from one system to another improves efficiency. A relevant example, for which simulated results are provided, is the reconstruction of a three-dimensional medical image volume acquired from a positron emission tomography scanner; the efficiency of the reconstruction is improved by using projection across nearby slices.

  11. Comparative study of the variables for determining unit processing cost of irradiated food products in developing countries : case study of Ghana

    International Nuclear Information System (INIS)

    Banini, G.K; Emi-Reynolds, G.; Kassapu, S.N.

    1997-01-01

    A method for estimating the unit cost of gamma-treated food products in a developing country like Ghana is presented. The method takes the cost of the cobalt source requirement, capital and operating costs, dose requirements, etc., and relates these variables to various annual throughputs of a gamma processing facility. Where the costs of foreign components or devices are required, the assumptions are based on those of Kunstadt and Steeves; otherwise, the prevailing conditions in Ghana are used. The study reveals that the unit processing cost for gamma-treated foods in such a facility is between 8.0 and 147.2 US dollars per tonne. (author). 9 refs., 4 figs
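
    The structure of such an estimate is a levelized cost: annualized capital plus operating costs, divided by annual throughput. Every figure in the sketch below is invented for illustration, not taken from the paper:

        def unit_cost(capital, life_years, rate, operating_per_year, tonnes_per_year):
            """Levelized unit processing cost (USD per tonne)."""
            # Capital recovery factor annualizes the up-front investment.
            crf = rate * (1 + rate) ** life_years / ((1 + rate) ** life_years - 1)
            return (capital * crf + operating_per_year) / tonnes_per_year

        # Hypothetical facility: $2M capital, 15-y life, 10% rate, $250k/y O&M, 20 kt/y.
        print(round(unit_cost(2e6, 15, 0.10, 2.5e5, 2e4), 1))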

  12. Conversion of a deasphalting unit for use in the process of supercritical solvent recovery

    Directory of Open Access Journals (Sweden)

    Waintraub S.

    2000-01-01

    In order to reduce energy consumption and increase deasphalted oil yield, an old PETROBRAS deasphalting unit was converted for use in the process of supercritical solvent recovery. In-plant and pilot tests were performed to determine the ideal solvent-to-oil ratio. The optimum conditions for separating the supercritical solvent from the solvent-plus-oil liquid mixture were determined by experimental tests in PVT cells. These tests also allowed measurement of the dew and bubble points, determination of the retrograde region, and observation of supercritical fluid compressibility, and as a result a phase equilibrium diagram was constructed.

  13. Unit Testing Using Design by Contract and Equivalence Partitions, Extreme Programming and Agile Processes in Software Engineering

    DEFF Research Database (Denmark)

    Madsen, Per

    2003-01-01

    Extreme Programming [1], and in particular the idea of Unit Testing, can improve the quality of the testing process. But programmers still need to do a lot of tiresome manual work writing test cases. If programmers could get automatic tool support for enforcing the quality of test cases, the overall quality of the software would improve significantly.
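
    A small illustration of the two ideas combined, with plain Python asserts standing in for a dedicated contract tool and one test per equivalence partition of the divisor:

        import unittest

        def divide(a: float, b: float) -> float:
            assert b != 0, "precondition: divisor must be nonzero"        # require
            result = a / b
            assert abs(result * b - a) < 1e-9, "postcondition violated"   # ensure
            return result

        class DivideTest(unittest.TestCase):
            # One test case per equivalence partition of the divisor.
            def test_positive_partition(self):
                self.assertEqual(divide(6.0, 3.0), 2.0)

            def test_negative_partition(self):
                self.assertEqual(divide(6.0, -3.0), -2.0)

            def test_invalid_partition(self):
                with self.assertRaises(AssertionError):
                    divide(1.0, 0.0)

        if __name__ == "__main__":
            unittest.main()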

  14. Two-scale characterization of deformation-induced anisotropy of polycrystalline metals

    International Nuclear Information System (INIS)

    Watanabe, Ikumu; Terada, Kenjiro

    2004-01-01

    The anisotropic macro-scale mechanical behavior of polycrystalline metals is characterized by incorporating the micro-scale constitutive model of single-crystal plasticity into two-scale modeling based on the mathematical homogenization theory. Two-scale simulations are conducted to analyze the macro-scale anisotropy induced by micro-scale plastic deformation of the polycrystalline aggregate. In the simulations, the micro-scale representative volume element (RVE) of a polycrystalline aggregate is uniformly loaded in one direction, unloaded to macroscopically zero stress at a certain stage of deformation, and then re-loaded in different directions. The re-loading calculations provide different macro-scale responses of the RVE, which reveal the induced material anisotropy. We then examine the effects of the intergranular and intragranular behaviors on the anisotropy by means of various illustrations of the plastic deformation process, instead of using pole figures of the change of crystallographic orientations.

  15. Risk Quantitative Determination of Fire and Explosion in a Process Unit By Dow’s Fire and Explosion Index

    Directory of Open Access Journals (Sweden)

    S. Varmazyar

    2008-04-01

    Background and aims: Fire and explosion hazards are, respectively, the first and second major hazards in process industries. This study was done to determine fire and explosion risk severity, the radius of exposure, and the most probable loss. Methods: In this quantitative study, a process unit was selected together with the parameters affecting its fire and explosion risk and was analyzed by Dow's Fire and Explosion Index (F&EI). Technical data were obtained from process documents and reports and from the fire and explosion guideline. After calculating Dow's index, the radius of exposure was determined and finally the most probable loss was estimated. Results: The results showed an F&EI value of 226 for this process unit, which is extremely high and unacceptable; the risk severity falls in the severe class. The radius of exposure and damage factor were calculated as 57 meters and 83%, respectively, and the most probable loss was estimated at about 6.7 million dollars. Conclusion: F&EI is a proper technique for risk assessment and loss estimation of fire and explosion in process industries, and an important index for distinguishing high-risk from low-risk areas in an industry. In this technique, all factors affecting fire and explosion risk are expressed as an index that serves as a basis for judging the risk class; the estimated losses can also be used as a basis for fire and explosion insurance.
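
    The paper's headline numbers are mutually consistent with the commonly cited Dow relations; in the check below the value of property in the exposed area is back-calculated and is therefore purely an assumption:

        fei = 226                  # Fire and Explosion Index from the study
        radius_m = 0.256 * fei     # commonly cited Dow relation: ~0.256 m per index point
        damage_factor = 0.83       # from the study
        exposed_value = 8.1e6      # assumed property value in the exposed area, USD
        probable_loss = exposed_value * damage_factor
        print(round(radius_m, 1), round(probable_loss / 1e6, 1))   # ~57.9 m, ~6.7 MUSD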

  16. SHIVGAMI : Simplifying tHe titanIc blastx process using aVailable GAthering of coMputational unIts

    Directory of Open Access Journals (Sweden)

    Naman Mangukia

    2017-10-01

    Assembling novel genomes from scratch is a never-ending process until all living organisms have been covered, and this de novo approach is employed by RNA-Seq and metagenomics analysis. Functional identification of the scaffolds or transcripts from such draft assemblies is a substantial step that routinely employs the well-known BlastX program, which lets a user search a DNA query against the NCBI protein (NR, ~120 GB) database. In spite of its multicore-processing option, BlastX is a lengthy process for bulk, lengthy query inputs. Tremendous efforts are constantly being applied to this problem through increased computational power, GPU-based computing, cloud computing, and Hadoop-based approaches, which ultimately require gigantic costs in terms of money and processing. To address this issue, we have come up with SHIVGAMI, which automates the entire process using Perl and shell scripts that divide, distribute, and process the input FASTA sequences among the computational units according to the CPU cores available in each. A Linux operating system and installations of the NR database and the BlastX program are prerequisites for each system. The beauty of this stand-alone automation program is that it requires the LAN connection exactly twice: during query distribution and at process completion. In the initial phase it divides the FASTA sequences according to each computer's core capability; it then distributes the data, along with small automation scripts that run the BlastX process, to the respective computational units, and the result files are sent back to the master computer, which finally combines and compiles them into a single result. This simple automation converts a computer lab into a grid without investment in any additional software, hardware, or man-power. In short, SHIVGAMI is a time and cost saver for all users, starting from commercial firms.
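
    SHIVGAMI itself is written in Perl and shell; purely to illustrate the core splitting step, here is a Python sketch that apportions whole FASTA records by each node's core count (all names are invented):

        from pathlib import Path

        def split_fasta(path, cores_per_node):
            """Apportion whole FASTA records in proportion to each node's cores."""
            records, current = [], []
            for line in Path(path).read_text().splitlines():
                if line.startswith(">") and current:
                    records.append("\n".join(current))
                    current = []
                current.append(line)
            if current:
                records.append("\n".join(current))
            total = sum(cores_per_node.values())
            chunks, start = {}, 0
            for node, cores in cores_per_node.items():
                n = round(len(records) * cores / total)
                chunks[node] = records[start:start + n]
                start += n
            chunks[node].extend(records[start:])   # rounding remainder -> last node
            return chunks

        # chunks = split_fasta("queries.fasta", {"lab-pc-1": 4, "lab-pc-2": 8})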

  17. Porting of the transfer-matrix method for multilayer thin-film computations on graphics processing units

    Science.gov (United States)

    Limmer, Steffen; Fey, Dietmar

    2013-07-01

    Thin-film computations are often a time-consuming task during optical design. An efficient way to accelerate these computations with the help of graphics processing units (GPUs) is described. It turned out that significant speed-ups can be achieved. We investigate the circumstances under which the best speed-up values can be expected. Therefore we compare different GPUs among themselves and with a modern CPU. Furthermore, the effect of thickness modulation on the speed-up and the runtime behavior depending on the input data is examined.
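
    For reference, the serial kernel being ported is the standard characteristic-matrix recurrence; the normal-incidence version below is the textbook formulation, not the authors' GPU code:

        import numpy as np

        def reflectance(n_layers, d_layers, n_in, n_out, wavelength):
            """Normal-incidence transfer-matrix method for a thin-film stack."""
            M = np.eye(2, dtype=complex)
            for n, d in zip(n_layers, d_layers):
                delta = 2.0 * np.pi * n * d / wavelength      # phase thickness
                M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                  [1j * n * np.sin(delta), np.cos(delta)]])
            B, C = M @ np.array([1.0, n_out])
            r = (n_in * B - C) / (n_in * B + C)
            return abs(r) ** 2

        # Quarter-wave MgF2 (n = 1.38) antireflection layer on glass at 550 nm:
        print(round(reflectance([1.38], [550 / (4 * 1.38)], 1.0, 1.52, 550), 4))  # ~0.0126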

  18. Decreasing laboratory turnaround time and patient wait time by implementing process improvement methodologies in an outpatient oncology infusion unit.

    Science.gov (United States)

    Gjolaj, Lauren N; Gari, Gloria A; Olier-Pino, Angela I; Garcia, Juan D; Fernandez, Gustavo L

    2014-11-01

    Prolonged patient wait times in the outpatient oncology infusion unit indicated a need to streamline phlebotomy processes by using existing resources to decrease laboratory turnaround time and improve patient wait time. Using the DMAIC (define, measure, analyze, improve, control) method, a project to streamline phlebotomy processes within the outpatient oncology infusion unit in an academic Comprehensive Cancer Center known as the Comprehensive Treatment Unit (CTU) was completed. Laboratory turnaround time for patients who needed same-day lab and CTU services and wait time for all CTU patients was tracked for 9 weeks. During the pilot, the wait time from arrival to CTU to sitting in treatment area decreased by 17% for all patients treated in the CTU during the pilot. A total of 528 patients were seen at the CTU phlebotomy location, representing 16% of the total patients who received treatment in the CTU, with a mean turnaround time of 24 minutes compared with a baseline turnaround time of 51 minutes. Streamlining workflows and placing a phlebotomy station inside of the CTU decreased laboratory turnaround times by 53% for patients requiring same day lab and CTU services. The success of the pilot project prompted the team to make the station a permanent fixture. Copyright © 2014 by American Society of Clinical Oncology.

  20. A Patient Flow Analysis: Identification of Process Inefficiencies and Workflow Metrics at an Ambulatory Endoscopy Unit.

    Science.gov (United States)

    Almeida, Rowena; Paterson, William G; Craig, Nancy; Hookey, Lawrence

    2016-01-01

    Background. The increasing demand for endoscopic procedures coincides with the paradigm shift in health care delivery that emphasizes efficient use of existing resources. However, there is limited literature on the range of endoscopy unit efficiencies. Methods. A time and motion analysis of patient flow through the Hotel-Dieu Hospital (Kingston, Ontario) endoscopy unit was followed by qualitative interviews. Procedures were directly observed in three segments: individual endoscopy room use, preprocedure/recovery room, and overall endoscopy unit utilization. Results. Data were collected for 137 procedures in the endoscopy room, 139 procedures in the preprocedure room, and 143 procedures for overall room utilization. The mean duration spent in the endoscopy room was 31.47 min for an esophagogastroduodenoscopy, 52.93 min for a colonoscopy, 30.47 min for a flexible sigmoidoscopy, and 66.88 min for a double procedure. The procedure itself accounted for 8.11 min, 34.24 min, 9.02 min, and 39.13 min for the above procedures, respectively. The focused interviews identified the scheduling template as a major area of operational inefficiency. Conclusions. Despite reasonable procedure times for all except colonoscopies, the endoscopy room durations exceed the allocated times, reflecting the impact of non-procedure-related factors and the need for a revised scheduling template. Endoscopy units have unique operational characteristics and identification of process inefficiencies can lead to targeted quality improvement initiatives.

  1. Effects of the Scientific Argumentation Based Learning Process on Teaching the Unit of Cell Division and Inheritance to Eighth Grade Students

    Science.gov (United States)

    Balci, Ceyda; Yenice, Nilgun

    2016-01-01

    The aim of this study is to analyse the effects of scientific argumentation based learning process on the eighth grade students' achievement in the unit of "cell division and inheritance". It also deals with the effects of this process on their comprehension about the nature of scientific knowledge, their willingness to take part in…

  2. Towards a Unified Sentiment Lexicon Based on Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Liliana Ibeth Barbosa-Santillán

    2014-01-01

    This paper presents an approach to creating what we have called a Unified Sentiment Lexicon (USL). The approach aims at aligning, unifying, and expanding the set of sentiment lexicons available on the web in order to increase their robustness of coverage. One problem in automatically unifying the scores of different sentiment lexicons is that there are multiple lexical entries for which the classification as positive, negative, or neutral {P, N, Z} depends on the unit of measurement used in the annotation methodology of the source lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and -1, where 1 indicates that the entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required to compute all the lexical entries in the unification task; the USL approach therefore computes a subset of lexical entries on each of the 1,344 GPU cores, using parallel processing to unify 155,802 lexical entries. The results of the analysis show that the USL has 95,430 lexical entries, of which 35,201 are considered positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for the 95,430 lexical entries, a threefold reduction in the computing time of the UnifiedMetrics procedure.
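
    The unification statistic itself is just Pearson's r over the polarity scores that two source lexicons assign to shared entries; a toy check with invented scores:

        import numpy as np

        # Polarity scores two source lexicons assign to the same five entries.
        lex_a = np.array([0.9, -0.7, 0.1, 0.4, -0.2])
        lex_b = np.array([0.8, -0.6, 0.0, 0.5, -0.1])
        r = np.corrcoef(lex_a, lex_b)[0, 1]    # Pearson r, in [-1, 1]
        print(round(r, 3))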

  3. Monte Carlo method for neutron transport calculations in graphics processing units (GPUs)

    International Nuclear Information System (INIS)

    Pellegrino, Esteban

    2011-01-01

    Monte Carlo simulation is well suited to solving the Boltzmann neutron transport equation in inhomogeneous media with complicated geometries. However, routine applications require the computation time to be reduced to hours and even minutes on a desktop PC. Interest in adopting Graphics Processing Units (GPUs) for Monte Carlo acceleration is growing rapidly: the massive parallelism provided by the latest GPU technologies is the most promising answer to the challenge of performing full-size reactor core analysis on a routine basis. In this study, Monte Carlo codes for a fixed-source neutron transport problem were developed for GPU environments in order to evaluate issues associated with computational speedup using GPUs. The results obtained in this work suggest that a speedup of several orders of magnitude is possible using state-of-the-art GPU technologies. (author)

  4. Area-delay trade-offs of texture decompressors for a graphics processing unit

    Science.gov (United States)

    Novoa Súñer, Emilio; Ituero, Pablo; López-Vallejo, Marisa

    2011-05-01

    Graphics Processing Units have become a booster for the microelectronics industry. However, due to intellectual property issues, there is a serious lack of information on implementation details of the hardware architecture that is behind GPUs. For instance, the way texture is handled and decompressed in a GPU to reduce bandwidth usage has never been dealt with in depth from a hardware point of view. This work addresses a comparative study on the hardware implementation of different texture decompression algorithms for both conventional (PCs and video game consoles) and mobile platforms. Circuit synthesis is performed targeting both a reconfigurable hardware platform and a 90nm standard cell library. Area-delay trade-offs have been extensively analyzed, which allows us to compare the complexity of decompressors and thus determine suitability of algorithms for systems with limited hardware resources.

  5. Grey water treatment by a continuous process of an electrocoagulation unit and a submerged membrane bioreactor system

    KAUST Repository

    Bani-Melhem, Khalid

    2012-08-01

    This paper presents the performance of an integrated process consisting of an electrocoagulation (EC) unit and a submerged membrane bioreactor (SMBR) for grey water treatment. For comparison, another SMBR process without electrocoagulation was operated in parallel, with both processes run under constant transmembrane pressure for 24 days in continuous operation mode. Integrating EC with the SMBR (EC-SMBR) was found to be effective not only for grey water treatment but also for improving the overall performance of the membrane filtration process: EC-SMBR achieved up to a 13% reduction in membrane fouling compared with the SMBR without electrocoagulation. High average percent removals were attained by both processes for most wastewater parameters studied, with EC-SMBR performance slightly exceeding that of the SMBR for COD, turbidity, and colour. Both processes produced effluent free of suspended solids, and faecal coliforms were nearly 100% removed in both. A substantial improvement was achieved in phosphate removal in the EC-SMBR process; however, ammonia nitrogen was removed more effectively by the SMBR alone. Accordingly, the electrolysis conditions in the EC-SMBR process should be optimized so as not to impede biological treatment. © 2012 Elsevier B.V.

  6. Monitoring of operating processes

    International Nuclear Information System (INIS)

    Barry, R.F.

    1981-01-01

    Apparatus is described for monitoring the processes of a nuclear reactor to detect off-normal operation of any process and for testing the monitoring apparatus. The processes are evaluated through their parameters, such as temperature, pressure, etc. The apparatus includes a pair of monitoring paths or signal-processing units. Each unit includes facilities for receiving, on a time-sharing basis, a status binary word made up of digits each indicating the status of a process, whether normal or off-normal, and test-signal binary words simulating the status binary words. The status words and test words are processed in succession during successive cycles. During each cycle, the two units receive the same status word and the same test word. The test words simulate the status words both when they indicate normal operation and when they indicate off-normal operation. Each signal-processing unit includes a pair of memories. Each memory receives a status word or a test word, as the case may be, and converts the received word into a converted status word or a converted test word. The memories of each unit feed a non-coincidence circuit, which signals whenever the converted word from one memory is not identical to the converted word from the other memory of the same unit.
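
    A hedged software model of the dual-path idea described above: two redundant memories convert the same status (or test) word, and a non-coincidence check raises an alarm whenever the two converted words differ. The lookup-table contents and word widths are invented for the demo.

```python
# Sketch of dual-memory non-coincidence monitoring; not the patented apparatus.
def make_memory():
    # Identical lookup "memories" mapping a 4-bit status word to a
    # converted word; in the real apparatus these are hardware memories.
    return {w: (w ^ 0b1010) & 0xF for w in range(16)}

mem_a, mem_b = make_memory(), make_memory()

def monitor_cycle(status_word, fault_in_b=False):
    conv_a = mem_a[status_word]
    conv_b = mem_b[status_word]
    if fault_in_b:            # simulate a stuck bit in unit B's memory
        conv_b ^= 0b0001
    return conv_a == conv_b   # False -> non-coincidence alarm

assert monitor_cycle(0b0110)                        # healthy: words coincide
assert not monitor_cycle(0b0110, fault_in_b=True)   # alarm raised on mismatch
print("non-coincidence logic behaves as expected")
```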

  7. PENGEMBANGAN STRATEGIC BUSINESS UNIT PERHUTANI UNIT III JAWA BARAT DAN BANTEN

    Directory of Open Access Journals (Sweden)

    Rurin Wahyu Listriana

    2014-09-01

    Full Text Available ABSTRACT: This study aimed to 1) analyze the innovation and competitiveness ability of the Strategic Business Unit (SBU) in the KBM (independent business unit) Industri and 2) formulate alternative policies that can enhance the company's innovation and competitiveness. The study was conducted at KBM Industri and the SBUs within it. Information and data were obtained through interviews and questionnaires distributed to 10 respondents, chosen on the basis of their expertise and/or experience. Data were processed using SWOT analysis and the Analytic Hierarchy Process (AHP). The results show that KBM Industri Unit III has the strength and the opportunity to expand, increase growth, and achieve maximum progress by improving quality, developing new products, improving processes, and increasing access to wider markets. The innovation potential can be seen in process-product innovation, knowledge-skills innovation, and method-system innovation; in KBM Industri Unit III this potential covers raw materials, process equipment, and products. The improvement of the company's innovation was influenced mainly by the organization factor (weight 0.436), with marketing the most influential sub-factor (0.398), while the ultimate goal is process improvement (0.756). The priority strategies for improving the company's innovation are cooperation with external parties (weight 0.703) and optimizing the company's own research and development capabilities (0.297). Keywords: SBU, KBM Industri, Innovation, AHP, SWOT

  8. ACTION OF UNIFORM SEARCH ALGORITHM WHEN SELECTING LANGUAGE UNITS IN THE PROCESS OF SPEECH

    Directory of Open Access Journals (Sweden)

    Ирина Михайловна Некипелова

    2013-05-01

    Full Text Available The article investigates the action of a uniform search algorithm in a person's selection of language units during speech production. The process is connected with the phenomenon of speech optimization, which makes it possible to shorten the time needed to formulate what one wants to say and to achieve maximum precision in expressing thoughts. The uniform search algorithm works at the levels of consciousness and subconsciousness, favouring the automation of speech production and perception. Realization of a person's cognitive potential in the process of communication starts up a complicated mechanism of self-organization and self-regulation of language. In turn, this results in optimization of the language system, serving not only a person's self-actualization but also communication in society. The method of problem-oriented search is used to research the optimization mechanisms distinctive to speech production and language stabilization. DOI: http://dx.doi.org/10.12731/2218-7405-2013-4-50

  9. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    International Nuclear Information System (INIS)

    Badal, Andreu; Badano, Aldo

    2009-01-01

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  10. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Badal, Andreu; Badano, Aldo [Division of Imaging and Applied Mathematics, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland 20993-0002 (United States)

    2009-11-15

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  11. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    Science.gov (United States)

    Badal, Andreu; Badano, Aldo

    2009-11-01

    It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA™ programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speedup factor was obtained using a GPU compared to a single core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
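
    As a toy illustration of particle transport in a voxelized geometry, the sketch below uses Woodcock (delta) tracking, a standard technique that avoids explicit voxel-boundary intersection tests and keeps each photon history independent, which is exactly what makes one-GPU-thread-per-photon parallelism effective. It is not the PENELOPE physics or the authors' CUDA code; the attenuation map and numbers are invented.

```python
# Woodcock (delta) tracking through a 1D "voxelized" attenuation map.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.1, 0.3, 0.05, 0.5, 0.2])   # attenuation per voxel (1/cm), toy
voxel_size = 1.0                             # cm
mu_max = mu.max()                            # majorant cross section

def track_photon():
    x = 0.0
    while True:
        x += rng.exponential(1.0 / mu_max)   # tentative free flight
        ivox = int(x / voxel_size)
        if ivox >= len(mu):
            return "escaped"
        # Accept a real interaction with probability mu(x)/mu_max;
        # otherwise the collision is virtual and the photon flies on.
        if rng.random() < mu[ivox] / mu_max:
            return "interacted"

results = [track_photon() for _ in range(10000)]
print("escape fraction ~", results.count("escaped") / len(results))
```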

  12. Comments on comet shapes and aggregation processes

    International Nuclear Information System (INIS)

    Hartmann, W.K.

    1989-01-01

    An important question for a comet mission is whether comet nuclei preserve information clarifying the aggregation processes of planetary matter. New observational evidence shows that Trojan asteroids, as a group, display a higher fraction of highly elongated objects than belt asteroids. More recently, evidence has accumulated that comet nuclei, as a group, also display highly elongated shapes at the macroscale. This evidence comes from the several comets whose nuclear lightcurves or shapes have been well studied. Trojans and comet nuclei share other properties: both groups have extremely low albedos and reddish to neutral-black colors typical of asteroids of spectral classes D, P, and C, and both groups may have had relatively low collision frequencies. An important problem to resolve with spacecraft imaging is whether these elongated shapes are primordial or due to evolution of the objects. Two hypotheses that might be tested by a combination of global-scale and close-up imaging from various directions are: (1) the irregular shapes are primordial and related to the fact that these bodies have had lower collision frequencies than belt asteroids; or (2) the irregular shapes may be due to volatile loss.

  13. PGAS in-memory data processing for the Processing Unit of the Upgraded Electronics of the Tile Calorimeter of the ATLAS Detector

    International Nuclear Information System (INIS)

    Ohene-Kwofie, Daniel; Otoo, Ekow

    2015-01-01

    The ATLAS detector, operated at the Large Hadron Collider (LHC), records proton-proton collisions at CERN every 50 ns, resulting in a sustained data flow of up to PB/s. The upgraded Tile Calorimeter of the ATLAS experiment will sustain about 5 PB/s of digital throughput. These massive data rates require extremely fast data capture and processing. Although there has been a steady increase in the processing speed of CPUs/GPGPUs assembled for high-performance computing, the rate of data input and output, even under parallel I/O, has not kept up with the general increase in computing speeds. The problem, then, is whether one can implement an I/O subsystem infrastructure capable of meeting the computational speeds of advanced computing systems at the petascale and exascale level. We propose a system architecture that leverages the Partitioned Global Address Space (PGAS) model of computing to maintain an in-memory data store for the Processing Unit (PU) of the upgraded electronics of the Tile Calorimeter, which is proposed to be used as a high-throughput general-purpose co-processor to the sROD of the upgraded Tile Calorimeter. The physical memories of the PUs are aggregated into a large global logical address space using RDMA-capable interconnects such as PCI-Express to enhance data processing throughput. (paper)
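
    A minimal sketch of the PGAS idea referenced above: one global logical address space is partitioned across the physical memories of several Processing Units, so any global index resolves to an (owning PU, local offset) pair. Real PGAS runtimes perform the remote access over RDMA-capable interconnects; the partition size and stand-in stores below are assumptions for the demo.

```python
# Toy partitioned global address space: resolve global indices to owners.
N_UNITS = 8
PARTITION = 1 << 20          # words owned by each PU (assumed)

def locate(global_index):
    """Resolve a global address to its owning PU and local offset."""
    owner = global_index // PARTITION
    offset = global_index % PARTITION
    if owner >= N_UNITS:
        raise IndexError("address beyond the global space")
    return owner, offset

stores = [dict() for _ in range(N_UNITS)]   # stand-in for PU memories

def put(global_index, value):
    owner, offset = locate(global_index)
    stores[owner][offset] = value           # an RDMA put in a real system

def get(global_index):
    owner, offset = locate(global_index)
    return stores[owner].get(offset)        # an RDMA get in a real system

put(3 * PARTITION + 42, "sample")
print(get(3 * PARTITION + 42), "held by PU", locate(3 * PARTITION + 42)[0])
```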

  14. Enrichment situation outside the United States

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    Different enrichment technologies are briefly characterized, including gaseous diffusion, which is presently the production mainstay of the United States and France; the gas centrifuge, which is the production technology for Urenco and for future United States enrichment expansion; the aerodynamic processes, which include the jet nozzle (also known as the Becker process) and the fixed-wall centrifuge (also known as the Helikon process); chemical processes; laser isotope separation processes (also referred to in the literature as LIS); and plasma technology.

  15. Computerized nursing process in the Intensive Care Unit: ergonomics and usability.

    Science.gov (United States)

    Almeida, Sônia Regina Wagner de; Sasso, Grace Teresinha Marcon Dal; Barra, Daniela Couto Carvalho

    2016-01-01

    Analyzing the ergonomics and usability criteria of the Computerized Nursing Process based on the International Classification for Nursing Practice in the Intensive Care Unit, according to the International Organization for Standardization (ISO). A quantitative, quasi-experimental, before-and-after study with a sample of 16 participants performed in an Intensive Care Unit. Data collection was performed through the application of five simulated clinical cases and an evaluation instrument. Data analysis was performed by descriptive and inferential statistics. The organization, content, and technical criteria were considered "excellent", and the interface criteria were considered "very good", obtaining means of 4.54, 4.60, 4.64, and 4.39, respectively. The analyzed standards obtained means above 4.0, being considered "very good" by the participants. The Computerized Nursing Process met ergonomic and usability standards according to the standards set by ISO. This technology supports nurses' clinical decision-making by providing complete and up-to-date content for Nursing practice in the Intensive Care Unit.

  16. Processing device with self-scrubbing logic

    Science.gov (United States)

    Wojahn, Christopher K.

    2016-03-01

    An apparatus includes a processing unit including a configuration memory and self-scrubber logic coupled to read the configuration memory to detect compromised data stored in the configuration memory. The apparatus also includes a watchdog unit external to the processing unit and coupled to the self-scrubber logic to detect a failure in the self-scrubber logic. The watchdog unit is coupled to the processing unit to selectively reset the processing unit in response to detecting the failure in the self-scrubber logic. The apparatus also includes an external memory external to the processing unit and coupled to send configuration data to the configuration memory in response to a data feed signal outputted by the self-scrubber logic.

  17. A real-time GNSS-R system based on software-defined radio and graphics processing units

    Science.gov (United States)

    Hobiger, Thomas; Amagai, Jun; Aida, Masanori; Narita, Hideki

    2012-04-01

    Reflected signals of the Global Navigation Satellite System (GNSS) from the sea or land surface can be utilized to deduce and monitor physical and geophysical parameters of the reflecting area. Unlike most other remote sensing techniques, GNSS-Reflectometry (GNSS-R) operates as a passive radar that takes advantage of the increasing number of navigation satellites that broadcast their L-band signals. Most GNSS-R receiver architectures, however, are based on dedicated hardware solutions. Software-defined radio (SDR) technology has advanced in recent years and enabled signal processing in real time, which makes it an ideal candidate for the realization of a flexible GNSS-R system. Additionally, modern commodity graphics cards, which offer massive parallel computing performance, allow the whole signal processing chain to be handled without interfering with the PC's CPU. Thus, this paper describes a GNSS-R system which has been developed on the principles of software-defined radio supported by General Purpose Graphics Processing Units (GPGPUs), and presents results from initial field tests which confirm the anticipated capability of the system.
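
    The core GNSS-R computation that a GPU offloads is correlating the received reflection against a replica ranging code to find the delay. The sketch below does this with an FFT-based circular cross-correlation on synthetic data; the PRN-like code, delay, and noise level are invented.

```python
# FFT-based delay estimation, the embarrassingly parallel kernel of GNSS-R.
import numpy as np

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=1023)        # toy PRN-like chip sequence
true_delay = 217                                  # samples, invented
rx = np.roll(code, true_delay) + 0.5 * rng.standard_normal(code.size)

# Circular cross-correlation via the correlation theorem:
# corr = IFFT(FFT(rx) * conj(FFT(code))); the peak marks the delay.
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code))).real
print("estimated delay:", int(np.argmax(corr)), "samples (true:", true_delay, ")")
```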

  18. 40 CFR 63.2252 - What are the requirements for process units that have no control or work practice requirements?

    Science.gov (United States)

    2010-07-01

    40 CFR 63.2252 (2010-07-01): What are the requirements for process units that have no control or work practice requirements? Title 40, Protection of Environment; National Emission Standards for Hazardous Air Pollutants: Plywood and Composite Wood Products; General Compliance Requirements, § 63.2252.

  19. Simultaneous measurement of triboelectrification and triboluminescence of crystalline materials

    Science.gov (United States)

    Collins, Adam L.; Camara, Carlos G.; Van Cleve, Eli; Putterman, Seth J.

    2018-01-01

    Triboelectrification has been studied for over 2500 years, yet there is still a lack of fundamental understanding as to its origin. Given its utility in areas such as xerography, powder spray painting, and energy harvesting, many devices have been made to investigate triboelectrification at many length scales, though few seek to additionally make use of triboluminescence: the emission of electromagnetic radiation immediately following a charge separation event. As devices for measuring triboelectrification have become smaller and smaller, now measuring down to the atomic scale with atomic force microscope-based designs, an appreciation for the collective and multi-scale nature of triboelectrification has perhaps abated. Consider that the energy required to move a unit charge is very large compared to a van der Waals interaction, yet peeling Scotch tape (whose adhesion is derived from van der Waals forces) can provide strong enough energy focusing to generate X-ray emission. This paper presents a device to press approximately cm-sized materials together in a vacuum, with in situ alignment. Residual surface charge, force, and position, as well as X-ray, visible light, and RF emission, are measured for single-crystal samples. Charge is therefore tracked throughout the charging and discharging processes, resulting in a more complete picture of triboelectrification, with controllable and measurable environmental influence. Macroscale charging is directly measured, whilst triboluminescence, originating in atomic-scale processes, probes the microscale. The apparatus was built with the goal of obtaining an ab initio-level explanation of triboelectrification for well-defined materials, at the micro- and macro-scale, which has eluded scientists for millennia.

  20. Implementation of a phenomenological DNB prediction model based on macroscale boiling flow processes in PWR fuel bundles

    International Nuclear Information System (INIS)

    Mohitpour, Maryam; Jahanfarnia, Gholamreza; Shams, Mehrzad

    2014-01-01

    Highlights: • A numerical framework was developed to mechanistically predict DNB in PWR bundles. • The DNB evaluation module was incorporated into the two-phase flow solver module. • A three-dimensional two-fluid model was the basis of the two-phase flow solver module. • The liquid sublayer dryout model was adapted as the CHF-triggering mechanism in the DNB module. • The ability of the DNB modeling approach was studied based on PSBT DNB tests in a rod bundle. - Abstract: In this study, a numerical framework, comprising a two-phase flow subchannel solver module and a Departure from Nucleate Boiling (DNB) evaluation module, was developed to mechanistically predict DNB in rod bundles of a Pressurized Water Reactor (PWR). In this regard, the liquid sublayer dryout model was adapted as the Critical Heat Flux (CHF) triggering mechanism to reduce the dependency of the model on empirical correlations in the DNB evaluation module. To predict local flow boiling processes, a three-dimensional two-fluid formalism coupled with heat conduction was selected as the basic tool for the development of the two-phase flow subchannel analysis solver. Evaluation of the DNB modeling approach was performed against the OECD/NRC NUPEC PWR Bundle tests (PSBT Benchmark), which supplied an extensive database for the development of truly mechanistic and consistent models for boiling transition and CHF. The results of the analyses demonstrated the need for additional assessment of the subcooled boiling model and the bulk condensation model implemented in the two-phase flow solver module. The proposed model slightly under-predicts the DNB power in comparison with values obtained from steady-state benchmark measurements; this prediction is nevertheless acceptable compared with other codes, and the model behaves conservatively. Examination of the axial and radial position of the first detected DNB using code-to-code comparisons on the basis of PSBT data indicated that the our

  1. Ba(OH)₂·8H₂O process for the removal and immobilization of carbon-14. Final report

    International Nuclear Information System (INIS)

    Haag, G.L.; Holladay, D.W.; Pitt, W.W. Jr.; Young, G.C.

    1986-01-01

    The airborne release of ¹⁴C from various nuclear facilities has been identified as a potential biohazard due to the long half-life of ¹⁴C (5730 years) and the ease with which it may be assimilated into the biosphere. At ORNL, technology has been developed for the removal and immobilization of this radionuclide. Prior studies have indicated that ¹⁴C will likely exist in the oxidized form as CO₂ and will contribute slightly to the bulk CO₂ concentration of the gas stream, which is air-like in nature (approx. 300 ppmv CO₂). The technology that has been developed utilizes the CO₂-Ba(OH)₂·8H₂O gas-solid reaction, with the mode of gas-solid contacting being a fixed bed. The product, BaCO₃, possesses excellent thermal and chemical stability, prerequisites for the long-term disposal of nuclear wastes. For optimal process operation, studies have indicated that an operating window of adequate size does exist. When operating within the window, high CO₂ removal efficiency (>99%) and an acceptable pressure drop across the bed (3 kPa/m at a superficial velocity of 13 cm/s) are possible. Three areas of experimental investigation are reported: (1) microscale studies on 150-mg samples to provide information concerning surface properties, kinetics, and equilibrium vapor pressures; (2) macroscale studies on large fixed beds (4.2 kg of reactant) to determine the effects of humidity, temperature, and gas flow rate upon bed pressure drop and CO₂ breakthrough; and (3) design, construction, and operation of a pilot unit capable of continuously processing a 34-m³/h (20-ft³/min) air-based gas stream.

  2. The aging self in a cultural context: the relation of conceptions of aging to identity processes and self-esteem in the United States and the Netherlands.

    Science.gov (United States)

    Westerhof, Gerben J; Whitbourne, Susan Krauss; Freeman, Gillian P

    2012-01-01

    To study the aging self, that is, conceptions of one's own aging process, in relation to identity processes and self-esteem in the United States and the Netherlands. As the liberal American system has a stronger emphasis on individual responsibility and youthfulness than the social-democratic Dutch system, we expect that youthful and positive perceptions of one's own aging process are more important in the United States than in the Netherlands. Three hundred and nineteen American and 235 Dutch persons between 40 and 85 years participated in the study. A single question on age identity and the Personal Experience of Aging Scale measured aspects of the aging self. The Identity and Experiences Scale measured identity processes and Rosenberg's scale measured self-esteem. A youthful age identity and more positive personal experiences of aging were related to identity processes and self-esteem. These conceptions of one's own aging process also mediate the relation between identity processes and self-esteem. This mediating effect is stronger in the United States than in the Netherlands. As expected, the self-enhancing function of youthful and positive aging perceptions is stronger in the liberal American system than in the social-democratic Dutch welfare system. The aging self should therefore be studied in its cultural context.

  3. Evaluation of virus reduction efficiency in wastewater treatment unit processes as a credit value in the multiple-barrier system for wastewater reclamation and reuse

    OpenAIRE

    Ito, Toshihiro; Kato, Tsuyoshi; Hasegawa, Makoto; Katayama, Hiroyuki; Ishii, Satoshi; Okabe, Satoshi; Sano, Daisuke

    2016-01-01

    The virus reduction efficiency of each unit process is commonly determined based on the ratio of virus concentration in influent to that in effluent of a unit, but the virus concentration in wastewater has often fallen below the analytical quantification limit, which does not allow us to calculate the concentration ratio at each sampling event. In this study, left-censored datasets of norovirus (genogroup I and II), and adenovirus were used to calculate the virus reduction efficiency in unit ...

  4. Engineering Encounters: The Cat in the Hat Builds Satellites. A Unit Promoting Scientific Literacy and the Engineering Design Process

    Science.gov (United States)

    Rehmat, Abeera P.; Owens, Marissa C.

    2016-01-01

    This column presents ideas and techniques to enhance your science teaching. This month's issue shares information about a unit promoting scientific literacy and the engineering design process. The integration of engineering with scientific practices in K-12 education can promote creativity, hands-on learning, and an improvement in students'…

  5. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Full Text Available Abstract. Background: Large-scale protein structure alignment, an indispensable tool in structural bioinformatics, poses a tremendous challenge to computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings: We present ppsAlign, a parallel protein structure alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions: ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues of protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massively parallel computing power of GPUs.

  6. RO unit for canned coffee drink processing line. Shipped to Tone Coca-Cola Bottling Co. Ltd., Ibaraki plant; Coffee line yo RO sochi. Tone Coca-Cola Bottling (kabu) Ibaraki kojo nonyu

    Energy Technology Data Exchange (ETDEWEB)

    Nakajima, K. [Ebara Corp., Tokyo (Japan)

    1996-01-20

    The paper introduces an RO unit (reverse osmosis membrane equipment) for producing water for the canned coffee drink processing line introduced at the Ibaraki plant of Tone Coca-Cola Bottling Co. The unit aims at reducing hardness components from city water and producing water for the coffee drink processing line. The capacity of the unit is 25 m³/h and the recovery rate is 80%. The unit is composed of a sand filter, a heat exchanger, a pre-filter, RO modules, a treated water tank, chemicals storage tanks, and an RO cleaning unit, the first of which serve as pretreatment. The treated water, into which chlorine is injected, is sent through the existing activated carbon tower and micro filter to the processing line. The RO unit can simultaneously remove ions, trihalomethane, pathogens, and organic matter in addition to hardness components. Continuous water bottling is possible with no need for the usual regeneration process, and maintenance is easy. Because of the high hardness of the supplied raw water, acid is injected at the primary side of the unit for pH regulation to prevent scale deposition in the RO modules. The quality of the treated water well met the specifications. 2 figs., 2 tabs.
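
    A quick sanity check on the quoted figures, assuming the 25 m³/h capacity refers to permeate (product) flow; the mass balance below is illustrative and not from the paper.

```python
# RO mass balance: feed = permeate / recovery; concentrate is the remainder.
def ro_mass_balance(permeate_m3h, recovery):
    feed = permeate_m3h / recovery
    concentrate = feed - permeate_m3h
    return feed, concentrate

feed, conc = ro_mass_balance(25.0, 0.80)
print(f"feed ~ {feed:.2f} m3/h, concentrate ~ {conc:.2f} m3/h")
# -> feed ~ 31.25 m3/h, concentrate ~ 6.25 m3/h
```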

  7. Reconstructing the population activity of olfactory output neurons that innervate identifiable processing units

    Directory of Open Access Journals (Sweden)

    Shigehiro Namiki

    2008-06-01

    Full Text Available We investigated the functional organization of the moth antennal lobe (AL, the primary olfactory network, using in vivo electrophysiological recordings and anatomical identification. The moth AL contains about 60 processing units called glomeruli that are identifiable from one animal to another. We were able to monitor the output information of the AL by recording the activity of a population of output neurons, each of which innervated a single glomerulus. Using compiled intracellular recordings and staining data from different animals, we mapped the odor-evoked dynamics on a digital atlas of the AL and geometrically reconstructed the population activity. We examined the quantitative relationship between the similarity of olfactory responses and the anatomical distance between glomeruli. Globally, the olfactory response profile was independent of the anatomical distance, although some local features were present.

  8. Business Process Compliance through Reusable Units of Compliant Processes

    NARCIS (Netherlands)

    D. Shumm; O. Turetken; N. Kokash (Natallia); A. Elgammal; F. Leymann; J. van den Heuvel

    2010-01-01

    Compliance management is essential for ensuring that organizational business processes and supporting information systems are in accordance with a set of prescribed requirements originating from laws, regulations, and various legislative or technical documents such as the Sarbanes-Oxley Act.

  9. Process monitoring

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    Many of the measurements and observations made in a nuclear processing facility to monitor processes and product quality can also be used to monitor the location and movements of nuclear materials. In this session, information is presented on how to use process monitoring data to enhance nuclear material control and accounting (MC&A). It will be seen that SNM losses can generally be detected with greater sensitivity and timeliness, and the point of loss localized more closely, than with conventional MC&A systems if process monitoring data are applied. The purpose of this session is to enable the participants to: (1) identify process unit operations that could serve as control units for monitoring SNM losses; (2) choose key measurement points and formulate a loss indicator for each control unit; and (3) describe how the sensitivity and timeliness of loss detection could be determined for each loss indicator.
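
    A minimal sketch of a loss indicator for one control unit, along the lines described above: material unaccounted for (MUF) is the difference between measured inputs and measured outputs plus the change in in-process inventory over a balance period. All quantities and the alarm threshold below are invented.

```python
# Loss indicator for a single process control unit (toy numbers).
def loss_indicator(inputs, outputs, delta_inventory):
    """Material unaccounted for (MUF) across a control unit's balance period."""
    return sum(inputs) - sum(outputs) - delta_inventory

muf = loss_indicator(inputs=[12.40, 0.52],      # kg SNM entering the unit
                     outputs=[12.31, 0.45],     # kg SNM leaving the unit
                     delta_inventory=0.05)      # kg change in in-process holdup
alarm_threshold = 0.10                          # kg, set from measurement error
status = "investigate" if abs(muf) > alarm_threshold else "normal"
print(f"MUF = {muf:.3f} kg -> {status}")
```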

  10. Unit operations used to treat process and/or waste streams at nuclear power plants

    International Nuclear Information System (INIS)

    Godbee, H.W.; Kibbey, A.H.

    1980-01-01

    Estimates are given of the annual amounts of each generic type of LLW [i.e., Government and commercial (fuel cycle and non-fuel cycle)] that is generated at LWR plants. Many different chemical engineering unit operations are used to treat process and/or waste streams at LWR plants, including adsorption, evaporation, calcination, centrifugation, compaction, crystallization, drying, filtration, incineration, reverse osmosis, and solidification of waste residues. The treatment of these various streams and the secondary wet solid wastes thus generated is described. The various treatment options for concentrates or wet solid wastes, and for dry wastes, are discussed. Among the dry waste treatment methods are compaction, baling, and incineration, as well as chopping, cutting, and shredding. Organic materials [liquids (e.g., oils or solvents) and/or solids] could be incinerated in most cases. The filter sludges, spent resins, and concentrated liquids (e.g., evaporator concentrates) are usually solidified in cement, urea-formaldehyde, or unsaturated polyester resins prior to burial. Incinerator ashes can also be incorporated in these binding agents. Asphalt has not yet been used. This paper presents a brief survey of operational experience at LWRs with various unit operations, including a short discussion of problems and some observations on recent trends.

  11. Commercial processing and disposal alternatives for very low levels of radioactive waste in the United States

    International Nuclear Information System (INIS)

    Benda, G.A.

    2005-01-01

    The United States has several options available in the commercial processing and disposal of very low levels of radioactive waste. These range from NRC licensed low level radioactive sites for Class A, B and C waste to conditional disposal or free release of very low concentrations of material. Throughout the development of disposal alternatives, the US promoted a graded disposal approach based on risk of the material hazards. The US still promotes this approach and is renewing the emphasis on risk based disposal for very low levels of radioactive waste. One state in the US, Tennessee, has had a long and successful history of disposal of very low levels of radioactive material. This paper describes that approach and the continuing commercial options for safe, long term processing and disposal. (author)

  12. A functional intranet for the United States Coast Guard Unit

    OpenAIRE

    Hannah, Robert Todd.

    1998-01-01

    Approved for public release; distribution is unlimited. This thesis describes the complete development process of a friendly, functional Intranet for an operational United States Coast Guard (USCG) Electronics Support Unit (ESU) in Alameda, California. The final product is suitable for immediate use. It may also be used as a prototype for future Intranet development efforts. The methodology used to develop a finished, working product provides the core subject matter for this thesis. The disc...

  13. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL

    Directory of Open Access Journals (Sweden)

    Guan-Jie Hua

    2017-10-01

    Full Text Available A phylogenetic tree is a visual diagram of the relationships between a set of biological species, which scientists use to analyze many characteristics of the species. The distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa. These methods suffer from computational performance issues, and although several new methods using high-performance hardware and frameworks have been proposed, the issue remains. In this work, a novel parallel UPGMA approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedup over the implementation of UPGMA on a modern CPU and a single GPU, respectively.
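
    For orientation, the sketch below is a compact serial UPGMA implementation; the paper's contribution is parallelizing exactly this find-minimum/merge/update loop across multiple GPUs with NCCL. The four-taxon distance matrix is made up, and the bracketing of the output label may vary with set-iteration order.

```python
# Serial UPGMA: repeatedly merge the closest pair of clusters and update
# the distance matrix with size-weighted averages.
def upgma(names, dist):
    sizes = {n: 1 for n in names}
    d = {frozenset(p): v for p, v in dist.items()}
    while len(names) > 1:
        pair = min(d, key=d.get)                  # closest pair of clusters
        a, b = tuple(pair)
        merged = f"({a},{b})"
        for c in [n for n in names if n not in pair]:
            # Size-weighted average of distances to the merged cluster.
            dc = (d[frozenset((a, c))] * sizes[a] +
                  d[frozenset((b, c))] * sizes[b]) / (sizes[a] + sizes[b])
            d[frozenset((merged, c))] = dc
        d = {p: v for p, v in d.items() if a not in p and b not in p}
        sizes[merged] = sizes.pop(a) + sizes.pop(b)
        names = [n for n in names if n not in (a, b)] + [merged]
    return names[0]

taxa = ["A", "B", "C", "D"]
dist = {("A", "B"): 2, ("A", "C"): 6, ("A", "D"): 10,
        ("B", "C"): 6, ("B", "D"): 10, ("C", "D"): 4}
print(upgma(taxa, dist))   # e.g. ((A,B),(C,D))
```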

  14. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL.

    Science.gov (United States)

    Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi

    2017-01-01

    A phylogenetic tree is a visual diagram of the relationships between a set of biological species, which scientists use to analyze many characteristics of the species. The distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa. These methods suffer from computational performance issues, and although several new methods using high-performance hardware and frameworks have been proposed, the issue remains. In this work, a novel parallel UPGMA approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedup over the implementation of UPGMA on a modern CPU and a single GPU, respectively.

  15. 1:250,000-scale Hydrologic Units of the United States

    Science.gov (United States)

    Steeves, Peter; Nebert, Douglas

    1994-01-01

    The Geographic Information Retrieval and Analysis System (GIRAS) was developed in the mid-1970s to put into digital form a number of data layers which were of interest to the USGS. One of these data layers was the Hydrologic Units. The map is based on the Hydrologic Unit Maps published by the U.S. Geological Survey Office of Water Data Coordination, together with the list descriptions and names of each region, subregion, accounting unit, and cataloging unit. The hydrologic units are encoded with an eight-digit number that indicates the hydrologic region (first two digits), hydrologic subregion (second two digits), accounting unit (third two digits), and cataloging unit (fourth two digits). The data produced by GIRAS were originally collected at a scale of 1:250K. Some areas, notably major cities in the west, were recompiled at a scale of 1:100K. In order to join the data together and use them in a geographic information system (GIS), the data were processed in the ARC/INFO GIS software package. Within the GIS, the data were edgematched and the neatline boundaries between maps were removed to create a single data set for the conterminous United States.
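
    The eight-digit encoding described above is easy to work with programmatically; the sketch below splits a hydrologic unit code (HUC) into its four two-digit fields. The sample code value is arbitrary.

```python
# Parse an 8-digit hydrologic unit code into its four hierarchical fields.
def parse_huc8(huc):
    assert len(huc) == 8 and huc.isdigit(), "expected an 8-digit HUC"
    return {"region": huc[0:2], "subregion": huc[2:4],
            "accounting_unit": huc[4:6], "cataloging_unit": huc[6:8]}

print(parse_huc8("01080204"))
# {'region': '01', 'subregion': '08', 'accounting_unit': '02',
#  'cataloging_unit': '04'}
```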

  16. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    Science.gov (United States)

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework, the compute unified device architecture (CUDA). A series of simulation experiments is carried out to test the accuracy and the accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced, by a factor of 38.9 with a GTX 580 graphics card, using the improved method.

  17. Visualization of microscale phase displacement processes in retention and outflow experiments: nonuniqueness of unsaturated flow properties

    DEFF Research Database (Denmark)

    Mortensen, Annette Pia; Glass, R.J.; Hollenbeck, K.J.

    2001-01-01

    -scale heterogeneities. Because the mixture of these microscale processes yields macroscale effective behavior, measured unsaturated flow properties are also a function of these controls. Such results suggest limitations on the current definitions and uniqueness of unsaturated hydraulic properties....

  18. permGPU: Using graphics processing units in RNA microarray association studies

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2010-06-01

    Full Text Available Abstract. Background: Many analyses of microarray association studies involve permutation, bootstrap resampling, and cross-validation, which are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. Results: We have developed a CUDA-based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. Conclusions: permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative, and censored time-to-event traits.
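
    A minimal NumPy version of the permutation resampling that permGPU accelerates: group labels are permuted and a per-gene statistic is recomputed to build a null distribution, with each permutation independent and hence embarrassingly parallel. The simulated data, effect size, and statistic are invented; permGPU's actual test statistics differ.

```python
# Permutation resampling for a two-group microarray-style comparison.
import numpy as np

rng = np.random.default_rng(7)
n_genes, n_samples = 500, 40
expr = rng.standard_normal((n_genes, n_samples))
labels = np.array([0] * 20 + [1] * 20)
expr[0, labels == 1] += 1.5                 # plant one truly associated gene

def group_diff(expr, labels):
    # Mean expression difference between the two groups, per gene.
    return expr[:, labels == 1].mean(axis=1) - expr[:, labels == 0].mean(axis=1)

observed = group_diff(expr, labels)
n_perm = 1000
exceed = np.zeros(n_genes)
for _ in range(n_perm):                     # each permutation is independent,
    perm = rng.permutation(labels)          # hence embarrassingly parallel
    exceed += np.abs(group_diff(expr, perm)) >= np.abs(observed)
pvals = (exceed + 1) / (n_perm + 1)
print("gene 0 p-value ~", pvals[0])
```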

  19. Healthy Change Processes-A Diary Study of Five Organizational Units. Establishing a Healthy Change Feedback Loop.

    Science.gov (United States)

    Lien, Mathilde; Saksvik, Per Øystein

    2016-10-01

    This paper explores a change process in the Central Norway Regional Health Authority that was brought about by the implementation of a new economics and logistics system. The purpose of this paper is to contribute to understanding of how employees' attitudes towards change develop over time and how attitudes differ between the five health trusts under this authority. In this paper, we argue that a process-oriented focus through a longitudinal diary method, in addition to action research and feedback loops, will provide greater understanding of the evaluation of organizational change and interventions. This is explored through the assumption that different units will have different perspectives and attitudes towards the same intervention over time because of different contextual and time-related factors. The diary method aims to capture the context, events, reflections and interactions when they occur and allows for a nuanced frame of reference for the different phases of the implementation process and how these phases are perceived by employees. Copyright © 2016 John Wiley & Sons, Ltd.

  20. BarraCUDA - a fast short read sequence aligner using graphics processing units

    Directory of Open Access Journals (Sweden)

    Klus Petr

    2012-01-01

    Full Text Available Abstract. Background: With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General-purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy-efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings: Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computation-intensive alignment component of BWA to the GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers an order-of-magnitude performance boost in alignment throughput compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions: BarraCUDA is designed to take advantage of the parallelism of the GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline so that the wider scientific community can benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net

  1. BarraCUDA - a fast short read sequence aligner using graphics processing units

    LENUS (Irish Health Repository)

    Klus, Petr

    2012-01-13

    Abstract. Background: With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General-purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy-efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings: Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computation-intensive alignment component of BWA to the GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers an order-of-magnitude performance boost in alignment throughput compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions: BarraCUDA is designed to take advantage of the parallelism of the GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline so that the wider scientific community can benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net

  2. Design process and instrumentation of a low NOx wire-mesh duct burner for micro-cogeneration unit

    Energy Technology Data Exchange (ETDEWEB)

    Ramadan, O.B.; Gauthier, J.E.D. [Carleton Univ., Ottawa, ON (Canada). Dept. of Mechanical and Aerospace Engineering; Hughes, P.M.; Brandon, R. [Natural Resources Canada, Ottawa, ON (Canada). CANMET Energy Technology Centre

    2007-07-01

    Air pollution and global climate change have become serious environmental problems, leading to increasingly stringent government regulations worldwide. New designs and methods for improving combustion systems to minimize the production of toxic emissions, like nitrogen oxides (NOx), are therefore needed. In order to control smog, acid rain, ozone depletion, and greenhouse-effect warming, a reduction of nitrogen oxides is necessary. One alternative for combined electrical power and heat generation (CHP) is the micro-cogeneration unit, which uses a micro-turbine as a prime mover. However, to increase the efficiencies of these units, micro-cogeneration technology still needs further development. This paper described the design process, construction, and testing of a new low-NOx wire-mesh duct burner (WMDB) for the development of a more efficient micro-cogeneration unit. The primary goal of the study was to develop a practical and simple WMDB which produces low emissions by using the lean-premixed surface combustion concept, and its objectives were separated into four phases, which were described in this paper. Phase I involved the design and construction of the burner. Phase II involved a qualitative flow visualization study of the duct burner premixer to assist the new design of the burner by introducing an efficient premixer that could be used in this new application. Phase III of this research program involved non-reacting flow modeling of the burner premixer flow field using a commercial computational fluid dynamics model. In phase IV, the reacting flow experimental investigation was performed. It was concluded that the burner successfully increased the quantity and the quality of the heat released from the micro-CHP unit, and carbon monoxide emissions of less than 9 ppm were reached. 3 refs., 3 figs.

  3. Cultural traits as units of analysis.

    Science.gov (United States)

    O'Brien, Michael J; Lyman, R Lee; Mesoudi, Alex; VanPool, Todd L

    2010-12-12

    Cultural traits have long been used in anthropology as units of transmission that ostensibly reflect behavioural characteristics of the individuals or groups exhibiting the traits. After they are transmitted, cultural traits serve as units of replication in that they can be modified as part of an individual's cultural repertoire through processes such as recombination, loss or partial alteration within an individual's mind. Cultural traits are analogous to genes in that organisms replicate them, but they are also replicators in their own right. No one has ever seen a unit of transmission, either behavioural or genetic, although we can observe the effects of transmission. Fortunately, such units are manifest in artefacts, features and other components of the archaeological record, and they serve as proxies for studying the transmission (and modification) of cultural traits, provided there is analytical clarity over how to define and measure the units that underlie this inheritance process.

  4. Multifunctional multiscale composites: Processing, modeling and characterization

    Science.gov (United States)

    Qiu, Jingjing

    Carbon nanotubes (CNTs) demonstrate extraordinary properties and show great promise in enhancing out-of-plane properties of traditional polymer/fiber composites and enabling functionality. However, current manufacturing challenges hinder the realization of their potential. In the dissertation research, both experimental and computational efforts have been conducted to investigate effective manufacturing techniques of CNT integrated multiscale composites. The fabricated composites demonstrated significant improvements in physical properties, such as tensile strength, tensile modulus, inter-laminar shear strength, thermal dimension stability and electrical conductivity. Such multiscale composites were truly multifunctional with the addition of CNTs. Furthermore, a novel hierarchical multiscale modeling method was developed in this research. Molecular dynamic (MD) simulation offered reasonable explanation of CNTs dispersion and their motion in polymer solution. Bi-mode finite-extensible-nonlinear-elastic (FENE) dumbbell simulation was used to analyze the influence of CNT length distribution on the stress tensor and shear-rate-dependent viscosity. Based on the simulated viscosity profile and empirical equations from experiments, a macroscale flow simulation model on the finite element method (FEM) method was developed and validated to predict resin flow behavior in the processing of CNT-enhanced multiscale composites. The proposed multiscale modeling method provided a comprehensive understanding of micro/nano flow in both atomistic details and mesoscale. The simulation model can be used to optimize process design and control of the mold-filling process in multiscale composite manufacturing. This research provided systematic investigations into the CNT-based multiscale composites. The results from this study may be used to leverage the benefits of CNTs and open up new application opportunities for high-performance multifunctional multiscale composites. Keywords. Carbon

  5. Organization of Control Units with Operational Addressing

    OpenAIRE

    Alexander A. Barkalov; Roman M. Babakov; Larysa A. Titarenko

    2012-01-01

    The use of an operational addressing unit as a block of the control unit is proposed. A new structural model of a Moore finite-state machine with reduced hardware amount is developed. The generalized structure of the operational addressing unit is suggested. An example of the synthesis process for a Moore finite-state machine with an operational addressing unit is given. Analytical research on the proposed control unit structure is presented.

  6. Pentachlorophenol (PCP) sludge recycling unit

    International Nuclear Information System (INIS)

    1994-08-01

    The Guelph Utility Pole Company treats utility poles by immersion in pentachlorophenol (PCP) or by pressure treatment with chromated copper arsenate (CCA). The PCP treatment process involves a number of steps, each producing a certain amount of sludge and other wastes. In a plant upgrading program to improve processing and treatment of poles and to reduce and recycle waste, a PCP recovery unit was developed, first as an experimental pilot-scale unit and then as a full-scale unit. The PCP recovery unit is modular in design and can be modified to suit different requirements. In a recycling operation, the sludge is pumped through a preheat system (preheated by waste heat) and suspended solids are removed by a strainer. The sludge is then heated in a tank and at a predetermined temperature it begins to separate into its component parts: oil, steam, and solids. The steam condenses to water containing low amounts of light oil, and this water is pumped through an oil/water separator. The recovered oil is reused in the wood treatment process and the water is used in the CCA plant. The oil remaining in the tank is reused in PCP treatment and the solid waste, which includes small stones and wood particles, is removed and stored. By the third quarter of operation, the recovery unit was operating as designed, processing ca 10,000 gal of sludge. This sludge yielded 6,500 gal of water, 3,500 gal of oil, and ca 30 gal of solids. Introduction of the PCP sludge recycling system has eliminated long-term storage of PCP sludge and minimized costs of hazardous waste disposal. 4 figs

  7. Characterizing the biochemical and toxicological effects of nanosilver in vivo using zebrafish (Danio rerio) and in vitro using rainbow trout (Oncorhynchus mykiss)

    Science.gov (United States)

    McWilliams, James Keith

    Full-domain multiscale analyses of unidirectional AS4/H3502 open-hole composite tensile specimens were performed to assess the effect of microscale progressive fiber failures in regions with large stress/strain gradients on macroscale composite strengths. The effect of model discretization at the microscale and macroscale on the calculated composite strengths and analysis times was investigated. Multiple sets of microscale analyses of repeating unit cells, each containing varying numbers of fibers with a distinct statistical distribution of fiber strengths and fiber volume fractions, were used to establish the microscale discretization for use in multiscale calculations. In order to improve computational times, multiscale analyses were performed over a reduced domain of the open-hole specimen. The calculated strengths obtained using reduced domain analyses were comparable to those for full-domain analyses, but at a fraction of the computational cost. Such reduced domain analyses likely are an integral part of efficient adaptive multiscale analyses of large all-composite air vehicles.

  8. Glacial Influences on Solar Radiation in a Subarctic Sea.

    Science.gov (United States)

    Understanding macroscale processes controlling solar radiation in marine systems will be important in interpreting the potential effects of global change from increasing ultraviolet radiation (UV) and glacial retreat. This study provides the first quantitative assessment of UV i...

  9. Sustainable design of high-performance microsized microbial fuel cell with carbon nanotube anode and air cathode

    KAUST Repository

    Mink, Justine E.; Hussain, Muhammad Mustafa

    2013-01-01

    for the comparison and introduction of new conditions or materials into macroscale MFCs, especially nanoscale materials that have high potential for enhanced power production. Here we report a 75 μL microsized MFC on silicon using CMOS-compatible processes and employ

  10. Listeria prevalence and Listeria monocytogenes serovar diversity at cull cow and bull processing plants in the United States.

    Science.gov (United States)

    Guerini, Michael N; Brichta-Harhay, Dayna M; Shackelford, Steven D; Arthur, Terrance M; Bosilevac, Joseph M; Kalchayanand, Norasak; Wheeler, Tommy L; Koohmaraie, Mohammad

    2007-11-01

    Listeria monocytogenes, the causative agent of epidemic and sporadic listeriosis, is routinely isolated from many sources, including cattle, yet information on the prevalence of Listeria in beef processing plants in the United States is minimal. From July 2005 through April 2006, four commercial cow and bull processing plants were sampled in the United States to determine the prevalence of Listeria and the serovar diversity of L. monocytogenes. Samples were collected during the summer, fall, winter, and spring. Listeria prevalence on hides was consistently higher during cooler weather (28 to 92% of samples) than during warmer weather (6 to 77% of samples). The Listeria prevalence data collected from preevisceration carcasses ranged from undetectable in some warm-season samples to as high as 71% during cooler weather. Listeria on postintervention carcasses in the chill cooler was normally undetectable, with the exception of summer and spring samples from one plant where > 19% of the carcasses were positive for Listeria. On hides, L. monocytogenes serovar 1/2a was the predominant serovar observed, with serovars 1/2b and 4b present 2.5 times less often and serovar 1/2c not detected on any hides sampled. L. monocytogenes serovars 1/2a, 1/2c, and 4b were found on postintervention carcasses. This prevalence study demonstrates that Listeria species are more prevalent on hides during the winter and spring and that interventions being used in cow and bull processing plants appear to be effective in reducing or eliminating Listeria contamination on carcasses.

  11. The case for applying tissue engineering methodologies to instruct human organoid morphogenesis.

    Science.gov (United States)

    Marti-Figueroa, Carlos R; Ashton, Randolph S

    2017-05-01

    Three-dimensional organoids derived from human pluripotent stem cell (hPSC) derivatives have become widely used in vitro models for studying development and disease. Their ability to recapitulate facets of normal human development during in vitro morphogenesis produces tissue structures with unprecedented biomimicry. Current organoid derivation protocols primarily rely on spontaneous morphogenesis processes occurring within 3-D spherical cell aggregates with minimal to no exogenous control. This yields organoids containing microscale regions of biomimetic tissues, but at the macroscale (i.e., hundreds of microns to millimeters), the organoids' morphology, cytoarchitecture, and cellular composition are non-biomimetic and variable. The current lack of control over in vitro organoid morphogenesis at the microscale induces aberrations at the macroscale, which impedes realization of the technology's potential to reproducibly form anatomically correct human tissue units that could serve as optimal human in vitro models and even transplants. Here, we review tissue engineering methodologies that could be used to develop powerful approaches for instructing multiscale, 3-D human organoid morphogenesis. Such technological mergers are critically needed to harness organoid morphogenesis as a tool for engineering functional human tissues with biomimetic anatomy and physiology. Human PSC-derived 3-D organoids are revolutionizing the biomedical sciences. They enable the study of development and disease within patient-specific genetic backgrounds and unprecedented biomimetic tissue microenvironments. However, their uncontrolled, spontaneous morphogenesis at the microscale yields inconsistencies in macroscale organoid morphology, cytoarchitecture, and cellular composition that limit their standardization and application. Integration of tissue engineering methods with organoid derivation protocols could allow us to harness their potential by instructing standardized in vitro morphogenesis.

  12. Simulation and optimization of an industrial PSA unit

    Directory of Open Access Journals (Sweden)

    Barg C.

    2000-01-01

    Full Text Available Pressure Swing Adsorption (PSA) units have been used as a low-cost alternative to the usual gas separation processes. Their largest commercial application is in hydrogen purification systems. Several studies have been made of the simulation of pressure swing adsorption units, but there are only a few reports on the optimization of such processes. The objective of this study is to simulate and optimize an industrial PSA unit for hydrogen purification. This unit consists of six beds, each of which has three layers of different kinds of adsorbents. The main impurities are methane, carbon monoxide and hydrogen sulfide. The product stream has 99.99% purity in hydrogen, and the recovery is around 90%. A mathematical model for a commercial PSA unit is developed. The cycle time and the pressure swing steps are optimized. All features of complex commercial processes are considered.
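
    For orientation, the two headline figures (99.99% hydrogen purity, roughly 90% recovery) are linked to stream flows as sketched below; the feed composition is an assumed example for illustration, not data from the study:

        # purity = H2 fraction of product; recovery = product H2 / feed H2.
        feed_h2 = 900.0                        # assumed feed H2 flow, kmol/h
        product_h2 = 0.90 * feed_h2            # ~90% recovery, per the abstract

        purity_target = 0.9999                 # 99.99% H2 in the product
        product_impurities = product_h2 * (1 - purity_target) / purity_target
        purity = product_h2 / (product_h2 + product_impurities)
        recovery = product_h2 / feed_h2
        print(f"purity={purity:.4%}, recovery={recovery:.0%}")
        # -> purity=99.9900%, recovery=90%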

  13. The Development of a General Purpose ARM-based Processing Unit for the ATLAS TileCal sROD

    OpenAIRE

    Cox, Mitchell Arij; Reed, Robert; Mellado Garcia, Bruce Rafael

    2014-01-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data which present a serious computing challenge. After Phase-II upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to 41 Tb/s! ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a clus...

  14. Design of coated standing nanowire array solar cell performing beyond the planar efficiency limits

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Yang; Ye, Qinghao; Shen, Wenzhong, E-mail: wzshen@sjtu.edu.cn [Institute of Solar Energy, and Key Laboratory of Artificial Structures and Quantum Control (Ministry of Education), Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2016-05-28

    The single standing nanowire (SNW) solar cells have been proven to perform beyond the planar efficiency limits in both open-circuit voltage and internal quantum efficiency due to the built-in concentration and the shifting of the absorption front. However, the expandability of these nano-scale units to a macro-scale photovoltaic device remains unsolved. The main difficulty lies in the simultaneous preservation of an effective built-in concentration in each unit cell and a broadband high absorption capability of their array. Here, we have provided a detailed theoretical guideline for realizing a macro-scale solar cell that performs furthest beyond the planar limits. The key lies in a complementary design between the light-trapping of the single SNWs and that of the photonic crystal slab formed by the array. By tuning the hybrid HE modes of the SNWs through the thickness of a coaxial dielectric coating, the optimized coated SNW array can sustain an absorption rate over 97.5% for a period as large as 425 nm, which, together with the inherited carrier extraction advantage, leads to a cell efficiency increment of 30% over the planar limit. This work has demonstrated the viability of a large-size solar cell that performs beyond the planar limits.

  15. [Analysis of the safety culture in a Cardiology Unit managed by processes].

    Science.gov (United States)

    Raso-Raso, Rafael; Uris-Selles, Joaquín; Nolasco-Bonmatí, Andreu; Grau-Jornet, Guillermo; Revert-Gandia, Rosa; Jiménez-Carreño, Rebeca; Sánchez-Soriano, Ruth M; Chamorro-Fernández, Carlos I; Marco-Francés, Elvira; Albero-Martínez, José V

    2017-04-04

    Safety culture is one of the requirements for preventing the occurrence of adverse effects. However, this has not been studied in the field of cardiology. The aim of this study is to evaluate the safety culture in a cardiology unit that has implemented and certified an integrated quality and risk management system for patient safety. A cross-sectional observational study was conducted in 2 consecutive years, with all staff completing the Spanish version of the questionnaire "Hospital Survey on Patient Safety Culture" of the Agency for Healthcare Research and Quality, with 42 items grouped into 12 dimensions. The percentages of positive responses in each dimension in 2014 and 2015 were compared, as well as compared with national data and United States data, following the established rules. The overall assessment, out of a possible 5, was 4.5 in 2014 and 4.7 in 2015. Seven dimensions were identified as strengths. The worst rated were staffing, management support, and teamwork between units. The comparison showed superiority in all dimensions compared to national data, and in 8 of them compared to American data. The safety culture in a cardiology unit with an integrated quality and risk management patient safety system is high: higher than the national level in all its dimensions and, in most of them, higher than in the United States. Copyright © 2017 Instituto Nacional de Cardiología Ignacio Chávez. Publicado por Masson Doyma México S.A. All rights reserved.

  16. Measurement system of bubbly flow using ultrasonic velocity profile monitor and video data processing unit

    International Nuclear Information System (INIS)

    Aritomi, Masanori; Zhou, Shirong; Nakajima, Makoto; Takeda, Yasushi; Mori, Michitsugu; Yoshioka, Yuzuru.

    1996-01-01

    The authors have been developing a measurement system for bubbly flow in order to clarify its multi-dimensional flow characteristics and to offer a database for validating numerical codes for multi-dimensional two-phase flow. In this paper, a measurement system combining an ultrasonic velocity profile monitor with a video data processing unit is proposed that can simultaneously measure velocity profiles in both the gas and liquid phases, the void fraction profile of bubbly flow in a channel, and the average bubble diameter and void fraction. Furthermore, the proposed measurement system is applied to measuring the flow characteristics of a bubbly countercurrent flow in a vertical rectangular channel to verify its capability. (author)

  17. Hetero-cellular prototyping by synchronized multi-material bioprinting for rotary cell culture system.

    Science.gov (United States)

    Snyder, Jessica; Son, Ae Rin; Hamid, Qudus; Wu, Honglu; Sun, Wei

    2016-01-13

    Bottom-up tissue engineering requires methodological progress of biofabrication to capture key design facets of anatomical arrangements across micro, meso and macro-scales. The diffusive mass transfer properties necessary to elicit stability and functionality require hetero-typic contact, cell-to-cell signaling and uniform nutrient diffusion. Bioprinting techniques successfully build mathematically defined porous architecture to diminish resistance to mass transfer. Current limitations of bioprinted cell assemblies include poor micro-scale formability of cell-laden soft gels and asymmetrical macro-scale diffusion through 3D volumes. The objective of this work is to engineer a synchronized multi-material bioprinter (SMMB) system which improves the resolution and expands the capability of existing bioprinting systems by packaging multiple cell types in heterotypic arrays prior to deposition. This unit cell approach to arranging multiple cell-laden solutions is integrated with a motion system to print heterogeneous filaments as tissue engineered scaffolds and nanoliter droplets. The set of SMMB process parameters controls the geometric arrangement of the combined flow's internal features and the constituent materials' volume fractions. SMMB-printed hepatocyte-endothelial-laden 200 nl droplets are cultured in a rotary cell culture system (RCCS) to study the effect of microgravity on an in vitro model of the human hepatic lobule. RCCS conditioning for 48 h increased hepatocyte cytoplasm diameter by 2 μm, increased metabolic rate, and decreased drug half-life. SMMB hetero-cellular models present a 10-fold increase in metabolic rate compared to SMMB mono-culture models. Improved bioprinting resolution due to process control of cell-laden matrix packaging, as well as nanoliter droplet printing capability, identifies SMMB as a viable technique to improve in vitro model efficacy.

  18. Hetero-cellular prototyping by synchronized multi-material bioprinting for rotary cell culture system

    International Nuclear Information System (INIS)

    Snyder, Jessica; Son, Ae Rin; Hamid, Qudus; Sun, Wei; Wu, Honglu

    2016-01-01

    Bottom-up tissue engineering requires methodological progress of biofabrication to capture key design facets of anatomical arrangements across micro, meso and macro-scales. The diffusive mass transfer properties necessary to elicit stability and functionality require hetero-typic contact, cell-to-cell signaling and uniform nutrient diffusion. Bioprinting techniques successfully build mathematically defined porous architecture to diminish resistance to mass transfer. Current limitations of bioprinted cell assemblies include poor micro-scale formability of cell-laden soft gels and asymmetrical macro-scale diffusion through 3D volumes. The objective of this work is to engineer a synchronized multi-material bioprinter (SMMB) system which improves the resolution and expands the capability of existing bioprinting systems by packaging multiple cell types in heterotypic arrays prior to deposition. This unit cell approach to arranging multiple cell-laden solutions is integrated with a motion system to print heterogeneous filaments as tissue engineered scaffolds and nanoliter droplets. The set of SMMB process parameters controls the geometric arrangement of the combined flow's internal features and the constituent materials' volume fractions. SMMB-printed hepatocyte-endothelial-laden 200 nl droplets are cultured in a rotary cell culture system (RCCS) to study the effect of microgravity on an in vitro model of the human hepatic lobule. RCCS conditioning for 48 h increased hepatocyte cytoplasm diameter by 2 μm, increased metabolic rate, and decreased drug half-life. SMMB hetero-cellular models present a 10-fold increase in metabolic rate compared to SMMB mono-culture models. Improved bioprinting resolution due to process control of cell-laden matrix packaging, as well as nanoliter droplet printing capability, identifies SMMB as a viable technique to improve in vitro model efficacy. (paper)

  19. Exploring the impact of permitting and local regulatory processes on residential solar prices in the United States

    International Nuclear Information System (INIS)

    Burkhardt, Jesse; Wiser, Ryan; Darghouth, Naïm; Dong, C.G.; Huneycutt, Joshua

    2015-01-01

    This article statistically isolates the impacts of city-level permitting and other local regulatory processes on residential PV prices in the United States. We combine data from two “scoring” mechanisms that independently capture local regulatory process efficiency with the largest dataset of installed PV prices in the United States. We find that variations in local permitting procedures can lead to differences in average residential PV prices of approximately $0.18/W between the jurisdictions with the least-favorable and most-favorable permitting procedures. Between jurisdictions with scores across the middle 90% of the range (i.e., 5th percentile to 95th percentile), the difference is $0.14/W, equivalent to a $700 (2.2%) difference in system costs for a typical 5-kW residential PV installation. When considering variations not only in permitting practices, but also in other local regulatory procedures, price differences grow to $0.64–$0.93/W between the least-favorable and most-favorable jurisdictions. Between jurisdictions with scores across the middle 90% of the range, the difference is equivalent to a price impact of at least $2500 (8%) for a typical 5-kW residential PV installation. These results highlight the magnitude of cost reduction that might be expected from streamlining local regulatory regimes. - Highlights: • We show local regulatory processes meaningfully affect U.S. residential PV prices. • We use regression analysis and two mechanisms for “scoring” regulatory efficiency. • Local permitting procedure variations can produce PV price differences of $0.18/W. • Broader regulatory variations can produce PV price differences of $0.64–$0.93/W. • The results suggest the cost-reduction potential of streamlining local regulations
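
    The dollar figures follow from simple per-watt arithmetic; a short reproduction using only the numbers quoted in the abstract:

        system_w = 5_000                   # typical residential system size, W
        spread_per_w = 0.14                # $/W across the middle 90% of scores
        impact = spread_per_w * system_w
        print(impact)                      # -> 700.0, the quoted $700 (2.2%)
        # A 2.2% share implies a total system cost of roughly
        # 700 / 0.022 ~= $31,800, i.e. about $6.4/W for a 5-kW system.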

  20. Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.

    Science.gov (United States)

    Dematté, Lorenzo

    2012-01-01

    Space is a very important aspect in the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space has become more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transport phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation can be time consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. In order to deliver on the promise made by systems biology to understand a system as a whole, we need to scale up the size of the models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely used algorithm for stochastic simulation of chemical reactions with spatial resolution and single-molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel for each molecule of the system. The implementation offers good speed-ups and real-time, high-quality graphics output.
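
    The per-molecule independence that makes such simulations GPU-friendly is already visible in the basic Brownian diffusion step, sketched here in vectorized NumPy. This is a generic illustration of Brownian dynamics, not Smoldyn's actual code; the molecule count, diffusion coefficient, and time step are arbitrary:

        import numpy as np

        rng = np.random.default_rng(0)

        def diffuse(positions, D, dt):
            """One Brownian step: independent Gaussian displacement per axis."""
            sigma = np.sqrt(2.0 * D * dt)          # rms displacement per axis
            return positions + rng.normal(0.0, sigma, positions.shape)

        molecules = rng.uniform(0.0, 1.0, (100_000, 3))   # molecules in a unit box
        molecules = diffuse(molecules, D=1e-6, dt=1e-4)   # one time step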

  1. Maximizing the retention level for proportional reinsurance under α-regulation of the finite-time surplus process with unit-equalized interarrival time

    Directory of Open Access Journals (Sweden)

    Sukanya Somprom

    2016-07-01

    Full Text Available The research focuses on an insurance model controlled by proportional reinsurance in the finite-time surplus process with a unit-equalized time interval. We prove the existence of the maximal retention level for independent and identically distributed claim processes under α-regulation, i.e., a model where the insurance company has to manage the probability of insolvency to be at most α. In addition, we illustrate the maximal retention level for exponential claims by applying the bisection technique.
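
    The bisection technique mentioned in the abstract can be sketched generically: search for the largest retention level whose insolvency probability still meets the α-constraint. The function psi below is a placeholder for the model's finite-time ruin probability (assumed monotone increasing in the retention level), not the paper's actual formula:

        def max_retention(psi, alpha, lo=0.0, hi=1.0, tol=1e-6):
            """Largest b in [lo, hi] with psi(b) <= alpha, for monotone psi."""
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if psi(mid) <= alpha:      # constraint satisfied: try higher
                    lo = mid
                else:                      # constraint violated: back off
                    hi = mid
            return lo

        # Toy stand-in for the ruin probability, increasing in retention b.
        print(max_retention(lambda b: b**2, alpha=0.04))   # -> ~0.2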

  2. The Design Process of a Board Game for Exploring the Territories of the United States

    Directory of Open Access Journals (Sweden)

    Mehmet Kosa

    2017-06-01

    Full Text Available The paper reports the design experience of a board game with an educational aspect, set on the locations of the states and territories of the United States. Based on a territorial acquisition dynamic, the goal was to articulate the design process of a board game that provides information for individuals who want to learn the locations of the U.S. states by playing a game. The game was developed using an iterative design process based on focus group studies and brainstorming sessions. A mechanic-driven design approach was adopted instead of a theme- or setting-driven alternative, and a relatively abstract game was developed. The initial design idea was formed and refined according to player feedback. The paper details the play-testing sessions conducted and documents the design experience from a qualitative perspective. Our preliminary results suggest that the initial design is moderately balanced and, despite the lack of quantitative evidence, our subjective observations indicate that participants' knowledge of the locations of the states was improved in an entertaining and interactive way.

  3. "Vulnerability, Resiliency, and Adaptation: The Health of Latin Americans during the Migration Process to the United States"

    Science.gov (United States)

    Riosmena, Fernando; Jochem, Warren C

    2012-01-01

    In this paper, we offer a general overview of the health of Latin Americans (with a special emphasis on Mexicans) during the different stages of the migration process to the U.S., given the usefulness of the social vulnerability concept and given that said vulnerability varies conspicuously across the different stages of the migration process. Severe migrant vulnerability during transit and crossing has serious negative health consequences. Yet, upon their arrival in the U.S., migrant health is favorable in outcomes such as mortality from many causes of death and in several chronic conditions and risk factors, though these apparent advantages seem to disappear during the process of adaptation to the host society. We discuss potential explanations for the initial health advantage and the sources of vulnerability that explain its erosion, with special emphasis on systematic, timely access to health care. Given that migration can affect social vulnerability processes in sending areas, we discuss the potential health consequences for these places and conclude by considering the immigration and health policy implications of these issues for the United States and sending countries, with emphasis on Mexico.

  4. Qualification of Daiichi Units 1, 2, and 3 Data for Severe Accident Evaluations - Process and Illustrative Examples from Prior TMI-2 Evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Rempe, Joy Lynn [Idaho National Lab. (INL), Idaho Falls, ID (United States); Knudson, Darrell Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-09-01

    The accidents at the Three Mile Island Unit 2 (TMI-2) Pressurized Water Reactor (PWR) and the Daiichi Units 1, 2, and 3 Boiling Water Reactors (BWRs) provide unique opportunities to evaluate instrumentation exposed to severe accident conditions. Conditions associated with the release of coolant and the hydrogen burn that occurred during the TMI-2 accident exposed instrumentation to harsh conditions, including direct radiation, radioactive contamination, and high humidity with elevated temperatures and pressures. As part of a program initiated in 2012 by the Department of Energy Office of Nuclear Energy (DOE-NE), a review was completed to gain insights from prior TMI-2 sensor survivability and data qualification efforts. This initial review focused on the set of sensors deemed most important by post-TMI-2 instrumentation evaluation programs. Instrumentation evaluation programs focused on data required by TMI-2 operators to assess the condition of the reactor and containment and the effect of mitigating actions taken by these operators. In addition, prior efforts focused on sensors providing data required for subsequent forensic evaluations and accident simulations. To encourage the potential for similar activities to be completed for qualifying data from Daiichi Units 1, 2, and 3, this report provides additional details related to the formal process used to develop a qualified TMI-2 data base and presents data qualification details for three parameters: primary system pressure; containment building temperature; and containment pressure. As described within this report, sensor evaluations and data qualification required implementation of various processes, including comparisons with data from other sensors, analytical calculations, laboratory testing, and comparisons with sensors subjected to similar conditions in large-scale integral tests and with sensors that were similar in design to instruments easily removed from the TMI-2 plant for evaluations. As documented

  5. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    Science.gov (United States)

    Griffiths, M. K.; Fedun, V.; Erdélyi, R.

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently harnessed successfully the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1-3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.

  6. Monte Carlo methods for neutron transport on graphics processing units using Cuda - 015

    International Nuclear Information System (INIS)

    Nelson, A.G.; Ivanov, K.N.

    2010-01-01

    This work examined the feasibility of utilizing Graphics Processing Units (GPUs) to accelerate Monte Carlo neutron transport simulations. First, a clean-sheet MC code was written in C++ for an x86 CPU and later ported to run on GPUs using NVIDIA's CUDA programming language. After further optimization, the GPU ran 21 times faster than the CPU code when using single-precision floating point math. This can be further increased with no additional effort if accuracy is sacrificed for speed: using a compiler flag, the speedup was increased to 22x. Further, if double-precision floating point math is desired for neutron tracking through the geometry, a speedup of 11x was obtained. The GPUs have proven to be useful in this study, but the current generation does have limitations: the maximum memory currently available on a single GPU is only 4 GB; the GPU RAM does not provide error-checking and correction; and the optimization required for large speedups can lead to confusing code. (authors)
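
    The per-history independence exploited on the GPU shows up in the innermost sampling step, where each neutron draws an exponential free-flight distance from the total macroscopic cross section. A vectorized sketch with arbitrary values (not code from the study):

        import numpy as np

        rng = np.random.default_rng(1)

        sigma_t = 0.5                      # assumed total cross section, 1/cm
        n = 1_000_000                      # neutron histories, one per thread
        xi = 1.0 - rng.random(n)           # uniform on (0, 1], avoids log(0)
        paths = -np.log(xi) / sigma_t      # exponential free-flight distances
        print(paths.mean())                # ~ 1/sigma_t = 2.0 cm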

  7. Grand Junction projects office mixed-waste treatment program, VAC*TRAX mobile treatment unit process hazards analysis

    International Nuclear Information System (INIS)

    Bloom, R.R.

    1996-04-01

    The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented VAC*TRAX mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses an indirectly heated, batch vacuum dryer to thermally desorb organic compounds from mixed wastes. This process hazards analysis evaluated 102 potential hazards. The three significant hazards identified involved the inclusion of oxygen in a process that also included an ignition source and fuel. Changes to the design of the MTU were made concurrent with the hazard identification and analysis; all hazards with initial risk rankings of 1 or 2 were reduced to acceptable risk rankings of 3 or 4. The overall risk to any population group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards

  8. Grand Junction projects office mixed-waste treatment program, VAC*TRAX mobile treatment unit process hazards analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bloom, R.R.

    1996-04-01

    The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented VAC*TRAX mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses an indirectly heated, batch vacuum dryer to thermally desorb organic compounds from mixed wastes. This process hazards analysis evaluated 102 potential hazards. The three significant hazards identified involved the inclusion of oxygen in a process that also included an ignition source and fuel. Changes to the design of the MTU were made concurrent with the hazard identification and analysis; all hazards with initial risk rankings of 1 or 2 were reduced to acceptable risk rankings of 3 or 4. The overall risk to any population group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards.

  9. Flocking-based Document Clustering on the Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; Patton, Robert M [ORNL; ST Charles, Jesse Lee [ORNL

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity, O(n²). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, is highly parallel and has found increased performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3000 documents. The results of these tests were very significant. Performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.
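
    The O(n²) pairwise structure the abstract refers to is easy to see in a serial sketch: each document compares itself with every other and drifts toward nearby, similar neighbors. The cosine similarity, neighborhood radius, and step size below are illustrative choices, not the paper's parameters:

        import numpy as np

        rng = np.random.default_rng(2)
        docs = rng.random((200, 50))       # 200 documents, 50-term vectors
        pos = rng.random((200, 2))         # positions on a 2-D canvas

        def step(pos, docs, radius=0.2, eta=0.05):
            new = pos.copy()
            for i in range(len(pos)):      # O(n^2): every pair is compared
                near = np.linalg.norm(pos - pos[i], axis=1) < radius
                near[i] = False
                if near.any():             # fly toward similar neighbors
                    sim = docs[near] @ docs[i] / (
                        np.linalg.norm(docs[near], axis=1)
                        * np.linalg.norm(docs[i]))
                    new[i] += eta * np.average(
                        pos[near] - pos[i], axis=0, weights=sim)
            return new

        pos = step(pos, docs)              # one flocking iteration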

  10. Solution-processable red-emission organic materials containing triphenylamine and benzothiadiazole units: synthesis and applications in organic light-emitting diodes.

    Science.gov (United States)

    Yang, Yi; Zhou, Yi; He, Qingguo; He, Chang; Yang, Chunhe; Bai, Fenglian; Li, Yongfang

    2009-06-04

    Three solution-processable red-emissive organic materials with the hole-transporting unit triphenylamine (TPA) as the core part and a D-pi-A bipolar structure as the branch part, TPA-BT (single-branched molecule), b-TPA-BT (bibranched molecule), and t-TPA-BT (tribranched molecule), were synthesized by the Heck coupling reaction. Herein, for the D-pi-A push-pull structure, we use TPA as the electron donor, benzothiadiazole (BT) as the electron acceptor, and the vinylene bond as the pi-bridge connecting the TPA and BT units. The compounds exhibit good solubility in common organic solvents, benefiting from the three-dimensional spatial configuration of TPA units and the branched structure of the molecules. TPA-BT, b-TPA-BT, and t-TPA-BT show excellent photoluminescent properties with maximum emission peaks at ca. 630 nm. High-performance red-emission organic light-emitting diodes (OLEDs) were fabricated with the active layer spin coated from a solution of these compounds. The OLED based on TPA-BT displayed a low turn-on voltage of 2.0 V, a maximum luminance of 12192 cd/m2, and a maximum current efficiency of 1.66 cd/A, which is among the highest values for solution-processed red-emission OLEDs. In addition, high-performance white-light-emitting diodes (WLEDs) with maximum luminance around 4400 cd/m2 and maximum current efficiencies above 4.5 cd/A were realized by separately doping the three TPA-BT-containing molecules as red emitter and poly(6,6'-bi-(9,9'-dihexylfluorene)-co-(9,9'-dihexylfluorene-3-thiophene-5'-yl)) as green emitter into blue poly(9,9-dioctylfluorene-2,7-diyl) host material with suitable weight ratios.

  11. Development and application of a multiscale model for the magnetic fusion edge plasma region

    International Nuclear Information System (INIS)

    Hasenbeck, Felix Martin Michael

    2016-01-01

    Plasma edge particle and energy transport perpendicular to the magnetic field plays a decisive role for the performance and lifetime of a magnetic fusion reactor. For the particles, classical and neoclassical theories underestimate the associated radial transport by at least an order of magnitude. Drift fluid models, including mesoscale processes on scales down to tenths of millimeters and microseconds, account for the experimentally found level of radial transport; however, numerical simulations for typical reactor scales (of the order of seconds and centimeters) are computationally very expensive. Large scale code simulations are less costly but usually lack an adequate model for the radial transport. The multiscale model presented in this work aims at improving the description of radial particle transport in large scale codes by including the effects of averaged local drift fluid dynamics on the macroscale profiles. The multiscale balances are derived from a generic multiscale model for a fluid, using the Braginskii closure for a collisional, magnetized plasma, and the assumptions of the B2 code model (macroscale balances) and the model of the local version of the drift fluid code ATTEMPT (mesoscale balances). A combined concurrent-sequential coupling procedure is developed for the implementation of the multiscale model within a coupled code system. An algorithm for the determination of statistically stationary states and adequate averaging intervals for the mesoscale data is outlined and tested, proving that it works consistently and efficiently. The general relation between mesoscale and macroscale dynamics is investigated exemplarily by means of a passive scalar system. While mesoscale processes are convective in this system, earlier studies for small Kubo numbers K<<1 have shown that the macroscale behavior is diffusive. In this work it is demonstrated by numerical experiments that also in the regime of large Kubo numbers K>>1 the macroscale transport

  12. Development and application of a multiscale model for the magnetic fusion edge plasma region

    Energy Technology Data Exchange (ETDEWEB)

    Hasenbeck, Felix Martin Michael

    2016-07-01

    Plasma edge particle and energy transport perpendicular to the magnetic field plays a decisive role for the performance and lifetime of a magnetic fusion reactor. For the particles, classical and neoclassical theories underestimate the associated radial transport by at least an order of magnitude. Drift fluid models, including mesoscale processes on scales down to tenths of millimeters and microseconds, account for the experimentally found level of radial transport; however, numerical simulations for typical reactor scales (of the order of seconds and centimeters) are computationally very expensive. Large scale code simulations are less costly but usually lack an adequate model for the radial transport. The multiscale model presented in this work aims at improving the description of radial particle transport in large scale codes by including the effects of averaged local drift fluid dynamics on the macroscale profiles. The multiscale balances are derived from a generic multiscale model for a fluid, using the Braginskii closure for a collisional, magnetized plasma, and the assumptions of the B2 code model (macroscale balances) and the model of the local version of the drift fluid code ATTEMPT (mesoscale balances). A combined concurrent-sequential coupling procedure is developed for the implementation of the multiscale model within a coupled code system. An algorithm for the determination of statistically stationary states and adequate averaging intervals for the mesoscale data is outlined and tested, proving that it works consistently and efficiently. The general relation between mesoscale and macroscale dynamics is investigated exemplarily by means of a passive scalar system. While mesoscale processes are convective in this system, earlier studies for small Kubo numbers K<<1 have shown that the macroscale behavior is diffusive. In this work it is demonstrated by numerical experiments that also in the regime of large Kubo numbers K>>1 the macroscale transport

  13. Modular toolkit for Data Processing (MDP: a Python data processing framework

    Directory of Open Access Journals (Sweden)

    Tiziano Zito

    2009-01-01

    Full Text Available Modular toolkit for Data Processing (MDP is a data processing framework written in Python. From the user's perspective, MDP is a collection of supervised and unsupervised learning algorithms and other data processing units that can be combined into data processing sequences and more complex feed-forward network architectures. Computations are performed efficiently in terms of speed and memory requirements. From the scientific developer's perspective, MDP is a modular framework, which can easily be expanded. The implementation of new algorithms is easy and intuitive. The new implemented units are then automatically integrated with the rest of the library. MDP has been written in the context of theoretical research in neuroscience, but it has been designed to be helpful in any context where trainable data processing algorithms are used. Its simplicity on the user's side, the variety of readily available algorithms, and the reusability of the implemented units make it also a useful educational tool.
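
    A minimal taste of the node-and-flow interface described above, written against MDP's documented API (the data and dimensions are arbitrary; assumes the MDP package is installed):

        import mdp
        import numpy as np

        x = np.random.random((1000, 10))        # 1000 observations, 10 variables

        # A single processing unit (node): train it, then execute it.
        pca = mdp.nodes.PCANode(output_dim=3)   # keep 3 principal components
        pca.train(x)
        pca.stop_training()
        y = pca.execute(x)                      # projected data, shape (1000, 3)

        # Nodes can be combined into a Flow, the "data processing sequence"
        # mentioned in the abstract.
        flow = mdp.Flow([mdp.nodes.PCANode(output_dim=3), mdp.nodes.SFANode()])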

  14. Climate Change, a Case Study of Media Construction of Environmental Problems; El Cambio Climatico como Casuistica de la Construccion Mediatica de los Problemas Medioambientales

    Energy Technology Data Exchange (ETDEWEB)

    Lopera, E.

    2009-07-21

    Nowadays climate change is one of the environmental problems on the global policy agenda. However, in countries like the United States and the United Kingdom the media started to report regularly on this issue in 1988. Since then, much research has been carried out in several countries focusing on how the media influence, along with other factors, public understanding of climate change through the media construction of the problem. Given the implications of social acceptance for the design and implementation of public policies on mitigation of and adaptation to climate change, the overall aim of this report is to review the status of the issue from a qualitative and quantitative approach. Qualitatively, the media construction of climate change is described as the result of different processes taking place at macro and micro scales. Interactions among scientists, politicians, industry, the media themselves and the social context are considered macro-scale influences, while journalistic values and norms shape the media coverage of this environmental problem at the micro-scale when media professionals report on climate change. From a quantitative point of view this paper also includes the evolution of newspaper coverage of climate change in Spain from 1996 to 2006, and these figures are compared to the results obtained in the United States and the United Kingdom during the same period. (Author) 23 refs.

  15. Field demonstrations of radon adsorption units

    International Nuclear Information System (INIS)

    Abrams, R.F.

    1989-01-01

    Four radon gas removal units have been installed in homes in the Northeast U.S. These units utilize dynamic adsorption of the radon gas onto activated charcoal to remove the radon from room air. Two beds of charcoal are used so that one bed removes radon while the second bed is regenerated using outdoor air in a unique process. The beds reverse at the end of a predetermined cycle time, providing continuous removal of radon from the room air. The process and units have undergone extensive development work in the laboratory as well as in homes, and a summary of this work is discussed. This work showed that the system performs very effectively over a range of operating conditions similar to those found in a home. The field test data presented show that scale-up from the laboratory work was without problems and the units are functioning as expected. This unit provides homeowners and mitigation contractors with another option for solving the radon gas problem in homes, particularly in homes where it is difficult to prevent radon from entering.

  16. Ultra-low-energy wide electron exposure unit

    International Nuclear Information System (INIS)

    Yonago, Akinobu; Oono, Yukihiko; Tokunaga, Kazutoshi; Kishimoto, Junichi; Wakamoto, Ikuo

    2001-01-01

    Heat and ultraviolet-ray processes are used in the surface drying of paint, surface treatment of construction materials and surface sterilization of food containers. A process using a low-energy wide-area electron beam (EB) has been developed that features high speed and low running cost. EB processing is not widespread in general industry, however, due to high equipment cost and difficult maintenance. We developed an ultra-low-energy wide-area electron beam exposure unit, the Mitsubishi Wide Electron Exposure Unit (MIWEL), to solve these problems. (author)

  17. Process of motion by unit steps over a surface provided with elements regularly arranged

    International Nuclear Information System (INIS)

    Cooper, D.E.; Hendee, L.C. III; Hill, W.G. Jr.; Leshem, Adam; Marugg, M.L.

    1977-01-01

    This invention concerns a process for moving an apparatus by unit steps over a surface provided with an array of orifices aligned and evenly spaced in several parallel lines and several parallel rows, the lines and rows being parallel to the x and y axes of a Cartesian coordinate system, each orifice having a distinct address in that system. The surface-travelling apparatus has two mutually connected arms arranged in directions transversal to each other, thus forming an angle corresponding to the intersection of the x and y axes. In the inspection and/or repair of nuclear or similar steam generator tubes, it is desirable that such an apparatus be able to move in front of a surface comprising an array of orifices by the selective alternate insertion and retraction of the two sets of anchoring claws of the two respective arms into and out of the orifices of the array, it being possible to shift the arms in a movement of translation, transversally to each other, while a set of claws is withdrawn from the orifices. The invention concerns a process and apparatus as indicated above that reduce to a minimum the path length of the apparatus between the orifice it is effectively opposite and a given target orifice. [fr]

  18. Processes for CO2 capture. Context of thermal waste treatment units. State of the art. Extended abstract

    International Nuclear Information System (INIS)

    Lopez, A.; Roizard, D.; Favre, E.; Dufour, A.

    2013-01-01

    For most industrial sectors, greenhouse gases (GHG) such as carbon dioxide (CO2) are considered serious pollutants and have to be controlled and treated. Thermal waste treatment units are among industrial CO2 emitters, even if they represent a small share of emissions (2.5% of GHG emissions in France) compared to power plants (13% of GHG emissions in France, one third of worldwide GHG emissions) or manufacturing industries (20% of GHG emissions in France). Carbon Capture and Storage (CCS) can be a solution to reduce CO2 emissions from industries (power plants, steel and cement industries...). The issues of CCS applied to thermal waste treatment units are quite similar to those related to power plants (CO2 flow, flue gas temperature and pressure conditions). The problem is to know whether the CO2 produced by waste treatment plants can be captured using the processes already available on the market or that should be available by 2020. It seems technically possible to adapt CCS post-combustion methods to the waste treatment sector. But on the whole, CCS is complex and costly for a waste treatment unit offering small economies of scale. However, regulations concerning impurities for CO2 transport and storage are not clearly defined at the moment. Consequently, specific studies must be carried out in order to check the technical feasibility of CCS in the waste treatment context and clearly define its cost. (authors)

  19. Effect of energetic dissipation processes on friction unit tribological properties

    Directory of Open Access Journals (Sweden)

    Moving V. V.

    2007-01-01

    Full Text Available The article presents the influence of temperature on the rheological properties and friction coefficients of cast iron friction-unit elements. It was found that the surface layer formed at friction temperatures has good wear resistance; structural hardening of the surface layer and its capacity for stress relaxation develop.

  20. Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing Units

    Science.gov (United States)

    Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.

    2010-01-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphics processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040-atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792

  1. Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit

    Science.gov (United States)

    Vittaldev, Vivek; Russell, Ryan P.

    2017-09-01

    Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
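
    The core Monte Carlo estimate being parallelized is simple to sketch: sample both objects' position uncertainties and count how often the separation falls below the combined collision radius. For brevity this sketch samples isotropic Gaussian positions at a single epoch with made-up numbers, whereas the paper propagates samples through the dynamics over a whole time window:

        import numpy as np

        rng = np.random.default_rng(3)

        n = 1_000_000                           # Monte Carlo samples
        r_combined = 20.0                       # sum of RSO radii, m
        mu2 = np.array([50.0, 0.0, 0.0])        # mean relative offset, m
        sig1, sig2 = 30.0, 40.0                 # isotropic 1-sigma errors, m

        sep = (rng.normal(0.0, sig1, (n, 3))
               - rng.normal(mu2, sig2, (n, 3)))
        p = np.mean(np.linalg.norm(sep, axis=1) < r_combined)
        print(p)                                # collision probability estimate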

  2. The aging self in a cultural context: the relation of conceptions of aging to identity processes and self-esteem in the United States and the Netherlands

    NARCIS (Netherlands)

    Westerhof, Gerben Johan; Whitbourne, S.K.; Freeman, G.P.

    2012-01-01

    Objectives. To study the aging self, that is, conceptions of one’s own aging process, in relation to identity processes and self-esteem in the United States and the Netherlands. As the liberal American system has a stronger emphasis on individual responsibility and youthfulness than the

  3. An Illustration of the Corrective Action Process, The Corrective Action Management Unit at Sandia National Laboratories/New Mexico

    International Nuclear Information System (INIS)

    Irwin, M.; Kwiecinski, D.

    2002-01-01

    Corrective Action Management Units (CAMUs) were established by the Environmental Protection Agency (EPA) to streamline the remediation of hazardous waste sites. Streamlining involved providing cost-saving measures for the treatment, storage, and safe containment of the wastes. To expedite cleanup and remove disincentives, EPA designed 40 CFR 264 Subpart S to be flexible. At the heart of this flexibility are the provisions for CAMUs and Temporary Units (TUs). CAMUs and TUs were created to remove cleanup disincentives resulting from other Resource Conservation and Recovery Act (RCRA) hazardous waste provisions--specifically, RCRA land disposal restrictions (LDRs) and minimum technology requirements (MTRs). Although LDR and MTR provisions were not intended for remediation activities, LDRs and MTRs apply to corrective actions because hazardous wastes are generated. However, management of RCRA hazardous remediation wastes in a CAMU or TU is not subject to these stringent requirements. The CAMU at Sandia National Laboratories in Albuquerque, New Mexico (SNL/NM) was proposed through an interactive process involving the regulators (EPA and the New Mexico Environment Department), DOE, SNL/NM, and stakeholders. The CAMU at SNL/NM has been accepting waste from the nearby Chemical Waste Landfill remediation since January of 1999. During this time, a number of unique techniques have been implemented to save costs, improve health and safety, and provide the best value and management practices. This presentation will take the audience through the corrective action process implemented at the CAMU facility, from the selection of the CAMU site to permitting and construction, waste management, waste treatment, and final waste placement. The presentation will highlight the key advantages that CAMUs and TUs offer in the corrective action process. These advantages include yielding a practical approach to regulatory compliance, expediting efficient remediation and site closure, and realizing

  4. Analysis of an integrated cryogenic air separation unit, oxy-combustion carbon dioxide power cycle and liquefied natural gas regasification process by exergoeconomic method

    International Nuclear Information System (INIS)

    Mehrpooya, Mehdi; Zonouz, Masood Jalali

    2017-01-01

    Highlights: • Exergoeconomic analysis is performed on an integrated cryogenic air separation unit. • Liquefied natural gas cold energy is used in the process. • The main multi-stream heat exchanger is the worst device based on the results. - Abstract: Exergoeconomic and sensitivity analyses are performed on the integrated cryogenic air separation unit, oxy-combustion carbon dioxide power cycle and liquefied natural gas regasification process. Exergy destruction, exergy efficiency, cost rate of exergy destruction, cost rate of capital investment and operating and maintenance, exergoeconomic factor and relative cost difference have been calculated for the major components of the process. The exergy efficiency of the process is around 67.1%, and after the mixers, tees, tank and expansion valves, the multi-stream heat exchanger H-3 has the best exergy efficiency among all process components. The total exergy destruction rate of the process is 1.93 × 10⁷ kW. The results of the exergoeconomic analysis demonstrate that the maximum exergy destruction and capital investment and operating and maintenance cost rates are related to the multi-stream heat exchanger H-1 and pump P-1, with values of 335,144 ($/h) and 12,838 ($/h), respectively. In the sensitivity analysis section, the effects of varying economic parameters, such as interest rate and plant lifetime, on the trend of the capital investment and operating and maintenance cost rates of the major components of the process are investigated; in another case, the effect of the gas turbine isentropic efficiency on the exergy and exergoeconomic parameters is studied.
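
    For reference, the exergoeconomic factor and relative cost difference named above are conventionally defined as follows (standard definitions from the exergoeconomics literature, not equations quoted from this paper), where Ż_k is the capital investment and O&M cost rate of component k, c_F,k and c_P,k are the unit costs of its fuel and product exergy, and Ė_D,k is its exergy destruction rate:

        \[
          f_k = \frac{\dot{Z}_k}{\dot{Z}_k + c_{F,k}\,\dot{E}_{D,k}},
          \qquad
          r_k = \frac{c_{P,k} - c_{F,k}}{c_{F,k}}
        \]

    A component with a low f_k (such as a heat exchanger whose costs are dominated by exergy destruction) is typically a candidate for design improvement, whereas a very high f_k can point to over-investment.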

  5. Trends in lumber processing in the western United States. Part I: board foot Scribner volume per cubic foot of timber

    Science.gov (United States)

    Charles E. Keegan; Todd A. Morgan; Keith A. Blatner; Jean M. Daniels

    2010-01-01

    This article describes trends in board foot Scribner volume per cubic foot of timber for logs processed by sawmills in the western United States. Board foot to cubic foot (BF/CF) ratios for the period from 2000 through 2006 ranged from 3.70 in Montana to 5.71 in the Four Corners Region (Arizona, Colorado, New Mexico, and Utah). Sawmills in the Four Corners Region,...

  6. Computational micromechanics analysis of electron hopping and interfacial damage induced piezoresistive response in carbon nanotube-polymer nanocomposites

    International Nuclear Information System (INIS)

    Chaurasia, A K; Seidel, G D; Ren, X

    2014-01-01

    Carbon nanotube (CNT)-polymer nanocomposites have been observed to exhibit an effective macroscale piezoresistive response, i.e., change in macroscale resistivity when subjected to applied deformation. The macroscale piezoresistive response of CNT-polymer nanocomposites leads to deformation/strain sensing capabilities. It is believed that the nanoscale phenomenon of electron hopping is the major driving force behind the observed macroscale piezoresistivity of such nanocomposites. Additionally, CNT-polymer nanocomposites provide damage sensing capabilities because of local changes in electron hopping pathways at the nanoscale because of initiation/evolution of damage. The primary focus of the current work is to explore the effect of interfacial separation and damage at the nanoscale CNT-polymer interface on the effective macroscale piezoresistive response. Interfacial separation and damage are allowed to evolve at the CNT-polymer interface through coupled electromechanical cohesive zones, within a finite element based computational micromechanics framework, resulting in electron hopping based current density across the separated CNT-polymer interface. The macroscale effective material properties and gauge factors are evaluated using micromechanics techniques based on electrostatic energy equivalence. The impact of the electron hopping mechanism, nanoscale interface separation and damage evolution on the effective nanocomposite electrostatic and piezoresistive response is studied in comparison with the perfectly bonded interface. The effective electrostatic/piezoresistive response for the perfectly bonded interface is obtained based on a computational micromechanics model developed in the authors’ earlier work. It is observed that the macroscale effective gauge factors are highly sensitive to strain induced formation/disruption of electron hopping pathways, interface separation and the initiation/evolution of interfacial damage. (paper)
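
    The gauge factors referred to here follow the usual convention (a standard definition, not an equation quoted from the paper): the relative resistance change per unit applied strain, which for an isotropic conductor separates into geometric and resistivity contributions:

        \[
          G = \frac{\Delta R / R_0}{\varepsilon}
            = (1 + 2\nu) + \frac{\Delta \rho / \rho_0}{\varepsilon}
        \]

    where ν is the Poisson ratio. In nanocomposites of this kind, the resistivity term, driven by the strain-induced formation and disruption of electron-hopping pathways, dominates the purely geometric (1 + 2ν) contribution.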

  7. METRIC context unit architecture

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, R.O.

    1988-01-01

    METRIC is an architecture for a simple but powerful Reduced Instruction Set Computer (RISC). Its speed comes from the simultaneous processing of several instruction streams, with instructions from the various streams being dispatched into METRIC's execution pipeline as they become available for execution. The pipeline is thus kept full, with a mix of instructions for several contexts in execution at the same time. True parallel programming is supported within a single execution unit, the METRIC Context Unit. METRIC's architecture provides for expansion through the addition of multiple Context Units and of specialized Functional Units. The architecture thus spans a range of size and performance from a single-chip microcomputer up through large and powerful multiprocessors. This research concentrates on the specification of the METRIC Context Unit at the architectural level. Performance tradeoffs made during METRIC's design are discussed, and projections of METRIC's performance are made based on simulation studies.

  8. Factors associated with student learning processes in primary health care units: a questionnaire study.

    Science.gov (United States)

    Bos, Elisabeth; Alinaghizadeh, Hassan; Saarikoski, Mikko; Kaila, Päivi

    2015-01-01

    Clinical placement plays a key role in education intended to develop nursing and caregiving skills. Studies of nursing students' clinical learning experiences show that these dimensions affect learning processes: (i) supervisory relationship, (ii) pedagogical atmosphere, (iii) management leadership style, (iv) premises of nursing care on the ward, and (v) nursing teachers' roles. Few empirical studies address the probability of an association between these dimensions and factors such as student (a) motivation, (b) satisfaction with clinical placement, and (c) experiences with professional role models. The study aimed to investigate factors associated with the five dimensions in clinical learning environments within primary health care units. The Swedish version of the Clinical Learning Environment, Supervision and Teacher scale, a validated evaluation instrument, was administered to 356 graduating nursing students after four or five weeks' clinical placement in primary health care units. The response rate was 84%. Multivariate analysis of variance was used to determine whether the five dimensions were associated with factors (a), (b), and (c) above. The analysis revealed a statistically significant association between the five dimensions and two factors: students' motivation and experiences with professional role models. The satisfaction factor had a statistically significant association (the effect size was high) with all dimensions; this clearly indicates that students experienced satisfaction. These questionnaire results show that a good clinical learning experience constitutes a complex whole (totality) that involves several interacting factors. Supervisory relationship and pedagogical atmosphere particularly influenced students' satisfaction and motivation. These results provide valuable decision-support material for clinical education planning, implementation, and management. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Processing United Nations Documents in the University of Michigan Library.

    Science.gov (United States)

    Stolper, Gertrude

    This guide provides detailed instructions for recording documents in the United Nations (UN) card catalog which provides access to the UN depository collection in the Harlan Hatcher Graduate Library at the University of Michigan. Procedures for handling documents when they are received include stamping, counting, and sorting into five categories:…

  10. Iron turbidity removal from the active process water system of the Kaiga Generating Station Unit 1 using an electrochemical filter

    International Nuclear Information System (INIS)

    Venkateswaran, G.; Gokhale, B.K.

    2007-01-01

    Iron turbidity is observed in the intermediate cooling circuit of the active process water system (APWS) of the Kaiga Generating Station (KGS). Deposition of hydrous/hydrated oxides of iron on the plate-type heat exchanger, which is employed to transfer heat from the APWS to the active process cooling water system (APCWS), can in turn result in higher moderator D₂O temperatures due to reduced heat transfer. Characterization of the turbidity showed that the major component is γ-FeOOH. An in-house designed and fabricated electrochemical filter (ECF) containing an alternate array of 33 pairs of cathode and anode graphite felts was successfully tested for the removal of iron turbidity from the APWS of Kaiga Generating Station Unit No. 1 (KGS No. 1). A total volume of 52.5 m³ of water was processed using the filter. At an average inlet turbidity of 5.6 nephelometric turbidity units (NTU), the outlet turbidity observed from the ECF was 1.6 NTU. A maximum flow rate of 10 L·min⁻¹ and an applied potential of 18.0-20.0 V were found to yield an average turbidity-removal efficiency of ≈75%. When the experiment was terminated, a throughput of >2.08 × 10⁵ NTU-liters had been realized without any reduction in the removal efficiency. Removal of the internals of the filter showed that only the bottom 11 pairs of felts had brownish deposits, while the remaining felts looked clean and unused. (orig.)

  11. Unit roots, nonlinearities and structural breaks

    DEFF Research Database (Denmark)

    Haldrup, Niels; Kruse, Robinson; Teräsvirta, Timo

    One of the most influential research fields in econometrics over the past decades concerns unit root testing in economic time series. In macro-economics, much of the interest in the area originates from the fact that when unit roots are present, shocks to the time series processes have...

  12. Audits of oncology units – an effective and pragmatic approach

    Directory of Open Access Journals (Sweden)

    Raymond Pierre Abratt

    2017-06-01

    Full Text Available Background. Audits of oncology units are part of all quality-assurance programmes. However, they do not always come across as pragmatic and helpful to staff. Objective. To report on the results of an online survey on the usefulness and impact of an audit process for oncology units. Methods. Staff in oncology units who were part of the audit process completed the audit self-assessment form for the unit. This was followed by a visit to each unit by an assessor, and then subsequent personal contact, usually via telephone. The audit self-assessment document listed quality-assurance measures or items in the physical and functional areas of the oncology unit. There were a total of 153 items included in the audit. The online survey took place in October 2016. The invitation to participate was sent to 59 oncology units at which staff members had completed the audit process. Results. The online survey was completed by 54 (41%) of the 132 potential respondents. The online survey found that the audit was very or extremely useful in maintaining personal professional standards in 89% of responses. The audit process and feedback were rated as very or extremely satisfactory in 80% and 81% of responses, respectively. The self-assessment audit document was scored by survey respondents as very or extremely practical in 63% of responses. The feedback on the audit was that it was very or extremely helpful in formulating improvement plans in oncology units in 82% of responses. Major and minor changes that occurred as a result of the audit process were reported as 8% and 88%, respectively. Conclusion. The survey findings show that the audit process and its self-assessment document meet the aims of being helpful and pragmatic.

  13. Biology meets Physics: Reductionism and Multi-scale Modeling of Morphogenesis

    DEFF Research Database (Denmark)

    Green, Sara; Batterman, Robert

    2017-01-01

    A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale… modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent.

  14. General predictive model of friction behavior regimes for metal contacts based on the formation stability and evolution of nanocrystalline surface films.

    Energy Technology Data Exchange (ETDEWEB)

    Argibay, Nicolas [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Cheng, Shengfeng [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Sawyer, W. G. [Univ. of Florida, Gainesville, FL (United States); Michael, Joseph R. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Chandross, Michael E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-09-01

    The prediction of macro-scale friction and wear behavior based on first principles and material properties has remained an elusive but highly desirable target for tribologists and material scientists alike. Stochastic processes (e.g. wear), statistically described parameters (e.g. surface topography) and their evolution tend to defeat attempts to establish practical general correlations between fundamental nanoscale processes and macro-scale behaviors. We present a model based on microstructural stability and evolution for the prediction of metal friction regimes, founded on recently established microstructural deformation mechanisms of nanocrystalline metals, that relies exclusively on material properties and contact stress models. We show through complementary experimental and simulation results that this model overcomes longstanding practical challenges and successfully makes accurate and consistent predictions of friction transitions for a wide range of contact conditions. This framework not only challenges the assumptions of conventional causal relationships between hardness and friction, and between friction and wear, but also suggests a pathway for the design of higher performance metal alloys.

  15. A Multiphysics Framework to Learn and Predict in Presence of Multiple Scales

    Science.gov (United States)

    Tomin, P.; Lunati, I.

    2015-12-01

    Modeling complex phenomena in the subsurface remains challenging due to the presence of multiple interacting scales, which can make it impossible to focus on purely macroscopic phenomena (relevant in most applications) and neglect the processes at the micro-scale. We present and discuss a general framework that allows us to deal with the situation in which the lack of scale separation requires the combined use of different descriptions at different scales (for instance, a pore-scale description at the micro-scale and a Darcy-like description at the macro-scale) [1,2]. The method is based on conservation principles and constructs the macro-scale problem by numerical averaging of micro-scale balance equations. By employing spatiotemporal adaptive strategies, this approach can efficiently solve large-scale problems [2,3]. In addition, being based on a numerical volume-averaging paradigm, it offers a tool to illuminate how macroscopic equations emerge from microscopic processes, to better understand the meaning of microscopic quantities, and to investigate the validity of the assumptions routinely used to construct the macro-scale problems. [1] Tomin, P., and I. Lunati, A Hybrid Multiscale Method for Two-Phase Flow in Porous Media, Journal of Computational Physics, 250, 293-307, 2013 [2] Tomin, P., and I. Lunati, Local-global splitting and spatiotemporal-adaptive Multiscale Finite Volume Method, Journal of Computational Physics, 280, 214-231, 2015 [3] Tomin, P., and I. Lunati, Spatiotemporal adaptive multiphysics simulations of drainage-imbibition cycles, Computational Geosciences, 2015 (under review)

  16. The Development of a General Purpose ARM-based Processing Unit for the TileCal sROD

    CERN Multimedia

    Cox, Mitchell A

    2014-01-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data which present a serious computing challenge. After planned upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to 41 Tb/s! ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface t...

  17. Optogenetic stimulation of lateral amygdala input to posterior piriform cortex modulates single-unit and ensemble odor processing

    Directory of Open Access Journals (Sweden)

    Benjamin Sadrian

    2015-12-01

    Full Text Available Olfactory information is synthesized within the olfactory cortex to provide not only an odor percept, but also a contextual significance that supports appropriate behavioral response to specific odor cues. The piriform cortex serves as a communication hub within this circuit by sharing reciprocal connectivity with higher processing regions, such as the lateral entorhinal cortex and amygdala. The functional significance of these descending inputs on piriform cortical processing of odorants is currently not well understood. We have employed optogenetic methods to selectively stimulate lateral and basolateral amygdala (BLA afferent fibers innervating the posterior piriform cortex (pPCX to quantify BLA modulation of pPCX odor-evoked activity. Single unit odor-evoked activity of anaesthetized BLA-infected animals was significantly modulated compared with control animal recordings, with individual cells displaying either enhancement or suppression of odor-driven spiking. In addition, BLA activation induced a decorrelation of odor-evoked pPCX ensemble activity relative to odor alone. Together these results indicate a modulatory role in pPCX odor processing for the BLA complex, which could contribute to learned changes in PCX activity following associative conditioning.

  18. Physical protection of nuclear operational units

    International Nuclear Information System (INIS)

    1981-07-01

    The general principles of and basic requirements for the physical protection of operational units in the nuclear field are established. They concern the operational units whose activities are related with production, utilization, processing, reprocessing, handling, transport or storage of materials of interest for the Brazilian Nuclear Program. (I.C.R.) [pt

  19. Evaluation of Selected Resource Allocation and Scheduling Methods in Heterogeneous Many-Core Processors and Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ciznicki Milosz

    2014-12-01

    Full Text Available Heterogeneous many-core computing resources are increasingly popular among users due to their improved performance over homogeneous systems. Many developers have realized that heterogeneous systems, e.g. a combination of a shared-memory multi-core CPU machine with massively parallel Graphics Processing Units (GPUs), can provide significant performance opportunities to a wide range of applications. However, the best overall performance can only be achieved if application tasks are efficiently assigned to different types of processor units in time, taking into account their specific resource requirements. Additionally, one should note that available heterogeneous resources have been designed as general-purpose units, however, with many built-in features accelerating specific application operations. In other words, the same algorithm or application functionality can be implemented as a different task for a CPU or a GPU. Nevertheless, from the perspective of various evaluation criteria, e.g. the total execution time or energy consumption, we may observe completely different results. Therefore, as tasks can be scheduled and managed in many alternative ways on both many-core CPUs and GPUs, and consequently have a huge impact on the overall computing resources performance, there is a need for new and improved resource management techniques. In this paper we discuss results achieved during experimental performance studies of selected task scheduling methods in heterogeneous computing systems. Additionally, we present a new architecture for a resource allocation and task scheduling library which provides a generic application programming interface at the operating system level for improving scheduling policies, taking into account the diversity of tasks and the characteristics of heterogeneous computing resources.
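
    The core trade-off described here, that the same task has different run times on a CPU and a GPU and should be placed where it finishes earliest given the current resource load, can be illustrated with a minimal greedy earliest-completion-time scheduler. This is a generic sketch, not the scheduling library or policies evaluated in the paper; all task and resource figures are invented.

```python
def greedy_schedule(tasks, resources):
    """Minimal earliest-completion-time scheduler (illustrative only).

    tasks     -- list of {resource_type: estimated_runtime} dicts
    resources -- dict mapping resource name -> resource type
    Returns a list of (task_index, resource_name, start, finish).
    """
    free_at = {name: 0.0 for name in resources}  # when each resource frees up
    plan = []
    for i, runtimes in enumerate(tasks):
        # Place the task on the resource where it would finish earliest.
        best = min(free_at, key=lambda n: free_at[n] + runtimes[resources[n]])
        start = free_at[best]
        finish = start + runtimes[resources[best]]
        free_at[best] = finish
        plan.append((i, best, start, finish))
    return plan

# Two CPUs and one GPU; each task has a different cost per processor type.
resources = {"cpu0": "cpu", "cpu1": "cpu", "gpu0": "gpu"}
tasks = [{"cpu": 4.0, "gpu": 1.0},
         {"cpu": 2.0, "gpu": 3.0},
         {"cpu": 5.0, "gpu": 1.5}]
for row in greedy_schedule(tasks, resources):
    print(row)
```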

  20. 21st Century Parent-Child Sex Communication in the United States: A Process Review.

    Science.gov (United States)

    Flores, Dalmacio; Barroso, Julie

    Parent-child sex communication results in the transmission of family expectations, societal values, and role modeling of sexual health risk-reduction strategies. Parent-child sex communication's potential to curb negative sexual health outcomes has sustained a multidisciplinary effort to better understand the process and its impact on the development of healthy sexual attitudes and behaviors among adolescents. This review advances what is known about the process of sex communication in the United States by reviewing studies published from 2003 to 2015. We used the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PsycINFO, SocINDEX, and PubMed, and the key terms "parent child" AND "sex education" for the initial query; we included 116 original articles for analysis. Our review underscores long-established factors that prevent parents from effectively broaching and sustaining talks about sex with their children and has also identified emerging concerns unique to today's parenting landscape. Parental factors salient to sex communication are established long before individuals become parents and are acted upon by influences beyond the home. Child-focused communication factors likewise describe a maturing audience that is far from captive. The identification of both enduring and emerging factors that affect how sex communication occurs will inform subsequent work that will result in more positive sexual health outcomes for adolescents.

  1. Decommissioning Unit Cost Data

    International Nuclear Information System (INIS)

    Sanford, P. C.; Stevens, J. L.; Brandt, R.

    2002-01-01

    The Rocky Flats Closure Site (Site) is in the process of stabilizing residual nuclear materials, decommissioning nuclear facilities, and remediating environmental media. A number of contaminated facilities have been decommissioned, including one building, Building 779, that contained gloveboxes used for plutonium process development but did little actual plutonium processing. The actual costs incurred to decommission this facility formed much of the basis for the standards used to estimate the decommissioning of the remaining plutonium-processing buildings. Recent decommissioning activities in the first actual production facility, Building 771, implemented a number of process and procedural improvements. These include methods for handling plutonium-contaminated equipment, including size reduction, decontamination, and waste packaging, as well as management improvements to streamline planning and work control. These improvements resulted in a safer working environment and reduced project cost, as demonstrated in the overall project efficiency. The topic of this paper is the analysis of how this improved efficiency is reflected in recent unit costs for activities specific to the decommissioning of plutonium facilities. This analysis will allow the Site to quantify the impacts on future Rocky Flats decommissioning activities, and to develop data for planning and cost estimating the decommissioning of future facilities. The paper discusses the methods used to collect and arrange the project data from the individual work areas within Building 771. Regression and data correlation techniques were used to quantify values for different types of decommissioning activities. The discussion includes the approach to identify and allocate overall project support, waste management, and Site support costs based on the overall Site and project costs to provide a "burdened" unit cost. The paper ultimately provides a unit cost basis that can be used to support cost estimates for...
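
    The regression step mentioned above can be made concrete with a minimal sketch: an ordinary least-squares fit of activity cost against a work-quantity driver, whose slope is the unit cost. The data points below are invented for illustration; they are not Rocky Flats figures.

```python
import numpy as np

# Invented (quantity, cost) pairs, e.g. m^2 of contaminated surface vs. k$.
qty  = np.array([10.0, 25.0, 40.0, 60.0, 80.0])
cost = np.array([52.0, 118.0, 180.0, 270.0, 348.0])

# Fit cost = fixed + unit_cost * quantity by ordinary least squares.
A = np.column_stack([np.ones_like(qty), qty])
(fixed, unit_cost), *_ = np.linalg.lstsq(A, cost, rcond=None)
r = np.corrcoef(qty, cost)[0, 1]
print(f"fixed = {fixed:.1f} k$, unit cost = {unit_cost:.2f} k$/unit, r = {r:.3f}")
```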

  2. Techno-economic assessment of FT unit for synthetic diesel production in existing stand-alone biomass gasification plant using process simulation tool

    DEFF Research Database (Denmark)

    Hunpinyo, Piyapong; Narataruksa, Phavanee; Tungkamani, Sabaithip

    2014-01-01

    Via the alternative thermo-chemical conversion route of gasification, biomass can be gasified to produce syngas (mainly CO and H2). For further utilization, the syngas can be used to synthesize fuels through catalytic process options, producing synthetic liquid fuels such as Fischer-Tropsch (FT) diesel. Embedding an FT plant into a stand-alone, power-oriented biomass plant for production of a synthetic fuel is a promising practice, which requires an extensive adaptation of conventional techniques to the special chemical needs of gasified biomass. Because there are currently no plans to deploy the FT process in Thailand, the authors have focused this work on improving the FT configurations in an existing biomass gasification facility (10 MWth). A process simulation model for calculating the extended unit operations in a demonstrative context is designed...

  3. Real-time track-less Cherenkov ring fitting trigger system based on Graphics Processing Units

    Science.gov (United States)

    Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Gianoli, A.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.

    2017-12-01

    The parallel computing power of commercial Graphics Processing Units (GPUs) is exploited to perform real-time ring fitting at the lowest trigger level using information coming from the Ring Imaging Cherenkov (RICH) detector of the NA62 experiment at CERN. To this purpose, direct GPU communication with a custom FPGA-based board has been used to reduce the data transmission latency. The GPU-based trigger system is currently integrated in the experimental setup of the RICH detector of the NA62 experiment, in order to reconstruct ring-shaped hit patterns. The ring-fitting algorithm running on GPU is fed with raw RICH data only, with no information coming from other detectors, and is able to provide more complex trigger primitives with respect to the simple photodetector hit multiplicity, resulting in a higher selection efficiency. The performance of the system for multi-ring Cherenkov online reconstruction obtained during the NA62 physics run is presented.
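
    To make the ring-fitting task concrete: a Cherenkov ring is a circle through the photodetector hits, and the simplest track-less estimator is an algebraic least-squares circle fit, which reduces to one small linear solve per candidate ring and therefore maps naturally onto GPU parallelism. Below is a plain CPU sketch of a classic Kasa-style fit on synthetic hits; it is an illustration, not the NA62 trigger code.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit.
    Solves x^2 + y^2 + D x + E y + F = 0 for (D, E, F) in the least-squares
    sense, then converts to a center (a, b) and radius r."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    a, b = -D / 2.0, -E / 2.0
    r = np.sqrt(a**2 + b**2 - F)
    return a, b, r

# Synthetic ring: 20 noisy hits on a circle of radius 3 centered at (1, -2).
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 20)
x = 1 + 3 * np.cos(t) + rng.normal(0, 0.05, 20)
y = -2 + 3 * np.sin(t) + rng.normal(0, 0.05, 20)
print(fit_circle(x, y))  # approximately (1.0, -2.0, 3.0)
```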

  4. Efficient molecular dynamics simulations with many-body potentials on graphics processing units

    Science.gov (United States)

    Fan, Zheyong; Chen, Wei; Vierimaa, Ville; Harju, Ari

    2017-09-01

    Graphics processing units have been extensively used to accelerate classical molecular dynamics simulations. However, there is much less progress on the acceleration of force evaluations for many-body potentials compared to pairwise ones. In the conventional force evaluation algorithm for many-body potentials, the force, virial stress, and heat current for a given atom are accumulated within different loops, which could result in write conflict between different threads in a CUDA kernel. In this work, we provide a new force evaluation algorithm, which is based on an explicit pairwise force expression for many-body potentials derived recently (Fan et al., 2015). In our algorithm, the force, virial stress, and heat current for a given atom can be accumulated within a single thread and is free of write conflicts. We discuss the formulations and algorithms and evaluate their performance. A new open-source code, GPUMD, is developed based on the proposed formulations. For the Tersoff many-body potential, the double precision performance of GPUMD using a Tesla K40 card is equivalent to that of the LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) molecular dynamics code running with about 100 CPU cores (Intel Xeon CPU X5670 @ 2.93 GHz).
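
    The write-conflict problem described above can be sketched schematically: if each GPU thread owns one atom and accumulates force (and, analogously, virial and heat current) only into that atom's slot, recomputing the pairwise term from atom i's side rather than scattering it to atom j, then no two threads ever write to the same location. The serial Python sketch below mimics that access pattern for a toy Lennard-Jones pair force; it illustrates the accumulation scheme only and is not the GPUMD kernel.

```python
import numpy as np

def lj_pair_force(rij):
    """Force on atom i from atom j for a Lennard-Jones pair (eps = sigma = 1)."""
    r2 = np.dot(rij, rij)
    inv6 = 1.0 / r2**3
    return (24.0 * inv6 * (2.0 * inv6 - 1.0) / r2) * rij

def forces_one_owner_per_atom(pos, neighbors):
    """Each 'thread' i writes only force[i]: conflict-free accumulation.
    The i<->j contribution is recomputed on both sides instead of scattered."""
    force = np.zeros_like(pos)
    for i in range(len(pos)):          # conceptually: one GPU thread per atom
        for j in neighbors[i]:
            force[i] += lj_pair_force(pos[i] - pos[j])
    return force

pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.0, 1.3, 0.0]])
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # symmetric toy neighbor list
print(forces_one_owner_per_atom(pos, neighbors))
```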

  5. A Ten-Step Process for Developing Teaching Units

    Science.gov (United States)

    Butler, Geoffrey; Heslup, Simon; Kurth, Lara

    2015-01-01

    Curriculum design and implementation can be a daunting process. Questions quickly arise, such as who is qualified to design the curriculum and how do these people begin the design process. According to Graves (2008), in many contexts the design of the curriculum and the implementation of the curricular product are considered to be two mutually…

  6. Deriving social relations among organizational units from process models

    NARCIS (Netherlands)

    Song, M.S.; Choi, I.; Kim, K.M.; Aalst, van der W.M.P.

    2008-01-01

    For companies to sustain competitive advantages, it is required to redesign and improve business processes continuously by monitoring and analyzing process enactment results. Furthermore, organizational structures must be redesigned according to the changes in business processes. However, there are...
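
    One standard way to derive such relations in process mining is a "handover of work" matrix: count how often, within the same case, a step performed by organizational unit A is directly followed by a step performed by unit B. The sketch below uses an invented toy event log; the paper's exact metrics may differ.

```python
from collections import Counter, defaultdict

# Toy event log: (case_id, order, org_unit). Invented for illustration.
log = [
    ("c1", 1, "sales"), ("c1", 2, "credit"), ("c1", 3, "shipping"),
    ("c2", 1, "sales"), ("c2", 2, "shipping"),
    ("c3", 1, "sales"), ("c3", 2, "credit"), ("c3", 3, "credit"),
]

def handover_matrix(log):
    """Count direct A -> B handovers of work within each case."""
    by_case = defaultdict(list)
    for case, order, unit in sorted(log):   # sorts by case, then order
        by_case[case].append(unit)
    pairs = Counter()
    for trace in by_case.values():
        pairs.update(zip(trace, trace[1:]))  # consecutive unit pairs
    return pairs

for (a, b), n in handover_matrix(log).most_common():
    print(f"{a} -> {b}: {n}")
```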

  7. An Investigation of the Role of Grapheme Units in Word Recognition

    Science.gov (United States)

    Lupker, Stephen J.; Acha, Joana; Davis, Colin J.; Perea, Manuel

    2012-01-01

    In most current models of word recognition, the word recognition process is assumed to be driven by the activation of letter units (i.e., that letters are the perceptual units in reading). An alternative possibility is that the word recognition process is driven by the activation of grapheme units, that is, that graphemes, rather than letters, are…

  8. The instrument control unit of SPICA SAFARI: a macro-unit to host all the digital control functionalities of the spectrometer

    Science.gov (United States)

    Di Giorgio, Anna Maria; Biondi, David; Saggin, Bortolino; Shatalina, Irina; Viterbini, Maurizio; Giusi, Giovanni; Liu, Scige J.; Cerulli-Irelli, Paquale; Van Loon, Dennis; Cara, Christophe

    2012-09-01

    We present the preliminary design of the Instrument Control Unit (ICU) of the SpicA FAR infrared Instrument (SAFARI), an imaging Fourier Transform Spectrometer (FTS) designed to give continuous wavelength coverage in both photometric and spectroscopic modes from around 34 to 210 µm. Due to the stringent requirements in terms of mass and volume, the overall SAFARI warm electronics will be composed of only two main units: the Detector Control Unit and the ICU. The ICU is therefore a macro-unit incorporating the four digital sub-units dedicated to the control of the overall instrument functionalities: the Cooler Control Unit, the Mechanism Control Unit, the Digital Processing Unit and the Power Supply Unit. Both the mechanical solution adopted to host the four sub-units and the internal electrical architecture are presented, as well as the adopted redundancy approach.

  9. A parallel approximate string matching under Levenshtein distance on graphics processing units using warp-shuffle operations.

    Directory of Open Access Journals (Sweden)

    ThienLuan Ho

    Full Text Available Approximate string matching with k-differences has a number of practical applications, ranging from pattern recognition to computational biology. This paper proposes an efficient memory-access algorithm for parallel approximate string matching with k-differences on Graphics Processing Units (GPUs). In the proposed algorithm, all threads in the same GPU warp share data using warp-shuffle operations instead of accessing the shared memory. Moreover, we implement the proposed algorithm by exploiting the memory structure of GPUs to optimize its performance. Experimental results for real DNA packages revealed that the proposed algorithm and its implementation achieved speedups of up to 122.64 and 1.53 times compared to the sequential algorithm on a CPU and a previous parallel approximate string matching algorithm on GPUs, respectively.
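
    For reference, the sequential computation underneath is the k-differences dynamic program (Sellers' variant of the Levenshtein recurrence, in which a match may start at any text position); the paper's contribution is exchanging these DP cells between threads via warp-shuffle rather than shared memory. A plain Python version of the recurrence, for orientation only:

```python
def k_differences(pattern, text, k):
    """End positions in `text` where `pattern` matches with <= k edits
    (Sellers' column-wise Levenshtein DP; a match may start anywhere)."""
    m = len(pattern)
    prev = list(range(m + 1))            # column for the empty text prefix
    hits = []
    for j, tc in enumerate(text, 1):
        curr = [0] * (m + 1)             # row 0 stays 0: free start position
        for i, pc in enumerate(pattern, 1):
            cost = 0 if pc == tc else 1
            curr[i] = min(prev[i] + 1,         # skip a text character
                          curr[i - 1] + 1,     # skip a pattern character
                          prev[i - 1] + cost)  # match / substitution
        if curr[m] <= k:
            hits.append(j)               # 1-based end position of a match
        prev = curr
    return hits

print(k_differences("ACGT", "TTACGATTACGTAA", 1))
```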

  10. United States advanced technologies

    International Nuclear Information System (INIS)

    Longenecker, J.R.

    1985-01-01

    In the United States, the advanced technologies have been applied to uranium enrichment as a means by which it can be assured that nuclear fuel cost will remain competitive in the future. The United States is strongly committed to the development of advanced enrichment technology, and has brought both advanced gas centrifuge (AGC) and atomic vapor laser isotope separation (AVLIS) programs to a point of significant technical refinement. The ability to deploy advanced technologies is the basis for the confidence in competitive future price. Unfortunately, the development of advanced technologies is capital intensive. The year 1985 is the key year for advanced technology development in the United States, since the decision on the primary enrichment technology for the future, AGC or AVLIS, will be made shortly. The background on the technology selection process, the highlights of AGC and AVLIS programs and the way to proceed after the process selection are described. The key objective is to maximize the sales volume and minimize the operating cost. This will help the utilities in other countries supply low cost energy on a reliable, long term basis. (Kako, I.)

  11. Towards a carbon independent and CO2-free electrochemical membrane process for NH3 synthesis.

    Science.gov (United States)

    Kugler, K; Ohs, B; Scholz, M; Wessling, M

    2014-04-07

    Ammonia is exclusively synthesized by the Haber-Bosch process starting from precious carbon resources such as coal or CH4. With H2O, H2 is produced, and with N2, NH3 can be synthesized at high pressures and temperatures. Regrettably, the carbon is not incorporated into NH3 but emitted as CO2. Valuable carbon sources are thus consumed that could be put to other uses as carbon feedstocks become scarce. We suggest an alternative process concept using an electrochemical membrane reactor (ecMR). A complete synthesis process with N2 production and downstream product separation is presented and evaluated in a multi-scale model to quantify its energy consumption. A new micro-scale ecMR model integrates mass, species, heat and energy balances with electrochemical conversions, allowing further integration into a macro-scale process flow sheet. For the anodic oxidation reaction, H2O was chosen as a ubiquitous H2 source. Nitrogen was obtained by air separation, which combines with protons from H2O to give NH3 using a hypothetical catalyst recently suggested from DFT calculations. The energy demand of the whole electrochemical process is up to 20% lower than the Haber-Bosch process using coal as a H2 source. In the case of natural gas, the ecMR process is not competitive under today's energy and resource conditions. In the future, however, electrochemical NH3 synthesis might be the technology of choice where coal is more easily accessible than natural gas, or where limited carbon sources have to be reserved for uses other than the synthesis of the carbon-free product NH3.

  12. Device and method to enhance availability of cluster-based processing systems

    Science.gov (United States)

    Lupia, David J. (Inventor); Ramos, Jeremy (Inventor); Samson, Jr., John R. (Inventor)

    2010-01-01

    An electronic computing device including at least one processing unit that asserts a specific fault signal upon experiencing an associated fault, a control unit that generates a specific recovery signal upon receiving the fault signal from the at least one processing unit, and at least one input memory unit. The recovery signal initiates specific recovery processes in the at least one processing unit. The input memory unit buffers the data signals destined for the processing unit that experienced the fault during the recovery period.

  13. The digital ultrasonic test unit for automatic equipment

    International Nuclear Information System (INIS)

    Hiraoka, T.; Matsuyama, H.

    1976-01-01

    The operations and features of the ultrasonic test unit used, and the digital data processing techniques employed, are described. The unit is used in multi-channel automatic ultrasonic test equipment with several hundred channels.

  14. Process development for fabrication of Ag-15% In-5% Cd alloys and rods for the control rods of IPEN critical unit

    International Nuclear Information System (INIS)

    Figueredo, A.M. de.

    1985-12-01

    The development of two processes at the Nuclear and Energetic Research Institute (IPEN, Brazil) is described: the production of nuclear-grade Ag-15% In-5% Cd alloy, and the fabrication of rods from the Ag-15% In-5% Cd alloy for use in the critical unit. The methods for quality control of the alloy and rods are presented, and the main problems are identified. (C.G.C.)

  15. A Behavioral Analysis of the Laboratory Learning Process: Redesigning a Teaching Unit on Recrystallization.

    Science.gov (United States)

    Mulder, T.; Verdonk, A. H.

    1984-01-01

    Reports on a project in which observations of student and teaching assistant behavior were used to redesign a teaching unit on recrystallization. Comments on the instruction manual, starting points for teaching the unit, and list of objectives with related tasks are included. (JN)

  16. Nano-scale Materials and Nano-technology Processes in Environmental Protection

    International Nuclear Information System (INIS)

    Vissokov, Gh; Tzvetkoff, T.

    2003-01-01

    A number of environmental and energy technologies have benefited substantially from nano-scale technology: reduced waste and improved energy efficiency; environmentally friendly composite structures; waste remediation; energy conversion. In this report, examples of current achievements and paradigm shifts are presented: from discovery to application; nanostructured materials; nanoparticles in the environment (plasma chemical preparation); nano-porous polymers and their applications in water purification; photocatalytic fluid purification; hierarchical self-assembled nano-structures for adsorption of heavy metals, etc. Several themes should be considered priorities in developing nano-scale processes related to environmental management: 1. To develop understanding and control of relevant processes, including protein precipitation and crystallisation, desorption of pollutants, stability of colloidal dispersion, micelle aggregation, microbe mobility, formation and mobility of nanoparticles, and tissue-nanoparticle interaction. Emphasis should be given to processes at phase boundaries (solid-liquid, solid-gas, liquid-gas) that involve mineral and organic soil components, aerosols, biomolecules (cells, microbes), bio tissues, derived components such as bio films and membranes, and anthropogenic additions (e.g. trace and heavy metals); 2. To carry out interdisciplinary research that initiates novel approaches and applies new methods for characterising surfaces and modelling complex systems to problems at interfaces and other nano-structures in the natural environment, including those involving biological or living systems. New technological advances such as optical traps, laser tweezers, and synchrotrons are extending examination of molecular and nano-scale processes to the single-molecule or single-cell level; 3. To integrate understanding of the roles of molecular and nano-scale phenomena and behaviour at the meso- and/or macro-scale over a period of time.

  17. Junior Leader Training Development in Operational Units

    Science.gov (United States)

    2012-04-01

    Successful operational units do not arise without tough, realistic, and challenging training. Field Manual (FM) 7-0, Training Units and D...operations. The manual provides junior leaders with guidance on how to conduct training and training management. Of particular importance is the definition... Figure 1: Relationship between ADDIE and the Army Training Management Model. The Army Training Management Model and the ADDIE process appear in TRADOC PAM 350...

  18. Radiation processing in the United States

    International Nuclear Information System (INIS)

    Brynjolfsson, A.

    1986-01-01

    Animal feeding studies, including the very large feeding studies on radiation-sterilized poultry products irradiated with a sterilizing dose of 58 kGy, revealed no harmful effects. This finding is corroborated by the very extensive analysis of the radiolytic products, which indicated that the radiolytic products, in the quantities found in the food, could not be expected to produce any toxic effect. It thus appears to be proven with reasonable certainty that no harm will result from the proposed use of the process. Accordingly, FDA is moving forward with approvals while allowing the required time for hearings and objections. On July 5, 1983, FDA permitted gamma irradiation for control of microbial contamination in dried spices and dehydrated vegetable seasonings at doses up to 10 kGy; on June 19, 1984, the approval was expanded to cover insect infestation; additional seasonings and the irradiation of dry or dehydrated enzyme preparations were approved on February 12 and June 4, 1985, respectively. In addition, in July 1985, FDA cleared irradiation of pork products with doses of 0.3 to 1 kGy for eliminating trichinosis. Approvals by other agencies, including the Food and Drug Administration, the Department of Agriculture, the Nuclear Regulatory Commission, the Occupational Safety and Health Administration, the Department of Transportation, the Environmental Protection Agency, and state and local authorities, are usually of a technological nature and can then be obtained if the process is technologically feasible. (Namekawa, K.)

  19. Stabilization of gas turbine unit power

    Science.gov (United States)

    Dolotovskii, I.; Larin, E.

    2017-11-01

    We propose a new cycle air preparation unit which helps increase the power output of gas turbine units (GTU) operating as part of combined cycle gas turbine (CCGT) units at thermal power stations and in the energy and water supply systems of industrial enterprises, and which reduces the power loss of gas-turbine-driven process blowers caused by variable ambient air temperatures. Installation of the GTU power stabilizer at a CCGT unit with electric and thermal outputs of 192 and 163 MW, respectively, reduced electrical energy production costs by 2.4% and thermal energy production costs by 1.6%, while capital expenditures after installation of this equipment increased insignificantly.

  20. Overview of PAT process analysers applicable in monitoring of film coating unit operations for manufacturing of solid oral dosage forms.

    Science.gov (United States)

    Korasa, Klemen; Vrečer, Franc

    2018-01-01

    Over the last two decades, regulatory agencies have demanded better understanding of pharmaceutical products and processes by implementing new technological approaches, such as process analytical technology (PAT). Process analysers present a key PAT tool, which enables effective process monitoring, and thus improved process control of medicinal product manufacturing. Process analysers applicable in pharmaceutical coating unit operations are comprehensively described in the present article. The review is focused on monitoring of solid oral dosage forms during film coating in the two most commonly used coating systems, i.e. pan and fluid bed coaters. A brief theoretical background and a critical overview of process analysers used for real-time or near real-time (in-, on-, at-line) monitoring of critical quality attributes of film coated dosage forms are presented. Besides well recognized spectroscopic methods (NIR and Raman spectroscopy), other techniques, which have made a significant breakthrough in recent years, are discussed (terahertz pulsed imaging (TPI), chord length distribution (CLD) analysis, and image analysis). The last part of the review is dedicated to novel techniques with high potential to become valuable PAT tools in the future (optical coherence tomography (OCT), acoustic emission (AE), microwave resonance (MR), and laser induced breakdown spectroscopy (LIBS)). Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Laser color recording unit

    Science.gov (United States)

    Jung, E.

    1984-05-01

    A color recording unit was designed for output and control of digitized picture data within computer-controlled reproduction and picture processing systems. In order to obtain a high-quality color proof similar to a color print, together with reduced time and material consumption, photographic color film was exposed pixelwise by modulated laser beams at three wavelengths for red, green and blue light. Components from different manufacturers were tested for the lasers, acousto-optic modulators and polygon mirrors, as were different recording methods (continuous-tone or screened mode, with a drum or flatbed recording principle). Besides its application in the graphic arts - the proof recorder CPR 403 with continuous-tone color recording on a drum scanner - such a color hardcopy peripheral unit with large picture formats and high resolution can be used in medicine, communication, and satellite picture processing.

  2. Safety Management of a Clinical Process Using Failure Mode and Effect Analysis: Continuous Renal Replacement Therapies in Intensive Care Unit Patients.

    Science.gov (United States)

    Sanchez-Izquierdo-Riera, Jose Angel; Molano-Alvarez, Esteban; Saez-de la Fuente, Ignacio; Maynar-Moliner, Javier; Marín-Mateos, Helena; Chacón-Alves, Silvia

    2016-01-01

    Failure mode and effect analysis (FMEA) may improve the safety of continuous renal replacement therapies (CRRT) in the intensive care unit. We used this tool in three phases: 1) a retrospective observational study; 2) a process FMEA, with implementation of the improvement measures identified; 3) a cohort study after the FMEA. We included 54 patients in the pre-FMEA group and 72 patients in the post-FMEA group. Comparing the risk frequencies per patient in both groups, there were significantly fewer cases of filter survival time under 24 hours in the post-FMEA group (31 patients, 57.4%, vs. 21 patients, 29.6%). After the FMEA, there were several improvements in the management of intensive care unit patients receiving CRRT, and we consider it a useful tool for improving the safety of critically ill patients.
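
    As background on the method itself (not on this study's scoring): a process FMEA typically rates each failure mode for severity, occurrence and detectability on ordinal scales and ranks them by the risk priority number RPN = S x O x D, then re-scores after corrective measures. A minimal sketch with invented CRRT-related failure modes:

```python
# Illustrative process-FMEA ranking; failure modes and scores are invented.
failure_modes = [
    # (description, severity, occurrence, detectability) on 1-10 scales
    ("Circuit clotting before 24 h",   7, 8, 4),
    ("Anticoagulation dosing error",   9, 3, 5),
    ("Wrong effluent dose programmed", 8, 2, 6),
]

# Rank by risk priority number, highest first.
ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN {s * o * d:3d}  <- {desc}")
```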

  3. Incorporating pushing in exclusion-process models of cell migration.

    Science.gov (United States)

    Yates, Christian A; Parker, Andrew; Baker, Ruth E

    2015-05-01

    The macroscale movement behavior of a wide range of isolated migrating cells has been well characterized experimentally. Recently, attention has turned to understanding the behavior of cells in crowded environments. In such scenarios it is possible for cells to interact, inducing neighboring cells to move in order to make room for their own movements or progeny. Although the behavior of interacting cells has been modeled extensively through volume-exclusion processes, few models, thus far, have explicitly accounted for the ability of cells to actively displace each other in order to create space for themselves. In this work we consider both on- and off-lattice volume-exclusion position-jump processes in which cells are explicitly allowed to induce movements in their near neighbors in order to create space for themselves to move or proliferate into. We refer to this behavior as pushing. From these simple individual-level representations we derive continuum partial differential equations for the average occupancy of the domain. We find that, for limited amounts of pushing, comparison between the averaged individual-level simulations and the population-level model is nearly as good as in the scenario without pushing. Interestingly, we find that, in the on-lattice case, the diffusion coefficient of the population-level model is increased by pushing, whereas, for the particular off-lattice model that we investigate, the diffusion coefficient is reduced. We conclude, therefore, that it is important to consider carefully the appropriate individual-level model to use when representing complex cell-cell interactions such as pushing.
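
    A minimal caricature of an on-lattice exclusion process with pushing: an agent that attempts to step onto an occupied site may, with some probability, push the whole contiguous queue of occupants one site forward, provided there is a vacancy at the far end. The update rule and parameters below are illustrative assumptions, not the paper's exact specification.

```python
import random

def step(lattice, p_push):
    """One random-sequential update of a 1D exclusion process with pushing.
    lattice: list of 0/1 site occupancies (periodic boundaries)."""
    n = len(lattice)
    agents = [i for i, occ in enumerate(lattice) if occ]
    i = random.choice(agents)
    d = random.choice((-1, 1))
    j = (i + d) % n
    if not lattice[j]:                 # simple exclusion move into a vacancy
        lattice[i], lattice[j] = 0, 1
    elif random.random() < p_push:     # attempt to push the queue ahead
        k = j
        while lattice[k]:              # find the far end of the queue
            k = (k + d) % n
            if k == i:                 # fully occupied ring: no move possible
                return
        # Shift the whole queue (and the mover) one site in direction d.
        lattice[k] = 1
        lattice[i] = 0

random.seed(1)
lattice = [1] * 5 + [0] * 15           # 5 agents on a 20-site ring
for _ in range(1000):
    step(lattice, p_push=0.5)
print(lattice)
```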

  4. Assessment of changes in plasma hemoglobin and potassium levels in red cell units during processing and storage.

    Science.gov (United States)

    Saini, Nishant; Basu, Sabita; Kaur, Ravneet; Kaur, Jasbinder

    2015-06-01

    Red cell units undergo changes during storage and processing. The study was planned to assess plasma potassium, plasma hemoglobin, percentage hemolysis during storage and to determine the effects of outdoor blood collection and processing on those parameters. Blood collection in three types of blood storage bags was done - single CPDA bag (40 outdoor and 40 in-house collection), triple CPD + SAGM bag (40 in-house collection) and quadruple CPD + SAGM bag with integral leukoreduction filter (40 in-house collection). All bags were sampled on day 0 (day of collection), day 1 (after processing), day 7, day 14 and day 28 for measurement of percentage hemolysis and potassium levels in the plasma of bag contents. There was significant increase in percentage hemolysis, plasma hemoglobin and plasma potassium level in all the groups during storage (p levels during the storage of red blood cells. Blood collection can be safely undertaken in outdoor blood donation camps even in hot summer months in monitored blood transport boxes. SAGM additive solution decreases the red cell hemolysis and allows extended storage of red cells. Prestorage leukoreduction decreases the red cell hemolysis and improves the quality of blood. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?

    Science.gov (United States)

    Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend

    2011-10-11

    In the waste recycling Monte Carlo (WRMC) algorithm [1], multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well posed for implementation in parallel on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.
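
    To unpack "waste recycling" for readers new to it: instead of recording the observable only at the state the chain actually visits, each Monte Carlo step also credits the rejected trial state, weighted by its acceptance probability, so every generated configuration contributes to the average. A single-proposal 1D sketch (a particle in a double-well potential; all parameters are invented):

```python
import math, random

def V(x):
    """Double-well potential."""
    return (x * x - 1.0) ** 2

def wrmc_average(beta=3.0, steps=100_000, dx=0.6, seed=2):
    """Waste-recycling Metropolis estimate of <x^2>: the rejected trial
    state contributes with weight p_acc, the current state with 1 - p_acc."""
    rng = random.Random(seed)
    x, acc_sum = 0.0, 0.0
    for _ in range(steps):
        y = x + rng.uniform(-dx, dx)
        p_acc = min(1.0, math.exp(-beta * (V(y) - V(x))))
        acc_sum += p_acc * y * y + (1.0 - p_acc) * x * x  # recycle the "waste"
        if rng.random() < p_acc:
            x = y
    return acc_sum / steps

print(wrmc_average())
```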

  6. Multifunctional centrifugal grinding unit

    Science.gov (United States)

    Sevostyanov, V. S.; Uralskij, V. I.; Uralskij, A. V.; Sinitsa, E. V.

    2018-03-01

    The article presents scientific and engineering developments of multifunctional centrifugal grinding unit in which the selective effect of grinding bodies on the crushing material is realized, depending on its physical and mechanical characteristics and various schemes for organizing the technological process

  7. Development of a Monte Carlo software to photon transportation in voxel structures using graphic processing units

    International Nuclear Information System (INIS)

    Bellezzo, Murillo

    2014-01-01

    As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method (MCM) has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this thesis, the CUBMC code is presented, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture (CUDA) platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross-section table used is the one generated by the MATERIAL routine, also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. There are two distinct approaches used for transport simulation. The first of them forces the photon to stop at every voxel frontier; the second one is the Woodcock method, where the photon ignores the existence of borders and travels in homogeneous fictitious media. The CUBMC code aims to be an alternative Monte Carlo simulation code that, by using the parallel processing capability of graphics processing units (GPUs), provides high-performance simulations on low-cost compact machines, and thus can be applied to clinical cases and incorporated in treatment planning systems for radiotherapy. (author)
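
    The second transport approach named above, the Woodcock (delta-tracking) method, can be sketched compactly: sample free paths against the maximum attenuation coefficient of the whole geometry, and at each tentative collision accept it as real with probability mu(x)/mu_max, otherwise keep flying, so the photon never needs to stop at voxel boundaries. An illustrative 1D version (not the CUBMC code; all material values are invented):

```python
import math, random

def woodcock_distance(mu_of_x, mu_max, x0=0.0, seed=None):
    """Sample the distance to a real interaction in a heterogeneous 1D
    medium via Woodcock (delta) tracking.
    mu_of_x: attenuation coefficient as a function of position [1/cm]."""
    rng = random.Random(seed)
    x = x0
    while True:
        # Free flight sampled against the majorant cross-section mu_max.
        x += -math.log(1.0 - rng.random()) / mu_max
        # Accept as a real event with probability mu(x)/mu_max,
        # otherwise it was a fictitious collision: continue unchanged.
        if rng.random() < mu_of_x(x) / mu_max:
            return x

# Two-slab phantom: mu = 0.2/cm for x < 5 cm, 0.5/cm beyond (illustrative).
mu = lambda x: 0.2 if x < 5.0 else 0.5
samples = [woodcock_distance(mu, mu_max=0.5, seed=i) for i in range(5)]
print([round(s, 2) for s in samples])
```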

  8. Simulation based assembly and alignment process ability analysis for line replaceable units of the high power solid state laser facility

    International Nuclear Information System (INIS)

    Wang, Junfeng; Lu, Cong; Li, Shiqi

    2016-01-01

    Highlights: • Discrete event simulation is applied to analyze the assembly and alignment process ability of LRUs in the SG-III facility. • The overall assembly and alignment process of LRUs with specific characteristics is described. • An extended directed graph is proposed to express the assembly and alignment process of LRUs. • Different scenarios have been simulated to evaluate the assembly process ability of LRUs, and decision making is supported to ensure the construction milestone. - Abstract: Line replaceable units (LRUs) are important components of very large high power solid state laser facilities. The assembly and alignment process ability of LRUs will impact the construction milestone of such facilities. This paper describes the use of the discrete event simulation method for assembly and alignment process analysis of LRUs in such facilities. The overall assembly and alignment process for LRUs is presented based on the layout of the optics assembly laboratory, and the process characteristics are analyzed. An extended directed graph is proposed to express the assembly and alignment process of LRUs. Taking the LRUs of the disk amplifier system in the Shen Guang-III (SG-III) facility as the example, process simulation models are built on the Quest simulation platform. Constraints such as duration, equipment, technicians and part supply are considered in the simulation models. Different simulation scenarios have been carried out to evaluate the assembly process ability of LRUs. The simulation method can provide a valuable decision-making and process optimization tool for the optics assembly laboratory layout and the process planning of such facilities.

  9. Simulation based assembly and alignment process ability analysis for line replaceable units of the high power solid state laser facility

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Junfeng; Lu, Cong; Li, Shiqi, E-mail: sqli@hust.edu.cn

    2016-11-15

    Highlights: • Discrete event simulation is applied to analyze the assembly and alignment process ability of LRUs in the SG-III facility. • The overall assembly and alignment process of LRUs with specific characteristics is described. • An extended directed graph is proposed to express the assembly and alignment process of LRUs. • Different scenarios have been simulated to evaluate the assembly process ability of LRUs, and decision making is supported to ensure the construction milestone. - Abstract: Line replaceable units (LRUs) are important components of very large high power solid state laser facilities. The assembly and alignment process ability of LRUs will impact the construction milestone of such facilities. This paper describes the use of the discrete event simulation method for assembly and alignment process analysis of LRUs in such facilities. The overall assembly and alignment process for LRUs is presented based on the layout of the optics assembly laboratory, and the process characteristics are analyzed. An extended directed graph is proposed to express the assembly and alignment process of LRUs. Taking the LRUs of the disk amplifier system in the Shen Guang-III (SG-III) facility as the example, process simulation models are built on the Quest simulation platform. Constraints such as duration, equipment, technicians and part supply are considered in the simulation models. Different simulation scenarios have been carried out to evaluate the assembly process ability of LRUs. The simulation method can provide a valuable decision-making and process optimization tool for the optics assembly laboratory layout and the process planning of such facilities.

  10. Mass production of polymer nano-wires filled with metal nano-particles.

    Science.gov (United States)

    Lomadze, Nino; Kopyshev, Alexey; Bargheer, Matias; Wollgarten, Markus; Santer, Svetlana

    2017-08-17

    Despite the ongoing progress in nanotechnology and its applications, the development of strategies for connecting nano-scale systems to micro- or macroscale elements is hampered by the lack of structural components that have both nano- and macroscale dimensions. The production of nano-scale wires with macroscale length is one of the most interesting challenges here. There are many strategies to fabricate long nanoscopic stripes made of metals, polymers or ceramics, but none is suitable for mass production of ordered and dense arrangements of wires in large numbers. In this paper, we report on a technique for producing arrays of ordered, flexible and free-standing polymer nano-wires filled with different types of nano-particles. The process utilizes the strong response of photosensitive polymer brushes to irradiation with UV-interference patterns, resulting in a substantial mass redistribution of the polymer material along with local rupturing of polymer chains. The chains can wind up in wires of nano-scale thickness and a length of up to several centimeters. When nano-particles are dispersed within the film, the final arrangement is similar to a core-shell geometry, with nano-particles found mainly in the core region and the polymer forming a dielectric jacket.

  11. The use of a quartz crystal microbalance as an analytical tool to monitor particle/surface and particle/particle interactions under dry ambient and pressurized conditions: a study using common inhaler components.

    Science.gov (United States)

    Turner, N W; Bloxham, M; Piletsky, S A; Whitcombe, M J; Chianella, I

    2016-12-19

    Metered dose inhalers (MDI) and multidose powder inhalers (MDPI) are commonly used for the treatment of chronic obstructive pulmonary diseases and asthma. Currently, analytical tools to monitor particle/particle and particle/surface interactions within MDI and MDPI at the macro-scale do not exist. A simple tool capable of measuring such interactions would ultimately enable quality control of MDI and MDPI, producing remarkable benefits for the pharmaceutical industry and the users of inhalers. In this paper, we have investigated whether a quartz crystal microbalance (QCM) could become such a tool. A QCM was used to measure particle/particle and particle/surface interactions on the macro-scale by adding small amounts of MDPI components, in powder form, into a gas stream. The subsequent interactions with materials on the surface of the QCM sensor were analyzed. Following this, the sensor was used to measure fluticasone propionate, a typical MDI active ingredient, in a pressurized gas system to assess its interactions with different surfaces under conditions mimicking the manufacturing process. In both types of experiments the QCM was capable of discriminating between the interactions of different components and surfaces. The results demonstrate that the QCM is a suitable platform for monitoring macro-scale interactions and could become a tool for quality control of inhalers.
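    The conversion from a QCM frequency shift to a deposited particle mass is conventionally done with the Sauerbrey relation for rigid films; the sketch below applies that standard relation with textbook AT-cut quartz constants. It is an illustration only, not a procedure taken from the paper, whose loosely bound powders may violate the rigid-film assumption.

        # Hedged sketch: Sauerbrey relation for a rigid film on a QCM crystal.
        import math

        RHO_Q = 2648.0     # density of AT-cut quartz, kg/m^3
        MU_Q  = 2.947e10   # shear modulus of AT-cut quartz, Pa

        def sauerbrey_mass(delta_f_hz, f0_hz=5e6, area_m2=1e-4):
            """Adsorbed mass (kg) from a frequency shift; negative shift = loading."""
            c = 2.0 * f0_hz**2 / math.sqrt(RHO_Q * MU_Q)  # sensitivity, Hz*m^2/kg
            return -delta_f_hz * area_m2 / c

        # -56.6 Hz on a 5 MHz, 1 cm^2 crystal ~ 1 microgram of deposited mass
        print(f"{sauerbrey_mass(-56.6) * 1e9:.2f} ug")

    For a 5 MHz, 1 cm² crystal this reproduces the familiar sensitivity of roughly 56.6 Hz per microgram.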

  12. Numerical study of multiscale compaction-initiated detonation

    Science.gov (United States)

    Gambino, J. R.; Schwendeman, D. W.; Kapila, A. K.

    2018-02-01

    A multiscale model of heterogeneous condensed-phase explosives is examined computationally to determine the course of transient events following the application of a piston-driven stimulus. The model is a modified version of that introduced by Gonthier (Combust Sci Technol 175(9):1679-1709, 2003. https://doi.org/10.1080/00102200302373) in which the explosive is treated as a porous, compacting medium at the macro-scale and a collection of closely packed spherical grains capable of undergoing reaction and diffusive heat transfer at the meso-scale. A separate continuum description is ascribed to each scale, and the two scales are coupled together in an energetically consistent manner. Following piston-induced compaction, localized energy deposition at the sites of intergranular contact creates hot spots where reaction begins preferentially. Reaction progress at the macro-scale is determined by the spatial average of that at the grain scale. A parametric study shows that combustion at the macro-scale produces an unsteady detonation with a cyclical character, in which the lead shock loses strength and is overtaken by a stronger secondary shock generated in the partially reacted material behind it. The secondary shock in turn becomes the new lead shock and the process repeats itself.
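    The coupling step quoted above ("reaction progress at the macro-scale is determined by the spatial average of that at the grain scale") can be written as a volume average over a spherical grain. The notation below (λ for macro-scale progress, λ_g for grain-scale progress, R for grain radius) is assumed here for illustration, not taken from the paper:

        \lambda(x,t) = \frac{3}{R^{3}} \int_{0}^{R} \lambda_{g}(r,x,t)\, r^{2}\, dr

    The prefactor 3/R³ is simply 1/V times the 4πr² shell element for a sphere of volume V = 4πR³/3.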

  13. Sustainable Process Synthesis-Intensification

    DEFF Research Database (Denmark)

    Babi, Deenesh Kavi; Holtbruegge, Johannes; Lutze, Philip

    2014-01-01

    Sustainable process design can be achieved by performing process synthesis and process intensification together. This approach first defines a design target through a sustainability analysis and then finds design alternatives that match the target through process intensification. A systematic, multi-stage framework for process synthesis-intensification that identifies more sustainable process designs has been developed. At stages 1-2, the working scale is the level of unit operations, where a base-case design is identified and analyzed with respect to sustainability metrics. At stages 3…, a phenomena-based process synthesis method is applied, in which the phenomena involved in each task are identified, manipulated and recombined to generate new and/or existing unit operations configured into flowsheets that are more sustainable than those found at the previous levels. An overview of the key…

  14. Classification of hyperspectral imagery using MapReduce on a NVIDIA graphics processing unit (Conference Presentation)

    Science.gov (United States)

    Ramirez, Andres; Rahnemoonfar, Maryam

    2017-04-01

    A hyperspectral image provides a data-rich, multidimensional representation consisting of hundreds of spectral bands. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational times. To overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that helps analyze a hyperspectral image through parallel hardware and a parallel programming model that is simpler to handle than other low-level parallel programming models. Hadoop was used as an open-source implementation of the MapReduce parallel programming model. This research compared classification accuracy and timing between the Hadoop-GPU system and the following test cases: a combined CPU and GPU case, a CPU-only case, and a case in which no dimensionality reduction was applied.
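    As an illustration of the MapReduce pattern the abstract applies to hyperspectral cubes, the pure-Python sketch below computes per-band statistics by mapping over row blocks and reducing partial sums. The cube shape is an assumption, and the study itself ran on Hadoop with GPU acceleration rather than in-process Python.

        # Illustrative MapReduce pattern: map emits per-band partial sums,
        # reduce merges them; shapes are assumed placeholders.
        from functools import reduce
        import numpy as np

        cube = np.random.rand(145, 145, 200)   # assumed (rows, cols, bands)

        def mapper(row_block):
            # per-band partial statistics for one block of image rows
            flat = row_block.reshape(-1, row_block.shape[-1])
            return flat.sum(0), (flat ** 2).sum(0), flat.shape[0]

        def reducer(a, b):
            return a[0] + b[0], a[1] + b[1], a[2] + b[2]

        parts = [mapper(b) for b in np.array_split(cube, 8, axis=0)]  # "map"
        s, ss, n = reduce(reducer, parts)                             # "reduce"
        mean, var = s / n, ss / n - (s / n) ** 2  # per-band mean and variance
        print(mean.shape, var.shape)              # (200,) (200,)

    Each mapper call is independent, which is exactly what lets Hadoop distribute the map phase and a GPU parallelize the per-block arithmetic.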

  15. Stroke Unit: General principles and standards

    Directory of Open Access Journals (Sweden)

    Mehmet Akif Topçuoğlu

    2015-04-01

    Full Text Available Evidence-based medicine has convincingly shown that the stroke unit approach reduces the mortality and disability rates, improves the quality of life and lessens the economic burden resulting from acute ischemic and hemorrhagic stroke. No contemporary stroke system of care can be successful without putting the stroke unit concept at the center of its organization. Stroke units are the main elements of primary and comprehensive stroke centers. As part of a modernization process, this article focuses on practical issues and suggestions related to integrating the stroke unit approach into a regionally organized stroke system of care, for perusal not only by national health authorities and service providers but also by neurologists. The stroke unit quality metrics revisited herein are of critical importance for hospitals establishing or renovating primary and comprehensive stroke centers.

  16. Single-unit studies of visual motion processing in cat extrastriate areas

    NARCIS (Netherlands)

    Vajda, Ildiko

    2003-01-01

    Motion vision has high survival value and is a fundamental property of all visual systems. The ancient Greeks already studied motion vision, but its physiological basis first came under scrutiny in the late nineteenth century. Later, with the introduction of single-cell (single-unit) ...

  17. Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units

    Science.gov (United States)

    Song, Chenchen; Martínez, Todd J.

    2017-10-01

    Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. The resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.
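    For context, tensor hyper-contraction approximates the two-electron repulsion integrals by low-rank factors; its standard form (generic notation, not reproduced from the paper) is

        (pq|rs) \approx \sum_{P,Q} X_{p}^{P}\, X_{q}^{P}\, Z^{PQ}\, X_{r}^{Q}\, X_{s}^{Q}

    replacing the fourth-order integral tensor by products of matrices, which is what permits the quartic (MP2) and cubic (SOS-MP2) gradient scalings quoted in the abstract.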

  18. Quantum processes: probability fluxes, transition probabilities in unit time and vacuum vibrations

    International Nuclear Information System (INIS)

    Oleinik, V.P.; Arepjev, Ju D.

    1989-01-01

    Transition probabilities in unit time and probability fluxes are compared in studying elementary quantum processes - the decay of a bound state under the action of time-varying and constant electric fields. It is shown that the difference between these quantities may be considerable, so the use of transition probabilities W instead of probability fluxes Π in calculating the particle fluxes may lead to serious errors. The quantity W represents the rate of change with time of the population of the energy levels, relating partly to the real states and partly to the virtual ones, and it cannot be directly measured in experiment. The vacuum background is shown to be continuously distorted when a perturbation acts on a system. Because of this, the viewpoint of an observer on the physical properties of real particles continuously varies with time. This fact is not taken into consideration in the conventional theory of quantum transitions based on the notion of probability amplitude. As a result, the probability amplitudes lose their physical meaning. All the physical information on the quantum dynamics of a system is contained in the mean values of physical quantities. The existence of considerable differences between the quantities W and Π permits one, in principle, to choose the correct theory of quantum transitions on the basis of experimental data. (author)

  19. Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units.

    Science.gov (United States)

    Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A; Anastasio, Mark A

    2013-02-01

    Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction.
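    A toy delay-and-sum backprojection conveys why such reconstructions parallelize so well on GPUs: every voxel is computed independently. The sketch below is a generic illustration with assumed geometry and sampling, not the filtered backprojection formula or numerical imaging models of the paper.

        # Hedged sketch: naive delay-and-sum backprojection for optoacoustics.
        import numpy as np

        C = 1500.0     # assumed speed of sound, m/s
        FS = 40e6      # assumed sampling rate, Hz

        def backproject(signals, sensors, voxels):
            """signals: (n_sensors, n_samples); sensors/voxels: (n, 3) positions."""
            image = np.zeros(len(voxels))
            for sig, pos in zip(signals, sensors):   # voxels are independent,
                d = np.linalg.norm(voxels - pos, axis=1)  # hence GPU-friendly
                idx = np.clip((d / C * FS).astype(int), 0, len(sig) - 1)
                image += sig[idx]                    # sum at time-of-flight
            return image

        sensors = np.random.rand(64, 3) * 0.05       # assumed 64-element array
        signals = np.random.rand(64, 2048)           # placeholder pressure data
        voxels = np.random.rand(1000, 3) * 0.02
        print(backproject(signals, sensors, voxels).shape)   # (1000,)

    A real FBP implementation additionally filters the signals before the summation; the loop structure, however, is what maps onto thousands of GPU threads.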

  20. The development of a general purpose ARM-based processing unit for the ATLAS TileCal sROD

    Science.gov (United States)

    Cox, M. A.; Reed, R.; Mellado, B.

    2015-01-01

    After Phase-II upgrades in 2022, the data output from the LHC ATLAS Tile Calorimeter will increase significantly. ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface to the ARM processors. An overview of the PU is given and the results for performance and throughput testing of four different ARM Cortex System on Chips are presented.

  1. The Development of a General Purpose ARM-based Processing Unit for the ATLAS TileCal sROD

    CERN Document Server

    Cox, Mitchell Arij; The ATLAS collaboration; Mellado Garcia, Bruce Rafael

    2015-01-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data which present a serious computing challenge. After Phase-II upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to 41 Tb/s! ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface ...

  2. The development of a general purpose ARM-based processing unit for the ATLAS TileCal sROD

    International Nuclear Information System (INIS)

    Cox, M A; Reed, R; Mellado, B

    2015-01-01

    After Phase-II upgrades in 2022, the data output from the LHC ATLAS Tile Calorimeter will increase significantly. ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface to the ARM processors. An overview of the PU is given and the results for performance and throughput testing of four different ARM Cortex System on Chips are presented

  3. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    Science.gov (United States)

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

    During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications are shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
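    The core of the IPM idea can be sketched in a few lines: sample process-parameter variation, propagate it through chained unit-operation response models, and count out-of-specification batches. Everything below (the two response models, the parameter distributions and the specification limit) is an invented placeholder, not the CMO's actual process.

        # Hedged sketch: Monte Carlo propagation through stacked unit operations.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 100_000                                # simulated batches

        def fermentation(ph):                      # assumed response model
            return 10.0 - 4.0 * (ph - 7.0) ** 2    # titre, g/L

        def capture(titre, load_ratio):            # assumed response model
            return titre * np.clip(0.9 - 0.1 * (load_ratio - 1.0), 0.0, 1.0)

        ph = rng.normal(7.0, 0.15, N)              # PP variation, unit op 1
        load = rng.normal(1.0, 0.08, N)            # PP variation, unit op 2
        yield_gl = capture(fermentation(ph), load) # stacked unit operations

        LSL = 8.0                                  # assumed lower spec limit, g/L
        print(f"P(OOS) ~ {np.mean(yield_gl < LSL):.3%}")

    Re-running the simulation with one parameter's variance tightened at a time is a simple way to rank parameter criticality, mirroring the risk assessment use the abstract describes.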

  4. COGNITIVE STRUCTURING OF TEXT COMPREHENSION PROCESS IN THE ASPECT OF MICROLINGUISTICS

    Directory of Open Access Journals (Sweden)

    Kolodina Nina Ivanovna

    2014-09-01

    Full Text Available The theory of mnemo-units of knowledge is discussed in the article from the perspective of microlinguistics. A mnemo-unit of knowledge is a unit of knowledge in operative memory that cannot be verbalized but can be explained. Singling out such units makes it possible, on the one hand, to construct a structural scheme of the comprehension process and, on the other, to ground the theory of comprehension as a process of operating with minute recognized and unrecognized units that have a schematic or contour fixation in human memory. The process of text comprehension is analyzed and compared with the process of making saccades. Examples of eyesight fixation on words highlighted on a line allow one to speak of fixation of attention only on these words. Summing up such theoretical and practical data makes it possible to ground the theory of mnemo-units of knowledge within microlinguistics. The comprehension process requires maintaining stable connections between mnemo-units of knowledge. In turn, the stable connections between mnemo-units that are necessary for producing forms of thought are ensured by constant activation of the same units in the same sequence. Constant, sequential activation of the same units of knowledge leads to stereotyping of the human thinking process. A cognitive model of the structural thinking process is built in the article. Analysis of the data on stereotyped comprehension reveals that the activation of one group of mnemo-units requires the activation of another group. The activated groups of mnemo-units determine the psychological structure of the personality. In this respect, motivation and behavior are necessary steps in the cognitive model of the structural comprehension process, within which the psychological structure is considered.

  5. Strategic renewal for business units.

    Science.gov (United States)

    Whitney, J O

    1996-01-01

    Over the past decade, business units have increasingly taken the role of strategy formulation away from corporate headquarters. The change makes sense: business units are closer to customers, competitors, and costs. Nevertheless, business units can fail, just as headquarters once did, by losing their focus on the organization's priorities and capabilities. John Whitney, turnaround expert and professor of management at Columbia University, offers a method for refocusing companies that he calls the strategic-renewal process. The principles behind the process are straightforward, but its execution demands extensive data, rigorous analysis, and the judgment of key decision makers. However, when applied with diligence, it can produce a strategy that yields both growth and profit. To carry out the process, managers must analyze, one by one or in logical groupings, the company's customers, the products it sells, and the services it offers in light of three criteria: strategic importance, significance, and profitability. Does a given customer, product, or service mesh with the organization's goals? Is it significant in terms of current and future revenues? And is it truly profitable when all costs are carefully considered? Customers, products, and services that do not measure up, says the author, must be weeded out relentlessly. Although the process is a painstaking one, the article offers clear thinking on why, and how, to go about it. A series of exhibits takes managers through the questions they need to raise, and two matrices offer Whitney's concentrated wisdom on when to cultivate and when to prune.

  6. 48 CFR 1845.7101-3 - Unit acquisition cost.

    Science.gov (United States)

    2010-10-01

    ... production inventory and include programmed extra units to cover replacement during the fabrication process... ADMINISTRATION CONTRACT MANAGEMENT GOVERNMENT PROPERTY Forms Preparation 1845.7101-3 Unit acquisition cost. (a... production costs (for assets produced or constructed). (5) Engineering, architectural, and other outside...

  7. Transparent Runtime Migration of Loop-Based Traces of Processor Instructions to Reconfigurable Processing Units

    Directory of Open Access Journals (Sweden)

    João Bispo

    2013-01-01

    Full Text Available The ability to map instructions running in a microprocessor to a reconfigurable processing unit (RPU, acting as a coprocessor, enables the runtime acceleration of applications and ensures code and possibly performance portability. In this work, we focus on the mapping of loop-based instruction traces (called Megablocks to RPUs. The proposed approach considers offline partitioning and mapping stages without ignoring their future runtime applicability. We present a toolchain that automatically extracts specific trace-based loops, called Megablocks, from MicroBlaze instruction traces and generates an RPU for executing those loops. Our hardware infrastructure is able to move loop execution from the microprocessor to the RPU transparently, at runtime, and without changing the executable binaries. The toolchain and the system are fully operational. Three FPGA implementations of the system, differing in the hardware interfaces used, were tested and evaluated with a set of 15 application kernels. Speedups ranging from 1.26 to 3.69 were achieved for the best alternative using a MicroBlaze processor with local memory.
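    A simplified version of the trace-loop detection underlying Megablocks can be sketched as follows: a branch back to an already-seen address marks a candidate loop, and the pattern is accepted if the body repeats. This is a hypothetical reconstruction for illustration, far simpler than the actual MicroBlaze trace toolchain.

        # Hedged sketch: detect a repeating address pattern in an instruction
        # trace, roughly in the spirit of Megablock extraction.
        def find_loop(trace, min_reps=2):
            """Return (start_index, body) of the first repeating address pattern."""
            seen = {}
            for i, addr in enumerate(trace):
                if addr in seen:                 # backward branch target found
                    start, body = seen[addr], trace[seen[addr]:i]
                    n, reps = len(body), 0
                    while trace[start + reps * n : start + (reps + 1) * n] == body:
                        reps += 1
                    if reps >= min_reps:
                        return start, body
                else:
                    seen[addr] = i
            return None

        # toy trace: two prologue addresses, then a 3-instruction loop run 3 times
        trace = [0x0, 0x4, 0x10, 0x14, 0x18, 0x10, 0x14, 0x18,
                 0x10, 0x14, 0x18, 0x20]
        print(find_loop(trace))   # (2, [16, 20, 24])

    In the real system the detected body is then mapped to the RPU, while the binary on the microprocessor is left untouched.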

  8. Opportunities in the United States' gas processing industry

    International Nuclear Information System (INIS)

    Meyer, H.S.; Leppin, D.

    1997-01-01

    To keep up with the increasing amount of natural gas that will be required by the market and with the decreasing quality of the gas at the well-head, the gas processing industry must look to new technologies to stay competitive. The Gas Research Institute (GRI) is managing a research, development, design and deployment program that is projected to save the industry US$230 million/year in operating and capital costs from gas processing related activities in NGL extraction and recovery, dehydration, acid gas removal/sulfur recovery, and nitrogen rejection. Three technologies are addressed here. Multivariable Control (MVC) technology for predictive process control and optimization is installed or in design at fourteen facilities treating a combined total of over 30×10⁹ normal cubic metres per year (BNm³/y) [1.1×10¹² standard cubic feet per year (Tcf/y)]. Simple paybacks are typically under 6 months. A new acid gas removal process based on n-formyl morpholine (NFM) is being field tested that offers 40-50% savings in operating costs and 15-30% savings in capital costs relative to a commercially available physical solvent. The GRI-MemCalc™ Computer Program for Membrane Separations and the GRI-Scavenger CalcBase™ Computer Program for Scavenging Technologies are screening tools that engineers can use to determine the best practice for treating their gas. (au) 19 refs

  9. Hydropower in the Southeast United States - a Hydroclimatological Perspective

    Science.gov (United States)

    Engstrom, J.

    2016-12-01

    Hydropower is unique among renewable energy sources in its ability to store its fuel (water) in reservoirs. The relationship between discharge, macro-scale drivers, and production is complex, since production depends not only on water availability but also on decisions made by the institution owning the facility, which has to weigh many competing interests including economics, drinking water supply, and recreational uses. This analysis shows that the hydropower plants in the Southeast U.S. (AL, GA, NC, SC, and TN) exhibit considerable year-to-year variability in production. Although the hydroclimatology of the Southeast U.S. has been partially analyzed, no previous study has linked the region's hydroelectricity production to any reported causes of interannual hydroclimatological variability, as has been done in other regions. Because current hydroelectricity production forecasts are short-term, the water resource is not optimized from a hydropower perspective and electricity-generating potential is not maximized. The results of this study highlight the amount of untapped hydroelectricity that could be produced if long-term hydroclimate and large-scale climate drivers were considered in production forecasts.

  10. The impact of the hospitalization process on the caregiver of a chronic critical patient hospitalized in a Semi-Intensive Care Unit

    OpenAIRE

    Neves, Letícia; Gondim, Andressa Alencar; Soares, Sara Costa Martins Rodrigues; Coelho, Denis Pontes; Pinheiro, Joana Angélica Marques

    2018-01-01

    Abstract Objective: To understand the impact of the hospitalization process on the family companion of critical patients admitted to a Semi-Intensive Care Unit (SICU). Method: Exploratory research with a qualitative approach, conducted from April to July 2016 through semi-structured interviews with relatives accompanying patients hospitalized in the SICU of a high-complexity care hospital in Fortaleza. The interviews were submitted to content analysis. Results...

  11. Graphics processing unit accelerated three-dimensional model for the simulation of pulsed low-temperature plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Fierro, Andrew, E-mail: andrew.fierro@ttu.edu; Dickens, James; Neuber, Andreas [Center for Pulsed Power and Power Electronics, Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, Texas 79409 (United States)

    2014-12-15

    A 3-dimensional particle-in-cell/Monte Carlo collision simulation that is fully implemented on a graphics processing unit (GPU) is described and used to determine low-temperature plasma characteristics at high reduced electric field, E/n, in nitrogen gas. Details of implementation on the GPU using the NVIDIA Compute Unified Device Architecture framework are discussed with respect to efficient code execution. The software is capable of tracking around 10×10⁶ particles with dynamic weighting and a total mesh size larger than 10⁸ cells. Verification of the simulation is performed by comparing the electron energy distribution function and plasma transport parameters to known Boltzmann Equation (BE) solvers. Under the assumption of a uniform electric field and neglecting the build-up of positive ion space charge, the simulation agrees well with the BE solvers. The model is utilized to calculate plasma characteristics of a pulsed, parallel plate discharge. A photoionization model provides the simulation with additional electrons after the initial seeded electron density has drifted towards the anode. Comparing the performance of the GPU implementation with a CPU implementation, a speed-up factor of 13 is obtained for a 3D relaxation Poisson solver. Furthermore, a factor 60 speed-up is realized for parallelization of the electron processes.
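    The per-particle workload that makes PIC/MCC codes such good GPU candidates is easy to see in a stripped-down electron step: free flight in a field followed by a collision test. The field strength, collision frequency and scattering model below are invented placeholders; a real code would also deposit charge, solve Poisson's equation and use measured cross sections.

        # Hedged sketch: free flight plus a crude Monte Carlo collision step.
        import numpy as np

        rng = np.random.default_rng(1)
        QE, ME = 1.602e-19, 9.109e-31
        E_FIELD = 1e5                  # V/m along x (assumed uniform)
        NU_MAX = 1e9                   # s^-1, assumed max collision frequency
        DT = 1e-12                     # s, assumed time step

        v = rng.normal(0.0, 6e5, (10_000, 3))    # electron velocities, m/s
        for _ in range(1000):
            v[:, 0] += -QE / ME * E_FIELD * DT   # acceleration (charge -e)
            collide = rng.random(len(v)) < 1.0 - np.exp(-NU_MAX * DT)
            # crude isotropic elastic scatter, speed preserved
            n = rng.normal(size=(collide.sum(), 3))
            n /= np.linalg.norm(n, axis=1, keepdims=True)
            v[collide] = np.linalg.norm(v[collide], axis=1, keepdims=True) * n
        print("mean drift velocity (m/s):", v[:, 0].mean())

    Because each particle's update is independent within a step, the loop body maps naturally onto one GPU thread per particle, which is where the quoted speed-ups come from.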

  12. Spatial resolution recovery utilizing multi-ray tracing and graphic processing unit in PET image reconstruction

    International Nuclear Information System (INIS)

    Liang, Yicheng; Peng, Hao

    2015-01-01

    Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model system matrix for resolution recovery, which was then incorporated in PET image reconstruction on a graphical processing unit platform, due to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, the performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient and noise. The results indicate that the proposed method has the potential to be used as an alternative to other physical DOI designs and achieve comparable imaging performances, while reducing detector/system design cost and complexity. (paper)
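    The multi-ray idea can be illustrated with a toy system-matrix element: instead of a single ray between crystal centres, rays joining points in assumed virtual DOI layers are averaged. The geometry and the Gaussian ray kernel below are placeholders, not the paper's coincidence detection response function.

        # Hedged sketch: average ray contributions over virtual DOI layers.
        import numpy as np

        def lor_element(voxel, det_a, det_b, n_layers=4, sigma=1.0):
            """Average over rays joining DOI-layer points of two crystals."""
            depths = (np.arange(n_layers) + 0.5) / n_layers   # fractional DOI
            total = 0.0
            for da in depths:
                for db in depths:
                    pa = det_a[0] + da * (det_a[1] - det_a[0])  # point in A
                    pb = det_b[0] + db * (det_b[1] - det_b[0])  # point in B
                    # distance from voxel centre to the ray pa->pb
                    d = (np.linalg.norm(np.cross(pb - pa, pa - voxel))
                         / np.linalg.norm(pb - pa))
                    total += np.exp(-0.5 * (d / sigma) ** 2)  # assumed kernel
            return total / n_layers**2

        det_a = (np.array([0.0, -50.0, 0.0]), np.array([0.0, -70.0, 0.0]))
        det_b = (np.array([0.0, 50.0, 0.0]), np.array([0.0, 70.0, 0.0]))
        print(lor_element(np.zeros(3), det_a, det_b))   # 1.0 for an on-axis voxel

    Since every line-of-response element is independent, this double loop is exactly the kind of computation the paper offloads to the GPU.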

  13. Implementing evidence in an onco-haematology nursing unit: a process of change using participatory action research.

    Science.gov (United States)

    Abad-Corpa, Eva; Delgado-Hito, Pilar; Cabrero-García, Julio; Meseguer-Liza, Cristobal; Zárate-Riscal, Carmen Lourdes; Carrillo-Alcaraz, Andrés; Martínez-Corbalán, José Tomás; Caravaca-Hernández, Amor

    2013-03-01

    To implement evidence in a nursing unit and to gain a better understanding of the experience of change within participatory action research. A participatory action research study design was used, within the constructivist paradigm. The analytical-methodological decisions were inspired by Checkland's Flexible Systems approach for evidence implementation in the nursing unit. The study was carried out between March and November 2007 in the isolation unit section for onco-haematological patients in a tertiary-level general university hospital in Spain. Accidental sampling was carried out with the participation of six nurses. Data were collected through five group meetings and individual reflections in participants' diaries. The participant observation technique was also employed by the researchers. Data analysis was carried out by content analysis. Rigour criteria were applied: credibility, confirmability, dependability, transferability and reflexivity. A lack of use of evidence in clinical practice is the main problem. The factors involved were identified (training, values, beliefs, resources and professional autonomy). Daily practice (complexity in taking decisions, variability, lack of professional autonomy and safety) was compared with an ideal situation (using evidence, it would be possible to standardise practice and to work more effectively in teams, increasing safety and professional recognition). It was decided to create five working areas around several clinical topics (mucositis, pain, anxiety, satisfaction, nutritional assessment, nausea and vomiting, pressure ulcers and catheter-related problems), and seven changes in clinical practice were agreed upon together with 11 implementation strategies. Some reflections were made on the features of the study: the changes produced; the strategies used and how to improve them; the nursing 'subculture'; attitudes towards innovation; and the participants' commitment both to the study and as healthcare professionals.

  14. Development of new process network for gas chromatograph and analyzers connected with SCADA system and Digital Control Computers at Cernavoda NPP Unit 1

    International Nuclear Information System (INIS)

    Deneanu, Cornel; Popa Nemoiu, Dragos; Nica, Dana; Bucur, Cosmin

    2007-01-01

    The continuous monitoring of gas mixture concentrations (deuterium/hydrogen/oxygen/nitrogen) accumulated in the 'Moderator Cover Gas', 'Liquid Control Zone' and 'Heat Transport D₂O Storage Tank Cover Gas', as well as the continuous monitoring of the heavy-water-in-light-water concentration in the 'Boilers Steam', 'Boilers Blown Down', 'Moderator Heat Exchangers' and 'Recirculated Water System' (sensing any leaks), led to the requirement to develop a new process network for the gas chromatograph and analyzers connected to the SCADA system and Digital Control Computers of Cernavoda NPP Unit 1. In 2005, the process network for the gas chromatograph was designed and implemented, connecting the gas chromatograph equipment to the SCADA system and Digital Control Computers of Cernavoda NPP Unit 1. This network was later extended to also connect the AE13 and AE14 Fourier Transform Infrared (FTIR) analyzers. The gas chromatograph equipment measures the gas mixture (deuterium/hydrogen/oxygen/nitrogen) concentrations with high accuracy. The FTIR AE13 and AE14 analyzers measure the heavy-water-in-light-water concentration in the Boilers Steam, Boilers Blown Down, Moderator Heat Exchangers and Recirculated Water System, monitoring and signaling any leaks. The gas chromatograph equipment and the FTIR AE13 and AE14 analyzers use the new OPC (Object Linking and Embedding for Process Control) technologies available in ABB's VistaNet network for interoperability with automation equipment. This new process network interconnects the ABB chromatograph and FTIR analyzers with the plant Digital Control Computers using new technology. The result is increased reliability, better capability for inspection and improved system safety.

  15. Energetic Analysis of Poultry Processing Operations

    OpenAIRE

    Simeon Olatayo JEKAYINFA

    2007-01-01

    An energy audit of three poultry processing plants was conducted in southwestern Nigeria. The plants were grouped into three categories based on their production capacities. The survey covered all five easily defined unit operations utilized by the poultry processing industry, and the experimental design allowed the energy consumed in each unit operation to be measured. The results of the audit revealed that scalding & defeathering is the most energy-intensive unit operation in all...

  16. The Eco Logic gas-phase chemical reduction process

    International Nuclear Information System (INIS)

    Hallett, D.J.; Campbell, K.R.

    1994-01-01

    Since 1986, Eco Logic has conducted research with the aim of developing a new technology for destroying aqueous organic wastes, such as contaminated harbor sediments, landfill soil and leachates, and lagoon sludges. The goal was a commercially viable chemical process that could deal with these watery wastes and also process stored wastes. The process described in this paper was developed with a view to avoiding the expense and technical drawbacks of incinerators, while still providing high destruction efficiencies and waste volume capabilities. A lab-scale process unit was constructed in 1988 and tested extensively. Based on the results of these tests, it was decided to construct a mobile pilot-scale unit that could be used for further testing and ultimately for small commercial waste processing operations. It was taken through a preliminary round of tests at Hamilton Harbour, Ontario, where the waste processed was coal-tar-contaminated harbor sediment. In 1992, the same unit was taken through a second round of tests in Bay City, Michigan. In this test program, the pilot-scale unit processed PCBs in aqueous, organic and soil matrices. This paper describes the process reactions and the pilot-scale process unit, and presents the results of pilot-scale testing thus far.

  17. 100 Area source operable unit focused feasibility study report. Draft A

    International Nuclear Information System (INIS)

    1994-09-01

    In accordance with the Hanford Past-Practice Strategy (HPPS), a focused feasibility study (FFS) is performed for those waste sites which have been identified as candidates for interim remedial measures (IRM) based on information contained in applicable work plans and limited field investigations (LFI). The FFS process for the 100 Area source operable units will be conducted in two stages. This report, hereafter referred to as the Process Document, documents the first stage of the process. In this stage, IRM alternatives are developed and analyzed on the basis of waste site groups associated with the 100 Area source operable units. The second stage, site-specific evaluation of the IRM alternatives presented in this Process Document, is documented in a series of operable unit-specific reports. The objective of the FFS (this Process Document and subsequent operable unit-specific reports) is to provide decision makers with sufficient information to allow appropriate and timely selection of IRM for sites associated with the 100 Area source operable units. Accordingly, the following information is presented: a presentation of remedial action objectives; a description of 100 Area waste site groups and associated group profiles; a description of IRM alternatives; and detailed and comparative analyses of the IRM alternatives

  18. Multidimensional upwind hydrodynamics on unstructured meshes using graphics processing units - I. Two-dimensional uniform meshes

    Science.gov (United States)

    Paardekooper, S.-J.

    2017-08-01

    We present a new method for numerical hydrodynamics which uses a multidimensional generalization of the Roe solver and operates on an unstructured triangular mesh. The main advantage over traditional methods based on Riemann solvers, which commonly use one-dimensional flux estimates as building blocks for a multidimensional integration, is its inherently multidimensional nature, and as a consequence its ability to recognize multidimensional stationary states that are not hydrostatic. A second novelty is the focus on graphics processing units (GPUs). By tailoring the algorithms specifically to GPUs, we are able to get speedups of 100-250 compared to a desktop machine. We compare the multidimensional upwind scheme to a traditional, dimensionally split implementation of the Roe solver on several test problems, and we find that the new method significantly outperforms the Roe solver in almost all cases. This comes with increased computational costs per time-step, which makes the new method approximately a factor of 2 slower than a dimensionally split scheme acting on a structured grid.
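    For contrast with the paper's genuinely multidimensional scheme, the one-dimensional building block it improves upon looks like the following scalar Roe flux for Burgers' equation, a standard textbook form assumed here purely as an illustration.

        # Hedged sketch: 1D Roe flux for u_t + (u^2/2)_x = 0.
        def roe_flux(uL, uR):
            """Upwind interface flux F = 0.5(fL+fR) - 0.5|a|(uR-uL)."""
            fL, fR = 0.5 * uL * uL, 0.5 * uR * uR
            a = 0.5 * (uL + uR)        # Roe-averaged wave speed for Burgers
            return 0.5 * (fL + fR) - 0.5 * abs(a) * (uR - uL)

        # dimensionally split codes apply this edge by edge along each axis
        print(roe_flux(1.0, 0.0))      # 0.5: flux carried by the shock

    Applying such a 1D flux separately along each mesh direction is precisely the dimensional splitting whose multidimensional limitations motivate the paper's approach.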

  19. NZG 201 portable spectrometric unit

    International Nuclear Information System (INIS)

    Jursa, P.; Novakova, O.; Slezak, V.

    The NZG 201 spectrometric unit is a portable single-channel processing unit supplied from the mains or a battery which allows the qualitative and quantitative measurement of different types of ionizing radiation when connected to a suitable detection unit. The circuit layout and the choice of control elements make the spectrometric unit suitable for use with scintillation detector units. The spectrometric unit consists of a pulse amplifier, an amplitude pulse analyzer, a pulse counter, a pulse rate counter with an output for a recorder, a high voltage source and a low voltage source. The block diagram is given. All circuits are modular and are mounted on PCBs. The apparatus is built in a steel cabinet with a raised edge which protects the control elements. The linear pulse amplifier has a maximum gain of 1024; the pulse counter has a maximum capacity of 10⁶−1 imp and a time resolution better than 0.5 μs. The temperature interval at which the apparatus is operational is 0 to 45 °C, its weight is 12.5 kg, its dimensions are 36×280×310 mm, the energy range is 0.025 to 2.5 MeV, and for ¹³⁷Cs the energy resolution is 8 to 10%. The spectrometric unit NZG 201 may, with regard to its parameters and the number and range of its control elements, be used as a universal measuring unit. (J.P.)

  20. Improving Dry Powder Inhaler Performance by Surface Roughening of Lactose Carrier Particles.

    Science.gov (United States)

    Tan, Bernice Mei Jin; Chan, Lai Wah; Heng, Paul Wan Sia

    2016-08-01

    This study investigated the impact of macro-scale carrier surface roughness on the performance of dry powder inhaler (DPI) formulations. Fluid-bed processing and roller compaction were explored as processing methods to increase the surface roughness (Ra) of lactose carrier particles. DPI formulations containing either (a) different concentrations of fine lactose at a fixed concentration of micronized drug (isoniazid) or (b) various concentrations of drug in the absence of fine lactose were prepared. The fine particle fraction (FPF) and aerodynamic particle size of the micronized drug were determined for all formulations using the Next Generation Impactor. Fluid-bed processing resulted in a modest increase in Ra from 562 to 907 nm, while roller compaction led to significant increases, with Ra > 1300 nm. The roller-compacted carriers exhibited FPF > 35%, about twice that of the smoothest carriers. The addition of up to 5% w/w of fine lactose improved the FPF of the smoother carriers by 60-200%, whereas the rougher carriers benefited comparatively little. Surface roughening of lactose carrier particles by roller compaction was thus immensely beneficial to DPI performance, primarily due to increased surface roughness at the macro-scale.